One of the surprises I have had while commenting on various blogs is the claim that "science is not a communal enterprise". Normally this comes from global warming deniers who reject the judgements of the large number of scientists who consider global warming a serious issue. If most of the scientists in the world disagree with the deniers, then the deniers decide that the judgements of most of the scientists in the world don’t really matter. They claim that science doesn’t work by consensus; it relies upon an absolute standard of truth. Scientists can be wrong, you know. They base their claims on episodes in which scientists rejected ideas that later proved to be correct. What they don’t realize is that most of the cases they cite are very old; modern cases are extremely rare.
Let’s review some of the modern cases of the community of scientists dealing with radical new ideas. I’ll restrict myself to the 20th century. It started out with a bang with Einstein’s papers on relativity and the photoelectric effect. Both papers were revolutionary in their import. His work on the photoelectric effect was readily embraced because it made sense to most physicists. His theory of special relativity was truly weird, and quite a few scientists regarded it with skepticism; but as others fleshed out his ideas and demonstrated that, however weird it was, it seemed to work, the tide of opinion turned to embrace the theory. There was never any serious opposition to the theory; most scientists simply took a "wait and see" attitude for a few years. Thus, science on this occasion demonstrated proper skepticism but ultimately embraced the idea.
Another good example comes from the theory of continental drift proposed by Alfred Wegener in 1912. His theory was roundly rejected, but not for reasons of obtuseness on the part of scientists. As it happens, there were excellent arguments against his theory. The most important was that his notion of continents drifting over the surface of the earth was, according to all the evidence, preposterous. The amount of energy required to move a continent is stupendous, and Wegener never proposed any explanation for the force driving this movement. Moreover, a physicist showed that, if continents were freely drifting over the surface of the earth, dynamic forces would cause them to drift in the general direction of the equator. The fact that, after billions of years, the continents are not all sitting on the equator was considered powerful evidence against Wegener’s theory.
What changed everything was the acquisition of new data. First came the discovery that the magnetization of adjacent strips of ocean floor showed a pattern of regular reversals. The only explanation that made sense was seafloor spreading combined with reversals in the earth’s magnetic field. This lent strong support to another idea, that of slow circulation in the earth’s mantle. In turn, the acceptance of such slow circulation lent support to Wegener’s theory. At this point, scientists were convinced rather quickly. The evidence was now in place and the most important objection to the theory (the lack of any mechanism to drive the continents) was put to rest.
This comes out as a positive score for science. Scientists weren’t wrong to reject the theory in the first half of the 20th century; the available evidence didn’t support it. Note that the rejection of the theory was never considered final; the problem was that there wasn’t enough evidence and no apparent underlying mechanism. To be relevant to our issue, we need to consider not a theory that was eventually accepted (such as special relativity or plate tectonics) but a theory that was embraced and later discarded. This would be the correct analog of the current situation with climate change, in which the great majority of scientists have embraced the theory that the earth’s temperature is increasing because of anthropogenic carbon emissions. Here’s the rub: I can think of no such theory developing at any time since 1900. Scientists are a skeptical lot; they don’t accept a theory until they have seen strong evidence for it.
Thus, the track record of the scientific community over the last century is perfect, at least as far as accepting incorrect theories goes. Yes, there have been tremendous changes in science during this time, but most of that change has consisted of adjustments to old theories or new theories to explain newly discovered phenomena. This comes as a surprise to many people who are steeped in the lore of scientific revolutions, of scientists getting things horribly wrong, of courageous mavericks eventually overcoming the obtuse resistance of hidebound old fogies. This all makes very good drama, but the reality is never so exciting; indeed, true drama in science is exceedingly rare. The vast majority of scientific work is what I shall call “accretive”: work that adds to our knowledge base in a way that merely confirms our expectations. Despite its boring content, it is important work, because scientific progress is defined just as much by confirming what we expected as by discovering what we didn’t expect.
But some scientific studies reveal results that aren’t quite what we expect. Usually, the scientist’s first response to such results is to mutter “Gee, that’s odd” and then go through the data looking for a mistake. And in most cases the oddity is eventually explained as a mistake, an oversight, an instrumental error, or some other petty problem. But occasionally, one of these oddities turns out to be genuine. I emphasize, this doesn’t often happen, but when it does, its import is usually minor. The scientist writes it all up in a paper and the overall impact is an adjustment in existing theory.
Here’s an example of such a discovery. A tiny field of astronomy is the study of meteor showers. People have been observing meteors for centuries, but only recently have we been able to obtain reliable measures of the brightness of meteors. Until roughly twenty years ago, the great majority of meteor observations came from visual observers. The human eye is good enough for counting meteors, but when it comes to estimating how bright a meteor is, the eye just doesn’t do a good job. After all, the typical meteor flashes across the sky in half a second, starting dimly and brightening until it reaches maximum brightness just before disappearing. For years, the best alternative was the camera with sensitive film, but even the best cameras could see only the brightest meteors. However, in the late 1980s, the combination of image intensifier technology (night vision goggles) and more sensitive video cameras made it possible to record video of meteors almost as dim as the human eye can see. Since then, we have seen improvements, and now such video systems are much more sensitive than the human eye. More importantly, they provide us with reliable measurements of the brightness of meteors.
This has led us to a minor discovery. There are bright meteors and there are dim meteors. One would expect that bright meteors are rarer and dim meteors are more common. Indeed, various mathematical analyses suggest the exact distribution we should expect of meteor brightnesses. But when we analyzed the brightnesses of meteors from different meteor showers, we discovered an oddity: each shower has its own unique distribution of brightnesses. These characteristic distributions still follow a common pattern; we can specify a single number that tells us exactly what to expect from a particular shower. So we have adjusted our theory of meteor showers from: “All meteor showers have the same natural distribution of brightnesses” to “Each meteor shower has a characteristic distribution of brightnesses specified by a single number”.
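The "single number" described above can be illustrated concretely. In meteor astronomy, the number characterizing a shower's brightness distribution is conventionally called the population index, r: the ratio of the number of meteors one magnitude fainter to the number at a given magnitude. Here is a minimal sketch of what such a distribution looks like; the value r = 2.5 and the count of 10 bright meteors are made-up illustrative inputs, not measurements of any real shower:

```python
# Sketch of a meteor shower's brightness distribution governed by a
# single number, the population index r. Each step one magnitude
# fainter multiplies the expected count by r (fainter meteors have
# larger magnitude values and are more common).

def expected_counts(r, n_brightest, magnitudes):
    """Expected meteor counts at each magnitude class, scaled so the
    first (brightest) class contains n_brightest meteors."""
    m0 = magnitudes[0]
    return [n_brightest * r ** (m - m0) for m in magnitudes]

# Hypothetical shower: population index 2.5, 10 meteors in the
# brightest class (illustrative numbers only).
mags = [0, 1, 2, 3, 4]
for m, n in zip(mags, expected_counts(2.5, 10, mags)):
    print(f"magnitude {m}: ~{n:.0f} meteors")
```

A shower with a larger r is richer in faint meteors; a smaller r means the shower is dominated by bright ones. The whole distribution follows from that one number, which is exactly the sense in which each shower's characteristic distribution can be "specified by a single number".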
This is the most common means by which science changes. Things that we thought were simple turn out to be a bit more complicated. I have no doubt that someday, somebody will combine this knowledge of meteor brightnesses with some other new knowledge and come up with a new insight into how meteor showers evolve. It won’t be a revolutionary new idea; instead, it will be a small improvement in our understanding of the universe.
There is a third kind of scientific change that most non-scientists are aware of: revolutionary change. These things are dramatic and they are exceedingly rare. Moreover, they are seldom attributable to a single scientist, nor are they traceable to a single discovery. Mostly, they are communal efforts in which fragments of ideas are rearranged into a new structure. A good example of this process is the development of plate tectonic theory, which I discussed earlier. There was no single event that made plate tectonics, nor any single scientist who made it happen. There were individuals who pushed it forward, but no Einstein, Darwin, or Newton who did it all in one dramatic stroke. There were many crucial steps in the overall process: the discovery of magnetic striping in the ocean floor; the mapping of the mid-Atlantic ridge; the development of concepts about circulation of the earth’s mantle; more careful investigation of correlations of rock types and fossils on opposite sides of the mid-Atlantic ridge; better dating methods for ancient rocks; and on and on. The end result was a dramatic shift in thinking about the earth’s continents and oceans, but it is simplistic to think of that shift as the result of a single event.

Instead, think in terms of a seesaw with weights piled up on both ends. Initially, one end of the seesaw is heavier than the other, and that end rests on the ground. However, one scientist discovers a single case of magnetic striping on ocean floors; that adds a small weight to the other end of the seesaw. More scientists measure other cases of magnetic striping, adding more weights to that end of the seesaw. Other scientists develop improved methods for dating rocks; this adds some more small weights. Some others work out the physics of how the mantle can have currents; even more weights are added.
Then comes a long series of papers by many scientists showing match-ups between rock types and fossils on opposite sides of the mid-Atlantic ridge in many different latitudes. At about this point the seesaw begins to tip in the other direction. It doesn’t move suddenly; each added weight moves it a little more, until eventually the seesaw has completely changed its position.
This is the true way science progresses. It makes for lousy movies, boring news stories, and dull television shows. But that’s how it actually works. The key point I want to drive home is that science progresses as a community effort. There is no ultimate test of Objective Truth. I’m not denying the existence of Objective Truth; instead, my claim is that the community of scientists is our best guide as to what constitutes scientific truth. If not them, then who? You? Me? Somebody on a blog? The claim that science can be decided by Everyman, that the evidence has to convince anybody and everybody, is egotistical bullshit. The evidence for and against any serious scientific hypothesis is complicated. Remember, scientists don’t use off-the-shelf equipment; most of their instruments are one-of-a-kind devices specially tweaked for maximum performance. In order to appreciate the significance of each piece of evidence, you must understand the procedure used to obtain that evidence -- and in many cases, that procedure is itself complex. Science is a specialty, just like medicine or law. Do patients contradict doctors’ diagnoses? Yes, some do, because they’re pigheaded. But reasonable people are willing to trust medical experts. The same thing goes for law. You’re welcome to contradict the judge in your trial, but you won’t get very far. In a large number of specialties, we defer to the specialists who have spent years mastering their field. Why can’t we do the same with scientists?