December 2nd, 2011
I am generally perceived to be strongly opinionated. I suspect that even my closest friends would agree with the statement “Chris Crawford is opinionated.” I had never given much thought to that until recently, and here I’d like to offer my thoughts on it, not as a defense or an apology (in the classic sense), but rather as a way of making a useful statement about the nature of thinking.
The word ‘opinionated’ bristles with connotations, some of which don’t apply to me. An opinionated person is certain of the truth of his opinions, and rejects the possibility that other opinions might have merit. There is a connotation of closed-mindedness, that the opinionated person has made up his mind and simply will not consider arguments to the contrary.
But consider this: what if the opinionated person has duly considered all the counterarguments and determined them to be devoid of merit? Obviously, nobody can ever know all the counterarguments, but what if the opinionated person has discussed the question with many different people and has reached the point where every counterargument raised by others is one that he has already heard, considered thoroughly, and rejected for good reason? If he keeps hearing the same discredited arguments over and over, is it not reasonable to infer that there probably aren’t any remaining counterarguments? I do not come to a firm conclusion until I have turned the question over and over in my mind, addressing it from many different angles, and reached the same conclusion regardless of the angle from which I approach it.
Consider further the nature of decision-making in the human mind. Susceptibility to groupthink is one of our greatest weaknesses: if everybody else says that the sky is red, an individual is likely to start believing that the sky really is red. In Nazi Germany, millions of otherwise well-meaning people managed to convince themselves that Jews were bad people deserving of harsh treatment. After the war, once the grip of Nazism on their minds was shattered, they recovered from their illusions.

A famous psychology experiment some fifty years ago (Stanley Milgram’s obedience study) vividly demonstrated how easily our thinking is swayed by others. The subject had volunteered to participate in a psychology experiment in which he was supposedly to assist in the assessment of new techniques for learning. A second person was presented to the subject as another volunteer who was to be the learner; this second person was actually a confederate of the experimenters. The format was simple: the learner was supposed to answer questions correctly. If the learner made a mistake, the subject was expected to apply a small electric shock to the learner – in truth, the button that the subject pressed did nothing. As the learner’s mistakes mounted, the voltage was steadily increased. At some point, the learner would start complaining that the shocks were painfully strong. The subject would often hesitate to apply the next shock, but an experimenter in the room with the subject would assure him that the protocols of the experiment required it. The shocks grew worse and worse until the learner was screaming in feigned pain, and eventually played dead. Most subjects obeyed and applied what they believed to be extremely painful shocks to their supposed victims. The experiment showed how easily social pressure can overrule people’s good judgement.
What happens, then, when I come to a conclusion contrary to commonly held beliefs? Most people believe that, if everybody else believes X, then X must be true. Therefore, if I reject X, they conclude that I must be wrong and my steadfastness is a measure of how opinionated I am. Their confidence in the reliability of the group collides with my confidence in the logic of my reasoning, and their only plausible explanation of my certitude is that I’m opinionated.
But there’s a more subtle aspect to this problem, and this essay is my attempt to explain that subtle point. The problem concerns how we test and certify our beliefs. The common mistake here is to divide all beliefs into two classes: proven and unproven. A proven belief is one for which a compelling logical argument has been presented; an unproven belief is merely an opinion. Sounds reasonable, doesn’t it? Reasonable, yes, but complete, no. This line of thinking misses a crucial factor in human thought.
Proof is a mathematical concept and is possible only for mathematical statements. We cannot prove that the earth revolves around the sun; we can only amass a large amount of evidence supporting the hypothesis that it does. That mass of evidence is humongous, but it does not constitute proof; there always remains the possibility that we are somehow failing to recognize some more important force at work. That possibility is so tiny as to be insignificant – but it is not zero.
While proof is unachievable in the real world, we can still amass a collection of evidence large enough to establish a very high probability of correctness for some hypotheses, and we can still differentiate between statements for which we have established that high probability and statements for which we lack enough evidence to be confident. Juries are asked to find guilt ‘beyond a reasonable doubt’, not ‘proven absolutely’. This approach is quite reasonable.
So we’re not dealing with black-and-white proof, but rather shades of gray of certitude. We generally hold that some claims have been supported to a very dark shade of gray, and we consider such claims to be worthy of certitude. Claims that have lighter shades of gray are not yet worthy of certitude – they are only opinions. Somebody who assigns certitude to a claim that has only a light shade of gray is opinionated.
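To make those shades of gray concrete, here is a small illustrative sketch in Python. The numbers are invented purely for the example, and nothing here is meant as a rigorous model: the point is only that each independent piece of evidence darkens the gray, pushing confidence toward certainty without ever reaching it.

```python
# A toy Bayesian calculation (all numbers invented for illustration):
# each independent piece of evidence darkens the shade of gray.

def update(prior: float, p_given_true: float, p_given_false: float) -> float:
    """One application of Bayes' rule: returns P(H | evidence)."""
    numerator = p_given_true * prior
    return numerator / (numerator + p_given_false * (1.0 - prior))

belief = 0.5  # start undecided about hypothesis H

# Suppose each observation is four times likelier if H is true than if false.
for i in range(1, 11):
    belief = update(belief, p_given_true=0.8, p_given_false=0.2)
    print(f"after {i:2d} observations: P(H) = {belief:.6f}")

# The probability climbs toward 1 but never reaches it:
# a very dark gray, never pure black.
```

After ten such observations the probability is about 0.999999: well past any reasonable doubt, yet still not proof.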
But now let’s look more closely at the nature of evidence. Most educated people assign credence only to those claims for which a solid logical chain of reasoning can be presented. Most such people believe that, if you can’t proceed from A to B to C to the conclusion D, and demonstrate that A, B, and C are all trustworthy, then your conclusion D is merely an opinion. This belief is mistaken.
There’s another way to reach conclusion D without following a chain of reasoning. Philosophers might call it induction, but I don’t think that label captures what is going on. I present for your consideration an image:

[Image: a badly damaged fragment of the portrait from a one dollar bill]
I ask you to identify this image. Of course, you have no problem; it’s George Washington on the one dollar bill. But now I ask you to prove that this is indeed George Washington on the one dollar bill. You can’t; without the lost portions of the image, you really can’t be sure. Yet you are nevertheless quite certain of your conclusion. Do you reach this conclusion by induction? Only in the broadest sense of the term. What’s really going on is this: your mind assimilates all the information available to it and recognizes a pattern. Here’s another image:

[Image: another partial rendering of the George Washington portrait]
Again, it’s impossible to be certain, but you can definitely recognize George Washington here. You have taken all the information in the picture and combined it through a pattern-recognition process that yields the conclusion that this is indeed an image of George Washington. It’s not induction; it’s pattern recognition.
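For the programmers in the audience, here is a toy sketch of that kind of judgment. The templates, the fragment, and the scoring rule are all invented for illustration; this is a caricature, not a model of human vision. The idea is simply to match the surviving pieces of a pattern against known templates and accept the best fit.

```python
# Toy pattern recognition (all names and data invented for illustration):
# match a partially destroyed pattern against known templates and accept
# the best fit, even though much of the data is missing.

TEMPLATES = {
    "WASHINGTON": "GEORGE WASHINGTON",
    "LAFAYETTE":  "MARQUIS DE LAFAYETTE",
}

def score(fragment: str, template: str) -> float:
    """Fraction of surviving characters ('?' marks lost data) that match."""
    pairs = [(f, t) for f, t in zip(fragment, template) if f != "?"]
    if not pairs:
        return 0.0
    return sum(f == t for f, t in pairs) / len(pairs)

# Most of the "image" is lost, yet the surviving fragments fix the answer.
fragment = "GE???E WAS??NG??N"

best = max(TEMPLATES, key=lambda name: score(fragment, TEMPLATES[name]))
for name, template in TEMPLATES.items():
    print(f"{name:10s} match = {score(fragment, template):.2f}")
print("recognized as:", best)
```

The point is only that the best-fit verdict can be confident without any deductive chain from A to B to C to D; it rests on how well the surviving evidence fits one template and fails the others.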
This pattern-recognition process is what drives my opinions. When I have looked over a pattern of information, and considered many interpretations of it, I settle on a conclusion as to the meaning of the pattern. My confidence in my conclusion depends on how clear the pattern is.
The argument against this kind of reasoning is that pattern recognition is a subjective process that can’t be shared. If you want to claim that the image is actually Lafayette, there’s no way that the two of us can reason together to decide who’s right. There’s no arguing about pattern recognition; what one person sees, another can deny.
I spent most of my life revering the tradition of rigorous deductive logic; I was trained as a physicist and have been writing programs for decades. But in my 40s I realized that pattern recognition can accomplish some cognitive tasks that deductive logic cannot. Whenever possible I still prefer to rely on rigorous logic, but some problems simply cannot be solved that way, and in those cases pattern recognition is our only resort. My opinions are the result of pattern recognition.
So why should anybody heed my opinions? I make no claims as to the infallibility of my opinions; I can only state them as clearly as possible and offer the best justification I can. In many cases the patterns that lead me to my opinions are so diffuse, so widely dispersed over such a broad range of material that it is almost impossible to articulate my thinking. That doesn’t make my opinion any weaker; it makes the presentation of my opinion less convincing.
Over the decades I have grown cynical about the ability of Homo sapiens to exercise cognitive skills effectively. To put it more succinctly, people are stupid. Trying to convince them of the correctness of my opinions is a waste of my time. The best I can do is to present those opinions as clearly as I can, and hope that some people understand them.