More on the AI Brouhaha

June 11th, 2023

I continue to read news stories either rapturous over the glorious future promised by AI or desperate over the extinction of humanity threatened by AI. Jeez, people, calm down! I have two main points for you to consider.

Generative Parrots
People certainly seem overwhelmed by the capabilities of generative AI; they are convinced that it will revolutionize everything. I am reminded of an anecdote about the program “Eliza”, created by computer scientist Joseph Weizenbaum in 1964. Eliza was a very simple natural-language emulator that interacted with the user by using a trick from Rogerian psychological therapy. It would simply copy and paste a salient word or phrase from the user’s input into a question, resulting in a conversation that might look like this:

ELIZA: WHAT IS TROUBLING YOU TODAY?
USER: MY MOTHER. I HATE HER.
ELIZA: WHY DO YOU THINK THAT YOU HATE HER?
USER: SHE’S ALWAYS YELLING AT ME.
ELIZA: WHY DO YOU THINK THAT SHE’S ALWAYS YELLING AT YOU?
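The trick behind Eliza is simple enough to sketch in a few lines of Python. This is a hypothetical reconstruction of the idea, not Weizenbaum’s actual code: swap the pronouns in the user’s statement and wrap the result in a canned question.

```python
# A sketch of the Eliza reflection trick (not Weizenbaum's actual code):
# echo a phrase from the user's input back as a question.

# Swap first- and second-person words so the echo reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "i"}

def reflect(text):
    # Lowercase, strip trailing punctuation, and swap pronouns word by word.
    words = text.lower().rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(word, word) for word in words)

def eliza_reply(statement):
    return f"WHY DO YOU THINK THAT {reflect(statement).upper()}?"

print(eliza_reply("She's always yelling at me."))
# → WHY DO YOU THINK THAT SHE'S ALWAYS YELLING AT YOU?
```

The real Eliza also matched and ranked keywords ("mother", "hate", and so on), but the conversational illusion rests on exactly this pronoun-swapping echo.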

I believe that this anecdote comes directly from Mr. Weizenbaum himself, but I may be wrong. It seems that one day a friend, at Mr. Weizenbaum’s invitation, borrowed his office while he was away in order to use the Eliza program. When Mr. Weizenbaum returned and entered his office, the friend, teary-eyed, left without saying anything. A moment later he returned, snatched the paper output (he had been using the teletype terminal in the office), and stalked out.

Sixty years ago, people took AI too seriously. They still do.

Generative AI is nothing more than a fast parrot with a huge memory. It knows everything and understands nothing. It’s great for producing variations on existing material. It can do nothing else.


General-purpose AI
The craziest nonsense is coming from people who simply extrapolate current technology far into the future. Their argument is not based on any careful analysis of that technology; it’s just mindless extrapolation. This kind of extrapolation would have, in 1940, predicted that cars in the year 2000 would be capable of speeds over 1,000 mph, would cost $1.50, and would travel 7,000 miles on a gallon of gasoline. That was in fact a mathematically correct extrapolation of past trends — but extrapolations seldom work. 
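That 1940-style extrapolation is easy to reproduce. A minimal sketch, using assumed round figures (top car speeds of roughly 12 mph in 1900 and 120 mph in 1940), fits an exponential trend and runs it forward:

```python
# Naive exponential extrapolation of the kind described above.
# The 1900 and 1940 speeds are assumed, illustrative round numbers.
v1900, v1940 = 12.0, 120.0            # top car speed in mph (assumed)
growth = (v1940 / v1900) ** (1 / 40)  # implied annual growth factor
v2000 = v1940 * growth ** 60          # run the trend 60 more years
print(round(v2000))                   # → 3795, comfortably "over 1,000 mph"
```

Mathematically impeccable, physically absurd, which is exactly the point.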

There are many reasons why extrapolations don’t work. The simplest reason is that technologies usually follow a simple progress curve looking like this:

[Figure: an S-shaped progress curve with points A, B, and C marked along it, and a straight blue line extrapolating the steep middle section upward]

At point A, nobody even notices the technology because it hasn’t done anything useful. At point B, everybody goes wild over the technology because it’s progressing so rapidly, and nitwits extrapolate its progress to make the absurd prediction in blue. At point C, the technology is old and boring, like hard disk drives today.

The most spectacular failure of the AI enthusiasts, though, lies in a profound failure to understand some basic truths about knowledge and the human brain. This arises from something most generally known as “the mind-body problem”, an issue that has infected Western thinking at least since the time of the Greeks.

The basic concept arose from the realization that rational thinking precludes crime. No rational person commits crimes because a rational person knows that they’ll probably be caught, and the penalties for the crime greatly exceed the benefits of committing the crime. Rationally speaking, crime just isn’t worth it. You don’t need a PhD in logic to realize this; it’s obvious to everybody. Yet, people continue to commit crimes. How is it that human beings, equipped with the most powerful mind in the biosphere, can make such stupid mistakes? 

The Western answer, broadly speaking, is that we humans suffer from a kind of dual personality. We have a rational part and an irrational part. That irrational part is usually identified somehow with the body. Our bodies impose urges upon us that we fail to resist. We steal because we are hungry. We kill because we are angry. We get into all manner of messes because of our sexual drives. The moral imperative, then, is for the mind (or soul, in some versions) to achieve control over the body. This was the driving force behind Christian asceticism, the belief that spiritual success required denial of bodily needs. The notion persists today in the belief that fasting is good for the soul. Other echoes of the basic Western concept appear in Roman literature: contempt for the rabble who cannot transcend their bodily urges, and the readiness with which a Roman was expected to anticipate death, even committing suicide when honor required it.

Descartes put all these notions into a strong philosophical framework by discussing the mind-body problem at length. Does the mind (soul) transcend the body, or are mind and body a single unified entity? Western thinkers have wrestled with the debate ever since. Nor should we forget Freud’s superego-ego-id, characterizing the battle between good and evil in his preferred psychological terms. And how about Spock’s struggle between logic and emotion? At its core, it’s really the twentieth-century expression of the mind-body problem.

This overall line of thinking is what lies behind the silly notion that computers will someday outthink people. The geeks call this “general-purpose AI” and disagree only over how soon this milestone will be reached. 

This makes for great philosophical speculation, but it’s all a bunch of ignorant hot air. It would help if these people learned some neurophysiology. You see, the brain is not a computer. It’s nothing like a computer. The operation of the brain is profoundly different from that of any computer. 

The crucial difference lies in the fact that the brain is not independent of the body. The brain and the body are intricately interconnected in many different ways. The simple act of walking changes blood chemistry in ways that alter brain function. Almost everything we do physically has effects on brain function. The steady stream of stimuli flowing into the brain — sensory information, bodily position, organ function — all of these factors are necessary to the function of the brain. If you were to isolate a brain from its body but somehow keep it alive with blood flow, the person would experience profound disorientation and would quickly lose awareness. 

You can easily verify the importance of all this somatic stimulus. Right now, as you read this essay, stop for a moment and take note of all the sensory information flooding into your brain. You’ve got many years of experience in suppressing this information, but imagine its sudden absence. Imagine not being able to feel your butt where you’re sitting, not noticing the feelings of your fingers, not knowing your bodily posture, not being able to feel the air moving through your nostrils and into your lungs, not being aware of the pressure on your legs or feet, or the temperature of the skin of your arms versus that of your feet — being deprived of all the sensory information that is part of your existence. Do you really think that your brain could function in such a dramatically different environment?

Sure, sure, you could probably rig artificial stimuli to replicate the experience of the body — but how could you get a proper replication without duplicating the actual parts of the body in an actual physical situation?

Our brains are built to operate within a very specific set of coordinated inputs. For example, if you are in a car that is heaving left and right but you cannot see any visual indications of that heaving — in other words, if your eyes tell you that you are stationary but your inner ears tell you that you’re moving — then you’ll vomit. Our brains have all sorts of truths about physical reality built into them, and any violation of these complex relationships throws the brain off kilter. You could program a gigantic computer to feed exactly the right stimuli to the disembodied brain to fool it into thinking that it is inside a normal body — but if you get any details out of kilter, you’ll throw that brain out of whack. If the brain thinks that it’s walking in bright sunlight, and your computer is showing it a bright outdoor scene but neglects to get exactly right the feel of warm sunlight on the lips, the brain will notice the discrepancy and be confused. Multiply this by all the other discrepancies, and you’ll end up with a brain incapable of functioning.

AI can definitely accomplish things utterly beyond human capabilities. We don’t even need AI to do that: a pocket calculator can carry out arithmetic calculations in a flash that would take a human brain many minutes to pull off. But no computer can replicate human brain function without undergoing the same experiences that a human experiences. No computer will be able to successfully emulate empathy, because no computer will ever have its heart broken. No computer will ever be teased by schoolmates, or become embarrassed by acne, or be terribly confused by the hormones surging through it during puberty. No computer will ever suffer sexual abuse, or resent its low income, or be afraid of crime, or experience assholes on the Internet. Without those experiences, a computer cannot empathize with humans. This is why computers will never be good teachers. People worry that computers will eliminate all jobs; they’re wrong. Computers will free up enough workers to permit the proper teacher:student ratio — one to one.

Stop worrying about AI. It’s nowhere near as great a problem as climate change or the dangerously high value of the Gini index.