Artificial intelligence has been the Holy Grail of computing since the 1950s. In the early years, researchers were confident that they’d make rapid progress. They didn’t. After decades of work, many began to feel that AI was a dead end.
Two developments changed everything. The first was the Internet, which eventually grew into a monster. The Internet currently contains more than a thousand petabytes of information. That’s a billion gigabytes — an enormous amount of information!
The second big development was the steep dive in the price of computing power. My current computer is at least 3 trillion times more powerful than my top-of-the-line computer of thirty years ago. If you forgo the display, keyboard, mouse, and all the other paraphernalia of a full personal computer system, preferring instead raw computational power, you can get a staggering amount of it for a few million bucks.
Together, these two developments made possible a new AI strategy called “deep learning”, which could be described as turbo-charged neural networks. The idea of a neural network goes back many decades, but the networks that computer scientists built in the 1980s and 1990s were too small to accomplish much. Moreover, training them meant tediously presenting thousands of hand-prepared examples, and even then the systems didn’t perform well. With all the data on the Internet, computer scientists no longer had to spoon-feed their neural networks; they hooked their monster networks directly into the firehose of data that is an Internet connection.
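To make that concrete, here is a minimal sketch, in Python with NumPy, of the kind of tiny network those early researchers built: a few neurons learning the XOR function from just four examples. The specifics (the layer sizes, the learning rate, the choice of XOR as the task) are my illustrative assumptions, not a description of any particular historical system.

```python
# A toy neural network in the 1980s/1990s style: a few neurons,
# trained by repeating a handful of examples thousands of times.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Two inputs, four hidden neurons, one output: enough to learn XOR
# (a classic toy problem) and not much more.
W1 = rng.normal(size=(4, 2))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(4,))     # hidden -> output weights
b2 = 0.0

inputs  = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
targets = np.array([0.0, 1.0, 1.0, 0.0])   # XOR truth table

lr = 0.5
for epoch in range(10_000):            # thousands of passes over 4 examples
    for x, t in zip(inputs, targets):
        h = sigmoid(W1 @ x + b1)       # forward pass
        y = sigmoid(W2 @ h + b2)
        dy = (y - t) * y * (1 - y)     # backpropagate the squared error
        dh = dy * W2 * h * (1 - h)
        W2 -= lr * dy * h              # nudge every weight a little
        b2 -= lr * dy
        W1 -= lr * np.outer(dh, x)
        b1 -= lr * dh

for x in inputs:
    h = sigmoid(W1 @ x + b1)
    print(x, float(sigmoid(W2 @ h + b2)))
```

Even this trivial network needs thousands of repetitions to learn a four-row truth table; hand-feeding examples at that rate could never scale up to speech or images.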
The results are astounding. Deep learning systems now do a pretty good job of speech recognition and are excellent at recognizing images. They are being applied to a huge array of problems; the limiting factor right now is the shortage of people with the skills to set up and train these systems.
Amidst all this excitement, there’s a huge irony that nobody seems to have noticed: deep learning systems aren’t logical. Computer scientists cannot trace the internal workings of a deep learning system. If such a system makes a mistake, it’s impossible to identify the precise source of the mistake, so there is nothing to repair directly. The only way to fix the error is to apply more training.
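The toy sketch above makes the point on a small scale (again, my illustration, not anybody’s real methodology): after training, the network’s entire “program” is its weight matrices. Printing them reveals numbers, not reasons, and no single weight explains any decision. A hypothetical repair routine can only do what the original training did: drill the network on more examples.

```python
# Continuing the sketch above: the trained network's entire "logic"
# is a pile of numbers. Inspecting them reveals weights, not reasons;
# no single value corresponds to any identifiable decision.
print("hidden weights:\n", W1)
print("output weights:\n", W2)

# There is no faulty line of code behind a wrong answer, so the only
# remedy is the same process that built the network: more training.
def retrain(x, t, passes=1000):
    """Hypothetical helper: drill the network on one troublesome example."""
    global W1, b1, W2, b2
    for _ in range(passes):
        h = sigmoid(W1 @ x + b1)
        y = sigmoid(W2 @ h + b2)
        dy = (y - t) * y * (1 - y)
        dh = dy * W2 * h * (1 - h)
        W2 -= lr * dy * h
        b2 -= lr * dy
        W1 -= lr * np.outer(dh, x)
        b1 -= lr * dh
```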
In other words, the most advanced form of computer intelligence does not use sequential logic; it relies on pattern recognition!
Deep learning torpedoes our naive certainty that logic is the only means of solving problems. Sequential thinking gave us science and technology, which have worked miracles for the benefit of humanity. Logic and math are magnificent discoveries, and their value is truly incalculable. But we must recognize the limitations of logical thinking along with its enormous strengths. Logical/mathematical thinking can successfully handle a narrow range of precisely articulated problems. But it will never solve a huge range of messy problems, especially almost anything to do with human behavior.
You may find it almost hypocritical of me to state that logic/mathematics cannot tackle human behavior — I’m the guy who spent much of his career attempting to build interactive storytelling by applying logic/mathematics to dramatic behavior. But there’s a huge difference between a character in a story and a real human being. Storytelling is not merely a recitation of real events. A long time ago, in a galaxy far, far away, there never was a Luke Skywalker or a Darth Vader. Frodo never took the One Ring to Mount Doom. Neo never fought Agent Smith. Captain Kirk never outsmarted a computer by challenging it with a logical paradox. These are all stories!
We can mimic human behavior, but we cannot determine it.