What-is-a-Computer?

(Title modified on July 7, 2005 to save Googlers the false hits on the title)

It is often instructive to ask simple, fundamental questions, and so I propose to address this most fundamental question.

The most obvious approach is to provide a conventional engineering definition. A computer is a device with a CPU, some RAM, an address bus, data bus, and so forth. We could construct an elaborate definition or a simple one. But any such definition is easily shattered. I could readily build an absurd machine to meet any such specification. I could bollix up the ALU with inverted arithmetic relations, or scramble the address lines so that it could not retrieve data reliably. I could create an Alice in Wonderland operating system (I once did just that for the Atari 800) that defies all reason -- and it would probably still meet a technical definition of a computer.

A conventional engineering definition is bound to fail because the computer is first and foremost a tool, not an object. That is, it is defined not by what it is but by what it does. Like any tool, the computer is best described by its function rather than its construction.

So what is the function of the computer? We don’t know. This really shouldn’t bother us -- after all, any new technology takes a while to seep into the consciousness of a civilization. Gutenberg saw the printing press as a better way to make Bibles. The early automobile was seen as a touring vehicle for the well-to-do. How could Gottlieb Daimler have foreseen naked lady hood ornaments?

Actually, in the early days we thought we knew what a computer was. A computer, back then, was a machine used to carry out elaborate mathematical calculations. You programmed it to perform the desired calculations, submitted the program to the computer, and waited. The computer crunched your numbers and vomited the results onto several reams of wide green-striped paper. The computer, in short, was a batch-oriented number-cruncher.

But then came the revolution. At first they were called microcomputers, and the term seemed apt. They were, after all, tiny versions of the big mainframes, so we treated them that way. The software we wrote in those days felt like micro-versions of the stuff the Big Boys played with.

It is a universal law of revolutions that they never work out the way the revolutionaries intend, and we were no exception. What we thought was the Microcomputer Revolution turned out to be the Personal Computer Revolution. The shift was not the work of any individual or company -- we just bumbled into it. Once we had machines with a modicum of power, we began to stumble into their real uses. VisiCalc was the first such discovery. Dan Bricklin offered it to Apple Computer and was politely told that nobody could see any use for the thing. After all, nothing like it existed in the "real world" of mainframe computing. But users, real people not prejudiced by mainframe-think, got their hands on the product and realized its immense utility.

Why was VisiCalc so successful? I attribute its phenomenal success to a single factor: interactivity. It is one thing to submit a pile of numbers to a mainframe and wait for it to pass judgment. It is entirely another matter to put in your numbers, see a result, and then play with them in real time. The user could change a number and see the effect of the change instantly. The responsiveness of the system encouraged the user to play with the numbers, to explore the behavior of the financial system in the spreadsheet, and to find the best solution.
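
The dynamic is easy to sketch in modern terms. The toy below (written in Python, and nothing like VisiCalc’s actual implementation; the cell names and formula syntax are invented for illustration) shows the heart of it: the moment a number changes, every formula that depends on it is recomputed and the new answer appears at once. The sketch assumes formulas refer only to cells defined before them.

    # A toy spreadsheet core: cells hold either numbers or formula strings.
    # Change any cell, recalculate, and the new result is visible immediately.
    def recalculate(cells):
        values = {}
        for name, content in cells.items():
            if isinstance(content, str) and content.startswith("="):
                # Evaluate the formula against the values computed so far
                # (a crude use of eval, purely for illustration).
                values[name] = eval(content[1:], {}, values)
            else:
                values[name] = content
        return values

    cells = {"A1": 100, "A2": 250, "A3": "=A1+A2"}
    print(recalculate(cells)["A3"])   # 350
    cells["A1"] = 500                 # the user plays with a number...
    print(recalculate(cells)["A3"])   # ...and sees 750 at once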

There were other areas of experimentation with personal computers. Word processing was an early success borrowed from the mainframe world, but it rapidly evolved away from the mainframe approach. While WordStar with its mainframe-style embedded control characters enjoyed great initial success, it represented an evolutionary dead end. What really caught on was the WYSIWYG style. The first hardware-software combination to get it right was MacWrite on the Macintosh. The WYSIWYG style of word processing has never been challenged on the Macintosh and is making inroads into the IBM world.

Why has WYSIWYG word processing enjoyed such success? Again, I attribute it to the interactivity offered by WYSIWYG systems. The mainframe-style word processor requires the user to print out the document before seeing the results of his work. The WYSIWYG word processor shows the results directly on the screen, so that when the user makes a change to the document, the effects of the change are immediately apparent. This encourages the user to experiment more freely, to try variations, to play with the wording. This is the source of WYSIWYG’s superiority.
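
The same loop can be sketched in a few lines. The fragment below is a Python toy, bearing no resemblance to MacWrite’s internals; the redraw routine and the sample sentence are invented for illustration. The structural point is all that matters: every edit is followed immediately by a redraw, so the page on the screen is always the page as it would print.

    # Redraw after every edit: the screen always shows the current document.
    def redraw(text, width=40):
        print("\x1b[2J", end="")              # clear the terminal "page"
        for i in range(0, len(text), width):  # re-lay-out the text line by line
            print(text[i:i + width])

    document = ""
    for keystroke in "The quick brown fox jumps over the lazy dog.":
        document += keystroke                 # apply the edit...
        redraw(document)                      # ...and show its effect at once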

Another field in which personal computers are used is simple programming. The dominant language during the early days was BASIC. Everybody knew that BASIC was a naughty language, but it remained the most popular language for personal computers. So de rigueur was BASIC that the Macintosh was initially castigated for failing to include the language in ROM.

Why did BASIC maintain such a firm grip on the personal computer, despite its many flaws? The answer is the same as with spreadsheets and word processors: interactivity. BASIC was the only available interpreted language. All of the righteous languages (Pascal, C, etc.) were compiled, and the additional step of compilation destroyed interactivity. Only BASIC allowed the user to enter a line of code and immediately see its effect. Only BASIC allowed the user to play with the program’s structure and content. BASIC retained its popularity until the advent of fast compilers such as Turbo Pascal restored interactivity to the programming process.
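
The immediacy an interpreter offers is worth making concrete. The sketch below is a bare read-eval-print loop written in Python rather than BASIC; it is not how the BASIC interpreters of the day worked internally, only the shape of the bargain they offered: type a line, see its effect, and a mistake costs one error message rather than a recompile.

    # A bare read-eval-print loop: each line runs the moment it is entered.
    def repl():
        environment = {}                              # the user's variables live here
        while True:
            line = input("> ")
            if line.strip().lower() == "bye":
                break
            try:
                try:
                    value = eval(line, environment)   # an expression? show its value at once
                    if value is not None:
                        print(value)
                except SyntaxError:
                    exec(line, environment)           # a statement (an assignment, say)? just run it
            except Exception as error:
                print("error:", error)                # a bad line costs a message, not a rebuild

    repl()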

I could go on with other examples but I think my point is clear: the central focus of the personal computer revolution is interactivity. The very essence of personal computing is a human using a computer to explore a messy problem. The human uses his creativity and values to propose a variation on the existing scheme. The computer uses its computational resources to work out the ramifications of the variation and presents the resultant order of things to the human. The computer’s response is so fast that the human is encouraged to try again with another variation, to explore all the possibilities, to play with the situation.

There’s that verb again: play. Did you notice that it cropped up in the discussion of each of the three major applications? Spreadsheet users play with the numbers; word processor users play with the words; and BASIC users play with the programs. Play and interaction are closely coupled concepts. Rich, deep interaction is indistinguishable from play. Is joyous lovemaking a form of play or interaction? Is a child’s first encounter with a kitten play, or is it interaction? When humanity lofts one of its own into space, are we playing with the universe, or interacting with it?

The notion of play brings us to games, the fourth great application area of personal computers. Sadly, it is an area some would deny. Last week there was a computer show in Anaheim proudly proclaiming "No Games Here!" Most computer magazines would rather not talk about computer games. Apple Computer seems to wish that they didn’t exist ("not on our computers, you don’t!").

What is so perverted about this attitude is its denial of the fundamentals of the personal computer revolution. For games are not merely a valid part of the revolution; no, I claim that computer games come closer to the very essence of computing than anything else we have created.

To make this case, we need not appeal only to the intrinsic playfulness of games and some mystical connection between play and good computing. No, the connection is closer and tighter than that. Go back and reread the paragraph presenting "the very essence of personal computing". Does it not describe the process of playing a computer game? Indeed, does not the playing of a computer game carry out this process better -- more richly, more deeply, more intensely -- than any spreadsheet, word processor, or programming language? Will not the user of these latter three programs pause at times to wait for the computer? Will he not occasionally break the interaction to look up some obscure command or capability? Will his attention not drift away from the tedious task at hand?

Not so the user of the game. Look at him: he sits entranced, locked in deep interaction with the computer. No pauses or daydreams compromise the intensity of his thinking. Nor does the computer waste millions of cycles in idle wait loops; it rushes to process the most recent key press or mouse click, to update the screen or send a sound to the speaker.
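
The structure being described is the game loop, and a skeleton of it is easy to write down. The version below is a Python sketch with placeholder functions (read_input, update_world, and render are stand-ins I have invented, not any particular game’s code); what matters is the shape: every pass through the loop reads the player’s latest input, updates the world, and redraws, so nothing the player does waits around to be noticed.

    import time

    def read_input():
        return []                     # placeholder: a real game polls the keyboard and mouse here

    def update_world(state, inputs, dt):
        state["t"] += dt              # placeholder: apply the player's actions to the world
        return state

    def render(state):
        pass                          # placeholder: redraw the screen, start any sounds

    def game_loop(frame_seconds=1.0 / 60):
        state = {"t": 0.0}
        last = time.monotonic()
        for _ in range(600):          # ten seconds of frames; a real game loops until quit
            now = time.monotonic()
            dt, last = now - last, now
            state = update_world(state, read_input(), dt)
            render(state)
            # pace the loop to a steady frame rate rather than waiting on the user
            time.sleep(max(0.0, frame_seconds - (time.monotonic() - now)))

    game_loop()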

I am not talking here about action games. My argument does not require that all good games be frantic exercises in sub-cerebellar function. What I am talking about here is not the velocity of interaction but its intensity. Action games move tiny amounts of information through the interaction circuit at high speeds. The same result can be obtained by moving larger amounts of information at lower speeds, and many good games do just that. The crucial observation is that the total flow of information -- volume times speed -- remains higher for games than for other applications. Games offer more interaction than spreadsheets or word processors.
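
To see how the arithmetic works, here is a back-of-the-envelope comparison. Every number in it is invented purely for illustration -- these are not measurements -- but the pattern is the claim: throughput is information per exchange times exchanges per minute, and both routes lead games to a higher figure than the productivity applications.

    # Invented, illustrative numbers only:
    # throughput = bits per exchange x exchanges per minute.
    action_game    = 2   * 600    # tiny moves at very high speed
    strategy_game  = 200 * 6      # large, considered moves at low speed
    spreadsheet    = 50  * 4      # a handful of cell edits per minute
    word_processor = 10  * 30     # short bursts of typing between pauses

    for name, rate in [("action game", action_game), ("strategy game", strategy_game),
                       ("spreadsheet", spreadsheet), ("word processor", word_processor)]:
        print(f"{name:15s} ~{rate} bits per minute")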

That is why games come closer to the essence of computing than any other field of software.