Contents
Guest Editorial: Conference Lessons
Dave Menconi
Software Reliability Testing
Cem Kaner
What is a Computer?
Chris Crawford
Bringing Characters To Life
David Graves
A Mickey Mouse Approach to Game Design
Jeff Johannigman
Real Life in a Box or, How to Design and Implement a Computer Vehicle Simulator
Gordon Walton, John Polasek, and Karen Hunter — Digital Illusions, Inc.
Editor Chris Crawford
Subscriptions The Journal of Computer Game Design is published six times a year. To subscribe to The Journal, send a check or money order for $30 to:
The Journal of Computer Game Design
5251 Sierra Road
San Jose, CA 95132
Submissions Material for this Journal is solicited from the readership. Articles should address artistic or technical aspects of computer game design at a level suitable for professionals in the industry. Reviews of games are not published by this Journal. All articles must be submitted electronically, either on Macintosh disk, through MCI Mail (my username is CCRAWFORD), through the JCGD BBS, or via direct modem. No payments are made for articles. Authors are hereby notified that their submissions may be reprinted in Computer Gaming World.
Back Issues Back issues of the Journal are available. Volume 1 may be purchased only in its entirety; the price is $30. Individual numbers from Volume 2 cost $5 apiece.
Copyright The contents of this Journal are copyright © Chris Crawford 1988.
_________________________________________________________________________________________________________
Guest Editorial: Conference Lessons
Dave Menconi
[Dave is a member of the Conference Committee.]
Now that Computer Game Developer’s Conference II is over, it is time to sort out its lessons.
I noticed that a large number of lecture attendees seemed intent on giving a mini-lecture from the audience. In a number of lectures, the lecturer lost control of the information flow.
If you have something that adds to the lecture, say your piece briefly. It is inappropriate to prattle on. Once the speaker gives you the floor, it is hard for him to cut you off. So you will have to be the one to cut yourself off.
I saw one case of audience digression in which a member of the audience took strong exception to the points made by the speaker and spoke passionately about his own point of view. The speaker listened attentively and then, after a few brief comments, gave the floor back to the challenger! This happened near the end of the lecture period and so the organized lecture disintegrated into a jumble of ideas.
In another incident, it was the speaker who wandered far afield. An employee of a publisher, he took us through the first half of the speech right on track. But then he wandered off into a fifteen-minute commercial for one of his products! He later claimed that he was trying to use this product to illustrate a point. This thinly-veiled excuse would have been tolerable if the product had some relation to the topic, but this was simply not the case.
This is the risk we take in granting publishers a platform to speak at our conferences: they may spend a lot of time — our time — patting themselves on the back. In all fairness, this was the only case of back-patting I saw.
There were numerous positive incidents that I won’t go into because they indicate that we already understand appropriate behavior. One event worthy of note occurred when a lecturer, with only a few minutes left, offered to take some questions from the audience. Earlier there had been some off-topic squabbles between members of the audience and it might have been these that prompted someone to say, “Please don’t call on someone. I’d rather hear from you.” This is a case of someone taking responsibility for making sure that he hears what he came to hear.
This brings me to something I noticed in my own panel. Someone asked if other developers used several playtesters. None of the panelists knew, so we found out. I asked for a show of hands, gauged the response, and declared, “Yes, other developers do.” This got a hearty laugh, but it wasn’t intended as comic relief. The point is that the conference is not for the purpose of hearing from the industry greats. We had lecturers as a focus for communication, not as the end purpose of the conference. The real wealth of information is in the audience!
So, how do we solve some of these problems? What could we have done to prevent diatribes, to curb over-zealous publishers, and to support timid speakers? We need to police ourselves and, to some degree, police each other.
Personally, I am more willing to speak up and politely interrupt diatribes and attempt to get the talk back on track. After all, I came to hear the views of the speaker on the topic, not the ideas of some hothead. And as for publishers letting me pay so that they can brag about their products... let’s just say that I won’t stand mute again!
Each person at the conference must be responsible for seeing that everything goes smoothly and that the conference works for everyone. But we must be careful how we effect this solution. If everyone feels that this responsibility gives him the right to interrupt, shout down, and hassle speakers, then the cure is worse than the disease. So take responsibility for yourself as well.
If your only reason for coming to the conference is to find out what other people have to say, please: STAY HOME! We need people at these conferences who will take an active part, both in communicating their thoughts and in making sure that the thoughts of others are communicated. I am pleased to say that we had people like that at this conference and that this is what made the conference a success!
See you again in May!
_________________________________________________________________________________________________________
Software Reliability Testing
Cem Kaner, Ph.D.
Software Testing Manager, Creativity Division, Electronic Arts
[This paper is based on the talk that Cem gave at the Computer Game Developer’s Conference.]
Frequently, discussions of testing focus on the recruiting and use of playtesters. These are important people, but their function is limited: improving playability.
In line with that, we’ve heard that it pays to hire people (or use volunteers) who are computer naive or who are inexperienced with the program. I agree. We’ve also heard that a paid expert with a strong grounding in the user interfaces of games is valuable. I strongly agree.
But many developers also rely on their playtesters for reliability testing — finding the bugs in the product. Are the playtesters adequate for this?
Yes: if the program is relatively simple and most users will follow similar paths through it. The playtesters will follow the same paths and find the same bugs as the market would, so all is well.
No: if the game is rich and complex. The customer base will explore many more paths than a few playtesters can.
So if you’re developing a complex game, how do you get the bugs out? Here are three alternatives:
The Traditional QA Group
I don’t know why we call these people “Quality Assurance.” They do no such thing. You do the quality assurance. You design the game, you decide how to structure the code, you decide how much sweat and time to put into the graphics, how the story will go, what level of humor there will be, all that stuff. Your standards go inside the box and your name goes on it. The tester helps you polish the user interface and s/he helps you find bugs. This is important assistance in a small, but very important, aspect of quality. It is not quality assurance.
A common model for software QA testing, called the “waterfall model,” is often presented as something serious. It doesn’t work, and most of us know that, but the professional literature in Software Engineering constantly tells quality assurance staffs to hold us to the waterfall.
QA staff try to do a good job. They read professional standards and try to follow them. Those standards tell them to become very conservative and very paperwork-oriented. Standards followers need detailed specifications from you, and they spend vast amounts of time creating their own. They discourage late-stage innovation. Realize that you’ve set these people up: you’ve told them to do “QA” but you only let them do 5% of a true QA job. On the inside, that makes them insecure. On the outside (depending on how you treat them), they can be arrogant, frustrating, and nitpicking.
Hire a Flock of Playtesters
You can pay $6 or $8 or $10 per hour and hire a playtester or a flock of them. Playtesters will work for the pleasure of it, but in reliability testing you get the skill set you hire for, at the level you pay for. It’s hard work, and the fun of working with a new game goes away after they’ve hammered at the same part of the game for the umpty-fifth time.
Playtesters who don’t understand the methodology of software testing are not effective bug finders.
Use Software Testers
This is the right approach.
You might find software testers easily or you may have to train them yourself. Some playtesters become excellent reliability testers. You may use the same people for playtesting during early development and for reliability testing later. However you find them, it’s important for you to clarify your objectives for their work, to yourself and to them.
A reliability tester’s key task is to find and report bugs. You want them to do this quickly, to report the problems reproducibly, and to cover the program thoroughly.
The first problem that reliability testers face is that you’ve already tested the program pretty thoroughly. We hear estimates of the number of bugs in a program when it’s passed to a tester: the industry average is about one bug per 100 lines of code. These are called public bugs, the bugs that are still there when you show the program to someone else. There are also private bugs, which you find before showing the program. The few small, informal studies of private bugs suggest that the average private bug rate is about one bug per line of code.
After making a mistake in every line of the program, you proofread it, fix it, compile it, fix it, LINT it, fix it, test it, fix it, analyze it, fix it, and finally you give it to testers with one mistake per hundred lines.
No wonder you think you’ve tested this code thoroughly! You’ve fixed 99% of the bugs!
The reliability tester has to find the remaining 1%. S/he has to look where you haven’t looked, or find test cases that you haven’t tried. This is a highly personal, psychological challenge. Of course, there are methods for testing the game “completely,” but if the tester follows those blindly, s/he can hold your game up for months or years.
A good tester understands standard testing methodology well, so well that s/he understands how to take shortcuts, how to prioritize the tasks, how to guess what can be skipped or postponed, and how to go for the throat quickly, so s/he can find the important bugs as early as possible. A producer at EA complains of a senior tester who reminds him of a neighborhood bully, picking on this poor defenseless little program. The poor little bugs just don’t have a chance. This is what you want.
Programmers are trained to do “path and branch” testing: testing based on a listing of the code. You make sure that you execute every statement and try every branch in the program. Some books, and even some software testing consulting companies, call this “complete testing.” It is not.
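To make the gap concrete, here is a minimal sketch in C. The routine, its names, and its numbers are invented for illustration; they are not drawn from any real product.

    /* A routine with full path and branch coverage that still harbors a bug. */
    #include <stdio.h>
    #include <limits.h>

    /* Clamps negative damage, then averages damage with armor. */
    static int hit_strength(int damage, int armor)
    {
        if (damage < 0)
            damage = 0;
        return (damage + armor) / 2;   /* overflows when damage + armor > INT_MAX */
    }

    int main(void)
    {
        /* These two cases execute every statement and both sides of the
           branch, so "path and branch" testing is fully satisfied... */
        printf("%d\n", hit_strength(-5, 10));   /* prints 5  */
        printf("%d\n", hit_strength(30, 10));   /* prints 20 */

        /* ...yet this special data condition still overflows the addition. */
        printf("%d\n", hit_strength(INT_MAX, INT_MAX));
        return 0;
    }

Both passing tests cover the code completely by the path-and-branch standard, yet the third call fails on a special data condition.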
These are some of the holes you leave open, where testers will find the bugs:
WHAT CODE PATHS DON’T TELL YOU
1. Timing-related bugs
2. Unanticipated error conditions
3. Special data conditions
4. Validity of displayed stuff
5. User interface consistency
6. User interface everything-else
7. Interaction with background tasks
8. Configuration/compatibility
9. Volume, load, hardware fault
What you have to find in a reliability tester is someone who can quickly abstract the internal limits of variables in the program, test it at those limits, and find the kinds of bugs listed above. You need someone who can do this quickly and thoroughly, and report bad results compellingly.
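As a sketch of what testing at the limits looks like, consider the following C fragment. The inventory routine, its name, and its capacity are hypothetical, invented here for illustration.

    #include <stdio.h>

    #define MAX_ITEMS 64    /* assumed internal limit of the program under test */

    static int item_count = 0;

    /* Adds n items to the inventory; returns 0 on success, -1 if full. */
    static int add_items(int n)
    {
        if (item_count + n > MAX_ITEMS)
            return -1;
        item_count += n;
        return 0;
    }

    int main(void)
    {
        /* Off-by-one bugs cluster at the limits, so probe exactly there:
           fill to one item below the limit, step exactly onto it, then
           try to step past it. */
        static const int probes[] = { 0, 1, MAX_ITEMS - 2, 1, 1 };
        int i;

        for (i = 0; i < 5; i++) {
            int rc = add_items(probes[i]);
            printf("add_items(%d) -> %d (count now %d)\n",
                   probes[i], rc, item_count);
        }
        return 0;
    }

A tester who can infer that something like MAX_ITEMS exists without ever seeing the source, and who then drives the game to 63, 64, and 65 items, is doing exactly the job described above.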
There are lots of ways to find these people, or to find people who have potential and can be trained, but it’s up to you to find them. For complex software, for reliability testing, voluntary playtesters and untrained, underpaid amateurs just don’t cut it.
_________________________________________________________________________________________________________
What is a Computer?
Chris Crawford
It is often instructive to ask simple, fundamental questions, and so I propose to address this most fundamental question. Just what is a computer?
The most obvious approach is to provide a conventional engineering definition. A computer is a device with a CPU, some RAM, an address bus, data bus, and so forth. We could construct an elaborate definition or a simple one. But any such definition is easily shattered. I could readily build an absurd machine to meet any such specification. I could bollix up the ALU with inverted arithmetic relations, or scramble the address lines so that it could not retrieve data reliably. I could create an Alice in Wonderland operating system (I once did just that for the Atari 800) that defies all reason — and it would probably still meet a technical definition of a computer.
A conventional engineering definition is bound to fail because the computer is first and foremost a tool, not an object. That is, it is defined not by what it is but by what it does. Like any tool, the computer is best described by its function rather than its construction.
So what is the function of the computer? We don’t know. This really shouldn’t bother us — after all, any new technology takes a while to seep into the consciousness of a civilization. Gutenberg saw the printing press as a better way to make Bibles. The early automobile was seen as a touring vehicle for the well-to-do. How could Gottlieb Daimler have foreseen naked-lady hood ornaments?
Actually, in the early days we thought we knew what a computer was. A computer, back then, was a machine used to carry out elaborate mathematical calculations. You programmed it to perform the desired calculations, submitted the program to the computer, and waited. The computer crunched your numbers and vomited the results onto several reams of wide green-striped paper. The computer, in short, was a batch-oriented number-cruncher.
But then came the revolution. At first they were called microcomputers, and the term seemed apt. They were, after all, tiny versions of the big mainframes, so we treated them that way. The software we wrote in those days felt like micro-versions of the stuff the Big Boys played with.
It is a universal law of revolutions that they never work out the way the revolutionaries intend, and we were no exception. What we thought was the Microcomputer Revolution turned out to be the Personal Computer Revolution. The shift was not the work of any individual or company — we just bumbled into it. Once we had machines with a modicum of power, we began to stumble into their real uses. VisiCalc was the first such discovery. Dan Bricklin offered it to Apple Computer and was politely told that nobody could see any use for the thing. After all, nothing like it existed in the “real world” of mainframe computing. But users, real people not prejudiced by mainframe-think, got their hands on the product and realized its immense utility.
Why was VisiCalc so successful? I attribute its phenomenal success to a single factor: interactivity. It is one thing to submit a pile of numbers to a mainframe and wait for it to pass judgment. It is entirely another matter to put in your numbers, see a result, and then play with them in real time. The user could change a number and see the effect of the change instantly. The responsiveness of the system encouraged the user to play with the numbers, to explore the behavior of the financial system in the spreadsheet, and to find the best solution.
There were other areas of experimentation with personal computers. Word processing was an early success borrowed from the mainframe world, but it rapidly evolved away from the mainframe approach. While WordStar with its mainframe-style embedded control characters enjoyed great initial success, it represented an evolutionary dead end. What really caught on was the WYSIWYG style. The first hardware-software combination to get it right was MacWrite on the Macintosh. The WYSIWYG style of word processing has never been challenged on the Macintosh and is making inroads into the IBM world.
Why has WYSIWYG word processing enjoyed such success? Again, I attribute it to the interactivity offered by WYSIWYG systems. The mainframe-style word processor requires the user to print out the document before seeing the results of his work. The WYSIWYG word processor shows it directly, so that when the user makes a change to the document, the effects of the change are immediately apparent. This encourages the user to experiment more freely, to try variations, to play with the wording. This is the source of WYSIWYG’s superiority.
Another field in which personal computers are used is simple programming. The dominant language during the early days was BASIC. Everybody knew that BASIC was a naughty language, but it remained the most popular language for personal computers. So de rigueur was BASIC that the Macintosh was initially castigated for failing to include the language in ROM.
Why did BASIC maintain such a firm grip on the personal computer, despite its many flaws? The answer is the same as with spreadsheets and word processors: interactivity. BASIC was the only available interpretive language. All of the righteous languages (Pascal, C, etc.) were compiled, and the additional step of compilation precluded interactivity. Only BASIC allowed the user to enter a line of code and immediately see its effect. Only BASIC allowed the user to play with the program’s structure and content. BASIC retained its popularity until the advent of fast compilers such as Turbo Pascal restored interactivity to the programming process.
I could go on with other examples but I think my point is clear: the central focus of the personal computer revolution is interactivity. The very essence of personal computing is a human using a computer to explore a messy problem. The human uses his creativity and values to propose a variation on the existing scheme. The computer uses its computational resources to work out the ramifications of the variation and presents the resultant order of things to the human. The computer’s response is so fast that the human is encouraged to try again with another variation, to explore all the possibilities, to play with the situation.
There’s that verb again: play. Did you notice that it cropped up in the discussion of each of the three major applications? Spreadsheet users play with the numbers; word processor users play with the words; and BASIC users play with the programs. Play and interaction are closely coupled concepts. Rich, deep interaction is indistinguishable from play. Is joyous lovemaking a form of play or interaction? Is a child’s first encounter with a kitten play, or is it interaction? When humanity lofts one of its own into space, are we playing with the universe, or interacting with it?
The notion of play brings us to games, the fourth great application area of personal computers. Sadly, it is an area some would deny. Last week there was a computer show in Anaheim proudly proclaiming “No Games Here!” Most computer magazines would rather not talk about computer games. Apple Computer seems to wish that they didn’t exist (‘not on our computers, you don’t!’).
What is so perverted about this attitude is its denial of the fundamentals of the personal computer revolution. For games are not merely a valid part of the revolution; no, I claim that computer games come closer to the very essence of computing than anything else we have created.
To make this case, we need not appeal only to the intrinsic playfulness of games and some mystical connection between play and good computing. No, the connection is closer and tighter than that. Go back and reread the paragraph presenting “the very essence of personal computing”. Does it not describe the process of playing a computer game? Indeed, does not the playing of a computer game carry out this process better — more richly, more deeply, more intensely — than any spreadsheet, word processor, or programming language? Will not the user of these latter three programs pause at times to wait for the computer? Will he not occasionally break the interaction to look up some obscure command or capability? Will his attention not drift away from the tedious task at hand?
Not so the user of the game. Look at him: he sits entranced, locked in deep interaction with the computer. No pauses or daydreams compromise the intensity of his thinking. Nor does the computer waste millions of cycles in idle wait loops; it rushes to process the most recent keypress or mouseclick, to update the screen or send a sound to the speaker.
I am not talking here about action games. My argument does not require that all good games be frantic exercises in sub-cerebellar function. What I am talking about here is not the velocity of interaction but its intensity. Action games move tiny amounts of information through the interaction circuit at high speeds. The same result can be obtained by moving larger amounts of information at lower speeds, and many good games do just that. The crucial observation is that the total flow of information — volume times speed — remains higher for games than for other applications. Games offer more interaction than spreadsheets or word processors.
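To put illustrative numbers on the claim (the figures are invented, purely for arithmetic’s sake): an action game that moves 4 bits of information per joystick event, at 5 events per second, carries 20 bits per second. A strategy game that moves 200 bits per considered move, one move every 10 seconds, carries the same 20 bits per second by the opposite route. A spreadsheet user who enters a 50-bit number and then studies the screen for a minute averages less than one bit per second. It is the product, volume times speed, on which the comparison turns.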
That is why games come closer to the essence of computing than any other field of software.
_________________________________________________________________________________________________________