Contents
A Grain of Sand, A Gust of Wind
Chris Crawford
The Kitchen Sink Factor and Self-Tuning Games
Noah Falstein
Multi-Player Games
David Joiner
What Would You Say if We Sold a MILLION of these?
Forrest Walker
A Case for Scaled Interactivity in Game Design
Doug Clapp
Color Versus Resolution
Chris Crawford
Editor Chris Crawford
Subscriptions The Journal of Computer Game Design is published six times a year. To subscribe to The Journal, send a check or money order for $30 to:
The Journal of Computer Game Design
5251 Sierra Road
San Jose, CA 95132
Submissions Material for this Journal is solicited from the readership. Articles should address artistic or technical aspects of computer game design at a level suitable for professionals in the industry. Reviews of games are not published by this Journal. All articles must be submitted electronically, either on Macintosh disk, through MCI Mail (my username is CCRAWFORD), through the JCGD BBS, or via direct modem. No payments are made for articles. Authors are hereby notified that their submissions may be reprinted in Computer Gaming World.
Back Issues Back issues of the Journal are available. Volume 1 may be purchased only in its entirety; the price is $30. Individual numbers from Volume 2 cost $5 apiece.
Copyright The contents of this Journal are copyright © Chris Crawford 1992.
_________________________________________________________________________________________________________
A Grain of Sand, A Gust of Wind
Chris Crawford
Leo Christopherson was one of the earliest computer game designers. He published Android Nim in late 1978. In terms of depth and substance, Leo’s game was nothing to write home about: just plain old nim. But its graphics were sensational. Leo turned the stacks sideways and replaced the static pieces with animated robots. Their little heads constantly moved back and forth, the eyes wandered, and they shifted stance. This was genuine animation on a TRS-80!
The reviewers went wild. This game was fabulous, it was magnificent, it was glorious. Leo basked in the approbation of the world. He was honored and admired.
And then, a grain of sand, a gust of wind, and Leo Christopherson was gone. He designed one or two more games, but they were straightforward repeats of Android Nim. People lost interest in his cute little animations. I never heard anything more of Leo Christopherson.
Bob Bishop was one of the first Apple II programmers. He worked at Apple from the beginning, and he became one of the pioneers of graphics techniques on that machine. Bob used games to show off his graphics techniques. They weren’t very impressive in terms of gameplay, but boy were they snazzy in the graphics department. Bob’s games did things that nobody had ever seen before.
Apple II owners loved his stuff. They bought everything he produced. They loved him. The magazines and reviewers gushed with praise. Awards showered down upon him. Bob Bishop was the darling of the Apple II community. And he was rich, too.
And then, a grain of sand, a gust of wind, and Bob Bishop was gone. His games were never much fun, and there were other games that offered more substance. Other people were learning some of Bob’s tricks. His work no longer had the same sizzle. Bob drifted away. I’ve heard tell that Bob is somewhere near Santa Cruz these days; I don’t know what he’s doing.
Nasir Gebelli picked up where Bob Bishop left off. Nasir developed advanced graphics techniques for the Apple II. He was fast and prolific, grinding out game after game on a time scale of months. An entire publisher, Sirius Software, was founded on Nasir’s output. And what output it was! Nasir had developed dozens of tricks for squeezing the fastest animations out of the Apple. His games boasted fast, full-screen animations that nobody else could match. He raked in the royalties; wealth was his in a matter of months. His games were on every store shelf; they were reviewed in glowing terms in every magazine. Nasir Gebelli was a one-man gold mine. A game need merely have the simple tag line “By Nasir” to be assured of massive sales figures.
And then, a grain of sand, a gust of wind, and Nasir Gebelli was gone. Sometime around 1983 or 1984, in the general collapse of the games industry and the specific collapse of Sirius Software, Nasir Gebelli disappeared from the scene. I don’t know where he is now.
Greg Christenson was a high school student when he burst upon the scene. Bright, shy, and quiet, Greg put together just one game: Caverns of Mars for the Atari. It was a simple vertical scrolling game, not too different from Defender. After all, Greg was only a high school student, new to programming, and using the Atari Assembler/Editor cartridge as his development tool. He really didn’t know much about game design per se. He simply started with Defender, made it vertical, and then added interesting bits and pieces until he had a game.
But the graphics were fantastic. The game used many of the graphics capabilities of the Atari, and the result was impressive. Caverns of Mars sold a zillion copies. Greg earned a ton of money. The press loved him. Here was a high school kid programming a hit game in just 8 weeks. Talk about a Cinderella story! Atari gave him a $25,000 award for the best game published by the Atari Program Exchange. Everybody wondered excitedly what this wunderkind would accomplish in coming years.
But then, a grain of sand, a gust of wind, and Greg Christenson was gone. I don’t know what ever became of Greg. He just disappeared.
Jon Harris was another wunderkind. I remember he came to one of my training seminars for the Atari computers in 1981; he didn’t make much of an impression on me. But a year later, Jon unloaded Jawbreakers on the world. It was a Pac-Man clone, pure and simple. Jawbreakers was a beautiful game, better than the Pac-Man that Atari itself produced. It had lovely music, beautiful animations, great sound effects — everything about this game was excellent. Of course, the design itself was a complete nothingburger — it was just plain old Pac-Man with a few minor embellishments. But who cared when the graphics were so great?
Jawbreakers generated quite a legal row between Atari and Sierra. The legal battle dragged on for some months, ending in a Pyrrhic victory for Sierra. Jon wrote another game for Sierra, I believe. He was celebrated in Steven Levy’s book Hackers, and there were of course the adulation and favorable reviews that go with creating a hit game.
But then, a grain of sand, a gust of wind, and Jon Harris was gone. I’ve been told that he went to work for an advertising company, but that was years ago.
Jonathan Gay and Mark Stephen Pierce were a hot pair. Together, they created Dark Castle and Beyond Dark Castle, two of the hottest Macintosh games ever created. The games could not boast much in the way of creativity: they were, after all, straightforward running, jumping, climbing games. But they bristled with animations and digitized sounds at a time when such things were considered sinfully luxurious. And Macintosh players loved these two games. They bought a huge number of copies, dumping bushels of money all over Silicon Beach Software. The games collected every award around.
But now, a grain of sand, a gust of wind, and nothing is to be heard from Jonathan Gay and Mark Stephen Pierce. I don’t know where they are now or what they’re doing.
There are an uncountable number of grains of sand. The wind will never stop blowing. There are still those in our industry who follow the paths taken by these earlier stars. Some of them even now bask in acclaim and wealth. Their time will come.
[My thanks go to the unknown orator from whom I stole the lovely phrase used in the title. Indeed the structure of this essay is modelled upon his speech.]
_________________________________________________________________________________________________________
Announcements
Good news for you myriad IBM Siboot fans, so cruelly crushed by the news that the IBM port of Siboot had been aborted. Kevin Gliner rose to the challenge and wrapped up the port over the last few weeks. We now have a working version of Siboot for MS-DOS machines. While some of the animations weren’t ported, this is more than made up for by the addition of color to the game. So now at last you can buy Siboot for your MS-DOS machine. It’ll cost you $25 plus $5 shipping. The complete Turbo Pascal source code plus designer’s notes (several hundred pages of material) costs $150.
Anybody who wants to be listed in the 1992 Home Computer Software Industry Sourcebook should contact Jack Thornton at 2650 Center Road, Novato, CA 94947. The final deadline for listings is April 1, 1992, so contact Jack well before then.
_________________________________________________________________________________________________________
The Kitchen Sink Factor and Self-Tuning Games
Noah Falstein
The true measure of a game ultimately boils down to a simple question: Is it fun? The answer to the question is also often a simple yes or no. But the reasons for the answer, as well as the answer itself, vary widely from person to person. Fun is a very subjective yardstick. The game designer is often faced with design decisions that will make a game more fun for some people and less fun for others. The designer has to compromise, to pick an audience and aim for it, to narrow the sights. You can’t please everyone.
Or can you?
Some years ago I began to notice a pattern in some best-selling computer games. The first example (and still one of the best) was Richard Garriott’s Ultima IV. I consider myself a recovering roleplayer of the old school, and have had little interest in the plethora of mindless, random hack and slash clones that have filled the market since the early days of computer games. But I was curious about the Ultima series because of its impressive financial success. Ultima IV hooked me with its magic system. There was still a lot of combat (too much for my tastes) but I found that I put up with it because I found the idea of discovering and concocting magic spells very appealing. I also was drawn in by the unfolding story, the process of barter and trade, and the sense of exploration. Then one day I spent some time on a BBS where the game was being discussed. The bulk of the comments were centered on how to kill monsters, how to find better weapons to kill monsters, and how many monsters each player had killed. Was this the same game I was enjoying? Didn’t these people get the idea of the Avatar’s Virtues? Obviously not. But they were still enjoying it a lot.
I realized that Ultima IV was a game composed of many interlocking subgames — the story, the dungeons, fighting monsters, finding spell ingredients, making money, casting spells, finding important clues, and becoming an Avatar by demonstrating various virtues. The subgames interlocked in such a way as to make it possible to almost ignore (or at least de-emphasize) one in favor of another. For example, if you didn’t enjoy combat you could run from many monsters, or find a balloon to travel over them. You could make money by killing monsters, selling items you bought in one place for a profit at another, or by stealing. You could spend money on essential items needed for fighting, magic, or advancing the story. There seemed to be, if not something for everyone, at least something for a lot of different gaming tastes. I coined the term “Kitchen Sink Factor” as a way of designating a game that seemed to include everything but the kitchen sink.
The next great example of this kind of game came from Sid Meier of Microprose, who is in my opinion the master of the interlocking subgame design. His Pirates was derivative of some earlier works such as Bunten’s Seven Cities of Gold, even as his Railroad Tycoon is derivative of Wright’s Sim City, but in both cases Meier identified and extracted the best parts of those games, added his own concepts and distinctive twist, and came up with hit games that transcend traditional genres. Pirates combines a little of an arcade game, a little simulation, a little strategy/wargame, and a little roleplaying game. You can emphasize one part of the game and avoid others simply by play style — for instance, if you’re a great arcade gamer you can capture enemy ships with brute force and bravado against tough odds, but if you prefer strategy you can maneuver and plan your way into crushing strategic advantages and leave little to reflexes and chance. What’s more, you don’t need instructions to change the game in this way — just do what you like to reach the ultimate goals of power and wealth, and avoid the parts you dislike. You won’t be able to completely bypass one part of the game, but you can minimize it. Meier’s Railroad Tycoon takes this art to an even greater height, allowing you to automate the parts of the game you’re not interested in (such as running the railroad switches, or monitoring the stock market for takeover attempts) in favor of the parts you DO like. I’ve gradually come to realize that these kinds of games go beyond a “kitchen sink” approach, and enter the realm of self-tuning games. The games reach a wide audience by automatically tailoring themselves to the player’s tastes. It’s a very powerful design approach that has often paid off in sales that cut across traditional hard boundaries between less flexible game genres.
Some recent LucasArts (formerly Lucasfilm) Games’ efforts show new ways to make self-tuning games. Ron Gilbert’s The Secret of Monkey Island 2: LeChuck’s Revenge takes the explicit, user selectable approach similar to Meier’s and lets the player choose one of two game modes up front. The beginning adventurer who has never played (or played and never finished) a graphic adventure gets “Monkey Lite”, a streamlined, easy version of the game, with a condensed version of the full game’s story and locations (but a full dose of its humor). Hard-core gamers can try the “Monkey Classic” version, with more locations, surprises, and many more and tougher puzzles. The “Lite” version is designed so it can be played as a warm-up, and doesn’t ruin the “Classic” version for later play. It is hoped that this design will be a viable solution to the corner we designers often paint ourselves into: making more complex games for a specialized (and small) audience at the expense of the less avid and vocal (but much larger) general audience. If one caters to the avid gamers one runs the risk of overspecialization and shrinking markets, and if one caters to the general audience one runs the risk of alienating the core audience that’s critical to a game’s success.
Another new LucasArts game is Hal Barwood’s Indiana Jones and the Fate of Atlantis. The fact that this game also incorporates the lessons I’ve set down here is perhaps due to my presence as co-designer. In this graphic adventure, the player is challenged at an early stage of the game to enter a theater. There are three ways in, of roughly equal availability and difficulty. The player can find his way through a maze of boxes to climb a ladder that leads in, or beat up a guard, or talk his way past that guard. It is assumed that the player will find whichever way most appeals to him — maze lovers will go for the boxes, verbally inclined puzzlers will choose to talk their way through, and the action gamers will duke it out. Choosing one of these ways selects one of three paths through the game, each of which emphasizes the chosen method of problem solving. For example, the player that chooses fists will have many more opportunities to try his skill at action sequences. There’s also a mechanism for overriding the automatic choice built into the game and integrated into the story, to confirm that the player really wants the chosen path. This also gives the game a level of replayability unusual in a graphic adventure, by letting the player choose to play the game through on another path. The game manual is explicit about how to accomplish this, but it is assumed the vast majority of players will simply do what comes naturally the first time through and not even be aware that the game has tuned itself to their sensibilities.
What are the pros and cons of making a self-tuning game? If you do it well you increase your potential audience. But it’s easy to fail. Indy Atlantis has taken us a long time to develop because of the complexities of the three paths. The Pirates/Railroad Tycoon model is difficult to balance correctly. In order to be truly self-tuning, the game must give approximately equal weight to each play style, so that the individual differences in each player’s interests and skills make the difference in winning or losing. Often a game appears to be balanced, but is actually biased in favor of the preferred style of the designer or implementors. Having trusted consultants with different tastes than your own helps a great deal here. The chief difficulty in the Monkey Island 2 dual-difficulty method is avoiding twice the work of a single-mode game (to keep it cost-effective) while still giving players an interesting experience in each mode. In my opinion, games like this that aim at pleasing both the enthusiast and the beginner are more important to our future success than games that simply try to please different groups of enthusiasts. The market of computer owners who do not currently play games is huge, and if any of the new CD-based formats take off, it could approach the Nintendo audience in numbers. By definition most of these people will be new to computer gaming, and our current specialized games are not well adapted to beginners’ tastes. Self-tuning games can be a useful tool to help us catch the attention of these potential game players.
_________________________________________________________________________________________________________
Multi-Player Games
David Joiner
“OK, turn your face away from the screen while I enter my move.”
How many of us have heard that sad story? How is it that players can possibly put up with the inconvenience of having to “take turns” at the computer, when there are so many other games that allow you to interact directly, 100% of the time, without waiting? The answer lies in the fact that multi-player games offer a unique challenge, unavailable in more conventional computer games. Many of us game designers have long felt that it is a lot more fun to play against (or with) a person than it is to play against a computer opponent.
Most non-computer games are, in fact, multi-player games. Most of our traditional board-games like Monopoly or Scrabble are based around the assumption of multiple participants. Single player games, such as crossword puzzles, solitaire or mazes, are something of an anomaly, and often categorized as puzzles rather than games. Unlike the typical computer game, the rules and materials of most board games provide an environment for play to take place, rather than an opponent to play against.
Part of the added value of multi-player gaming stems from the fact that the human players are together in a social context that allows them to interact through a channel other than that of the game, in a way that allows emotional nuances to be carried across between players. For example, in a game like MIDI Maze (which runs on the Atari ST using the built-in MIDI ports as a rudimentary network), you can actually see the expression on your opponent’s face as you blast their critter. This “emotional bandwidth” is, I feel, an important element in games like this. It provides a human factor which is missing in computer games, a type of interaction that humans instinctively seek.
Well, if multi-player games are so fun, why aren’t there more of them? The problem is that most computer owners not only do not have two computers, many of them don’t have two people! In many cases, the problem of finding another person who is 1) interested in computers, 2) available at the same time you are, 3) interested in playing the same game as you are, and 4) has a compatible computer that can link up to yours, is insurmountable.
Thus, the tragedy is that although multi-player games are fun, there is no market for them. Computer game designers are forced to invent games which are fun when played by a single person. There are, however, a number of ways to side-step this problem:
On-Line games: The solution here is to pay money (lots and lots of it) to someone else to provide the technical foundation for players to play games together. Unfortunately, the face-to-face interaction component is missing, but the use of “chat” facilities helps to make up for this deficit. Also, the limited, text-based interfaces are not terribly spectacular to watch, and the delays of transferring packets over the network make many forms of real-time interaction impossible.
LAN Games: A number of fun, but not “high production value” games are available that can be played over a local area network. Unfortunately, the number of such networks is limited, and the amount of time available for gaming on such networks is even more limited.
Multi-Player option: A number of games are available now that will play in either single- or multi-player mode. In some cases (particularly in combat-strategy games), you can see how the designer strained to make a single-player version — the game play is rather weak in single-player mode, but becomes really exciting and interesting when additional players are added. In cases like this you can tell that the game was clearly intended as a multi-player game — the single-player option was added only as a way of getting around the problem of marketing a multi-player game.
Many of these games require that either a modem or direct serial connection be used to link two computers, although many Macintosh games take advantage of the built-in AppleTalk facility, and as we have already mentioned, games on the Atari ST can take advantage of the MIDI ports.
However, with the exception of the AppleTalk solution, many of these linkages lack important features of a true network. Some of them only allow two players; some have no provision for recovery if one player’s machine crashes; some require that all the players enter the game at the same instant. Also, there is often no provision for more than one type of linkage to be used simultaneously — for example, if there were three machines, A, B and C, and A <—> B was connected via a serial link, and B <—> C was connected via AppleTalk, many games would have no provision for dealing with this situation. Another problem is that often, even though the game is available on multiple platforms, you can only link two of the same type of platform together. This further limits the availability of the multi-player experience.
We ran into many of these issues during a recent contract. Our company (The Dreamers Guild) was hired by Maxis to port their multi-player game, RoboSport, from the Macintosh to the Amiga. The Mac version supported direct serial, modem, and AppleTalk. We needed to decide what networking options would be available on the Amiga. While there is a company that does make an AppleTalk adapter for the Amiga, the installed base of those units is very small. Instead we elected to make the game support the networking solution currently being put forward by Commodore: Ethernet with TCP/IP. (This decision was made with the encouragement of Commodore engineer Dale Larson, who managed to “liberate” two beta Ethernet cards for us to develop the product with.) Adapting the original AppleTalk code to Commodore’s version of the Berkeley socket library proved straightforward. Theoretically, the game could now be played over the Internet (although no one has tried this that I know of).
We were also able to keep the protocols for modem and serial play identical (this proved difficult, because a lot of the protocol for the game assumed a Mac-like environment on the other end) so that players wouldn’t both have to have the same kind of machine.
One of the problems we ran into is that the protocols and methods for each of the different communication channels are very different, each with a separate set of code in the program. In my opinion, a game shouldn’t need to be aware of the technical details of the communication channels linking different invocations of the game, or even how many channels there are. A better solution would be a standard “game network” device driver for each platform, which would know how to send packets from one machine to another, regardless of how they were connected. Ideally, this driver would support:
+ some form of simple timestamping and error checking, for synchronization
+ fast, real-time messages
+ extra, low-priority channels for chat or other background data
+ ability to add and subtract players at any time, if the game supports it
+ ability to have named players
+ support for both real-time and batch/turn type games
+ ability to ensure that the different platforms are running the correct version of the game
+ ability to single-cast, multi-cast, or broadcast message packets
+ ability to have a local link, which allows two instances of the same game to be played on a single machine, if that machine supports multiple processes
+ ability to connect a variety of platforms together, with more than one communication method
+ software guidelines for using the driver — what kind of information to transfer, at what level the networking should take place, etc. (For example, Lucasfilm’s Habitat sends “conceptual objects” across the connection, while MicroIllusions’ Firepower sends raw joystick movements.)
All of this makes life easier for the game designer, but what about sales? That’s what will or will not make multi-player games a viable market. One solution that we have come up with is to seed the market with low-cost networking hardware. This starts by creating a simple hardware box that allows a few computers (say 6-8) to be linked together via serial cable. With the number of cheap, programmable microcontrollers available today, the parts cost shouldn’t be more than about seven to ten dollars. Another solution would be to use a combined network of both serial and parallel cables to link more than two computers together, perhaps in a ring structure. The full schematics for this system would then be published, both in magazines and on the electronic networks, along with a few very simple freeware games and the device driver mentioned above. The same units could be sold assembled for a higher price.
The early adopters of this system will of course be the BBS users, hardware hackers, and hard-core gamers. With the availability of free or nearly-free hardware, however, these technologically sophisticated users will rapidly evangelize this technology to the general gaming community.
You might question whether game players will really take the trouble to lug their machines away from their desks so that they can be in physical proximity. In my opinion, the added entertainment value of face-to-face interaction makes it worthwhile for the player to go to the trouble of doing this, rather than falling back on the more convenient but less personable modem option.
Once enough units are out there to make a viable market (keeping in mind that the device driver allows us to network games together using the existing connection methods, even if they don’t have the fancy new hardware), we can start selling commercial games that can take advantage of the situation. Essentially, we are pulling the old trick of giving away the razor and making money selling the blades.
An example of what can be done: I currently have two of my Amigas linked together with a peer-to-peer network. Basically, this freeware package consists of a filesystem driver “NET:” which is mounted just like any other hard disk, and a schematic for creating a special cable to plug the parallel ports of the two computers together. While the speed is nowhere near Ethernet speeds, it is decently fast, and I can access any of the other computer’s files or devices as if they were local. We currently have a CDTV and an Amiga 3000 linked together this way, allowing the A3000 to access the CD-ROM drive of the CDTV.
The point that I’m making is: Low-performance networking solutions don’t have to be expensive, and if made cheaply enough they might be able to provide a catalyst for creating the kind of games we want. While this technology wouldn’t solve the problem of finding another player to game with, it would increase the number of available players by making those players more compatible. Perhaps user groups or even game groups will get together on weekends and connect their computers together in a mass frenzy of game playing. One can only hope.
_________________________________________________________________________________________________________ Advertisement
Programmer/Partner
C & Assembly Expert
Multiuser games, telecomm
Paradise
PO Box 5324
Kansas City, MO 64131
(816) 941-3927
_________________________________________________________________________________________________________
Announcement
Anybody attempting to establish an account on GEnie to access the JCGD RT there should NOT send their free flag request to GM. Instead, that request should now go to HORO.
_________________________________________________________________________________________________________
What Would You Say if We Sold a MILLION of these?
Forrest Walker
Why has there never been a game with sales figures comparable to those of Print Shop?
Computer games have always occupied narrow niches. A game like Red Baron does not compete for the same over-taxed dollar as, say, Martian Memorandum. It obviously competes with Knights of the Sky and Secret Weapons of the Luftwaffe, but it probably does not even compete with other non-simulator war games like Patton Strikes Back, or non-war simulators like Stunts. FRPs and D&D games compete with each other in their niche, as do mystery/cop games. They compete for that small segment of gamers who like that particular type of game.
I think I have answered the question I started with. The answer is that each game seems to be narrowly targeted at a mini-segment of the home computer market. A product such as Print Shop crosses over into all those niches. Now remember, we have established these niches for ourselves.
Ah, but there is another answer. Print Shop has legs! The original Print Shop, the New Print Shop, and whatever future Print Shops continue to sell month after month, year after year. A typical game has a shelf life of months, a year if it deserves it, or, more often, a few weeks. Sequels tend to give a Quest game or Wing Commander another year on the charts, but they too pass into the bargain bins in time.
Is it reasonable to expect anyone to be able to conceive of a game that crosses our self-imposed niche boundaries? Is it reasonable to expect anyone to design a game that will continue to sell, year after year (advances in technology aside, of course)? Another question might be more reasonable: are there any niches left unexploited today? Perhaps yet another question is advisable: is attempting to exploit a new niche, or crossing over niches, a fiscally sound enterprise? The only answer to all of these questions is “nothing ventured, nothing gained.” The glut of flight simulators, car simulators, adventure games, and the rest will eventually deplete the recessionary wallet of the most ardent occupiers of these niches. So it stands to reason that a successful excursion outside of these niches would net a vast gain. Instead of clearing a hurdle at a hundred thousand sales, imagine if the goal became a million copies. To do so is going to require rethinking what we do.
Sharing ideas for chart-busting new genres should not be done in this forum; they are as proprietary as code. But giving examples of recent successes at this venture may help explain the benefits of looking outside and beyond what has already been done. The best example I can cite is Sim City. It isn’t really a simulator, is it? It isn’t really a fantasy game. It appeals, in my opinion, to a set of computer owners that the game industry has ignored, but can no longer be counted out. These people do not want to emulate “wizards whack-whack-whacking in the forest” (thanks Chris, I LOVE that line); they want to solve real-world puzzles. Some people’s idea of fun is different. Entertainment is a religious issue. You keep to yours and I’ll keep to mine. We have to find out who has never bought a game for their home PC, and design one for them, because it is central to my thesis here that a million-unit seller can only be achieved by uncovering this very large group of people.
So, who are they? Who spends one to two thousand dollars on a home computer? People who need them to work at home, that’s who. Not people who only want to play games. Those people buy a Nintendo, a Genesis, or perhaps an Amiga, but not an IBM clone. Our untapped market buys a home computer because they need to crunch numbers, write reports, or telecommute to the office. They usually do not buy $100+ sound cards. These people will purchase and use Print Shop because they can use it without any fancy peripherals. Remember who I am talking about here: the people who are not yet our customers. They are not our customers because no one has created a genre of games with broad enough appeal.
When someone does, those SPA Gold awards will be for a million sold, not a hundred thousand.
_________________________________________________________________________________________________________
A Case for Scaled Interactivity in Game Design
Doug Clapp
In the December issue, our esteemed editor scorned what he called “low interactivity” games. Forgive the one-word summary of Chris’s well-reasoned position.
I disagree — if you’ll allow me to expand the playing field.
True, low interactivity (from now on LI) games aren’t much fun — if they remain LI! But ramping up from LI to high interactivity (HI, naturally) is a blessing to game players — especially common folk who may never ascend the highest levels of prowess.
Here’s an example. I recently rented a Sega Genesis to check it out. I picked up a “shooter” (Strider), Sonic the Hedgehog (Sega’s attempt to one-up Mario) and John Madden Football.
Of the three, I expected to enjoy John Madden Football the most. After all, I know something about football. Notice the well-worn couch? No problem.
Being a prudent soul, I perused the manual. And was stunned. “Jeez, listen to this, dear!” I bellowed at my wife. “You can call audibles at the line of scrimmage, pick defensive formations, blitz, change the weather, choose from...it just goes on and on and on!” Furiously I’m flipping the pages — lots of pages — finding feature after feature after feature. Amazing.
Overwhelmed, I start to play. Immediately, I’m totally confused. Looks like fun, I think, if you invest a year or two figuring it out.
But then it hits me. (If I’d read the manual carefully, instead of flipping pages, I’d know this already...)
I don’t have to do everything!
Say I’m the offense. If I wish, I can merely choose a play (easily done from the on-screen graphics) and click a button to run the play. My guys do their best; they play, I watch. Pick another play, click, try again. If it’s a pass play, my quarterback passes. He passes, the receiver catches (maybe), I watch.
It’s kinda fun. I’m getting it. It’s easy.
Now, emboldened by my success, I decide to take a bit more control. After the snap, I quickly select the ball carrier and do the running myself. After a while, I get it. Now, I take advantage of another feature: A last-minute extra effort. I push the button right...now! I can also try a last-minute spin to avoid a tackle. That’s another button combination. For now, I skip spinning and concentrate on running.
You get the idea. I can be involved or passive. I remain a mere play-caller — which is fun. It’s interesting to watch the algorithms at work. Or I can step in and take control, incrementally. If I understand the defensive sets well enough to call an audible, I can. I can choose how much I interact with the game.
Another example? Chris’s Patton vs. Rommel (still my favorite game). Here, the interactivity is more explicit: levels of play. I played at the lowest level for a long time. Only when I consistently won did I move up.
Still, I never achieved the finesse of a friend who plumbed the game: twiddling every knob to make gameplay supremely difficult. Thoughtfully, Chris made the game parameters customizable, which truly deepens the game for true devotees. At the highest level, you — the player — are creating the game.
Those are two examples. Both ramp from low to high interaction. Here’s one more: RPG games where you gain spells, abilities, and items as you advance. Again, interactivity goes up, ideally keeping pace with your understanding of the game. If the game is well-designed, you’re never overwhelmed, only increasingly skilled, confident and interactive. As you know more, you can do more. As you do more — as the interactivity goes up — the game becomes (hopefully) more engrossing; more fun.
But you’re not overwhelmed at the beginning.
The best games pull you in. They begin easy and become complex, at a pace. The art lies both in the game itself and in the pacing between the game’s growing interactive possibilities and the player’s increasing skill and understanding.
Chris rightly faults Defender of the Crown and other games which never advance beyond their low interactive beginnings.
Let’s mull over interactivity. A game can be physically interactive, intellectually interactive, or both. Look to the arcades or cartridge game machines for sheer physical interaction.
Mental interaction? Chess is the ultimate example. Each single, simple move produces ever expanding waves of possible positions and outcomes.
In either case, the best games are easy to begin playing, yet challenging to master.
Ramping from LI to HI doesn’t apply only to games. I’ve never written an Excel macro or Microphone script from scratch, but...one of these days...when I get time, I may. It’s nice to know the capabilities are there. Interactive? You can’t get more interactive than the blank page (or unwritten macro or script). There, at the end, high interaction becomes creation.
Advice from a layman? Don’t disdain low interactivity. Make the game playable by the most casual passerby. Ramp it up as the player progresses — or allow ramping at the player’s discretion, as John Madden Football does. Don’t make it necessary to learn everything to play and enjoy. Don’t make it necessary to know everything to win.
Let the player choose the level of interaction.
If it’s too hard, it’s no fun. If it’s too easy, it’s no fun. All you need to do is make sure it’s easy enough (low physical and/or mental interaction) and hard enough (high physical and/or mental interaction) when it should be.
But you know that.
_________________________________________________________________________________________________________
A Reminder
I always need articles for the Journal. And next issue will be a special one. If all goes as planned, I’ll be giving away free copies of the Journal to the 500+ attendees of the Computer Game Developers’ Conference. If your article makes it into the Journal, it will be read by just about the whole cotton-pickin’ industry. So go ahead — make a fool of yourself in front of the whole world. Expose yourself in public. Be an intellectual flasher, a streaker of ideas. Yowzah!
_________________________________________________________________________________________________________
Color Versus Resolution
Chris Crawford
A fascinating theoretical problem is posed by the choice between color and resolution in screen design. We all prefer, of course, to use as many colors and as much resolution as possible, but in a world of finite resources, we must choose. The current choice is between the two best display modes of a VGA board: 640 x 480 x 4 bits and 320 x 200 x 8 bits; but there are other options as well and we can be certain that, while the exact parameters may change, the basic choice between resolution and color will remain with us for a while.
Subjectivity versus Objectivity
I shall begin my analysis with a few snide remarks at the anarchic position. This school argues that it’s all utterly subjective and that therefore there can be no rational basis for making a choice. A black & white drawing by Michelangelo surpasses a multi-colored crayon piece by a four-year-old. (To which the anarchist might reply, “Who said Michelangelo is better than a four-year-old?”)
This school confuses the instance with the principle. It sees everything in terms of individual items rather than statistical ensembles. I readily concede that any individual image might be better or worse than any other individual image, but I am talking in statistical terms. In general, all other things being equal, will a higher-resolution image be better than a higher-color image? To put it another way: if we were to gather a thousand representative high-resolution images, and a thousand high-color images, would we be able to make an overall statement as to their relative collective quality?
There really is such a thing as objective truth, even in matters of aesthetics. The objective dimension of the arts lies in the physical mechanics of human perception. We may argue endlessly and futilely whether Beethoven is better than Bo Diddley, but there can be no arguing that a CD presents the music better than an analog Philips cassette. The human ear is capable of perceiving sounds in a certain range; the fraction of that range that a musical medium can capture, and the fidelity with which it does so, constitutes an objective measure of the quality of the medium.
The same thing is true of the human eye. The eye is not a linear transducer; it is a vastly complex organ whose response to visual stimuli is still the object of scientific research. For example, we know that the eye has a particularly strong response to scenes with large spatial or temporal derivatives (i.e., sharp edges or fast motion). There are also many indications that the eye is not two-dimensionally uniform: its side-to-side perception differs from its top-to-bottom perception. Moviemakers know this: that’s why movie screens are wide and short.
Artists as Sources of Information
This last observation regarding movie screens suggests a novel strategy for learning about the characteristics of the eye. Do you think that movie moguls hired a bunch of perceptual psychologists and asked them to determine the optimal dimensions of a movie screen? I doubt it. My guess is that the dimensions of the movie screen arose from the urgings of the directors and cinematographers — the artists. They may not have any formal training in physiology, but they have innate knowledge of how the human visual system works.
This suggests to me that we might be able to learn a thing or two about the human eye by examining the work of artists. We must be careful to look at many artists, to statistically compile a great deal of work, for artwork is so idiosyncratic that we could easily fall prey to errors if our sample were too small. In order to explain the next step in my reasoning, I must now step aside and take a circuitous digression. Bear with me.
Information Content
The amount of information in an image is simply the number of pixels multiplied by the number of bits per pixel. Thus, a 320 x 200 x 4 bit image will require 32,000 bytes to display. A 640 x 480 x 8 bit image will require 307,200 bytes. But this calculation does not reveal the information content of the image, only its information capacity — how much information it could optimally convey. I could present you with a 640 x 480 x 8 bit image that contains, in 72-point text, the single word “Hi”. Although my computer would require 307,200 bytes to present you with that image, the amount of genuine information in that image is far less than 307,200 bytes. This is true of all images. Every image falls short of its medium in that it cannot possibly utilize every ounce of information capacity. Thus, we seem to be stuck. We cannot measure the actual information content of any image, only its information potential, how much information it might convey if it were a perfect (whatever that is) image.
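For the reader who wants to pin down the capacity arithmetic, it can be sketched in a few lines of code (Python shown here purely for illustration; the figures match those above):

```python
def capacity_bytes(width: int, height: int, bits_per_pixel: int) -> int:
    """Information capacity of a raw bitmap: pixels times bits per pixel,
    expressed in bytes (8 bits per byte)."""
    return width * height * bits_per_pixel // 8

print(capacity_bytes(320, 200, 4))   # 32,000 bytes
print(capacity_bytes(640, 480, 8))   # 307,200 bytes
```

Note that this is capacity only; nothing in the formula knows whether the pixels carry a masterpiece or the single word “Hi”.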
Now, you might be wondering what significance this has. Who cares about information content or information potential? My answer is sweet and pithy: truth is beauty. I assert that information (truth) is intrinsically beautiful. The more information content an image presents, the more visual substance it has, the more it delights the eye. An image of random pixels is not beautiful, because it has no information content. An image of uniform color is also not beautiful, because it too has no information content. But if the image is organized, if it has elements that fit together, suggesting meaning, then it has information content — and beauty. Our problem is, how can we measure the information content?
Measuring Information Through Compression
How efficient is English as a medium for conveying information? The answer lies in text compression. English uses 26 letters in its alphabet. Ideally, each letter would be used with equal frequency, but we know that is not the case. Some letters — q, x, and z, for example — are used rarely, while others — e, t, and i — are used heavily. This manifests itself in high compression factors for English ASCII text. If you take some English text in ASCII form and compress it with a file compressor, you will get a file that is perhaps only 35% of the size of the original. This 35% figure I will refer to as the compression factor.
The compressed filesize is a measure of the true information content of any expression. In other words, if you take a raw textfile, image, or sound, and measure the size of the file, you do not necessarily have a reliable measure of the information content of the expression. The image could be a screen dump of a blank white screen, yet still consume 307,200 bytes. The sound could be 10 seconds of silence at 22 KHz sampling rate, yet would still eat up 220K of space. However, if you compress that sound file with a competent compressor, it will squeeze down to almost nothing, which is a much better statement of its information content. If you compress the blank white screen, it too will shrink to almost nothing in size. Thus, the compressed filesize is a measure of true information content.
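The principle is easy to demonstrate with any general-purpose compressor. The sketch below uses Python’s zlib library rather than the file compressors I used, and the exact percentages will vary with the compressor and the sample, but the ordering is robust: a blank screen squeezes to almost nothing, random pixels barely compress at all, and ordinary English prose lands somewhere in between.

```python
import os
import zlib

def compression_factor(data: bytes) -> float:
    """Compressed size as a fraction of original size; a low factor
    means little actual information content relative to capacity."""
    return len(zlib.compress(data, 9)) / len(data)

# A blank 640 x 480 x 8-bit screen: pure capacity, no content.
blank_screen = bytes(307_200)

# Random pixels: incompressible, but meaningless.
noise = os.urandom(307_200)

# A run of ordinary English prose.
prose = (
    "English uses twenty-six letters, but it does not use them evenly. "
    "Some letters, such as q, x, and z, appear rarely, while others, "
    "such as e, t, and i, appear constantly. A compressor exploits this "
    "unevenness, along with repeated words and common letter pairs, to "
    "squeeze ordinary text down to a fraction of its original size."
).encode("ascii")

print(f"blank screen: {compression_factor(blank_screen):.1%}")  # well under 1%
print(f"random noise: {compression_factor(noise):.1%}")         # essentially 100%
print(f"English text: {compression_factor(prose):.1%}")         # in between
```

The blank screen and the noise bracket the scale; everything with genuine structure falls between them.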
So here we have a tool for measuring the information content of our images. Suppose we measure the information content of many actual game screen images. We cannot directly compare information content, for that depends on image size. But we can directly compare the compression factors. If we discover that these compression factors are higher for 4-bit images than for 8-bit images, then we can conclude that artists are able to squeeze more information content into those 4-bit images.
A Test Drive with Font Sizes
As a simple test of the basic principle, I carried out a somewhat different test. I set up a large essay in Microsoft Word, formatted in 9-point text. Then, using a screen dump utility, I captured the image of the text window as a straight bitmap. I compressed this bitmap and recorded its filesize. Then I returned to Microsoft Word and reformatted the text to 10 points. Once again I captured the image — but I captured the same window, not the same text. Because the text was larger, I captured fewer characters even though I captured the same number of pixels. Again, I compressed the resulting bitmap and recorded its filesize. Then I repeated this process for other text sizes and for another font. The results:
This graph clearly shows that information content is highest with the 10-point Geneva font and the 12-point Times font. It should come as no surprise that these two sizes are the most popular sizes for word processing on the Macintosh screen. However, the reader might wonder why the graphs peak. As the font grows smaller, we pack more pixels onto the screen. Shouldn’t the graph continue to rise as we move from larger point sizes to smaller point sizes?
The answer lies in the artists who designed the fonts. They realized that the tiny fonts were hard to read, so they inserted additional leading (white space between lines of text) to make it easier for the eye. From 24 points down to 12 points, the lines are packed together with minimal leading, but below 12 points the additional leading takes its toll on information content.
This little excursion shows how the information content analysis can be used to discover artistic truths that are otherwise hard to make explicit. Every Macintosh user knows that the 10-point and 12-point font sizes are the best sizes to use, but if you demand to know why this is so, the user will glance about helplessly and shrug, “It just looks better.” This graph shows why.
The Screen Image Data
Aaron Urbina and I captured a total of 47 different screen images from 13 different games. There were Macintosh games and IBM games, games with 1-bit, 4-bit, and 8-bit graphics. We exercised some judgement in the images we chose. For example, we excluded games with emphasis on fast animation (mostly skill & action games and flight simulators) because such games often require simple backgrounds, and such simple (i.e., low information content) backgrounds would bias the sample. We also excluded images from such games as Loom and Balance of the Planet, because both products made deliberate use of very clean, simple graphics styles. Again, this would have biased the data. For opposite reasons, we ruled out scanned photographic images. Finally, we excluded screen images that contained mostly text. We concentrated our attention on games with extensive hand-drawn artwork; I believe that such screen images come closest to satisfying the reasoning presented earlier in this essay.
The resulting compression factors are presented in graphical form:
What does this mean? My analysis of this graph is complicated. My first consideration is with the unavoidable sources of error — they are subtle and significant. One such is the fact that most of the 4-bit data comes from games that are older than those providing the 8-bit data. The significance of this lies in the fact that the cost per bit of storing data on floppy disks has fallen rapidly. Thus, much of the 4-bit imagery was created at a time when publishers were more sensitive to the production costs of large, elaborate imagery, and so tended towards simpler images. The 8-bit data, being more recent, should be less sensitive to this concern. The upshot of this is that the 4-bit data is pushed towards lower compression factors relative to the 8-bit data. To put it bluntly: the 4-bit data is probably lower than it should be.
Another problem arose from the intermixture of text and graphics. Almost all game screens mix the two, but how much text can I accept and still have a valid test of a hypothesis whose primary concern lies with graphics? This is a really tough issue, because text is just as important as graphics. Every game has some text, yet text tends to bias this analysis towards resolution instead of color. You only need two colors to show text — foreground and background — but you need as many pixels as you can get for good resolution. Moreover, restricting the data sample to images that contain no text whatsoever would drastically reduce my sample size. I decided to reject screen images that are primarily text, and to accept any image with more than about 50% of its space devoted to graphics. I could argue that this deliberate exclusion of a significant portion of the screens from real-world games biases my results against resolution, but in the end I decided to err in the direction of conservatism.
Another potential source of error arises from the compression overhead. Most compressors have a certain amount of overhead. The filesize is equal to some constant (the overhead) plus the actual information content. I examined the effect of this problem by starting with a large 16-color image (the map from Patton Strikes Back), compressing it into a GIF file, calculating its compression factor, and then repeating the process for smaller chunks taken from the same image. I present the results in tabular format:
Buffer Size    GIF Size    Compression Factor
    144,000     100,279                   70%
     46,200      31,589                   68%
     20,400      13,846                   68%
      6,400       4,470                   70%
      5,600       3,693                   66%
If the overhead were a significant factor in this analysis, then the compression factor would increase as the buffer size decreased. This does not appear to be the case. I conclude that overhead is not a significant consideration.
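Anyone can repeat this sanity check on their own data: compress progressively smaller slices of a single buffer and watch whether the factor climbs as the slices shrink. A minimal sketch, with zlib standing in for my GIF encoder and a synthetic structured buffer standing in for the Patton Strikes Back map (which I cannot reproduce here):

```python
import zlib

# Synthetic stand-in for a 16-color game image: structured, not random.
# 300 rows of 480 bytes = 144,000 bytes, matching the largest buffer above.
buffer = bytes((x * y) & 0x0F for y in range(300) for x in range(480))

for size in (144_000, 46_200, 20_400, 6_400, 5_600):
    chunk = buffer[:size]
    factor = len(zlib.compress(chunk, 9)) / size
    print(f"{size:>7,} bytes -> {factor:.0%}")
```

If the factors stay roughly level as the slices shrink, overhead is negligible; a factor that rises sharply at the small sizes would signal a significant fixed overhead in the compressor.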
Conclusions
It’s clear that 1-bit images have far more information content than either the 4-bit or the 8-bit images. We get more bang for our buck, more picture for our pixels, with black and white images than with color images. I think this is because the choices an artist faces with a black and white image are simpler than those with a color image. An artist decides only whether a given pixel should be black or white. This simpler level of decision-making encourages, I think, a greater expenditure of artistic effort on each pixel. Artists can afford to lavish more attention on black and white images — and they must do so if they are to obtain decent results. The result is that, pixel for pixel, black and white images carry more visual punch, more information content, than color images.
The situation with color images is not so clear. If we compare the 4-bit results with the 8-bit results alone, we discover that the difference is not statistically significant. There is no statistical basis, from the 4-bit results and the 8-bit results alone, to say that 4-bit images have more information content than 8-bit images.
However, the strong showing of the 1-bit results cannot be ignored. If we do a linear analysis on all three bit depths, then we get a significant difference. There really is something going on here: that graph is too steep to dismiss. When we recall that the primary sources of error tend to push the 4-bit data downward, I think it safe to say that the initial hypothesis is probably confirmed. At the levels of resolution and color with which we work, resolution is more informative than color. 4-bit images communicate more to players than 8-bit images.
As times change, this conclusion will lose its force. As we move to higher resolutions with SVGA and other displays, the value of color will rise. The difference between 4-bit images and 8-bit images is already small; I expect it to disappear as we make the next step upwards in resolution.
_________________________________________________________________________________________________________