Volume 5, Number 1 October 1991 

Contents

Editorial: It Ain’t Art
Chris Crawford

Quality Assurance is Worth the Investment
Forrest Walker

History of Computer Games: The Atari Years
Chris Crawford

Plot Automation
David Graves

Computer Games Versus Video Games
Chris Crawford

Ads and Announcement Page

Editor Chris Crawford

Subscriptions The Journal of Computer Game Design is published six times a  year.  To subscribe to The Journal, send a check or money order for $30 to:

The Journal of Computer Game Design
5251 Sierra Road
San Jose, CA 95132

Submissions Material for this Journal is solicited from the readership.  Articles should address artistic or technical aspects of computer game design at a level suitable for professionals in the industry.  Reviews of games are not published by this Journal.  All articles must be submitted electronically, either on Macintosh disk, through MCI Mail (my username is CCRAWFORD), through the JCGD BBS, or via direct modem.  No payments are made for articles.  Authors are hereby notified that their submissions may be reprinted in Computer Gaming World.

Back Issues Back issues of the Journal are available.  Volume 1 may be purchased only in its entirety; the price is $30.  Individual numbers from Volume 2 cost $5 apiece.

Copyright The contents of this Journal are copyright © Chris Crawford 1991.

_________________________________________________________________________________________________________

Editorial: It Ain’t Art
Chris Crawford

One of the idle vanities of our profession is the belief that we are artists, that we game designers make art as we build our games. There are, of course, as many variations on this theme as there are game designers. Some designers insist that game design is art, but programming is not. Others define art more in terms of traditional art, and so restrict “art” to the graphics and music that accompany a game. Still others use a definition of art so liberal as to ensure that everybody associated with game development is an artiste (“programmers are artists! testers are artists! duplication machine techs are artists!”).

There are larger issues here than personal vanity, issues that involve the essence of what we do. The resolving distinction, the acid test that reveals the true character of our industry, lies in the question of whether we make art or entertainment.

Now, before I give my answer to that question, I’d like to characterize the difference between art and entertainment. I can’t forge a precise definition, but I think that I can describe that difference extensively enough to give you a clear idea of my meaning.

One Man’s Definition
Art happens when lotsa middle-aged wimmen in long gowns and horned helmets sing real high-pitched, while dozens of guys in tuxedos saw on their fiddles, and one guy taps real light on a kettle drum. Art has pictures of plump nekkid ladies floating in the clouds, swathed in lotsa silk, with little cherubs holding the corners. Art has lotsa neat French words mixed in every now and then, words like “poulet chevalier”.

Both entertainment and art are meant to evoke emotion, but art goes deeper and further into the human soul than entertainment. Art conveys joy where entertainment makes you laugh. Art has tragedy, where entertainment makes you sad. Art has passion; entertainment has sex. Art goes boom where entertainment goes pop.

Art requires a stronger sense of context than entertainment. Now, both art and entertainment work within a context; the conventions of any medium create an expressive shorthand that both artist and entertainer rely on. The crassest stand-up comedian knows that his audience understands what a punch line is, how soon they expect to reach it, and how simple it must be to be effective. But the contextual requirements of art are more stringent than those of entertainment. To really appreciate art, you need more background, more training. When was the last time you saw a college course listing for an “entertainment appreciation course”?

Thus, art tends to be more elitist than entertainment. It appeals to a smaller, more educated audience. Entertainment is for the masses and art is for the snobs.

Art is an expression by the artist; entertainment is an expression for the audience. The artist is primarily concerned with himself; the entertainer is primarily concerned with his audience. The artist seeks to express the truth within himself; the entertainer seeks to give the audience what it wants.

Entertainment is therefore intrinsically commercial and art is just as intrinsically noncommercial. You can sell entertainment and you can’t sell art. (It’s true that snobs will pay a lot of money for your art, but only after you’re dead.)

Having said this, I can now state that almost nothing in the computer game industry constitutes art. We are an entertainment business. We give people what they want. I have never heard a publisher or producer say, “But Chris, is this change true to your vision for the product?” [Not strictly true; there was one publishing house executive who said such things, but she’s no longer employed in the industry. Does that surprise you?]

We make entertainment; we do not make art. 

Yes, there have been many impressive moments in computer games, moments that tempt us to apply the label “art” to our games. The moment in Planetfall when Floyd the Robot sacrificed himself to save you. The similar experience in Wing Commander when your sidekick bid you goodbye and blew herself up in the midst of the Kilrathi. The magnificent artwork in King’s Quest V, the powerful music and storyline of Loom. Yes, there have been impressive moments in computer games, moments that do evoke genuine emotion, but that doesn’t make them art. My eyes welled up when little E.T. died, and I cheered when he recovered, but the movie wasn’t art. There are plenty of movies that are art, but their ability to evoke emotion is not what makes them art.

So What?
Here’s a possible counterpoint: “Who cares if it’s not art? After all, we’re in the business of making money and having fun, and the labels we use are without meaning. If we make people happy, enjoy ourselves, and make a good living, why do we need to worry about whether it’s art? Why can’t we just do our jobs?”

My answer to this argument is that there is indeed operational significance to the role of art in entertainment. Art is the wellspring of entertainment, the font of basic material that all entertainers use. Star Wars was based on a body of myth that had been polished over the course of centuries. The untrained rock guitarist may not know it, but the melodies, rhythms and basic structures that he learns by listening were developed and refined by artists such as Telemann, Vivaldi, and Bach. The cinematography of modern movies traces its heritage all the way back to the great classical painters of the Renaissance.

Entertainment needs art to provide it with fresh new ideas, new themes, new approaches. Art is the spawning ground for entertainment. Art is, in effect, the “R” side of “R&D” for our business. Indeed, we have already seen this process in the relationship between the videogame industry and the disk-based game industry. (See “Computer Games Versus Videogames” elsewhere in this issue.)

A good rule of thumb in any fast-moving industry, and especially the high-tech world, is that 10% of industry revenues should be spent on R&D to maintain healthy growth. Now, most of this money should be spent on the “D” side — what we developers do. But the industry still needs to devote a few percentage points to the “R” side of the business. We don’t spend anywhere near that much on the artistic side of our business. We have no Xerox PARC, no Bell Labs, no university endowment for the pursuit of fundamental research in games. We won’t even fund projects that strike us as too unconventional. For us, research is a matter of figuring out how to implement yesterday’s games on compact disk, only with more graphics, sound, and animation. That’s not research; that’s new and improved soap flakes.

It therefore behooves us to take a more serious look at our industry and our own design goals. Vainly declaring ourselves artists in the absence of any real artistic effort or achievement is an egoistic exercise that can only blind us to the need to correct our failures. Dismissing artistic endeavor as irrelevant to the needs of an expensive business is cynical and short-sighted. We are living off of our artistic capital now; unless we as an industry can muster the gumption to start generating some artistic income by devoting some fraction of our efforts towards truly artistic products, we will stagnate. And in any entertainment business, stagnation spells doom. 

_________________________________________________________________________________________________________

Quality Assurance is Worth the Investment
Forrest Walker

As our industry matures, as development costs skyrocket and products become more and more sophisticated, developers can no longer afford to save money by skimping on QA. In fact, spending money wisely now will save so much money later that a well-staffed and -equipped testing department, along with a controlled and planned QA process, can actually turn out to be free!

I manage the QA department at Dynamix in Eugene, Oregon. In the last year, the company has made an impressive investment in the department and an equally impressive commitment to quality. These are two separate items. A well-run and -staffed testing department is not the definition of a quality assurance process. Let me explain. 

First, let’s look at and define some terms: “beta testing”, “testing”, “Quality Control” (QC) and “Quality Assurance” (QA).

Beta testing is the process of handing an incomplete game to an experienced gamer and asking for comments on everything from plot to play balance. For many good reasons, beta testers should come from outside the company. They should sign a non-disclosure agreement and receive a final copy of the game in compensation for their services. In the course of their play they will certainly find bugs, but finding bugs is not their job. Therefore, do not confuse beta testing with testing.

Testing is what most developers do now. They may call it Quality Assurance, but it is not. Testing is supposed to be a managed, directed attack on the code. It is supposed to create well-written bug reports that are easily understood by the people who need to fix the problem. Testers should be game enthusiasts but primarily they should be cunning. They should like to break things, and they should have the innate intelligence to discern the limitations of products and find the weak spots in code. The best testers I have hired are students taking a break from college. They are simultaneously irreverent and professional. They are quite able to communicate verbally and on paper. Those qualifications and a love for our products are all I look for. From that raw material I constantly create excellent testers.

A word to the wise here. If you have a group of good testers, please do not let other people in your company go around referring to them as “playtesters”. The term is demeaning. We should know better than most people that the medium is the message, and if you continue to refer to your “QA” department as “playtesters” you will never take them seriously or treat them professionally. They are not playing your game, they are testing it. They are not playing, they are working. Treat them that way.

Quality Control is a level few developers are practicing today. We practice it here at Dynamix because management is committed to getting the best product possible out the door. QC involves, in part, quantitative analysis of the bugs found on the last product. You need to group bugs into “genres” and get to the root cause of the problem. Any time you find essentially the same bug in different parts of the product, and in product after product, you can bet that the root cause lies not in the coding process but in your tools. If you are lucky enough to be working with a set of tools that you will use on the next project, and you find the root cause of these bugs in the tool, you have just saved yourself the entire cycle of finding, reporting, fixing and regression-testing the same bugs in your next product. Your testing hours and time spent fixing bugs will drop dramatically, and quality starts getting to the point where it is free.
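The genre-tally analysis described above can be sketched in a few lines of Python. Everything here is illustrative: the bug genres and product names are invented, and this is not Dynamix’s actual process or data.

```python
# Group bug reports by "genre" and flag genres that recur across
# products; those are candidates for a root cause in shared tools.
# All products and genres below are invented for illustration.
bug_reports = [
    ("Product A", "off-by-one sprite clipping"),
    ("Product A", "palette corruption"),
    ("Product B", "off-by-one sprite clipping"),
    ("Product B", "save-file truncation"),
    ("Product C", "off-by-one sprite clipping"),
]

# Record which products each genre has appeared in.
products_per_genre = {}
for product, genre in bug_reports:
    products_per_genre.setdefault(genre, set()).add(product)

# A genre seen in two or more products likely points to the tools,
# not to any one product's code.
tool_suspects = [genre for genre, products in products_per_genre.items()
                 if len(products) >= 2]
print(tool_suspects)
```

A real version would read exported bug-tracker records, but the principle is the same: count recurrences across products, then investigate the shared tool or library behind any genre that keeps coming back.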

Quality Assurance is our goal at Dynamix, and should be yours as well. QA is a process which starts at the top of the company and becomes a mind-set all the way through the development cycle. It requires that everyone be involved in improving the way things are done, and want to improve them. For instance, it would involve Tom talking to Dick about what went wrong last time around. Not pointing fingers, mind you. Instead it involves Tom saying, “Dick, my effort was held back because your group did/did not do such-and-such”, and Dick being able to say, “I hear that; let’s take a look at my process”. When this QA thing really starts pumping, Dick will say, “Tom, I feel that if we had done so-and-so earlier or better, your job would have been easier or had better results. What do you think?” When your development group has reached this point voluntarily, and when they know that you are committed to the quality process, you will spend far less of your schedule and budget testing product. At that point you will realize that an early investment in QA has paid off many times over.

So, in conclusion, let me urge developers and publishers not just to spend more money testing, but to spend more time setting up a QA process in your company. I would not encourage adherence to most of the QA systems currently used by the likes of Ford; they are far too formal for our industry, but the gist of the process is the same. Study W. Edwards Deming, Phil Crosby and Pat Townsend. Take the best of their ideas and implement them in a way that will not disturb what you like about your company culture while improving the way things get done. We will all benefit from the individual success of each company that does. Our customers deserve quality product.

_________________________________________________________________________________________________________

Computer Game Developers’ Conference

CALL FOR PAPERS

The Sixth Computer Game Developers’ Conference will begin Saturday, April 25, 1992, at the DoubleTree Hotel in Santa Clara, California. The conference board is soliciting proposals for lectures, panels, and round tables to be delivered at the conference.  A one-page proposal should outline the content of the proposed presentation in several paragraphs.

In addition, if you would like to speak, but don’t have an appropriate proposal, send a list of your areas of expertise and you will be matched to a topic, if possible.  The board is also interested in additional topics for lectures or round tables which you would like to attend.

The deadline for all proposals is October 15, 1991. 

Send proposals to:

Computer Game Developers’ Conference
5339 Prospect Road, Suite 289
San Jose, CA  95129-5020

All proposals and other information must be in the board’s hands by October 15 to receive consideration.

_________________________________________________________________________________________________________

History of Computer Games: The Atari Years
Chris Crawford

1980 marked the beginning of a new decade, the decade of Reaganism, a decade of rampant greed and materialism, a decade in which America searched for its soul and came up empty-handed. The new decade also marked two major developments in gaming: the videogame explosion and the coming of age of computer games. 

Atari and Videogames
I joined Atari in September, 1979, and at that time the success of the Atari 2600, the original videogame console, was very much in doubt. I recall an engineering department meeting that fall, in which the entire engineering staff of Atari gathered in a single room to hear the Vice President of Engineering present a summary of the company’s position. Sales of the 2600 were adequate to justify continued software development; the 400 and 800 had just been introduced and the company had high hopes for these machines. Fortunately, the coin-operated games group was making steady profits that could  keep us afloat. And our parent company, Warner Communications, had deep pockets.

The creative source for game designs in those days was the coin-op business. If a game did well in the coin-op business, it was ported over to the cartridge environment. All of the big hits of those days were originally designed as coin-op games. For example, Space Invaders started life as a Japanese coin-op game. Atari bought the rights to the game, and Rick Maurer designed an absolutely brilliant port to the 2600, where it became a big hit. Sometimes, however, this process didn’t work. When Atari ported Pac-Man to the 2600, the result was so bad that critics dubbed it “Flicker-Man” and the customers were deeply disenchanted with Atari.

Although the aforementioned games were both Japanese designs, Atari Coin-Op created quite a few hits on its own. Missile Command, Centipede, Battlezone and Tempest were some of their more successful games. Some of these were later ported to both the videogame and the computer game environments.

It’s difficult to realize just how much the coin-op mentality dominated game design. Coin-op games were all designed to last for three minutes, for perfectly sound economic reasons. This was carried over into both cartridge games and disk-based games, even though the value of a three-minute termination on these platforms was nil. Designers just couldn’t conceive of a game as anything other than a coin-op game that you played at home.

Computer Games
Meanwhile, computer games were enjoying growth that was rapid, although not as spectacular as that of coin-op games and cartridge games. More important, computer games were already showing greater creative diversity and a more mature form of game. Automated Simulations was just starting on its fantasy role-playing games, culminating in the Temple of Apshai series. Scott Adams released his adventure game series on cassette tapes. SSI created quite a stir when it released Computer Bismarck at a price of $59.95. This was an absurdly high price, especially in 1980 dollars, but enough people bought it to keep the company going, and they released a series of wargames over the next few years, all on the Apple II. At about the same time, Ken and Roberta Williams were selling their graphic adventures, starting with Mystery House. Doug Carlston was designing, programming, and selling his own games, also on the Apple II.

Which brings me to the subject of hardware platforms. With the benefit of hindsight, and a healthy dose of historical revisionism, it is easy to see that the Apple II was destined to surpass its competitors. But it didn’t look that clear in 1980. The Apple II was an expensive machine, a favorite of hobbyists with money to spend, but many computer junkies preferred the less expensive TRS-80 and Commodore PET. The Apple’s big advantage in the early days was its color display, but this advantage was shattered with the appearance of the Atari computers, whose graphics far outshone those of the Apple. The Apple had a better disk drive (by dint of violating FCC regulations) and a larger software base. But the really big break came when VisiCalc appeared on the Apple in 1980. VisiCalc gave the Apple II a big advantage. The program was eventually ported to other machines, but it took about a year, and in that time the Apple II established a huge lead.

Despite all this activity, the fact remained that  computer games were still in a primitive state in 1980. There were only a handful of developers and publishers. The supply of games was desperately short; the release of any new game was eagerly awaited by computer users. The games available were not very good. Almost all were written in BASIC. They were slow and used almost nothing in the way of graphics. Moreover, since most had to run on machines with about 8K of RAM, they were quite limited. The distribution and retail outlets were primitive. Many games were sold by mail order; a typical game would be lucky to sell more than a thousand units.

1981: Rolling
The computer games industry perked up dramatically in 1981. Many of the basic problems had been solved; there was now a sizable group of people who knew how to create, program, publish, and distribute computer games. Game production accordingly took off. There were now a number of publishers who released multiple products in 1981: Automated Simulations, SSI, Avalon-Hill, Broderbund, Adventure International, and several companies in southern California whose names I have since forgotten. 1981 was also a banner year for me; four of my programs were published that year: Energy Czar, Scram, Tanktics, and Eastern Front (1941). (Note, however, that those four products represented two years’ work.)

Meanwhile, videogames and coin-op games continued their own steep growth. The Atari 2600 was enjoying sensational success, but it was growing from a small base. The emphasis on coin-op games continued, although there were a few nontraditional videogames. The most striking of these was Warren Robinett’s version of Adventure (programmed in 1979 but not released until late 1980), an astounding achievement on the 2600 that gave rise to many derivative games.

1981 also saw the birth of Computer Gaming World. We were starting to become a real industry, and now we had our own magazine. Still, 1981 was a year of waiting, of gathering momentum.

1982: Annus Mirabilis
Everything exploded in 1982: videogames, coin-op games, and computer games. Time Magazine put videogames on its cover. It’s difficult to convey the wild goldrush feeling that pervaded the industry that year.

The most sensational developments were in the videogame field. Atari sold $2 billion worth of hardware and software that year; Atari’s sales were doubling every 9 months. A wild frenzy set in; everybody was working on videogames. There were scores of companies publishing videogame cartridges. Some were good, many were bad, but it didn’t seem to matter. The public bought whatever was on the shelves. Companies that had gotten in early made sensational profits. Atari released Pac-Man for the 2600 that year. Despite the fact that it was a poor implementation, Atari sold 10 million copies of the game at $20 wholesale, with a cost of goods of just about $5, and a development cost of perhaps $100,000. You figure the profits.

The most important requirement for making videogames was finding a programmer who knew how to program the Atari 2600. This machine was hell to program. There was no display buffer — the display was created on the fly by the 6502 CPU. There was a video display chip that displayed one scan line at a time. To get a display, you wrote a program that frantically stuffed bits into the display chip at just the right time. If you were really good, you knew how to change the display registers on the fly, allowing more sumptuous graphics. But this took exquisite timing. The 6502 in the 2600 ran at about 1.19 MHz; at that speed, you had exactly 76 machine cycles during one scan line. Your main display loop had to execute each scan line in 76 cycles or less. Really good 2600 programmers knew the instruction cycle counts of the 6502 by heart; they tweaked and tweaked their code trying to squeeze one last cycle out of it. The game logic itself had to be executed during vertical blank, when there was no display to manage. This gave about 3,000 machine cycles every 60th of a second, as I recall.
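The timing budget described above can be checked with simple arithmetic. The sketch below is a modern illustration using the commonly published NTSC 2600 figures (a 3.58 MHz color clock, 228 color clocks per scan line, a CPU clock of one-third the color clock, and roughly 40 lines of vertical sync and blank per frame); it is not anything a 2600 programmer would actually have run.

```python
# Back-of-the-envelope timing budget for the Atari 2600 (NTSC).
COLOR_CLOCK_HZ = 3_579_545        # NTSC color subcarrier frequency
CLOCKS_PER_SCAN_LINE = 228        # TIA color clocks per scan line
CPU_DIVISOR = 3                   # the 6502 runs at one-third the color clock

cpu_hz = COLOR_CLOCK_HZ / CPU_DIVISOR                  # about 1.19 MHz
cycles_per_line = CLOCKS_PER_SCAN_LINE // CPU_DIVISOR  # 76 CPU cycles

# Game logic ran during the roughly 40 lines of vertical sync and
# vertical blank at the top of each 60 Hz frame.
VBLANK_LINES = 40
logic_cycles_per_frame = VBLANK_LINES * cycles_per_line

print(f"CPU clock: {cpu_hz / 1e6:.2f} MHz")
print(f"Cycles per scan line: {cycles_per_line}")
print(f"Cycles for game logic per frame: about {logic_cycles_per_frame}")
```

The result, about 3,000 cycles of game logic per frame, matches the figure recalled in the text; the overscan lines at the bottom of each frame gave programmers some additional headroom.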

Great 2600 programmers were worth their weight in gold, and publishers realized this quickly. There was intense competition for the old pros. Activision was most successful at this, sending limousines to pick up their programmers, featuring them in their promotional campaigns, and making them feel like kings.

The coin-op business enjoyed a parallel boom, only not as lucrative or sensational. Still, those were good times to be in the coin-op business. Good programmers were earning very high salaries, royalties on their work, and all manner of other perks. Those were the days.

In the computer games business, 1982 was also a great year. The fabulous success of videogames carried over to computer games, but along with that success came a sudden emphasis on skill-and-action games. It was as if all the computer owners suddenly decided that they wanted to be in on the excitement of the videogame field. The serious computer games that had been developing during 1981 were overshadowed by the more graphically intense but intellectually inferior shoot-em-ups.

Still, everybody prospered. You could make a great deal of money out of a program that took very little time to develop. An extreme example of this was Greg Christensen, a high school student who hacked together a variation on Defender. He did it with the Atari Assembler cartridge over the course of several months, working nights and weekends. When he was done, it was published by the Atari Program Exchange as Caverns of Mars and it sold about 50,000 copies, earning Greg something like $80,000. My own Eastern Front (1941) enjoyed similar success; it was developed at home, nights and weekends, over a six month period. I put about three months of full-time effort into it and earned something like $90,000 for the product. Even more sensational success stories can be related about games for the Apple II during 1982. Programmers like Nasir Gebelli, Bill Budge, and Bob Bishop made huge sums on games that they hacked together in months. Nasir Gebelli was particularly productive. He ground out a series of Apple II games that enjoyed high sales figures. Most of his games played poorly, but each one sported some neat new graphics effect. People loved it and plunked down their money.

Yes, 1982 was a fabulous year.

1983: The Crash Begins
The storm clouds gathered in December of 1982. Atari executives briefing Wall Street analysts admitted that sales for that Christmas were off slightly. Realizing that the boom was over, investors dumped their stock in Atari’s parent company, Warner Communications. Sales that Christmas were still good, better than the previous Christmas, but the message was unmistakable: the boom was over. In any other industry, it would have been a simple matter to retrench slightly, cut costs, and weather the lean times. But the videogame industry had a boomtown mentality; when the word got out that the boom was over, everybody started looking for lifeboats.

Things grew steadily worse all through 1983. The market was glutted with product, much of it junk. Atari was just as guilty as everybody else. Their E.T. cartridge was a piece of crap thrown together in six weeks by a programmer who boasted to Steven Spielberg, “This is the game that will make the movie famous!” Ray Kassar, Atari’s CEO, had paid $20 million for the license. In the end, hundreds of thousands of unsold E.T. cartridges were bulldozed in a landfill in Alamogordo, New Mexico.

The dozens of opportunistic cartridge publishers that had sprouted like weeds in 1982 died just as quickly in 1983. The fiscal carnage was on a scale just as great as the boomtime profits.

The home computer industry started off 1983 with high hopes. Everybody believed that the troubles of the videogame industry would only lead consumers to move up to home computers. The TRS-80 and the PET were long since dead, and the field had narrowed to the Apple II Plus and the Atari 800. There were other challengers, of course: the Radio Shack Color Computer, the Commodore 64, the Coleco Adam, and the looming IBM Peanut. But these other machines did not have the market share or software base to make them major competitors. At the beginning of 1983, it looked like a simple head-to-head competition between Apple and Atari — and Atari was steadily gaining ground. Its software library was  nearly the equal of Apple’s and growing faster. Moreover, developers had learned how to use its advantages to their fullest, so we were starting to see software for the Atari that was clearly superior to anything running on an Apple.

Then Jack Tramiel at Commodore began a price war, steadily ratcheting the price of the Commodore 64 downward. Atari elected to follow suit; Apple disdained to do so. All through the spring and summer of 1983 the prices marched downward, much to the delight of consumers. Atari, desperate to keep up with Commodore, moved its manufacturing overseas. The disruption in supplies was not repaired quickly, and when Christmas 1983 came, the only machine on the shelves in quantity was the C64. This was the death-blow to Atari; the company collapsed seven months later.

Apple’s refusal to lower its prices proved to be the right move. Even though its machine was patently inferior to both the Atari and the C64, it maintained an aura of respectability from its high price. The Atari and the C64 were seen as toys, while the Apple II Plus was perceived as a personal computer. This may have been one of the reasons why Apple later refused for so many years to lower the price of the Macintosh.

1984: Death and Birth
The videogame industry died in 1983; home computer sales boomed in Christmas 1983. Nevertheless, entertainment software sales for disk-based machines died in 1984. At the time, it made no sense. Those of us in the home computer industry had thought that, with the death of videogames, the mantle had been passed to a new generation: us. Instead, videogames dragged us down with them. Computer games were too closely identified in the public mind with videogames. The dramatic collapse of the videogame industry convinced everybody that this had been just a passing fad. They turned their backs on everything.

The damage was greatest in those areas of computer gaming that were closest to videogaming. Most of the smaller publishers, and all of those who had specialized in skill-and-action games, went out of business. Those that did survive did so by cutting costs and having something other than games, or at least something more serious, to keep them going. Broderbund had Print Shop; Electronic Arts had moved quickly to the C64; Sierra just barely eked by; SSI kept its head down and later moved to the D&D license. Many more publishers simply disappeared.

A great many good people lost their jobs and their careers in the collapse. A few, like Ann Kelsey, Jim Dunion, and Bill Carris lost their lives shortly afterwards, and I will always believe that it was the stress of seeing everything collapse that shattered their lives. An entire generation of game people was blown away in 1983-84. Only a fraction of that generation hung on, largely by fierce determination. There certainly wasn’t any money.

There were some positive notes during those grim years. Anybody who had C64 product did well. One publisher, Human Engineered Software, prospered during 1984-85 largely on the strength of their C64 line. Epyx also did well, for much the same reason. Infocom, publishers of high-quality text adventures, also did well during those years, largely because their products were so clearly distinct from videogames.

Atari had created the boom and it died in the collapse. The layoffs began in earnest in 1983. From a peak of 10,000 employees in December 1982, Atari fell to just 200 employees in July 1984. I was one of the lucky ones; I lasted longer than most, not getting the ax until March 1984.

In January of 1984 Jack Tramiel resigned from Commodore. Realizing that Commodore had been the chief cause of our woes, I posted a note on our department bulletin board: “The good news is, Jack Tramiel has left Commodore. The bad news is, he’s coming here!” Little did I know how right I was. In July, Tramiel bought Atari from Warner Communications for a song and laid off most of the remaining staff.

A New Age
Out of the ashes of the industry collapse of 1984, a new entertainment software industry began to emerge. Because the videogames business had been so thoroughly discredited, Nintendo was able to enter the market with no competition and rebuild the videogame industry in its own desired image. The C64 died within a few years, its place taken by the IBM.

The industry that emerged by the late 80s was a more sober, more conservative one. Chastened by the catastrophe that had struck down so many, the survivors went about their work with less self-assurance and a heightened sense that bankruptcy was just around the corner. But the story of those years must await another article.

_________________________________________________________________________________________________________

Plot Automation
David Graves
Copyright © 1991 David Graves

An automated playwright would allow interactive fiction plots to be developed on the fly in the player’s computer. At first look, it seems impossible to develop software that could generate plot on the fly — it seems the stuff of science fiction. The holodeck on Star Trek, for example, is a computer-controlled interactive fantasy that boasts automated plot generation. One might think that the hardware and software technology required to achieve this goal are decades away.

However, after having worked on plot automation for several years, I am convinced that the hardware and software technology required to develop an automated playwright exists today. Why, then, don’t we all have one? One major limitation is our approach in writing interactive fiction. I’m not saying that the existing form of IF is wrong, but I do feel that there is room to extend the model for IF. After all, the IF genre has hardly changed since it first appeared in the mid-1970s. Traditional interactive fiction, for example, has been plagued by the “plot branch tree” concept — few IF works have risen above this old paradigm.

In addition to changing the model we use for IF, I propose that we need to extend our concept of story to make room for new modes of writing. I believe that even if we (the development community) had a working software playwright right now, we would be at a loss to develop story materials to feed into it. This article addresses the paradigm shift required to develop and use an automated playwright. Perhaps we can overcome some of the mental limitations that keep us from realizing an automated playwright in our published works.

Why even bother developing an automated playwright? Interactive fiction attempts to draw the reader/participant closer, by allowing him some choices within the story. We have seen, though, how difficult it is for the IF author to relinquish control to the reader/player and still end up with an experience that tells a story. An automated playwright would bring new depth to our IF creations by fulfilling the will of the author in a place where he cannot be: within the home of the reader as he experiences the story.

This brings us to virtual reality, a newborn technology with much potential.  I see some parallels with the beginnings of the computer game industry.  Today, you can stand in a virtual environment and play catch with another person using a virtual ball.  Doesn’t that sound familiar?  Will virtual reality be a bigger, better environment for PONG?

I think we should be using an automated playwright in our current product, right now in 1991. Clearly, works of interactive fiction that utilized plot automation would have a great increase in replayability. Modern IF works may have a rich interactive story, but they are typically a one-shot experience.

One more reason to work towards plot automation: because it’s the natural next step in computer game evolution. Remember Aristotle’s elements of drama? Starting at the bottom of the hierarchy, you have spectacle (everything that is seen), then music (everything that is heard), diction (the selection and arrangement of words), thought (the processes leading to choices and actions), character (patterns of choice; actionable attributes), and plot (combinations of incidents making up the whole action). Early computer games allowed the player to manipulate objects in a physical world.  The focus was on the lowest elements of drama. Later we could simulate thought: the computer opponent would make choices leading to action. In recent years we have seen the emergence of character in games. Artificial Personality supports the illusion that there is a character who shows recognizable patterns to his choices; that he has attributes of his personality that are revealed to us by his actions. This illusion is so intoxicating that we willingly suspend all disbelief that the character and situation are fictitious, and thus we are drawn in. The only step left in game evolution is to allow the participant to have an influence on the plot, while an automated playwright ensures that the combinations of incidents create a well-formed action. When you look at it this way, the creation of the software playwright seems inevitable, doesn’t it?

To reiterate, the accepted model for interactive fiction is preventing us from making progress towards plot automation. What, then, are the limiting concepts that we seem to be locked into?

One: graphics depicting only physical world spectacle. This means that graphics are used to show you what the fantasy world looks like. Unfortunately, using graphics this way causes the plot to focus on the physical world. It becomes too easy for the story developer to focus on geography, because a “fork in the road” is the predominant example we see for decision trees in real life. This typically leads to a “travel-resistant” plot.

Two: plot as a decision tree. In a plot tree for most games today, there are failures at each of the “dead ends” of the tree. In early interactive fiction, the protagonist would die at one of these nodes.  In our more enlightened times, the player does not necessarily die when he digresses from the “true path”, but the plot dies.
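The decision-tree model described above can be sketched in a few lines of code. This is a minimal illustration, not anyone’s actual engine; the node texts and choice labels are invented for the example.

```python
# A sketch of the "plot as decision tree" model: each node offers fixed
# choices, and straying from the "true path" lands the player at a node
# from which the plot simply cannot continue.

class PlotNode:
    def __init__(self, text, choices=None):
        self.text = text
        self.choices = choices or {}   # choice label -> next PlotNode

    @property
    def is_leaf(self):
        return not self.choices

# A tiny tree with one intended ending and one dead end.
ending   = PlotNode("You recover the amulet. The story ends well.")
dead_end = PlotNode("You wander the swamp; nothing further happens.")
fork     = PlotNode("A fork in the road.",
                    {"left": dead_end, "right": ending})

def play(node, script):
    """Follow a fixed list of choices; return the node where play stops."""
    for choice in script:
        if node.is_leaf:
            break
        node = node.choices[choice]
    return node
```

Note that the structure itself cannot distinguish the satisfying ending from the dead end: both are simply leaves. The author must hand-label which leaves are “real” endings, which is one symptom of the scripting burden the article describes.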

Three: viewing plot as a static construct. In traditional stories, the plot is static.  It has to be: the medium is static.  Words on a printed page cannot change.  Pick-a-path books allow us to make decisions, but the work itself is still static, so the experience very soon becomes insipid and uninteresting. Computer technology allows us to create works that are not static, but our concept of “story as a static construct” leads us to create interactive fiction in the form with which we are familiar. This leads to over-scripting in our interactive stories. Then we must write a separate sub-story for each branch. Clearly, the IF author would have to write much more story than the traditional author if a work of IF is to have significant variance in plot.

Let’s look at each of these concepts in the traditional paradigm and ask how we might make a shift in our thinking. First: graphics depicting thought and character. This would allow works of IF that focus on characterization and personality rather than focusing only on the physical world. Recall Aristotle’s dramatic elements, with thought and character on the higher levels. While you cannot see “thought” in another person, their thoughts can be partially revealed to you by their face.  Their expression, the nuance of eye, lid, brow, jaw, and lip: each of these can provide a wealth of information, some of which may be conflicting or ambiguous. Similarly, you can gain insight into the “character” of a person by observing their expressions as they react to a situation. This is critical to the artistic advancement of our works.  We cannot have a rich interpersonal fantasy experience in a world which depicts only objects in a physical world, or a world which treats the actors as objects to be manipulated.

Next: Releasing more control of the plot. At the heart of interactive fiction is interactivity.  We present the player with choices, as a means to draw him into the story.  The player gets great pleasure from making these choices. A skillful IF designer will give the player the illusion that he has tremendous freedom of choice. However, it seems that we give the illusion of freedom to the player, then take it back again by using heavy scripting. In order to ensure that the player experiences the full emotional impact of the story, the author ensures that event A must precede event B. If we wish to give some freedom to the player and not take that freedom back again via scripting, then we must release some of the control over the plot of the story. Plot trees are the perfect model for organizing a heavily scripted work of IF; it follows that if you want to create a work of IF that is not heavily scripted, a plot tree is not the best structure for organizing your story. I suggest that plot trees are useful as an intermediate step in the development of a work of interactive fiction, just as you would write an outline as a step in composing a paper. From this tree, you would then develop a plot network.  Now the player may have a number of plot experiences, rather than a single plot path. There will still be some plot events that must precede others, but there will be more freedom than before.
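The difference between a plot tree and a plot network can be made concrete: in a network, each event carries only its prerequisites, so many orderings of events are valid rather than one. The sketch below is illustrative; the event names are invented, and a real work would have far more events.

```python
# A sketch of a "plot network": events constrain each other through
# prerequisites instead of being chained into a single fixed path.

# event -> set of events that must happen first
prereqs = {
    "meet_mentor":   set(),
    "find_map":      set(),
    "enter_castle":  {"find_map"},
    "confront_king": {"meet_mentor", "enter_castle"},
}

def valid_orderings(prereqs):
    """Enumerate every event order consistent with the prerequisites."""
    def extend(done, remaining):
        if not remaining:
            yield list(done)
            return
        for ev in sorted(remaining):
            if prereqs[ev] <= set(done):
                yield from extend(done + [ev], remaining - {ev})
    yield from extend([], set(prereqs))

orders = list(valid_orderings(prereqs))
# A strict plot tree would permit exactly one order; this tiny network
# already permits three distinct plot experiences.
```

Even with only four events, the player can reach the climax along several paths, while the author’s required precedences (the map before the castle, both threads before the confrontation) are always honored.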

However, a plot network is still a static construct.  You can draw a diagram of it on a single piece of paper.  It presents plot paths that the player may traverse, but the structure of the paths is itself static. When we look at a story this way, we are still seeing plot as data, and static data at that. If we could view plot as a process, rather than as data, then we could begin to rise above static plots. This is where the author begins to regain some control. Assuming that the author has released control over the strict sequence of events in a story, he can still influence the plot through the rules he defines for a given story’s plot *process*. The author crafts the story by controlling its content, not its plot. He selects the scope of the theme and the overall “message” of the work, which are exposed through individual plot events. The process by which these “plot units” are assembled is defined by the author’s rules. The rules are applied at run-time, in the player’s computer, taking into account all that has taken place in the story’s progress so far, which includes the choices made by the player. In effect, the player and the author write the work of IF together.

So, if we are able to view plot as a process, rather than as a static construct, then we are freed from seeing plot as a *sequence*. Plot emerges from a broad set of plot potentials. These potentials are loosely defined in terms of plot units (which are static data), and plot rules (which can interact dynamically). To exploit the potential of interactive fiction, it is important to not nail things down. The nature of a specific plot is unknown to the player, just as it is unknown to the author.  The ambiguity (in terms of the many variables and many rules) is what creates the opportunity for different experiences in the same “story” definition. The player gets to have a significant impact on the plot. The great increase in replayability is the icing on the cake.
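The split between static plot units and dynamic plot rules can be sketched directly. Everything here is an invented example under the article’s framework: the units, the rule conditions, and the story-state fields are all hypothetical.

```python
import random

# A sketch of "plot as process": plot units are static data, while the
# rules -- plain functions over the story state -- decide at run time
# which unit may fire next.

plot_units = {
    "betrayal":   "An ally turns against the protagonist.",
    "reunion":    "Two estranged characters reconcile.",
    "revelation": "A hidden truth about the king comes to light.",
}

def applicable(state):
    """Plot rules: which units fit the story so far?"""
    allowed = []
    if state["trust"] > 5 and "betrayal" not in state["history"]:
        allowed.append("betrayal")      # betrayal needs built-up trust
    if "betrayal" in state["history"]:
        allowed.append("reunion")       # reunion only after a rift
    if state["clues"] >= 3:
        allowed.append("revelation")
    return allowed

def next_unit(state, rng=random.Random(0)):
    """Pick one applicable unit, or None if the rules offer nothing."""
    choices = applicable(state)
    return rng.choice(choices) if choices else None
```

Because the state includes the player’s choices (here, whatever raised `trust` or gathered `clues`), two playings of the same “story” definition can assemble the units in different orders — the player and the author writing the work together, as described above.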

At this point it sounds like the automated playwright must be a huge program taking up vast resources.  How can we program that, let alone fit it into a microcomputer? It turns out that the playwright does not need to make decisions about each detail that happens in the story. In fact, the playwright can sit back and watch the story go by, tossing in a plot change once every five to ten minutes. You can get a tremendous amount of plot springing forth from personality state.  That’s what Artificial Personality is designed to do. It is pretty much accepted that Artificial Personality is achievable today.  Many story-games on the market show characters who can make simple plans and display simulated emotions. Thus, the playwright can focus entirely on high plot, since the Artificial Personality logic focuses on Aristotle’s thought and character. You can even allow conflict between the playwright and the Artificial Personality logic, which will allow for the generation of interesting plot conflicts. This represents the philosophical argument of free will versus determinism. The playwright represents determinism; it wants to see the plot go in a limited number of directions. The Artificial Personality module represents the thought and character of each of the agents in the story: thoughts and patterns of choice which might be in conflict with each other, and in conflict with the playwright. This is all right, though. Conflict is at the center of drama.
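The division of labor just described — moment-to-moment behavior from the Artificial Personality logic, with only occasional interventions from the playwright — keeps the playwright cheap to run. The loop below is a toy sketch of that architecture; the tick interval, the moods, and the intervention rule are all assumptions made for the example.

```python
# A sketch of the playwright/Artificial-Personality split: the AP
# module acts every tick, while the playwright intervenes only rarely,
# and may act against what the characters "want".

def personality_tick(state):
    """AP logic: characters act from their own emotional state."""
    state["log"].append(f"hero acts while {state['mood']}")

def playwright_tick(state):
    """Playwright: nudge the plot toward its goal -- here, by injecting
    dramatic tension whenever the story has gone slack."""
    if state["mood"] == "content":
        state["mood"] = "troubled"
        state["log"].append("playwright: misfortune strikes")

def run_story(minutes):
    state = {"mood": "content", "log": []}
    for minute in range(minutes):
        personality_tick(state)        # fires every simulated minute
        if minute % 7 == 6:            # playwright fires only occasionally
            playwright_tick(state)
    return state

story = run_story(10)
```

In ten simulated minutes the AP module acts ten times and the playwright only once, which is the point: the expensive-seeming component barely runs, so it fits comfortably on a microcomputer.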

Okay, so if an automated playwright existed now, how would people develop material for it? Here is my proposed process: Write a story.  Turn it into a plot tree. (In traditional IF, you would be done with the design at this step.) Cut up the tree, such that pieces can be reassembled in a variety of ways. You might develop a plot network as an intermediate step.  Eventually, though, you remove most of the static paths connecting plot units, replacing them with rules that suggest how plot units might fit together. In a heavily scripted story, the plot units would fit together only one way.  Speaking metaphorically, this is the equivalent of a jigsaw puzzle. In a loosely scripted story, the plot units would fit together in a variety of ways.  This is the metaphorical equivalent of building blocks. Next, examine your set of plot units and plot rules, looking for dead ends, and fill these in with additional plot pieces.  It’s important to do “path folding” so that a dead end leads you back into the productive mainstream of the story’s theme.
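The final step of that process — finding dead ends and folding them back into the mainstream — is mechanical enough to sketch. The follows-relation and unit names below are invented for illustration.

```python
# A sketch of "path folding": find plot units that nothing can follow
# (dead ends), then add connections leading them back into the
# productive mainstream of the story.

follows = {                        # unit -> units that may come next
    "opening":      ["quest", "tavern_brawl"],
    "quest":        ["finale"],
    "tavern_brawl": [],            # a dead end: the plot stops here
    "finale":       [],            # the intended ending, not a defect
}

def dead_ends(follows, endings):
    """Units with no successors that are not legitimate endings."""
    return [u for u, nxt in follows.items() if not nxt and u not in endings]

before = dead_ends(follows, endings={"finale"})   # ["tavern_brawl"]

# Fold each dead end back toward the story's mainstream.
for unit in before:
    follows[unit].append("quest")
```

After folding, the brawl no longer kills the plot; it merely detours the player before rejoining the quest, which is exactly the behavior path folding is meant to produce.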

It is not technology that is keeping us from integrating the player’s actions into a computer-assembled plot.  We are limited by the mindset we apply to this new area of opportunity. We cannot expect to move rapidly forward carrying the baggage of the traditional interactive fiction genre. By challenging our basic assumptions about the interactive fiction model, we can exploit new technologies such as plot automation.

_________________________________________________________________________________________________________

Computer Games Versus Videogames
Chris Crawford

Ten years ago, computer games and videogames were closely linked. Any big hit on the Atari 2600 was quickly ported over to the various personal computers. But then came the crash of 1984, and all that changed. Anything associated with videogames died. The only computer games to survive the crash were those that were clearly differentiated  from videogames. Computer game makers learned a hard lesson: steer a wide course around videogames.

But times have changed. Nintendo, Sega, and NEC are the deep pockets of the business. Where we skitter along, happy to sell 25,000 units and overjoyed to sell 100,000 units, Nintendo doesn’t even take much notice unless it can sell a million units. Profits for a single Nintendo game run into millions of dollars; profits in the computer games business are often paper-thin.

Another change is that the technological gap between home computers and videogame consoles has narrowed. The Apple II was far superior to the Atari 2600; the IBM PC was far superior to the 8-bit Nintendo; but an IBM PC with 1 meg of RAM, a hard disk, and a 286 is not a whole lot better than a 16-bit Super Famicom.  Clearly superior, yes, but not as far superior as the earlier cases.

So computer game designers have started to blur the dividing line between videogames and computer games. The allure of all that money is just too enticing for the underpaid workers of the computer games industry.

The videogames people are all too happy to have our work. The truth of the matter is, they need it badly. The videogame industry is in dire need of fresh new ideas. This has nothing to do with the creative talents of the videogame designers; it’s not as if they’re all a bunch of uncreative dullards looking hopefully to the brilliant and creative computer game designers. It’s really just a matter of economics. The cost of producing and marketing a videogame is enormous. They can’t afford to take chances on unproven products. The computer games industry, on the other hand, is more experimental, more freewheeling, more open to odd ideas. The cost of developing and publishing a computer game is far lower than the comparable cost of a videogame. Hence, computer gaming has a more experimental feel to it. We produce a wide variety of games every year, and we have to admit that there are a great many turkeys in that collection. The videogames people can pick and choose the rare winners that come out of the stew.

So we appear to have a happy situation that works for both sides. The videogames people provide the big money that keeps our industry alive, and we computer games people provide the fresh ideas that keep their industry going. Everybody wins, right?

Not quite. If the deal were that simple, then it would indeed be a great deal for all concerned. Unfortunately, the way the deal works in practice is a bit more complex. There are second-order effects here, effects that can ruin things for both industries.

The most important second-order effect is the way that computer games people anticipate the videogames business. If this were some sort of hermetically sealed double-blind scientific environment, in which videogames people delivered money and took ideas without the computer games people ever knowing what was happening, all would be well. But the fact is that we computer games people know that those videogames people are out there, checkbooks in hand. They’re just dying to give us money, if only we have a suitable design to sell them. And that realization affects our thinking. We start to think how we can make our games more commercially viable in the videogames market. We adjust our designs to be closer to what we perceive to be the videogames ideal. 

This, of course, is exactly the reverse of what we need. Our value to the videogames people lies in our freewheeling, experimental style, our willingness to try new and different things. I know this sounds screwy, but the more we try to please them, the less our value to them.

This is why I have ardently resisted proposals to increase the representation of videogames at the Computer Game Developers’ Conference. It’s not that I think that videogames are bad or that videogame designers are cretins. My fear is that, if videogames, coin-op games and computer games merge into some giant whole called “electronic games”, then computer games will be swamped by all the money and power that courses through these other industries.

That would be a catastrophe for both computer gaming and videogaming. Computer gaming would suffer because more time, energy and money would be devoted to making games that sell well in the videogame world.

Don’t dismiss this possibility: it is already happening. Publishers such as The Software Toolworks, Electronic Arts and Accolade are moving away from computer games and toward videogames. This is particularly striking because both EA and Accolade at one time declared that they had no intention of developing videogames. Now both companies are working hard to produce such games.

When publishers spend their development dollars on videogames, there are fewer development dollars for computer games. Many of us will be squeezed out of the business or transformed into videogame developers.

The sad thing is that the videogames people will suffer from this, too. They need our inexpensive R&D capabilities, but if most of the R&D money shifts over to videogames, the field of computer games from which they can pick and choose is narrowed. 

Thus, we reach a surprising conclusion: the best way to make the most money from the videogames industry is to maintain our distance from them. We want to be close enough to be able to make deals, but far enough away that our industry retains its experimental style. A tough and tricky balancing act it is, but if we err in either direction, we stand to lose a lot of money. If err we must, I would prefer to err in the direction of too much distance. Erring in that direction, we lose a few deals; erring in the opposite direction, we lose our heart and soul.

So what should we do — segregate computer games people from videogames people? Require separate bathrooms for the two kinds of designers? Keep them blissfully apart and ignorant of each other? No. The two sides need to be aware of each other’s existence. Videogames people need to know the latest games coming down the pike in the computer games business, and computer games people need to know enough about the videogames business that they can deal with its representatives intelligently. People from the videogames industry should read the Journal and speak at the conference, but they should come as honored guests rather than enfranchised constituents. I would certainly like to publish an occasional article about the videogames industry in these pages.

_________________________________________________________________________________________________________

If you’re a talented programmer, we want to speak with you.

We’re a financially solid company with some of the most talented IBM, 68000, & 6502 programmers around and we’re looking for a few more special people.

If you have that creative spark, call us. You’ll be glad you did.

Bethesda Softworks
(301) 963-2000 / 926-8010 [fax]

_________________________________________________________________________________________________________
Translucent SoundKit

a full set of development tools for music support on the PC. Runtime-loadable background drivers, data file translation from standard midi files, linkable play and query routines; Clean API, optional links to PKWare compression lib. Write one set of code for multiple devices! Full timbre & voicing support for each device. Currently supports AdLib and compatibles, Roland, internal speaker. Call for current pricing.

David Rosenbloom, Translucent
(718) 965-1245
(718) 965-1372 [fax]

_________________________________________________________________________________________________________
Music Composition

by an internationally recognized composer. Expressive and experimental work a specialty. Standard midi file output, or can target specific devices, e.g. AdLib, Roland, PC speaker, Amiga voices, etc. Re-orchestration of existing music for secondary targets. Compressed code integration also available on PC platforms. Discography, reviews, samples available.

David Rosenbloom, Translucent
(718) 965-1245
(718) 965-1372 [fax]

_________________________________________________________________________________________________________