Volume 4, Number 6. August 1991

Contents

Editorial: Another Year, Another Dollar
Chris Crawford

Letter
Greg Costikyan

CGDC Survey Statistics
Evan Robinson

Porting Techniques
Dave Menconi

The Role of Intention in Game Design
Dave Menconi

The Shape of Games to Come
David Shapiro

Annual Reminder to Get on to Genie

Modern Times
Chris Crawford

 Editor Chris Crawford

Subscriptions The Journal of Computer Game Design is published six times a year.  To subscribe to The Journal, send a check or money order for $30 to:

The Journal of Computer Game Design
5251 Sierra Road
San Jose, CA 95132

Submissions Material for this Journal is solicited from the readership.  Articles should address artistic or technical aspects of computer game design at a level suitable for professionals in the industry.  Reviews of games are not published by this Journal.  All articles must be submitted electronically, either on Macintosh disk, through MCI Mail (my username is CCRAWFORD), through the JCGD BBS, or via direct modem.  No payments are made for articles.  Authors are hereby notified that their submissions may be reprinted in Computer Gaming World.

Back Issues Back issues of the Journal are available.  Volume 1 may be purchased only in its entirety; the price is $30.  Individual numbers from Volume 2 cost $5 apiece.

Copyright The contents of this Journal are copyright © Chris Crawford 1991.

_________________________________________________________________________________________________________

Editorial: Another Year, Another Dollar
Chris Crawford

This issue closes the fourth year of publication of The Journal of Computer Game Design. Can you believe it? What was once a flimsy experiment has become an Industry Institution. 

Not much has changed with the Journal in the past year. The subscriber base has held steady at about 300. I’ve held the line on subscription rates at $30, despite increases in postal rates and printing costs. The basic format and layout haven’t changed. And, as always, I can never get enough articles. 

There will be some changes coming over the next few issues. I’ll be asking Aaron Urbina to try his hand at improving the look of the Journal. We can certainly afford a spiffier page layout. I also want to put together an index of back issues, something many of you have requested. We’ll also become a bit more commercial in several ways. First, we’ll be slowly expanding the JCGD Library of resources you just can’t get anywhere else. The second change is bigger:

Ads
I’ve decided to permit ads in the Journal — but with restrictions. I will permit only those ads that, in my sole and exalted judgment, are good for the industry. What in blazes do I mean by a fuzzy phrase like “good for the industry?” I can’t say with clarity. I can say that ads for job openings and tools and resources are good for the industry. Ads by people looking for work are good. On the other hand, corporate image ads and ads for games are definitely off-limits.

All ads must be submitted in plain ASCII. Sorry, no corporate logos or camera-ready art. I will print them in Journal format; the look and feel will be that of classified ads. 

Ad Rates
The ad rate will be $30 per column inch. That’s five lines of text, plus a header. (This paragraph and its header fill one column inch.) The minimum fee is $30. I will prune long-winded ads. Payment must accompany the ad submission. 

Miscellaneous News, Gossip, and Opinion
Cinemaware has apparently flickered out. After the big layoff last spring the word was that Bob Jacob would try to keep the company going in abbreviated form. But it now appears that even that was unsustainable.

Sierra has fired up The Sierra Network. It still has not come up to full speed, but this bold experiment should prove interesting. Sadly, the stock market does not seem to share Sierra’s optimism: Sierra’s stock price has slipped badly over the last six months.

Sierra also managed to ruffle a few feathers with its parodies of other games in the latest Space Quest. Some of the jibes were a tad harsh, and a few people in the industry were irritated. Sierra and LucasFilm have been trading friendly jibes in their games for some time, but this time it seems that Sierra went a little too far and cast the net a little too wide. Some apologetic phone calls seem to have smoothed the waters. I hope that the industry learns a lesson from this microscopic brouhaha: let’s cool it with the inside jokes and jibes, OK?

Speaking of LucasFilm, Secret Weapons of the Luftwaffe still hasn’t shipped. LucasFilm must be angling for some sort of vaporware record. LucasFilm itself is going through some big changes. Look for the company to move away from disk-based games and put more effort into videogames.

Speaking of videogames, sales of Nintendo cartridges have been falling since last Christmas. Our industry short-timers who view Nintendo as the Rock of Gibraltar are about to experience something exciting: the earth will move. It’s unlikely that the demise of Nintendo will mean a collapse like that of 1984 — Sega is moving to take up the slack. Much market chaos will ensue. Those who live by videogames die by videogames. 

_________________________________________________________________________________________________________

Letter
Greg Costikyan

In his recent Journal letter, Dave Menconi maintains that “The simple truth is that programmers won’t be put out of a job by any technological whizzbang.”

It is true that no advance in technology — better programming languages, CASE tools, and the like — will ever make programmers obsolete. But that doesn’t mean programmers will always be necessary for the specific task of creating new games. They’ll have plenty of other things to do.

Consider. No one suggests you need a programmer to write on a computer; a word processing program will do. And you don’t need a programmer to create music with a computer; MIDI devices and appropriate software are all the composer needs. Why is game design intrinsically different from these other art forms?

At present, games push the envelope of what’s possible with personal computers. Decent graphics require assembly language. So much data manipulation occurs in many games that they slow noticeably on some machines. At that, circumstances have improved dramatically in the last few years: there was a time when any decent game had to be written entirely in assembly language, while most are written primarily in higher-level languages today.

But will games always be pushing the envelope? As computers become faster and more flexible, will we always need programmers to squeeze out adequate performance? Conceivably, the answer is yes: As computers become more sophisticated, consumers may demand more out of the games they buy. Ever-better graphics, sound, interface sophistication, and depth may make the need for programming eternal.

But I don’t believe it. A game is not a program, as I can attest: I have designed more than twenty games that required no programming at all. A game is a work of art — or a bit of entertainment, if you prefer — that involves a player or players in contest, either with each other or with an underlying system. You can create games on paper, or you can create them in cyberspace, just as you can write on clay tablets or phosphor dots.

At some point, computing technology will reach a critical stage. Computers will become sufficiently fast and software tools sufficiently sophisticated that designers will be able to craft games that run fast enough, look good enough, and involve players well enough to be indistinguishable from games programmed from scratch.

Why not?

_________________________________________________________________________________________________________

CGDC Survey Statistics
Evan Robinson

Survey Methodology
Seven hundred copies of the 1991 CGDC Survey were sent out at the end of January 1991. As of 6 March 1991, 57 had been returned (8%). Included in those 57 surveys were 162 individual evaluations of 54 separate publishers. We divided the data into two groups: developer data and publisher evaluations. At the 1991 DevCon, Stephen Friedman presented the developer data and Evan Robinson the publisher evaluations. This follow-up article is intended to acquaint those who did not attend the two lectures with the basic data and to address issues raised during those lectures.

How Good Is The Data?
This was the main question raised at the lecture on publisher evaluations. There is no question that there are problems with the sampling regarding publisher evaluations. First, the sample is entirely self-selected. Second, with the exception of Electronic Arts (and perhaps Broderbund), the number of responses for any given publisher was so small as to make any projections from the data suspect. These two factors alone should make us suspicious about drawing broad conclusions from this data. However, the raw data itself is useful in the same way that asking other members of the game community is useful — it provides data points. So long as you view the data in that light (as though you’d talked to a bunch of people and listed their responses), the data is meaningful.

With 57 surveys returned, the number of responses for developer data is much better than that for publisher data. The sample remains self-selected, and we must therefore be concerned that the responding sample may not be representative of the development population as a whole. It’s entirely possible that this data is skewed toward successful or unsuccessful developers, or some much less obvious trait that could affect the data.

How Can We Improve The Data?
An obvious improvement for the developer data would be to eliminate the self-selection process. I plan to accomplish this by calling a randomly selected sample of the population for next year’s survey instead of relying upon people to respond via mail. This will help to remove concern about the developer data.
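
For illustration, drawing the phone sample ourselves rather than accepting mail-in responses is straightforward; here is a hypothetical sketch in a modern language (the population list, sizes, and seed are all invented for the example).

```python
import random

def draw_call_list(population, k, seed=1991):
    """Pick k developers uniformly at random (without replacement) to phone."""
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    return rng.sample(population, k)

# Hypothetical population roughly matching the 700 surveys mailed in 1991.
developers = [f"dev{i}" for i in range(700)]
call_list = draw_call_list(developers, 57)
```

Because everyone sampled is contacted directly, the resulting data reflects the whole population rather than just those motivated enough to mail a form back.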

Improving the publisher data will be considerably more difficult. Because not every member of the population has worked with every publisher, getting reliable data by random sampling would be problematic. I believe that our current method is close to the best we can achieve considering our limited resources. One possible improvement would be to find a way to encourage more respondents to the publisher survey.

So Is It Worth Reading?
Absolutely — so long as you recognize the limitations of the process. Statistical analysis is an attempt to predict the characteristics of a large population by examining a small sample. Generally speaking, the better selected the sample, the better the prediction. Our sampling on developer data is reasonably good, and can be improved. Our sampling on publisher data is not as good, and probably cannot be greatly improved. So I suggest you consider the publisher data as though you had talked to a whole lot of people and asked them what they thought of the publishers they had worked with.

Overall Publisher Impressions
The following are loose categorizations based purely upon visual examination of the distributions. As can be seen by the general categories below, overall the ratings were higher rather than lower.

Publishers scored above average on the following: Trustworthiness, Overall Quality of Software Publisher, Marketing & Sales, Testing & QA.

Publishers scored about average on the following: Developing In-House, Producers, Advances, Royalties, Accounting/Royalty Reporting, Speed of Payment (both Advances & Royalties), Overall Contract.

Publishers scored slightly below average on the following: Tools & Technical Support, Contract Negotiation, Willingness to Alter Contract.

It’s clear that the responding developers generally feel that publishers overall are doing a better than average job, with the small exceptions (remember that those are slightly below average) of Tools & Technical Support and Contract Negotiations. Collectively, I believe the publishers deserve applause for the current feelings of developers toward them. Of course, there is always room for improvement...

So What About Developers?
The following is collected from 57 responses representing something more than 57 people (some response sheets had data from more than one individual). In addition, it’s not always clear whether respondents completely understood some of the questions (it was not uncommon for the percentage totals on the two expense questions to total significantly less than or more than 100%). However, the sampling on this data is better than the publisher data, and our main concern is the self-selection.

What Do We Do and Who Are We?
Twenty-six respondents described themselves as programmers, 6 as artist/musicians, 29 as designers, 13 as managers, 2 as quality assurance, and 6 as other. Obviously, many people consider themselves to perform multiple functions. Twenty-three respondents described themselves as (co-)owners, 26 as independent contractors, and 3 as employees. Fifty respondents said they or their company developed original games, 23 said they did conversions of games developed by others, 10 said they published games, and 7 said they distributed games.

The mean age of respondents was 35, with a distribution covering a range from 23 to 57.

Twenty-six respondents said FRP games were their favorite, 16 combat simulations, 9 sports games, 20 action games, 19 non-combat simulations, 28 strategy, 10 other, and 1 person doesn’t like to play computer games.

The data regarding expense and income percentages is rather suspect, since some responses had total expenses ranging from 85% to 120% of the total, and income from 90% to 105%. The distributions are widely variable, but the mean expense percentages were: 17% Design, 43% Programming, 18% Graphics, 6% Music, 8% QA, and 12% Management. Mean income divisions were: 17% Design, 39% Programming, 16% Graphics, 6% Music, 9% QA, and 28% Management. (Note that even these means sum to more than 100%.)

Forty-six respondents said they distributed on floppy disks, 18 on ROM cartridges, 4 in Coin-Op games, 2 in Handhelds, 6 on CD-ROM, 11 Online, and 4 Other.

If there were a “mean” developer somewhere out there, s/he would have made about $35,000 from electronic entertainment in 1990. It would have taken 2 1/2 months to sign a contract with a publisher. That contract would have included about $42,000 in advances against an 11% royalty. Those advances would be paid out about 2 weeks after they were due. The royalties would be paid quarterly, between 30 and 45 days after the end of the quarter. It’s probable that our developer would get some royalty payments.

The contract would probably specify that the developer got to keep the copyright, but the publisher would almost certainly not have to meet any sales quotas to keep the license to publish the product. Source code would probably be provided to the publisher.

If our hero(ine) were to sign a contract to convert a program written by someone else, s/he would receive about $24,000 to do the work, and would probably get a royalty.

_________________________________________________________________________________________________________

Porting Techniques
Dave Menconi

In 1985 I went to work at CRI, a small software engineering firm that had a relational database manager on the HP3000 (a minicomputer). The HP3000 had reached the end of its useful lifetime and so CRI wanted to port the product to a new computer. Not content to merely port it to a single new platform, they resolved to port it to Ada and thereby port it to any number of computers. The program had 200,000 lines of Ada and we eventually ported it to five different minicomputers. After the first machine, the four subsequent machines required the conversion of only 200 lines of code (only 0.1% of the total system).

Compare this with the process of porting in our industry. For game developers, porting is a matter of writing a completely new program. Almost 100% of the code will be rewritten during the porting process. This makes porting an expensive process.

Of course, there’s a reason for the difference between the two industries. Relational databases do not require high-speed animation and sound and therefore can use generic, easily ported graphics and sound.

Nonetheless, I was impressed by the efficiency of the porting process at CRI and came away convinced that the process of porting games could be made more efficient.

There are two common approaches to porting games. The simplest one has the ports implemented serially by the same person. After you’ve written the Macintosh version, you write the IBM version, then the Amiga version, then the Atari ST version, then the Sega version...you could spend the rest of your life doing one game! The flaw with this scheme is that it requires one person to master the intricacies of each target machine. This non-specialized approach is just not efficient enough.

The second common approach is to turn the source code over to a port specialist. Presumably the specialist can do the job more efficiently than a general-purpose coder. There are several problems with this. First, the port specialist needs stable code, so the porting process cannot begin until after the original program goes final. A few years ago, the publisher would release the original version and then release the ports as they became available. This doesn’t work anymore, because the higher costs of marketing games make it prohibitively expensive to dribble them out. You’ve got to hit the streets with all versions simultaneously.

Another problem with serial porting is that ofttimes a trivially simple feature on Platform A is hell to implement on Platform B. With a little feedback from the port programmer, the game designer could have avoided that boondoggle in the first place, but by the time the port programmer sees the code, it’s too late to change the design.

My approach
My opportunity to put my money where my mouth was came in the form of an offer from Chris Crawford Games to port a Macintosh game to MS-DOS.

Chris and I decided to try something different: a more closely parallel process whereby my work lagged Chris’ by about a month. This gave us an instant benefit. As Chris designed subsystems for the game, he consulted with me as to the practicality of implementing them on PC-clones. Sometimes I would steer him away from a course that would crash my code into the rocks. This tight feedback loop increased overall project efficiency.

Harkening back to my days at CRI, I proposed a semi-automated approach to porting Chris’ Mac code. Chris was skeptical but gave me a green light.

There were three main problems I had to solve. First, as much of the source code as possible had to be translated automatically. Second, the system had to isolate those sections of code that could be completely compiled without human intervention from those areas that a programmer would have to change. Third, a way had to be found to identify the specific lines of code in the non-automated part that had most recently changed in the original source code (i.e. those lines that would have to be modified in the target source).

The automated translation was accomplished by the combination of a program I call “Mogle” and a set of routines that emulate the simpler functions of the Macintosh Toolbox on an MS-DOS machine. All Mogle did was translate some of the keywords (e.g. “&” in Think Pascal translates to “AND” in Turbo Pascal) and comment out those lines that contained calls that the rest of the system couldn’t handle. Fortunately my “toolbox” did not have to be as extensive as the Macintosh version, because Chris didn’t use everything in it, nor did it have to duplicate all the functionality of the Mac version. For example, the Mac has a heap in which blocks of memory can be relocated to free up space (garbage collection). This is certainly a good thing but not absolutely necessary — I simply left garbage collection out.
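
Menconi doesn’t reproduce Mogle’s source, but the pass he describes — substitute keywords, comment out any line whose calls the toolbox emulation can’t handle — can be sketched in a few lines. The sketch below is a modern Python illustration; the keyword map and the unsupported-call list are hypothetical stand-ins, not the original tool’s tables.

```python
# Hypothetical sketch of a Mogle-style translation pass.
# The tables below are illustrative, not the original tool's.
KEYWORD_MAP = {
    "&": " AND ",  # Think Pascal boolean operator -> Turbo Pascal keyword
    "|": " OR ",
}
# Calls the toolbox emulation can't handle; these lines are commented
# out and flagged for the port programmer to rewrite by hand.
UNSUPPORTED_CALLS = ["COPYBITS", "NEWCONTROL"]

def translate_line(line: str) -> str:
    """Translate one source line, or flag it if it can't be handled."""
    if any(call in line.upper() for call in UNSUPPORTED_CALLS):
        # Comment the line out in Pascal syntax and mark it for hand porting.
        return "{ PORT BY HAND } (* " + line.rstrip() + " *)"
    for old, new in KEYWORD_MAP.items():
        line = line.replace(old, new)
    return line

def translate(source: str) -> str:
    """Run the keyword pass over an entire source file."""
    return "\n".join(translate_line(ln) for ln in source.splitlines())
```

A real pass would also have to respect string literals and comments; this sketch only shows the shape of the two-part strategy (mechanical substitution plus flagging).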

 The major things that I couldn’t translate were calls to COPYBITS (and other calls dealing with raw graphics such as those controlling the color system) and the calls to the Control Manager as well as scattered calls to other subsystems. Many things, such as windows, were not supported but were simulated. So, for example, you could open a window with OPENWINDOW and it would look to the software like it was open but there wouldn’t really be a window there — drawing in the window would just draw on the screen.

 Obviously this system would not port an arbitrary Mac program (or even a large subset of possible Mac programs). What it would do, however, is to port a game that opened a single window and generally concerned itself with doing “gamey” things (as opposed to “Mac-ish” things).

 The second requirement was met by Chris reorganizing the code he had written to date (he had a program running — but not quite a game yet — before I was even hired) into “generic” modules that had no calls to the Macintosh toolbox and “Mac” modules that had such calls. In the end, the division put about 35% of the code into the generic modules — the modules that could be processed automatically. There are two reasons for this low percentage. First, we made no attempt to analyze exactly which Macintosh toolbox calls were responsible for pushing the most routines into the untranslatable modules. Had we done so we might have simulated those calls or at least managed to isolate them more. Second, Chris’s code was peppered with COPYBITS calls. This little gem does everything you could ask of a graphics moving routine — it scales, it translates, it XORs and so forth. I could do everything graphically that I needed to make the game work but not the way COPYBITS did it. I could not support all the functionality of COPYBITS so I had to rewrite each use of it by hand. Chris’s code has 210 COPYBITS calls. Had we eliminated COPYBITS as a problem we would have at least halved the hand processing required by the porting.

The third requirement — a way to identify changed code — was met by a file comparison utility. At first we tried a system whereby Chris marked each change he made to the code. This system stumbled over the inevitable unmarked changes, so we abandoned it in favor of the comparison utility. Each time Chris sent me a new set of source code, I compared it to the previous set. Presumably the literal differences between one version of his code and the next would be analogous to the functional differences between the current versions of the Mac and the MS-DOS games. This process has worked pretty well, but now, at the end of the project, when much of the target source has drifted away from the original source, it can take many hours to figure out the appropriate change to make based on the difference between one version of original code and the next. Comparing the Mac code to the MS-DOS code directly was out of the question, because that would generate so many differences that it would be difficult to tell which, if any, were functionally different and which were merely syntactically different. I’m reasonably certain that I tracked the Mac code at least 90% of the time. But I’m worried about that other 10% — we’ve introduced bugs into the system that can only be found by Quality Assurance. Creating problems for QA is not my idea of good software engineering.
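
The file-comparison step can be sketched with Python’s standard difflib (a stand-in for whatever comparison utility was actually used in 1991): diff the previous and current original source, and report only the non-matching hunks, which are the regions the porter must inspect and mirror by hand in the target source.

```python
import difflib

def changed_hunks(old_src: str, new_src: str):
    """Return the regions of new_src that differ from old_src.

    Each hunk is a (change-type, lines) pair; these are the lines the
    port programmer must re-apply by hand in the target source.
    """
    old_lines = old_src.splitlines()
    new_lines = new_src.splitlines()
    matcher = difflib.SequenceMatcher(None, old_lines, new_lines)
    hunks = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag != "equal":  # 'replace', 'insert', or 'delete'
            hunks.append((tag, new_lines[j1:j2]))
    return hunks
```

This also shows why diffing successive versions of the *same* source works while diffing Mac source against MS-DOS source does not: between versions the unchanged bulk matches exactly, so only the functional changes surface.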

All in all, although we didn’t meet my expectations, I think the effort of automatic porting more than paid for itself. The MS-DOS version will be finished about 4 weeks after the Macintosh version. The cost of the port was a bit higher than normal, but some of this cost can be attributed to startup costs. Moreover, the greater expense of this process can be justified by the greater quality of the result. It’s a high-quality port, having almost all of the features of the Macintosh version.

What I would change next time
There were some things we could have done better. Most important among them would be to organize the Macintosh code in such a way that the non-portable code is isolated. The best example is our old friend, COPYBITS. Although there are 210 calls to COPYBITS, there are only about 10 things that we need to do in the game that require moving graphics to/from the screen. If each of these 10 functions had been put into a separate routine then the code that calls the routines would be translatable even though the code of the functions themselves would not be.
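
The isolation pattern described above can be illustrated with a hypothetical sketch: route every screen operation through a handful of named, game-level routines, so the hundreds of call sites translate automatically and only this one small module must be rewritten per platform. The routines below are invented examples operating on a plain pixel buffer, not the game’s actual code.

```python
# Hypothetical platform layer: the only module rewritten per platform.
# On the Mac these would wrap COPYBITS; on MS-DOS, direct screen writes.
# Callers never touch the platform blitter, so their code ports automatically.

def draw_sprite(screen, sprite, x, y):
    """Copy a sprite's pixels into the screen buffer at (x, y)."""
    for row, line in enumerate(sprite):
        for col, pixel in enumerate(line):
            screen[y + row][x + col] = pixel

def scroll_up(screen, dy):
    """Return the buffer shifted up by dy rows, with blank rows exposed."""
    width = len(screen[0])
    return screen[dy:] + [[0] * width for _ in range(dy)]
```

With ten or so such routines covering everything the game actually does with graphics, the 210 raw COPYBITS call sites collapse into hand-porting just this file.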

 Additionally, there were many cases, even toward the end of the project, when code was written from scratch that should have been built from smaller, more generally useful routines. This process of building on general routines is vital to any programming endeavor but it is all the more valuable to an automatic translation system because, once the low level routines are ported, the rest of the code slides right into place automatically.

Lag Time
One of the crucial decisions that must be made in a project like this concerns when to start the port. Too late and the value of feedback is lost. Too soon and the porter begins to stumble over the original programmer. There were times when I was treading on Chris’s heels in this way. Chris’ code had not stabilized when I began working on it, and my code suffered from trying to follow the gyrations of Chris’ code at an insufficient remove.

I started working at Chris Crawford Games about two months after Chris started writing code. We allocated 12 weeks for me to learn the MS-DOS operating system and PC-compatible hardware; in practice, I was writing code that eventually found its way into the game after only 6 weeks. So practically speaking, I started the project about 15 weeks behind Chris. I will end the project about 4 weeks behind him. 

But there is another factor that enters into the equation. Each hour spent making the porting system more efficient (i.e. making more of the original source translatable) is worth many hours later in the project. So an extra week spent early on the porting system (instead of on the porting itself) will save several weeks throughout the project.

So what is the ideal lag time? I think six months is probably best, and that I could catch up by four months over a nine-month project. Any shorter and the MS-DOS product will hit QA without being fully put together.

 On the next game I hope to attain my original goal of 75% translatable code. I have several advantages. First, I have several major parts of the Macintosh toolbox either functionally duplicated or simulated. Second, we know where the weak points are and can more effectively isolate them. Third, Chris and I have hammered out a better system of teamwork for porting. Most importantly, I know where my lag time will be best spent and so can make the best use of it. I expect the next project to go much more smoothly and, as a consequence, result in a much better product. And, ultimately, that’s the goal of any porting effort.

_________________________________________________________________________________________________________

The Role of Intention in Game Design
Dave Menconi

“Games should be fun!”

“Simple, Hot and Deep!”

“Easy to learn, difficult to master.”

These platitudes may well tell a publisher how good a game is after it’s been created (or at least thought of) by the game designer, but I submit that they provide little guidance to the designer who wishes to create a good game. For example, given a choice between a fast-paced, frenetic game and a slower, more thoughtful game, these words of wisdom give us little help. Both styles could fit within all three sayings. But how does one determine what choices to make for a particular game design?

One might let one’s personal taste be a guide. I may prefer slow, thoughtful games and therefore I should choose that option. But this implies that a given designer can only design one game — his preferences will inevitably cause him to develop his favorite game. This is OK for a hobbyist, but a professional needs more breadth to his skill than this strategy allows. Besides, most of us may have strong opinions about some aspects of games, but few of us have firm preferences about everything. What if you like slow games sometimes and fast games other times? And we haven’t even considered the possibility that not all choices fit all situations — a frenetic pace may be out of place in some games.

So how DOES the designer make decisions about a game? I believe that, in order to make consistent choices — and therefore have a game that is self-consistent — one must have a clear purpose in mind for the game. Furthermore, the purpose must be carried throughout the design of the game. I call this process of sticking to the purpose “intention.” If you have strong intention, the game will consistently move toward a purpose, all parts working in harmony. Weak intention results in games that seem fuzzy or have elements that were added to the game “just because other games have them.”

What do I mean by “purpose”? You might imagine that I am speaking of “Social Responsibility” or some other lofty reason for making a game. Or perhaps you think I mean “making money” which, as we all know, is the REAL reason we make games. All of these and more. I’m talking about the effect the game has on the most important part of the software-hardware-user system: the player. Everything comes down to the user. He is the one who gives you money (or doesn’t), who will have fun (or won’t), who will be transformed and instilled with social responsibility. In short, whatever effect you hope to achieve, it can only be achieved in the player. So the purpose of a game must be to give the user a particular experience.

But wait a second. By that definition, “a game should be fun” is a perfectly fine purpose. And so it is; but it’s awfully vague. It’s like saying that a car should get you from here to there. By that characterization, a VW Bug is the same as a BMW — or a Mack truck! Let’s be more precise. HOW do you plan to give the player fun? Will he be excited, interested, frightened, amused? Will he laugh or cry or be confused or WHAT?

Once you have answered this question you have a good idea of where you are going with the game. Now you must carry through with your intention. Each time you make a decision, each graphic, every line of code must reflect that purpose. A sad game should not have happy graphics; an exciting game should not be slow. If you keep the game’s purpose before you at all times it will guide you through the most difficult parts of a project. Somewhere in the middle a developer goes into a deep dark tunnel. All he knows is the code he is working on. All he sees are algorithms. Without a guiding intention, how will he know when he is finished? How will he see which features to add and which to throw away? But intention can form the light at the end of the tunnel and guide him safely out of the dark middle of a project into the light of completion.

The complementary concepts of purpose and intention are useful tools to assist us in designing and developing games that are self-consistent and that put foremost the most important element of any game: the player. 

_________________________________________________________________________________________________________

The Shape of Games to Come
David Shapiro

Copyright © 1991 by Dr. Cat

I first met Dan Bunten at the Boston Computer Museum.  They were holding a celebration of the 25th anniversary of Space Wars.  We made an interesting contrast, serving as extreme examples of two different types of gamers.  I am a person who plays games primarily for the intellectual challenge of figuring out how to win.  Dan, on the other hand, played games mainly as a form of social interaction with friends and family.  This was a new idea to me at the time.  While wargamers might tend more towards the strategists, I think amongst the general public social gamers are far more prevalent.

In a game designed for a single computer, it’s easy to put in various kinds of intellectual challenge.  Any attempt at simulating social interaction, though, comes across as very shallow when compared to the real thing.  On a multi-user system, however, interaction with other users can be the primary focus of a game.  I believe this will prove attractive to many people who simply weren’t interested in computer games before.  Even primitive chat systems can be very addictive to some.  Like telephones, computer networks don’t just serve as a new forum for socializing, but also provide for new kinds of interaction that didn’t exist before.  Just as a 50-channel cable system makes entertainment instantly available at any time, networks let you turn social contact on and off, like something that flows out of a faucet.  There’s always somebody logged on somewhere to chat or play games with, and “leaving” abruptly doesn’t carry the same social stigma that it does in real life.  Also, the lack of visual contact forces people to find different cues to base their first impressions of each other on.  It’s a whole new social environment.

What kinds of games will we see on the networks?  Eventually I think much of the gaming that goes on will consist of fairly simple 2-4 player games, many of them adaptations of popular board and card games.  These can serve as a focal point, something to do while chatting rather than an end in themselves.  These kinds of games aren’t extremely popular now because the current networks have populations of users whose tastes are very different from those of the general public.

Another popular kind of game will be “genre simulators”, kind of like what was envisioned in the movie Westworld.  Current gamers are heavily biased towards fantasy and science fiction, but eventually we’ll see all the genres of popular fiction represented - detective stories, romances, spy thrillers, comedies, etc.  Current games also have a strong focus on combat, like the role playing games they evolved from.  These will continue to be central elements in some games, but others will focus on other forms of human interaction.  Besides fighting, people in real life have been known to chase, race, flirt, trade, cooperate, compete to achieve more, trick, play jokes on, do favors for, and try to impress each other, just to name a few. Games that provide a rich context for the expression of these forms of social interaction will do quite well.  We may even see the development of some new forms of social interaction that don’t make sense in the physical world, such as the phenomenon of “lurking” in chat systems.

There have been games, such as the MUDs (multi-user dungeons) on Internet, and some of the games on GEnie, that allow a player to describe any conceivable action in words, and print it on the screens of all the other players.  These are an important early step in the evolution of multiplayer games.  But the free-form games will ultimately prove less popular than those with a certain amount of structure imposed on people’s actions, for the same reason people play D&D and not some kind of paper-and-pencil version of cowboys and indians.  Rules systems provide for a sense of accomplishment by making some tasks significantly more difficult than others, and also resolve the old “Bang, you’re dead - No, I’m not!” dilemma.

There will be several other kinds of games that prove popular as well.  When the technology is good enough to support it (and it’s getting close), multiplayer action games will probably do quite well.  Imagine a 10 player free-for-all in Wing Commander!  But there’s one element of design philosophy that most network games will have in common.  In the single player game market, there’s a dichotomy between the storytelling games, such as Sierra’s adventures, and the process intensive games, things like flight simulators or wargames.  People argue about the merits of allowing more user control and interaction versus creating a game with more emotional flavor and believable characters.  But this is like arguing whether soap operas or football games are a better use of television.  Clearly there’s a demand for both, and one should just work in the genre one has a personal preference for.

In multiplayer games, however, the storytelling game as it exists on home computers is a misfit.  Setting up a murder mystery doesn’t work very well when you have 43 Sam Spades running around through town.  If there are computer controlled characters, they have to fade more into the background, as the interaction between real people becomes the primary source of interesting events and story elements.  So the design of any kind of multiplayer game does not involve the concoction of story lines, quests, and characters, but rather focuses on creating settings, objects, and rules of interaction - the stage and props with which the players will create their own drama when you invite them in.

Most games will stop at providing a setting and letting people do as they please with it.  Where there will be a role for the writers and storytellers, it will lie not so much during the design and implementation of a game, as it does now, but afterwards, during the actual play.  Some games already depend on their designers as constant sources of fresh ideas, acting in symbiosis with the program to provide the players with a hybrid man-machine gamemaster.  These are some of the first of the electronic centaurs, a new breed that will dominate the world in the 21st century, because a man (or woman) and a machine working together can accomplish things that neither could ever do alone.  For a role playing game, the people increase the depth of the game, and the computers provide the breadth.  As the user-interfaces available to the online gamemasters develop, we’ll see that games of this type are among the most ambitious in scope.  How they will fare commercially in comparison with simpler multiplayer games remains to be seen.

_________________________________________________________________________________________________________

In Memoriam: Tod Zipnick

On Friday, July 5th, Tod Zipnick succumbed to Hodgkin’s Disease. Tod was the founder and CEO of ICOM Simulations, an entertainment software developer and publisher in Wheeling, IL. ICOM Simulations first burst upon the scene in 1985 with Deja Vu, a graphic adventure with a difference: the objects in the image were active. To act on them, you pointed and clicked on them. It was a revolutionary development that, I am sad to say, has not been widely copied. ICOM followed up Deja Vu with Uninvited, Shadowgate, and Deja Vu 2. Several of these products were later licensed to NEC as cartridge games. ICOM also published TMON, a debugger for the Macintosh that enjoyed great popularity among Macintosh programmers. At the time of Tod’s death, ICOM was working on a dramatic new CD-ROM product.

Tod was also well-known for his generosity. He helped many people get started in the business, and financially assisted many struggling individuals and companies. Tod will be remembered for his generosity as well as his creative contributions to the industry.

_________________________________________________________________________________________________________

Annual Reminder to Get on to Genie

The JCGD has its own RT (Round Table) on GEnie. As a subscriber to the Journal, your time in the Game Design RT is absolutely free! 

Logging On to GEnie
GEnie can only be accessed during off-hours: weekday evenings after 6:00 PM, and all day weekends and holidays. Set up your telecommunications software to dial 800-638-8369. 

This is a national GEnie access number; you will want to find your local access number once you are logged on. You may need to prod it with a carriage return. At the prompt "U#=", type "XTX99623JCGD" and a carriage return. This is our secret ID number that identifies you as a JCGD subscriber and allows you to sign up for GEnie without paying the normal sign-up fee. 

You will still need to supply a credit card number, and you will still have to pay normal charges for the time that you spend in other areas of GEnie; however, you will probably find these other areas well worth the expense. 

Once you have logged on, you will need to send an EMail message to get set up. So move to the mail section of GEnie. Simply type MAIL<CR>, or navigate through the menu system to get there. The menu system for GEnie is quite clear and you should have no problem navigating your way to the mail area. Once there, select menu item 6 (Enter a Text Letter Online) to send a mail message to a fellow named "GM". This is Richard Mulligan, our immediate host. Carbon Copy your message to "CCRAWFOR" (that's me). The subject of the letter should be "JCGD Free Flag". The content should be a short sentence such as, "Hello, here I am." When he receives your letter, Richard will set the flag in the GEnie system that ensures that your time in the Game Design RT is not billed. When I receive the copy of your letter, I will open up the gates that allow you into the Inner Sanctum of the Game Design RT. 

Don't be intimidated by all this; the GEnie system is completely menu-driven, so it is very difficult to screw up, even if you don't have a manual. Moreover, if you need help at any point, just type "HELP" and hit the carriage return and it will explain your options.

The Game Design RT
At this point, you should type "JCGD" to go to the JCGD RT. It will ask you if you wish to enter the JCGD area; respond affirmatively and enter. The act of entering the JCGD RT tells the GEnie system that you are part of the JCGD database. This is important, because it's the only way that I can get you into the private areas. Select menu item 1 (Bulletin Board) to enter the Bulletin Board area of the RT. Look at a message or two, then leave. 

The time that you spend in this first session will be billed against your credit card, so try to make it short, although the hourly charge of GEnie is so low that you needn't rush. You'll have to wait 24 hours for Richard Mulligan to set your JCGD Free Flag. 

When you return the next day, Richard Mulligan will have set your JCGD Free Flag and I will have set access to the hidden areas of the RT. Type "JCGD" to get to the RT and, at the next prompt, type "Y" to enter the bulletin board section. 

You will find yourself in Category 1. There are fourteen categories. However, only the first ten are open to the public; categories 11-14 are hidden and secret. When you first enter the Bulletin Board area, GEnie won't know that you are one of the Chosen Few and will not let you into these categories. As far as you will be able to tell, categories 11-14 don't exist. Three conditions must be met in sequence before you can enter the Inner Sanctum: 

1) you must copy me on the JCGD Free Flag EMail; 

2) you must enter the Game Design RT; and 

3) I must set the entry flag for you. 

I've had a lot of problems with people failing to follow this sequence and then complaining to me that they can't get into the hidden areas of the RT. As soon as I receive your "JCGD Free Flag" mail, I will attempt to grant you access to the hidden areas. If you have not entered the JCGD RT, then the software will not recognize your name in my request, and will deny access. So please, make certain you follow the above sequence. 

It will take a day or two for me to read my mail, discover your message, and set the flags to let you into the hidden areas of the RT. What's more, I can't do it until after you have entered the Game Design RT once. There is a way to know when you are "in": try the command "SET 5". This command will attempt to move you into category 5, one of the hidden categories. If you are in, then it will comply; otherwise, it will blithely lie to you that there is no such category. Once you are in, you can use either the SET command or the BROwse command to read through the hidden categories. Be warned, though, that the CATegories command will still not list the hidden categories. 

Guidelines
Lastly, some suggestions for gentlepersonly (gag!) behavior. We are guests on GEnie and I would like to make our presence a positive contribution to GEnie, not a liability. The public area of the Game Design RT has been set up to provide regular (read: paying) customers of GEnie with useful information. Please, take the time to contribute to the ongoing discussions there. 

While you are in the public areas, please don't give away the fact that there are private hidden areas that are not accessible to the general public. It only upsets people to know that they are Unchosen. Moreover, I don't want to repay GEnie's generosity by telling its customers that they can reduce their GEnie spending by subscribing to the Journal. So let's just keep the existence of the hidden areas our own little secret, OK? 

Please take advantage of the other Round Tables on GEnie. There are a lot of them, covering a great deal of material, and I am sure that you will find some of them interesting and useful. You will have to pay for the time you spend in these RTs, but you will undoubtedly find some that are well worth the money. 

As always with any telecommunications system, you must be very careful to avoid the problem of misunderstood communications. 

Remember, the fine nuances of voice intonation and facial expression are lost in the pure ASCII world. Offhand witticisms offered in jest can read like vicious snarls; terse rebuttals can come across as cold anger. It's very easy for well-intentioned people to end up at each other's throats. 

To prevent this, always word your messages in as conciliatory and professional a tone as possible. Be wary of sloppy language. If you crack a joke, terminate it with the three characters semicolon - hyphen - close paren. ;-) They represent a sideways smiling face and say, “That's a joke, friend!” I guarantee, if you don't clearly mark it as a joke, somebody will take offense (e.g., “Oh, yeah? Well, for your information, buster, I happen to raise chickens for a living and I can assure you that I never allow my chickens loose near roads, and so they don’t cross them!”)

The problem is trickiest when you find yourself in the thick of a hot debate. Most people have difficulty maintaining strict standards of professional expression on bulletin boards. When that bastard on the other side of the wire lets fly with a particularly pointed broadside, it’s hard to keep your cool. All too often you shoot back a furious reply laced with juicy insinuations and clever put-downs. This is called flaming. Don’t do it—flames, like forest fires, seldom just burn out. They grow on you. Pretty soon the whole board is one raging conflagration. 

We don't want to stifle honest intellectual debate. Our profession is still young and uncertain; there is much room for major differences of opinion between intelligent people. We want those differences of opinion to get a full airing. We need a demolition derby of ideas, a barroom brawl of opinions. Please, get in there and fight eloquently for your beliefs — but keep it on an intellectual plane, not a personal one.

I urge you to join the Game Design RT on GEnie and participate in the discussions there. Our experience with the JCGD BBS showed that the community of users benefited greatly from the discussions there. Our hope is that a much larger community will be able to crystallize on GEnie. I hope to see you here!

_________________________________________________________________________________________________________

Modern Times
Chris Crawford

The time has come to acknowledge a fundamental shift in our industry that has been developing for some years. As I observed in an essay last issue, games are getting bigger. There’s another side to this: the development cost of a computer game has increased dramatically. This is changing every aspect of our industry.

In the early 80’s, computer games were all produced by lone wolves. Development times were three to six months. Nasir Gebelli or Bob Bishop would grind out several games a year. The development cost of such games was little more than the programmer’s time, and the return could be enormous.

But by the mid-80’s, the boom years were over and there was no easy money to be made. With each passing year, the development costs grew larger. Small teams began working on games, and lone wolves became rare. The following table summarizes my estimates of typical development costs for computer games:

Year   Typical Development Cost
1985   $25K
1986   $35K
1987   $50K
1988   $70K
1989   $90K
1990   $120K
1991   $200K

The point of this table is that costs have shot up dramatically in the last two years.
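To put a rough number on that climb: the figures in the table work out to an average growth rate of roughly 40% per year over the six-year span, with the steepest jumps coming in 1990 and 1991. A quick back-of-the-envelope check (sketched in Python purely for illustration; the dollar figures are the article's own estimates, not hard data):

```python
# Typical development cost estimates from the table above, in thousands
# of dollars. These are the article's estimates, not measured data.
costs = {1985: 25, 1986: 35, 1987: 50, 1988: 70,
         1989: 90, 1990: 120, 1991: 200}

span = max(costs) - min(costs)                       # 6 years
# Compound annual growth rate from first to last year.
growth = (costs[1991] / costs[1985]) ** (1 / span) - 1
print(f"Average annual growth: {growth:.1%}")        # about 41% per year
```

At that rate, costs double roughly every two years, which is consistent with the jump from $90K in 1989 to $200K in 1991.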

Why the increase?
The first question is, why has this happened? The answer, I think, has to do with a variety of factors that can be summarized as the maturation of the distribution channel and the consumer. Just a few years ago, games tended to sell in a smooth distribution, with the best games selling 100,000+ units, good games selling 50,000 - 100,000 units, and weak games selling 25,000 - 50,000 units. But in the last few years, that has changed as consumers gravitate more exclusively toward the hits. Nowadays a hit game will sell a quarter of a million units, while a good game will sell 50,000 units. We could put an optimistic face on this and say that the market has become more efficient at selecting the better quality games. 

But it’s more complicated than that. The distributors and retailers have become pickier. They don’t want to carry good games that sell 50,000 units; they want only the 250,000 unit hits. This has created a kind of self-fulfilling prophecy with respect to games. If the big distributors and retailers decide to carry a title, it is guaranteed good sales. If not, it dies. This phenomenon has accelerated the separation of games into big hits and losers. It also gives distributors and retailers a surprisingly large voice in game design.

Another factor in all this is the impact of marketing costs. In the old days, marketing was simple and cheap. Nowadays, the marketing campaign for a game can cost $100K. You can’t afford to spend that kind of money on a dog.

Bigger development budgets
Realizing this, publishers now go for the big hit. There is simply no point in wasting time on a good game. The only thing that works in this new regime is the big hit. One big hit like SimCity or Wing Commander can make a company. A dozen good products only permit you to tread water a little longer. Publishing games has thus become a huge gamble. If your game hits, you make a ton of money; if not, you lose a ton of money. It makes a lot of sense, under these circumstances, to push the odds in your favor by spending a lot of money on the product itself.

Unfortunately, it is possible to take this strategy to dangerous extremes. The two biggest hits of the last year, Wing Commander and King’s Quest V, illustrate just how far some companies are willing to go to get a hit. The development budgets for both products are rumored to be far in excess of anything previously undertaken (and the products show it.) It could be said that Sierra and Origin simply bought market share. They’ve proven that, if you throw enough money at a product, you can make a real winner.

Problems
But there’s a catch to this strategy: it only works in the absence of significant competition. Had every major publisher in the industry released a monster-budget product last fall, Wing Commander and King’s Quest V would have had to share the pie with the other lavish productions, and it is likely that everybody would have lost money. So the question becomes, how can the industry stabilize on the ideal development budget? Are publishers going to be forced into a self-destructive big-budget war to remain competitive?

No more small-time operators
One thing is certain: with all the high-stakes finance, the small-time operator has been squeezed out. There are a lot of people out there who nurse dreams of creating their own games at home. Many of them subscribe to this Journal. It’s time to say it loud and clear: you at-home amateurs don’t stand a snowball’s chance in hell of getting a game published. You might as well be using your videocamera to make a movie to sell to Hollywood. 

The same thing goes for the lone wolves out there. Face it: there is no way that an individual can make a commercial computer game these days. The costs have risen too high.

Development studios
For the moment, the development studio is the only place where computer games can be made. You’ve got to have a team of specialists to put together a game with the quality design, graphics, sound, and animation necessary to compete in today’s marketplace. 

But even the design studio is threatened. Independent design studios prospered because of a stupid flaw inherent in most publishers. Until the last few years, most publishing executives believed that they knew as much about game design as their staff. In their view, there were no game designers, only programmers. This encouraged executives to participate in the design process. Of course, a busy CEO or marketing VP doesn’t have the time to think through all the ramifications of a design change, but this didn’t stop them from ordering such changes. The result was a chaotic design that advanced in herky-jerky fashion, bumbling along until reality caught up with the vanities of the executives and they put the mutant design out of its misery.

Independent design studios worked because they were more resistant to the jerking around. They could get a game done on time and under budget because they could stand up to the pressures emanating from the executive suites. That is why most publishers during the mid-80’s reduced the size of their in-house development staffs and relied more heavily on outside development studios.

The pendulum is now swinging back. Budgets have risen so high that publishers are becoming reluctant to commit huge amounts of money under highly risky circumstances where they have little control over the product. A publisher might be willing to risk $100K on an outside studio, but not $500K. The rising costs of software are enticing publishers to return to in-house development.

Less Diversity
The most insidious effect of rising development budgets will be an increased reluctance of publishers to take chances on unconventional games. The penalties for failure being so much higher, publishers simply cannot afford to take big chances on untried concepts. The dilemma we’re in is that such prudence guarantees our doom. We are an entertainment industry; creativity is our lifeblood. We desperately need the SimCitys and Railroad Tycoons pumping new blood into our industry’s consciousness. 

Future trends
This process can only continue. I expect some short-term re-adjustment to come. Our industry has yet to produce a monster-budget flop like Hollywood’s fabled Heaven’s Gate — but you can be sure that we’ll have one pretty soon, and that will instill a greater sense of prudence in publishers with respect to monster-budget products. Over the long run, though, the trend towards steadily higher development budgets will continue.

CD-ROM will hasten the process. Brian Moriarty was the first to observe that the prime effect of CD-ROM will not be so much to permit better graphics as to mandate higher costs. Moreover, an increasing percentage of development costs will go towards graphics rather than game design and programming; this will shift creative control away from the game designer and towards a new breed of multi-media producer.

_________________________________________________________________________________________________________