One of the common complaints against computer games is that they are too demanding of the player. A great game like Secret Weapons of the Luftwaffe comes with a 224-page manual, and you’d better read the manual before you attempt to play the game. The big hits from Sierra all require dozens and dozens of hours of trial and error as the player attempts to crack the game’s puzzles, many of which are fiendishly obscure. My own work is just as demanding: one of my games sported a title screen that warned: "Persons who play this game without reading the manual are wasting their time."
The problem with all this is that most people aren’t willing to invest that much time in a computer game. They want to be entertained without having to jump through hoops. They don’t want to be forced to work for their fun. This has been one of the reasons why computer games have failed to reach the mass market.
Hence the temptation to concoct a game that doesn’t require much effort of the player. But what do we mean by "effort"? In the early days it was felt that effort was synonymous with reading documentation. The rule at Atari was that a game’s rules had to fit onto the small flyer that accompanied the cartridge. If the rules wouldn’t fit, the game was too demanding for most players and had to be redesigned. Nowadays we aren’t so dogmatic; we have seen some very successful games with extensive documentation. Besides, the games that require little documentation all seem to be skill-and-action games.
The second factor that many associate with effort is interactivity. If the player has to work too hard to play the game, perhaps it is because the game asks him to make too many decisions. If we could reduce the interactivity, we could then ease up on the player’s workload. Perhaps this would solve the problem.
It is tempting to contemplate the possibility of eliminating interactivity altogether. After all, who anointed interactivity as the ultimate goal of all games? Why shouldn’t a sufficiently clever designer be able to create a game with no interactivity whatsoever?
This idea, so appealing at first glance, quickly collapses under scrutiny. If there is to be no interactivity whatsoever, then what will the player do? He could, of course, type on the keyboard or manipulate the mouse, but in a zero-interactivity game the computer would be precluded from responding to the player’s actions. The player is reduced to watching the multimedia presentation that the game designer creates. While this certainly reduces the player’s workload, it also reduces the personal computer to one hell of an expensive VCR -- and not a very good one at that.
No, a zero-interactivity game is truly a contradiction in terms. It is indistinguishable from a movie or a novel. But there remains another possibility: the low-interactivity game. The idea here is to create a game that doesn’t demand much of the player, but he still gets to make some input every now and then. The player’s choices are simple and do not require much in the way of thinking. Think of a low-interactivity game as a kind of movie with occasional branchpoints. The player spends the bulk of his time watching and listening, and only rarely is he required to do something. If only we could figure out what that "something" is.
This is not a new idea; it has been around since the early days of computer games. I remember people at Atari talking about games for Joe Sixpack, and the marketing people there often requested a game for couch potatoes. A great deal of hot air was vented on the subject, and a number of projects were initiated, but nothing ever seemed to come to fruition.
In the first big boom of computer games, from 1981 to 1984, a number of low-interactivity games were attempted. One of these was Alien Gardens, published by Epyx. You were a kind of alien bee flitting through a garden of alien flowers, trying to pollinate them. It was a very low-key game, definitely low in the interactivity department. Unfortunately, it made no sense and ranks as one of the great turkeys of computer game history.
1985 saw another low-interactivity product: Little Computer People from Activision. This odd product created a small family on your screen, moving around their dollhouse in the course of their daily activities. You the player watched them. The product attracted much attention from the press, but it was not, I believe, much of a commercial success. And it spawned no imitators or descendants.
Epyx roared back in 1988 with more low-interactivity products: its line of VCR games, released with much hype and excitement. Realizing the clumsiness inherent in the serial format of a videotape, the designers rightly limited interaction to the bare minimum, focusing most of their attention on providing interesting footage for the player who would occasionally fast forward or rewind. Now, here was the ultimate couch potato game. You didn’t need a computer to play it and you didn’t have to do much work. All you did was sit back and watch the tape and occasionally push a button. Sounds great, right? It sounded great to a number of publishers, who frantically put together their own VCR products. Yet, despite their obvious advantages and some expensive marketing campaigns, VCR games bombed. They were a total disaster. Mindscape shipped one product and cancelled the second one, even though it was ready to ship, because the first game had failed so completely.
Another experiment in this direction was the CinemaWare line of games. These games were strong on spectacle and weak on interaction. The marketing thrust of the CinemaWare line was that these games were just like movies, except that you could play with them. Most of the design effort was put into making lots of pretty pictures and animated sequences. The gameplay itself was weak. The first title in the series, Defender of the Crown, created quite a sensation and sold very well. But after that, it seemed to be all downhill. CinemaWare went bust early this year.
There’s a lesson here: low-interactivity games don’t sell. They sound like a great idea, such a great idea that people keep going back and doing them over and over. And, in pure Darwinian fashion, the companies that have cast their lot with low-interactivity games have suffered extinction. Epyx, Activision, CinemaWare, and Mindscape have all been reduced to ashes. But the survivors seem unable to learn from their competitors’ failures; proposals for low-interactivity games keep popping up like some time-hopping Sisyphean dodo bird bent on repeating its extinction in as many eras as possible.
Why have low-interactivity games been such a dismal failure? One would think that there should be some small fragment of a market for them. Why is the historical experience so decisively negative in defiance of common sense?
There are two answers, I think. The first is that the available hardware is not up to the task. We have not yet hit the right combination of ingredients to build good low-interactivity games. The VCR gives lots of imagery, but its access times are so slow that even low-interactivity games suffer. The computer itself simply cannot generate or maintain images of enough variety and quality to entertain the player by themselves. This argument suggests that optical media games might solve the problem. They offer faster access times than videotape, yet much greater image capacity than the computer. Whether the combination will be fast enough and visually rich enough, we cannot yet say.
The second answer is more pessimistic. I have long maintained that interactivity is the essence of the gaming experience, and that the quality of the interaction determines the quality of the game. If this be true, then the very notion of low-interactivity games is intrinsically wrongheaded and such products will inevitably fail.
One way of expressing this line of reasoning is to start with high-interactivity games and then move towards lower interactivity. What do we gain and lose as we move in this direction? As we lose interactivity, we reduce the total quantity of decision-making that the player must perform. This reduces his workload. It also reduces his ability to creatively influence the outcome of the game. In other words, as we reduce the interactivity of the game (its gameplay), we diminish the player’s participation in the outcome; he becomes less involved, and the outcome becomes more pre-determined.
But there’s a catch: the player’s workload is not proportional to the quantity of decision-making. Decision-making consists of two parts: a laborious process of learning the basic parameters for making the decision, and a faster process of applying those parameters. The player must go through the first process whether he makes one decision or a hundred. Thus, his workload is equal to a fixed quantity (learning the rules) plus a variable quantity (playing the game).
An example might help here. Suppose I present you with two games. The first is a truly minimal-interactivity game. You will be asked to make one decision during the entire game. It is a murder mystery game the denouement of which places you in a room with the six main suspects and a gun. You must decide whom to shoot. That’s exactly one decision, about as low-interactivity as you can get. Yet you will likely ask a great many questions before making your decision. Can I shoot more than once? Can somebody else shoot me? May I choose not to shoot anybody? (True gamesters will note that the problems are trivially solved by playing the game several times, experimenting with each of these options in turn. While entirely possible, this flies in the face of the stated intent of the low-interactivity game.) Note that these questions are really questions about the rules of the game. You will have a considerable workload just learning the context for your single decision, and inasmuch as the outcome of the game rides on your single decision, you had damn well better learn the rules thoroughly.
The second game is a more conventional game with many decisions. Once again you will have the workload of learning the rules of the game, and in addition to that you will have the workload of making all those decisions. Yet the workload of learning the rules is most likely the more substantial of the two. In other words, if you end up making a hundred decisions during the course of this game, your total workload will not be 100 times greater than your workload with the minimal-interactivity game. It might not even be twice as great.
Thus, as we move from the higher-interactivity game to the minimal-interactivity game, two factors are reduced: the player’s workload and his ability to influence the outcome of the game. But -- and this is the key point -- the latter falls faster than the former. Reducing interactivity gains us only small benefits in terms of reducing workload, but costs us heavily in terms of the player’s ability to creatively influence the outcome of the game.
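The arithmetic behind this argument can be made concrete with a tiny illustrative model. The cost figures here are invented, chosen only to show the shape of the trade-off: workload is a large fixed rule-learning cost plus a small per-decision cost, while creative influence grows with every decision made.

```python
# Illustrative model of the workload argument. All numbers are
# hypothetical, chosen only to show the shape of the claim.

LEARNING_COST = 100.0   # fixed cost of learning the rules
DECISION_COST = 1.0     # cost of making one decision

def workload(decisions):
    """Total player workload: fixed learning plus variable playing."""
    return LEARNING_COST + DECISION_COST * decisions

def influence(decisions):
    """Crude proxy: influence over the outcome scales with decisions."""
    return float(decisions)

# Moving from a 100-decision game down to a 1-decision game:
workload_ratio = workload(100) / workload(1)      # 200 / 101, under 2x
influence_ratio = influence(100) / influence(1)   # a full 100x

print(workload_ratio, influence_ratio)
```

Under these assumptions, cutting the decision count a hundredfold cuts the player’s influence a hundredfold, but does not even halve his workload -- which is exactly the asymmetry the argument turns on.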
This, I think, is the real reason why low-interactivity games have been such failures. Diminishing the interactivity makes the game less fun faster than it makes the game easier. What we gain in reduced workload, we more than lose in diminished fun.
An Interesting Exception
There is a small group of low-interactivity games that have been undeniably successful: the games produced by Cyan (Manhole, Cosmic Osmo, et al.) and Amanda Goodenough (Inigo Gets Out, Your Faithful Camel, et al.). They are low-interactivity games, really more like vaguely linear stories with some buttons to press. What is striking is that all of these products are designed for young children. It appears that our industry’s Darwinian methods have at last found a suitable habitat for this otherwise less-than-fittest species of game.
Why is it that low-interactivity products are successful with young children when they don’t seem to work with older players? I think the answer can be found by asking another question: why don’t high-interactivity products work with young children? Try foisting SimCity or Robotron or Ultima VII on a six-year-old, if you’re willing to risk accusations of child abuse. The poor kid will be overwhelmed by such games. He just doesn’t have the perspicacity to handle them. What’s left for him but the low-interactivity games?
The Cyan games and the AmandaStories games show that low-interactivity games can be produced, and they give us a good idea of how a low-interactivity game works. They also demonstrate that such products do not work well in the larger world. They offer both positive and negative lessons.
Lastly, there are the blue-sky concepts for low-interactivity games. Most of these center on some form of storytelling. In one approach, the computer tells the player a story, with the player somehow providing cues that the computer uses to adjust the story to suit the player’s interest. For example, if the computer mentions an encounter with a beautiful girl, and the player so indicates, the computer could proceed to describe a sexual liaison. If the player is female, it might tell of a friendship developing between the two.
The problem with this lies in the nature of the cues provided by the player. Exactly how does the player communicate his desires to the computer? If we use a series of predetermined branch points, then the game has reverted to a conventional adventure game, and the player still must learn the language of expression for the adventure. Proponents of such schemes often fall back on deliberately vague formulations. The computer will "sense" the player’s mood, they claim. I find it difficult to imagine just how this sensing will take place, and how the computer will interpret whatever it senses.
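The "predetermined branch points" that such a scheme reverts to can be sketched in a few lines. This is a minimal illustration with invented story nodes and cue names, not a description of any shipped product: the computer can only follow cues it was explicitly built to recognize.

```python
# Minimal sketch of a story with predetermined branch points -- the
# structure the text argues cue-driven storytelling collapses into.
# All node names, cues, and text are invented for illustration.

story = {
    "start":   ("You meet a beautiful girl at the cafe.",
                {"talk": "liaison", "leave": "alone"}),
    "liaison": ("A romance develops.", {}),
    "alone":   ("You walk home alone.", {}),
}

def tell(node, cues):
    """Follow the player's cues through the fixed branch structure."""
    path = [story[node][0]]
    for cue in cues:
        branches = story[node][1]
        if cue not in branches:   # the computer cannot "sense" anything
            break                 # outside its predetermined branches
        node = branches[cue]
        path.append(story[node][0])
    return path
```

A cue the designer anticipated ("talk") advances the story; any other cue is simply ignored, which is precisely why the player must still learn the game’s language of expression.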
A variation on this scheme makes reference to the manner in which a performing artist senses the mood of his audience and adjusts his performance accordingly. This, it is asserted, constitutes an advanced form of low interactivity that could be harnessed for new types of games. The problem lies in the input and processing required to accomplish this. The performing artist is analyzing fine shades of voice intonation and subtle nuances of facial expression. This type of analysis is far beyond anything a personal computer can perform, even assuming that we could equip our games with microphones and television cameras to provide the input.
There is a second and more powerful argument against such schemes. Even if we could implement them, they would still be inappropriate. The performing artist who adjusts his work in response to the audience’s feedback does so on a very gross average of the audience feedback. Some people will be screaming "Faster!" as others are yelling "Slower!" The artist can’t do both, so he responds to the majority. What’s more important is the fact that the audience understands this. We can’t all have our way, so we accept the situation. But when I am the only user of a computer game, I am completely justified in expecting that I can have my way. I expect the computer to respond to my wishes. If the computer fails to understand my wishes or is incapable of executing my desires, then I will be dissatisfied. So if I grunt or laugh or scowl or drum my fingers and the computer fails to get my message, then the product will have failed.
The historical evidence is quite clear: despite many attempts, low-interactivity games have never been successful except as products for young children. We’ve been talking about this concept for more than ten years now and can’t seem to make it work. There is a slim possibility that we can make viable low-interactivity games using optical media, but as yet this is speculative.
The concept of low-interactivity entertainment is a ghost that we will never exorcise from this industry. The concept just keeps popping up like an annual flu bug. Some naive fool will come forward with "this great new idea that nobody has ever thought of before." As I discussed, the concept seems sound on first examination, so people will probably give it credence. Who knows, some credulous publisher might be persuaded to part with development dollars to explore the idea.