Perhaps the most exciting aspect of networked games is their ability to provide interpersonal interaction. As I have so often complained, traditional computer games are always about "things, not people" and this shortcoming has held back the development of the medium. The difficulty, of course, lies in the problems of artificial personality and personal expression. Sure, you could come up with a parser capable of understanding "I love you", but how about "Who was that man I saw you with last night?" especially with its manifold interpersonal implications?
The problem of automating interpersonal interaction, of coming up with artificial characters that really work, has been attracting attention for some time now, but the sad fact is that we really haven’t cracked the problem, or even come close. My own work in this field has made much progress, but it has taken three and a half years, and I still don’t have a commercial product.
The people in the networked games biz toss their heads and laugh, "So what? Who needs artificial personalities when we can have the real thing? And no computer model will ever rival the richness of human interaction!"
They’re right on all counts. Moreover, they have another advantage: when you use the computer to connect humans rather than to simulate them, you save lots of resource. My software uses gobs of RAM and zillions of machine cycles to simulate the most rudimentary of human behaviors. The network people don’t have to write monster software to handle these problems; all they have to do is ship bits between players. What could be simpler?
But there are some drawbacks that have so far crippled the network designers, preventing them from realizing the potential of this medium. In this essay, I hope to address some of these killer problems and discuss strategies for solving them.
This is the worst of the problems. Imagine yourself in the middle of a hot game. Derek has just made a move on your girlfriend; your kid sister has just informed you that she’s pregnant, but will not reveal the father. And Vanessa has just announced an attempted hostile takeover of your oil company. Things are really cooking when suddenly Derek announces that his wife is calling him to dinner, and drops out of the game for the night. Because he’s playing a crucial part in the drama, the whole game is frozen. The problem is compounded by the number of players. The more players there are, the greater the chance that a single-player dropout will shut everything down.
This problem, of course, is not limited to interpersonal games; it has been around for a long time. I recall a story from a Defense Department computer simulation that illustrates its severity. This simulation linked up commands from all over the country in joint wargames. I saw a videotape of one such operation, an amphibious invasion. A helicopter had just ferried some troops ashore and had returned to the troopship to make another pickup. It settled down on the landing deck of the troopship and cut its engines. A moment later, a line defect caused the loss of connection with the naval base controlling the troopship. Because the network used distributed computing, the loss of the connection triggered the loss of all units controlled from that station. The troopship suddenly disappeared from the simulation. The helicopter was now hanging in the air, with no power to its engines. It fell into the sea and was treated as a casualty.
The truth is, there is no way to ensure that players will remain in a game they have begun. Some of them will certainly drop out before the game is completed, and if the role they played was crucial, then the game will collapse. What can be done about this problem?
I know of four basic approaches to this problem: player replacement, noncrucial players, reduced probability of dropout, and bridge artificial personality.
The first strategy is to immediately replace the missing player with another human. Presumably there will always be a steady supply of players; all the network need do is hold incoming players for a moment to see if any existing slots have opened up; if so, then the incoming player is plunked down into the existing game. The problem with this approach is that it drops the new player into a slot he knows nothing about. Without knowing the interpersonal history, how can the player appreciate the subtleties of the interpersonal situation? How can he know that the character he is playing has been a two-timing, double-dealing, low-down skunk for the last few hours, and that’s why everybody hates him? And consider the experience from the point of view of the other characters. Here’s a character who for three hours has followed a consistent course of action: he’s a snake! Then suddenly, the character is transformed into a teddy bear who wants nothing more than to be loved. This isn’t a plot twist; it’s a plot disjunction. Lastly, player replacement cannot always be counted on to work. There will still be times when there just isn’t anybody available, in which case the game has to shut down. Thus, player replacement does not provide us with a satisfying or reliable solution to the problem of player dropout.
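The hold-and-fill scheme itself is simple enough to sketch. Here is a minimal, hypothetical version: incoming players are queued briefly, and if a slot has opened in a running game they are dropped into it; otherwise they wait for a fresh game. All names here are illustrative assumptions, not any real system’s API.

```python
from collections import deque

class Lobby:
    """Hypothetical sketch of the player-replacement scheme:
    hold arriving players and fill any vacated slots first."""

    def __init__(self):
        self.waiting = deque()     # players awaiting a fresh game
        self.open_slots = deque()  # (game_id, vacated_role) pairs

    def player_drops_out(self, game_id, role):
        # A dropout leaves behind an open slot to be filled.
        self.open_slots.append((game_id, role))

    def player_arrives(self, player):
        # Fill an existing vacancy if one exists; otherwise queue up.
        if self.open_slots:
            game_id, role = self.open_slots.popleft()
            return f"{player} takes over {role} in game {game_id}"
        self.waiting.append(player)
        return f"{player} waits for a new game"
```

Note that nothing in this mechanism carries the interpersonal history into the new player’s hands, which is exactly the objection raised above.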
Another approach I have heard about attempts to reduce the impact of any single player on the overall game. One such case involved a trading game in which characters engage in bidding for commodities. If one player drops out, then the market isn’t much affected. Another variation on this strategy makes the player one voter among many in crucial decisions. This strategy eliminates the problem by eliminating the significance of the player. It no longer matters what you do, because the game can chug along just fine without you. I don’t see much value in this approach; it robs the player’s actions of meaningfulness. Who would care to play a game in which your own actions (or even your very existence) don’t really matter?
A third approach is to reduce the probability of dropout, either by reducing the duration of the game or by making the game turn-sequenced, with long intervals between turns so that players can be certain to get their moves in. In the former case, the game is kept to 30 minutes’ duration or less; this reduces the likelihood of player dropout. Moreover, it ensures that, should somebody drop out, little is lost. The players can simply start over with a new game. The difficulty with this approach is that it limits the richness of play. Short games just can’t get into interesting territory. A great many human relationships derive their impact from the context in which they take place. You need to build up some interpersonal history before your interactions with others can become deeply interesting.
The time-sequenced approach often breaks the game down into daily turns. All the players read their news of the day and then enter their moves for the next day. At 5:00 AM the central computer processes all the moves and posts the results. Because players need only check in once per day, the likelihood of their missing a move is much reduced. On the other hand, this solution breaks up the interaction into a slow-moving dance of discrete steps. Seducing a cute chick one box of candy at a time could take months. While it works reasonably well with certain types of strategy games that require lots of thought with few moves, it cannot deal with the more intense interaction of interpersonal relationships.
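The daily-turn protocol amounts to a simple batch process, which a minimal sketch can make concrete. Everything here (class names, the "no move filed" default) is an assumption for illustration; only the shape of the scheme comes from the description above.

```python
from datetime import time

# The once-daily deadline; in the scheme described above, the central
# computer resolves all moves in one batch at 5:00 AM.
PROCESS_AT = time(5, 0)

class DailyTurnGame:
    """Hypothetical sketch of a turn-sequenced game with daily batches."""

    def __init__(self, players):
        self.players = players
        self.pending = {}   # player -> move filed for the next turn
        self.news = []      # posted results all players read

    def file_move(self, player, move):
        # Players may file at any hour; the last submission
        # before the deadline is the one that counts.
        self.pending[player] = move

    def process_turn(self):
        """Run once per day at PROCESS_AT: resolve every filed move
        together and post the results as the day's news."""
        for player in self.players:
            move = self.pending.get(player, "no move filed")
            self.news.append(f"{player}: {move}")
        self.pending.clear()
        return self.news
```

The weakness is visible right in the structure: one resolution per day means one beat of interaction per day, a pace that suits deliberate strategy far better than courtship.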
The fourth approach to player dropout problems involves the use of what I call "bridge artificial personality". The idea here is to use artificial personality to bridge the gaps created by player dropouts. By noting a player’s moves, the computer can build up a model of the player’s personality; should the player later drop out, the computer can turn on the artificial personality to take over for the player. While the artificial personality would never be as rich or interesting as the real thing, it might be good enough to cover the gap temporarily.
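In its simplest conceivable form, a bridge personality is just a statistical model of the player’s observed choices. The sketch below, a deliberately crude assumption rather than my actual technology, tallies the categories of moves a player makes and, after a dropout, picks among the legal moves in proportion to those tallies.

```python
import random
from collections import Counter

class BridgePersonality:
    """Crude sketch of a bridge artificial personality: learn a
    player's tendencies from observed moves, then stand in for the
    player by reproducing the same distribution of behavior."""

    def __init__(self):
        self.observed = Counter()  # move category -> times chosen

    def observe(self, move_category):
        """Record one move the human player actually made."""
        self.observed[move_category] += 1

    def choose_move(self, legal_categories):
        """After dropout: pick a legal move, weighted by how often
        the absent player favored each category (+1 smoothing so
        never-seen moves remain merely unlikely, not impossible)."""
        weights = [self.observed[c] + 1 for c in legal_categories]
        return random.choices(legal_categories, weights=weights)[0]
```

A frequency table obviously captures none of the *why* behind a player’s behavior; the point of the sketch is only that even this much could keep a character nominally in motion until the player returns or is replaced.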
The downside for bridge artificial personality is that this technology will require a considerable amount of work to create. However, such technology, once created, could be adapted to a wide variety of network products. It would also give us a new twist on the Turing test.
Another difficulty with networked interpersonal games comes from time zone differences. Most people are going to play games during their off hours, typically 7:00 to 10:00 PM on weeknights. Unfortunately, this window is too narrow to permit people from widely different areas to play at the same time. Indeed, even within the continental US this presents a problem: the people on the east coast are getting off just as the people on the west coast are getting on. When we start throwing in players from Japan and Europe, the problem becomes insuperable. There is simply no way to bring large numbers of players together from all over the globe at the same time.
Of course, if the game is designed for offline interaction, using some sort of delayed response or turn sequencing, then this problem vanishes, but human interaction doesn’t work like chess. Mood is just as important as strategy, and it’s really hard to maintain a mood over a 12-hour time gap.
It’s my belief that there is no really good solution to this problem. However, partial solutions can work. An interpersonal game could be set up with mostly west coast players, plus one person from Japan; if they play in the evening on the west coast, it’s still morning in Japan. Similarly, east coast players could play mostly among themselves, with the game spiced up with west coast players (-3 hour difference) or European players (+5 hour difference). The trick is to have most of the players from one time zone meeting at a convenient hour, and a few adventurous players from other time zones showing up at an inconvenient hour.
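The matchmaking arithmetic behind this trick is easy to state: translate the host group’s evening window into each prospective guest’s local clock and see who lands at a bearable hour. The UTC offsets below (PST = -8, EST = -5, JST = +9) are illustrative.

```python
def local_hours(host_offset, guest_offset, window=(19, 22)):
    """Map the host group's play window (default 7-10 PM) into a
    guest's local clock, given each side's UTC offset in hours."""
    shift = guest_offset - host_offset
    return tuple((h + shift) % 24 for h in window)

# A west coast evening game lands at midday for a guest in Japan:
# local_hours(-8, 9) -> (12, 15)
```

Run for an east coast host, the same function shows the west coast guest at a pleasant 4-7 PM and the European guest in the small hours of the morning, which is exactly why the scheme needs a few adventurous players willing to show up at inconvenient times.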
This is not so great a problem, but it still deserves some consideration: how do we ensure that the game retains sufficient dramatic content? The problem here arises from the possibility that the players will fail to do interesting things and the game will dissolve into boredom. Or perhaps they’ll engage in overdramatic nonsense, dashing from murder to seduction to dragons to space aliens. I see no decent solution to this problem.
Nazis and Dorks
Since the players provide so much of the game’s content, quality control of players is crucial to the overall entertainment value of the game. But how do we exercise quality control over the people who are paying the bill? If a particular player prefers to play as a Nazi, constantly shouting "Heil Hitler!", what can be done to protect the more normal players from this person’s bad taste? In the same fashion, if one of the players is simply a stupid dork, how can other players be asked to cope with him?
This is a delicate problem, because it involves evaluations of the personal merit of individuals, but it is not a new problem. We all have to organize our social lives in ways that maximize the probability of running into interesting people and minimize the probability of running into unpleasant people. When was the last time you stopped by a bowling alley, or a discotheque, or a square dance hall, or a Grateful Dead concert? In each of these social gathering places, you have a pretty good idea of the kind of people you’re likely to encounter. Nobody will come right out and say that all Grateful Dead concertgoers are drug users, but you’d have to be awfully naive to be surprised if somebody offered you a joint while you were there. By the same token, it would be crass to say that all square-dancers are older people with conservative values, but if I wanted to socialize with such people, a square dance would be a great place to start. Thus, we all know lots of rules of thumb about where to encounter what kind of people. We use that information to avoid some places and seek out others. But such information is not yet available about network sites. Indeed, if there’s any generalization you can make about those who frequent networks, it’s that they’re probably undersocialized male dorks. Not very promising, eh?
Fortunately, there are some things we can do about this problem. The best way is to come up with a "player profile" that rates players in a variety of dimensions such as imagination, consistency, romanticism, team-playing, anti-social attitudes, rudeness, and so on. Every time a player completes a game, his coplayers are asked to rate him in each of the dimensions. Once a reasonable set of player profiles has been worked out, specialized games can be set up that have certain personality profile requirements associated with them, e.g., "to be allowed to enter this game, you must have a romance rating of at least 6, and a rudeness rating of less than 2." Even this scheme, however, is vulnerable: a group of anarchist punks could play a series of games with themselves, altering their personality profiles so that they could gain entry into whatever game they chose, where they could wreak havoc.
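A minimal sketch of such a profile scheme: co-players’ ratings accumulate per dimension, and a game’s entry requirements are checked against the running averages. Dimension names and the 0-to-10 scale are assumptions for illustration.

```python
from collections import defaultdict

class PlayerProfile:
    """Ratings accumulated from co-players after each game."""

    def __init__(self):
        self.ratings = defaultdict(list)  # dimension -> list of scores

    def rate(self, dimension, score):
        self.ratings[dimension].append(score)

    def average(self, dimension):
        scores = self.ratings[dimension]
        return sum(scores) / len(scores) if scores else 0.0

def may_enter(profile, requirements):
    """requirements: dimension -> (min_allowed, max_allowed).
    Entry is granted only if every averaged rating falls in range."""
    return all(lo <= profile.average(dim) <= hi
               for dim, (lo, hi) in requirements.items())

# "romance rating of at least 6, and a rudeness rating of less than 2"
example_requirements = {"romance": (6, 10), "rudeness": (0, 2)}
```

The vulnerability noted above is plain here too: nothing distinguishes ratings earned from strangers from ratings a clique of confederates award one another.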
Our problem is that the normal methods of enforcing group expectations on individuals break down in the network environment. If I were to wander into a gay bar and loudly start telling ugly jokes about homosexuals, I’d be asked to leave, or perhaps I’d get beaten up. But there are no such options available in a group environment online. My guess is that, until network environments provide the majority with the power to enforce sanctions against individuals, social groups will not be able to prevent anarchist troublemakers from intruding on their fun.
Another issue in network interpersonal games is the problem of establishing the ideal group size. Social interaction is tricky business; if too few people are involved, the interaction becomes inflexible, while if too many are thrown together, the group becomes socially unmanageable. Unfortunately, the ideal size depends largely on the people involved. Some groups will function quite well with one or even two dozen members; others will fall apart with more than five members. There’s no way to tell in advance. My guess is that we’ll have to start out with the classic seven-person interaction and then figure out ways to modify it.
Free Text or Regulated Inputs?
This is a crucial and difficult decision. Should the players be allowed to interact via freeform text, or should their interactions be regulated through a standard interface language? Free text gives them the freedom to pursue any options whatsoever, to interact in a wide variety of ways, but it also gives troublemakers the ability to mess things up for others. In general, I see that problem as minor. Regulated inputs, on the other hand, have a strength of their own: they permit the computer to keep track of variables and ensure that actions are in accord with some notion of reality.
Of course, free text and regulated inputs are not mutually exclusive; it’s easy to include both in the same product. The issue is more a matter of how much of the interpersonal interaction takes place through free text and how much goes through regulated inputs. A good example is provided by Habitat, which mixed some free text with some regulated input. The reassuring result from Habitat is that social groups formed and began to establish higher rules of social behavior.
This is a particularly thorny problem. The audience would expect to be treated as equals, yet much of the richest social interaction arises from the inequalities of the human condition. Some people are richer; some people are smarter; some people are prettier. These inequalities play on human foibles to generate social conflict. Yet who would want to play a game as the ugly poor kid without a high school diploma? How do we reconcile the natural egalitarianism of the customer ("My money is just as green as his") with the dramatic necessity of inequality?
I think that this problem can be resolved through a kind of karma. The very first game you play, you have no karma at all, and so you enter the game with a weakling character. However, your overall goal is to improve your karma. Thus, even though you play as an ugly, dumb, poor nobody, if you play well (whatever that means), your karma increases. The next time you play, you’ll be given a character who’s not quite so ugly, so dumb, or so poor. If you play long enough and well enough, you’ll play as one of the Beautiful People. Perhaps you’ll be a fabulously wealthy, ravishingly beautiful young CEO of a major software company. Perhaps you’ll get to be a really nasty bad guy with all sorts of exciting opportunities for villainy -- and if you’re a truly fine villain, why, your karma increases!
What this suggests is that players should be rated, not by any absolute scale of direct personal achievement, but rather by a scale of dramatic success. In other words, we don’t measure a player’s performance by how much money he acquired, how many Fame Points he picked up, or how many Cute Chicks he bedded. Rather, each character should be assigned a set of dramatic goals and evaluated on how well he met those goals. Thus, Lovely Nell might be rated on how well she met and married Mr. Right, while Snidely Whiplash will be judged on how many girls he tied to the railroad tracks. Lassie will be judged on how many times she gets little Timmy pulled out of the well, and Captain Kirk will get points for every time he disables a rampaging computer by making it think about a logical impossibility. In other words, you get karma points for being true to your character.
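The scoring rule this implies is almost trivially simple: each character carries its own set of dramatic goals, and karma accrues for the goals the player’s actions actually fulfill, not for any absolute measure of winnings. The goal names below are illustrative.

```python
def karma_earned(dramatic_goals, events):
    """Score a player by fidelity to the role: count how many of
    this character's dramatic goals the in-game events fulfilled.
    Money, fame points, and conquests earn nothing by themselves."""
    return sum(1 for event in events if event in dramatic_goals)

# Snidely Whiplash is judged as a villain, not as an earner.
snidely_goals = {"tie girl to tracks", "twirl moustache", "foil hero"}
```

The same function, fed Lovely Nell’s goals instead, would reward an entirely different course of play, which is the whole point: karma measures dramatic success, one character at a time.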
This has the additional merit that it encourages players to spend more time on your network, building up their karma so that they too can play as Scarlett O’Hara, or J.R. Ewing, or Spock. What a delightfully commercial concept!