March 23rd, 2012
I had an interesting realization today relating to the behavior of games. Let’s start with the observation that a game establishes a set of causal relationships that the player learns in order to master the game. For example, the player might learn that the shotgun is the best way to kill the purple monster, but the green monster is best attacked with the flamethrower. Upon entering Level 5 of the game, the player should immediately jump to the right, because an RPG round will hit his point of entry just a few seconds later. Games are full of these causal relationships that must be mastered in order to win.
But how tight should the connection between cause and effect be? How much time, or how many events, can take place between the player’s action and its consequences? I call this tightness of connection the causal immediacy of the game. If a player’s action A immediately leads to consequence Z, then the causal immediacy is high. If, on the other hand, Z doesn’t take place until much later, then the causal immediacy is low.
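To make the idea concrete, here is a toy way of quantifying it. The event-log representation and the formula are purely illustrative inventions of mine, not something any real engine computes, but they capture the distinction:

```python
# Toy metric for causal immediacy: count the events separating an
# action from its consequence in a game's event stream.
# (Both the log format and the formula are illustrative only.)

def causal_immediacy(log, action, consequence):
    """Return 1.0 for an immediate consequence, less as Z recedes."""
    gap = log.index(consequence) - log.index(action)
    return 1.0 / gap

tight = ["A", "Z"]                      # Z follows A immediately
loose = ["A", "e1", "e2", "e3", "Z"]    # three events intervene

print(causal_immediacy(tight, "A", "Z"))  # 1.0  -> high immediacy
print(causal_immediacy(loose, "A", "Z"))  # 0.25 -> low immediacy
```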
At first glance, it would seem that high causal immediacy is desirable. After all, how can the player figure out the causal relationship between two events that are widely separated in time?
Which brings me to my second concept: causal plausibility. This is the degree to which the consequence Z is plausibly related to the action A. It's best explained by one of the classic stories from game design. Back in the 1980s, Infocom published a text adventure based on Douglas Adams's Hitchhiker's Guide to the Galaxy novels, which are celebrated for their quirkily skewed view of reality. At one point, the player must overcome an obstacle whose solution turns on a fish and a vending machine: the infamous babel fish puzzle. This was certainly in keeping with the craziness of the books, but its causal plausibility is close to zero. How can a player learn a causal relationship that is highly implausible? In order to learn the causal relationships in a game, the player must be able to form and test hypotheses, and obviously players will start with the most plausible hypotheses. The more plausible the causal relationship, the less immediate it need be.
For example, suppose early in a game you find three bullets, colored red, blue, and green respectively. Much later in the game, you encounter three monsters who are identical in every way except their colors: red, blue, and green. The connection between the bullets and the monsters is close enough that even a long separation in time does not obscure the causal link the player is meant to infer. If the correlation between the colors were not so blatant, however, then the time separation would have to be shorter.
Now for a third consideration: causal breadth. Most gamers expect simple one-to-one relationships between causes and effects: player action A always and necessarily leads to consequence Z. But such logical simplicity is rare in the real world; all too often a single event has many causal factors contributing to it. Suppose that we have a consequence Z that requires actions A, B, and C to take place. How will the player figure this out by trial and error? What are the chances that the player will bumble across that combination of actions? Here it is all too easy to lose the player in complexity.
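To see how quickly this complexity mounts, consider a toy model: a consequence that fires only when three specific actions have all been taken. The action names and counts below are illustrative, not taken from any real game:

```python
from itertools import combinations

# Toy model of causal breadth: consequence Z fires only when
# actions A, B, and C have ALL been performed (three joint causes).
REQUIRED = {"A", "B", "C"}   # illustrative action names

def consequence_z(actions_taken):
    """True only when every required cause is present."""
    return REQUIRED <= set(actions_taken)

print(consequence_z({"A"}))             # False -- one cause isn't enough
print(consequence_z({"A", "B", "C"}))   # True  -- all three together

# The trial-and-error burden: with 20 candidate actions, a player
# testing three-action hypotheses faces C(20, 3) combinations.
candidates = [f"action_{i}" for i in range(20)]
print(len(list(combinations(candidates, 3))))   # 1140
```

Even a modest toolbox of twenty actions yields over a thousand three-way hypotheses, which is precisely how a player gets lost.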
Thus, a game will have the smoothest learning curve if its lessons have high causal immediacy, high causal plausibility, and low causal breadth. But is this what we really want? We have a zillion games with these traits. Wouldn't a more interesting game have more interesting causal relationships? For example, wouldn't higher causal breadth offer richer play experiences? If so, then that greater breadth must be accompanied by high causal plausibility and high causal immediacy.
These thoughts arose when I considered the playability of my current project, Balance of the Planet. This game has nearly a hundred different factors that relate to each other through a complex web of causality. The causal breadth of the game is very high. But the causal immediacy of the game is low: the player's decisions concern setting tax rates, which trigger changes that ripple through a broad net of cause and effect before generating the results the player wants: better scores. The only design element working in the player's favor here is causal plausibility. After all, if you raise taxes on carbon emissions, then you'll reduce production of coal relative to, say, wind energy. The reduced levels of coal use will lead to lower carbon emissions, which in turn will slow the rate of climate change, which in turn will reduce costs to the economy, and so on and so forth. The chain of connections is long, but it's also easily understandable. Thus, Balance of the Planet shows the player a way of thinking not common in games: high causal breadth and low causal immediacy, combined with high causal plausibility.
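For readers who like to see such a chain spelled out, here is a radically simplified sketch of that single pathway. Every coefficient is invented for illustration; the real game propagates changes through its hundred-odd interlinked factors:

```python
# Toy sketch of one causal chain in Balance of the Planet:
# carbon tax -> coal use -> emissions -> climate change -> economy.
# All coefficients below are illustrative, not the game's numbers.

def score(carbon_tax):
    """Propagate a single policy decision down the chain."""
    coal_share = max(0.0, 0.8 - 0.5 * carbon_tax)  # tax suppresses coal
    emissions = 10.0 * coal_share                  # coal drives CO2
    climate_change = 0.3 * emissions               # CO2 drives warming
    climate_damage = 2.0 * climate_change          # warming costs money
    tax_drag = 3.0 * carbon_tax ** 2               # heavy taxes also cost
    return 100.0 - climate_damage - tax_drag       # higher is better

# The player chooses only the tax rate and sees only the score;
# everything in between is the (plausible) web of causality.
for tax in (0.0, 0.25, 0.5, 1.0):
    print(f"carbon tax {tax:.2f} -> score {score(tax):.2f}")
```

Even in this five-link toy, the best tax rate is not obvious from the score alone; the player has to reason backward through the chain, and that is exactly where plausibility earns its keep.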