Analysis of Causal Import
The most common complaint emanating from players has been that they can’t figure out the causal tree. I have long thought that it might be good to include something along these lines, but now that I look into the details, I see some serious problems. Consider:
Case 1: additive causes
The total contents of the Piggy Bank are equal to the income from Air Pollution Taxes, Radioactive Emission Taxes, Carbon Taxes, and so forth. In this case, it’s trivial to show the magnitude of the inputs: each tax supplies some percentage of the total amount in the bank. It’s a pie chart, pure and simple. Looks easy enough, doesn’t it?
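A minimal sketch of this additive attribution. The tax names are from the game, but the dollar amounts are made-up illustration values:

```python
# Additive attribution: each tax's share of the Piggy Bank total.
# The dollar amounts below are invented for illustration only.
taxes = {
    "Air Pollution Tax": 40.0,
    "Radioactive Emission Tax": 25.0,
    "Carbon Tax": 35.0,
}
total = sum(taxes.values())
shares = {name: amount / total for name, amount in taxes.items()}
for name, share in shares.items():
    print(f"{name}: {share:.0%} of the Piggy Bank")
```

The shares always sum to 100%, which is exactly what makes the pie chart work.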
Case 2: multiplicative causes
OK, now let’s try something a little harder. What happens when the causal factors multiply together? For example, the total amount of Air Pollution Taxes gathered is equal to the rate of taxation times the amount of air pollution. How are these presented? Are they equal in magnitude? Suppose we have a taxation rate of $2 per ton and a pollution amount of 5 million tons, resulting in tax income of $10 million. Now suppose the numbers are altered: the taxation rate is $5 per ton and the pollution amount is 2 million tons, again yielding $10 million in income. Which of the two sources (tax rate and pollution amount) was more important? How much of the $10 million can be attributed to the tax rate, and how much to the pollution amount? There’s simply no way to differentiate the two: they both contributed equally to the final result, regardless of their actual values.
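The ambiguity can be seen in a short sketch; the `attribute_equally` helper is my own illustration, not anything from the game:

```python
# Multiplicative causes: two different factorizations of the same $10 million.
# There is no principled way to split the credit between the factors,
# so each one is assigned an equal share.
def attribute_equally(*factors):
    """Return the product and an equal attribution share for each factor."""
    product = 1.0
    for f in factors:
        product *= f
    share = 1.0 / len(factors)
    return product, [share] * len(factors)

income_a, shares_a = attribute_equally(2.0, 5.0)  # $2/ton x 5M tons
income_b, shares_b = attribute_equally(5.0, 2.0)  # $5/ton x 2M tons
print(income_a, shares_a)  # 10.0 [0.5, 0.5]
print(income_b, shares_b)  # 10.0 [0.5, 0.5]
```

Both factorizations yield the same income and the same 50/50 split, and a constant coefficient like the 3.14 in Case 3 would not change the split either.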
Case 3: weighted multiplicative causes
It gets worse: what if the equation for tax income were 3.14 x tax rate x pollution amount? That doesn’t change anything, because we can’t assign the 3.14 factor to either the tax rate or the pollution amount. Again, we simply have to assign 50% causation value to each of the two factors.
At this point, we seem to have a simple conclusion: additive factors can be attributed in direct proportion to their magnitudes, while multiplicative factors must be assigned equal weight. This rule would allow us to handle any simple expression combining addition and multiplication in many arrangements. But what about this one: we figure gasoline usage by taking our calculated transportation needs, subtracting the amount of public transport and the amount of electric car use, and dividing the remainder by the average MPG of cars. The equation is:
Gasoline use = (transportation needs - public transport - electric car usage) / (average MPG)
So, how do we determine the relative importance of these four variables? One common method is to carry out a fractional analysis. We calculate what would happen if each of these values were to individually increase by, say, 1%. Let’s run some numbers.
Transportation Needs = 1 billion miles;
Public Transport = 10 million miles;
Electric Car Usage = 100 million miles;
Average MPG = 30 mpg
result = 29.67 million gallons
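The baseline figure can be checked directly (the variable names are mine):

```python
# Baseline gasoline use from the numbers above (mileage values in miles).
transportation_needs = 1_000_000_000  # 1 billion miles
public_transport = 10_000_000         # 10 million miles
electric_car_usage = 100_000_000      # 100 million miles
average_mpg = 30

gasoline = (transportation_needs - public_transport - electric_car_usage) / average_mpg
print(f"{gasoline / 1_000_000:.2f} million gallons")  # 29.67 million gallons
```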
Now, if we were to increase Transportation Needs by 1%, then the result would increase to 30 million gallons, a change of 1.12%, so we conclude that Transportation Needs has an importance factor of 1.12. If we increase Public Transport by 1%, this decreases the final result to 29.6633 million gallons, a change of only 0.011%, so we give Public Transport an importance factor of 0.011. Obviously, the effect of Electric Car Usage is ten times greater than that of Public Transport: a 1% increase lowers the result to 29.6333 million gallons, a change of 0.112%, so we give it an importance factor of 0.112. Of course, if we increase Average MPG by 1%, the final result will decrease by about 0.99%, so we give it a relative importance of 0.99. Thus, our final results are:
Transportation Needs: 1.12
Public Transport: 0.011
Electric Car Usage: 0.112
Average MPG: 0.99
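The fractional analysis can be sketched as follows; the function and variable names are mine, and the importance factor is taken as the magnitude of the percentage change in the output per 1% change in each input:

```python
# Fractional (sensitivity) analysis: bump each input by 1% in turn and
# record how much the output changes, as a fraction of a 1% change.
def gasoline_use(needs, public, electric, mpg):
    return (needs - public - electric) / mpg

baseline_inputs = {
    "Transportation Needs": 1_000_000_000,
    "Public Transport": 10_000_000,
    "Electric Car Usage": 100_000_000,
    "Average MPG": 30,
}

def importance(inputs):
    base = gasoline_use(*inputs.values())
    factors = {}
    for name in inputs:
        bumped = dict(inputs)
        bumped[name] *= 1.01  # increase this one input by 1%
        result = gasoline_use(*bumped.values())
        factors[name] = abs(result / base - 1) / 0.01  # % output change per 1% input change
    return factors

for name, factor in importance(baseline_inputs).items():
    print(f"{name}: {factor:.3f}")
```

The same loop would work for any causal formula, which is the appeal of the method: it needs no case analysis of additive versus multiplicative structure.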
Addendum, September 7th: I have concluded that the entire notion of a weight-illustrating diagram is a waste of time. It turns out that there is already a means of illustrating the relative weights of different factors: the history bar graphs. Granted, they apply only when the quantity being displayed is the sum of several factors, but there aren’t many quantities of any other kind, so a dedicated diagram seems a waste of time.
However, I shall add the basic causal map. It will be fixed: a simple PNG image that’s slapped up on the screen when called for. I shall probably add some interactivity in the form of special information displayed when the mouse hovers over a factor. That’s about it.