UC Santa Cruz Conference

April 15th, 2011

The games academic group at UC Santa Cruz hosted a conference (held in Milpitas) on Friday April 15th, 2011, and asked me to deliver a short lecture. Herewith my overall impressions of the one-day conference:


Years ago I wrote an essay entitled “Games are Dead”. Nobody paid much heed. A few years later, when I presented a summary of that thesis at the Game Developers Conference, I earned the ire of many in the games industry. I harbor no regret for that – it was the truth, and I was right. In terms of creative progress, the games industry had become a brown star, slowly fading out. It was a BIG brown star, to be sure, but it was devoid of any creative innovation, and any entertainment industry that cannot maintain its creative momentum is merely marking time until its death.

But now that brown star is brightening dramatically. Indeed, it has become something like an inverted black hole: it is growing larger and larger, and it is doing so faster and faster. One hopes that an inverted singularity does not lie in our future.

The causes of my positive outlook are simple: the publishers have been eclipsed by a passel of creative new actors. There are gobs of startups hawking all manner of new ideas. The indie games movement is showing wonderful creative energy. And the maturation of the web has made it possible for indie developers to earn a living just by putting their stuff up there. Indeed, the single most salient factor in the reversal of my opinion arose when Jason Rohrer told me that he was earning plenty of money selling his products on the web.

Attending the Game Developers Conference five weeks ago also stimulated my thinking. Although the conference, as always, was depressingly dominated by the same old creativity-destroying publishers, there were occasional flashes of truly interesting stuff. That experience got me thinking, but it was the UC Santa Cruz conference on the future of games that brought all the bits and pieces into clear focus.

There were three keynote lectures and four panel discussions; I sat on one of the panels. Each of the panelists offered a 15-minute presentation. Mine was on authoring tools and how they evolve; I also discussed my own experience with the SWAT authoring tool for Storytron.

Rather than discuss any particular lecture (save one), I shall offer my overall impressions of the talks I listened to.

Speaking Techniques
First, it would seem that people are finally getting the hang of how to use PowerPoint (or, for the Mac, Keynote) properly. A number of lecturers had figured out that the language must be spoken and that the imagery should serve to illustrate the ideas presented verbally. In other words: no slides with text. The best lectures (including my own) followed this rule. But there were several ghastly presentations that did not. In one particularly bad case, the speaker presented slides that were almost entirely textual in nature, and mostly just read the content of the slides to the (presumably illiterate) audience. Worse, the text was in a tiny font!

One sad realization: I was the only speaker to come out from behind Fortress Podium and face the audience. I was the only speaker to use significant vocal variation, and the only speaker to use much in the way of gestures. OK, OK, they’re academics, but that doesn’t excuse them from the responsibility to learn how to communicate their thinking clearly.

The Rehabilitation of Chris Crawford
I was much surprised to hear many of my ancient opinions being presented as new ideas at the conference. Over and over I found myself thinking, “I made that point in a lecture in 1980-something” or “I wrote that in an article in the JCGD in 1990-something.” I feel gratified that my ideas are finally catching on, but I am small enough to feel a tincture of resentment that nobody realizes that I was saying all these things twenty years ago. Mea culpa.

I was plied with plaudits. Three of the seven talks I heard prominently mentioned me or my work, and the general mood of the group seemed to be one of great deference towards me. I rather felt as if I should be bronzed. My wife will be furious to learn that they fed my already obese ego.

I was even more gratified to hear my terminology being widely used. I heard lots of people using my term ‘verbs’ – Huzzah! And my preferred term ‘storyworld’ seems to have established itself.

Two Approaches in Parallel
There were a number of interesting ideas tossed about. I had a great talk with Michael Mateas about the differences between our two approaches to interactive storytelling. But the similarities are also striking. Months ago, Michael and a group of his students wanted to build a clean, simple storyworld: something that imposed tight natural constraints on the behavior of the player, yet also sported lots of dramatic intensity. They hit upon high school dating as the ideal scenario on which to base their efforts. By no mere coincidence, I used precisely the same reasoning to arrive at precisely the same conclusions. Thus, my “Prom Night” and their “Prom Week”.

Their storyworld uses graphics heavily, whereas mine uses no graphics at all. Theirs also has a smaller range of verbs – I’d guess 30 to 50 – while mine already has 150 and will probably reach 500.

There were some factors on which we differ greatly. For example, their system has a much tighter cause-effect linkage: you get immediate feedback on every act you perform. In many cases it takes the form of a little icon representing friendship, romance, or perceived coolness, which appears between the two characters and floats upward, accompanied by an integer indicating the magnitude of the change in the relationship due to your action. It’s cute, but it bothers me because it makes the whole process look quite mechanical. Indeed, “long-term” effects manifest themselves over just a few turns. The demonstrator at one point showed how the player could win the sexiest girl in school as a girlfriend by breaking up her relationship with the coolest guy in school. Five easy steps later, he had broken up the romance and stolen the girl away. That bothers me: it’s just too mechanical.

When one of the designers defended this aspect of the design by observing that this system gave clear and quick feedback to the player, I pointed out that there’s a difference between a gamer and a typical American female, in that the gamer desires to "game the system". He wants to know the algorithms so that he can press the right buttons in the right order to get optimal results. This is an exercise in computation, not social reasoning. I claimed that female players will derive greater enjoyment from exercising their social intuition than their computational skills. I don’t think I got through to the fellow.

The most interesting exchange for me came right at the end of the conference, when Michael Mateas and I compared notes. I pointed out that the most prominent difference between our systems was the computational machinery employed. His system, following common practice in Artificial Intelligence, relies on an overwhelmingly boolean set of rules. These rules take the form of social reactions to events. For example, one rule might be “If a girl leaves a guy for another guy, the first guy will like the second guy less”. There is a small amount of arithmetic logic here, in that the rule implements its conclusion by subtracting a small integer from the variable representing the goodwill that the first guy feels for the second guy.
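
To make the structure concrete, here is a rough sketch of what such a rule might look like in code. This is my own illustration in Python, not Prom Week’s actual rule language; the event name, the character names, and the -2 adjustment are all invented for the example.

```python
# A boolean social rule, roughly in the spirit of the rule quoted above:
# the trigger is a yes/no test on an event, and the effect is a small,
# fixed integer adjustment. Invented names and numbers; not Prom Week's
# actual rule syntax.

def jilted_resentment(event, affinity):
    """If a girl leaves a guy for another guy, the first guy likes the second guy less."""
    if event["kind"] == "breakup_for_another":       # 1-bit test: the rule either fires or it doesn't
        jilted, rival = event["old_partner"], event["new_partner"]
        affinity[(jilted, rival)] -= 2                # fixed-size effect

affinity = {("Greg", "Rex"): 5}
jilted_resentment({"kind": "breakup_for_another",
                   "old_partner": "Greg", "new_partner": "Rex"}, affinity)
print(affinity)                                       # {('Greg', 'Rex'): 3}
```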

I pointed out that his system uses 1-bit logic, whereas mine, with its reliance on numeric calculations, uses 32-bit logic. It might seem obvious that the 32-bit calculation is theoretically preferable to the 1-bit calculation, but there are two flaws in this reasoning. The first I pointed out myself: his 1-bit approach is backed by over 3600 different rules, because boolean rules are easy to formulate and express. My system has greater resolution, but because it’s more difficult to use, I have far fewer “rules” (scripts) to guide behavior. In other words, for the same amount of effort, he can create lots of rules with low resolution, while I create many fewer rules of higher resolution. Which system permits greater overall creative resolution? I believe that my system is at present too clumsy to compete with his.
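
For contrast, here is a sketch of roughly the same judgment expressed numerically, in the spirit of my approach: a single formula blending a few weighted, real-valued inputs into a graded result. Again, the variable names and weights are invented for illustration; this is not an actual Storytron script.

```python
# The same judgment as a numeric formula: several weighted, real-valued
# inputs blended into a graded result rather than a fixed-size bump.
# Invented weights and traits; not an actual Storytron script.

def affinity_change(jilted, rival, old_flame):
    """How much the jilted guy's feeling toward the rival shifts after the breakup."""
    return -(0.6 * jilted["pride"]                    # proud characters take it harder
             + 0.3 * jilted["love_for"][old_flame]    # the more he loved her, the worse the sting
             - 0.2 * jilted["respect_for"][rival])    # grudging respect softens the blow

greg = {"pride": 0.8, "love_for": {"Jill": 0.9}, "respect_for": {"Rex": 0.4}}
print(affinity_change(greg, "Rex", "Jill"))           # about -0.67: graded, not fixed
```

Writing and tuning even one such formula takes far more effort than writing a boolean rule, which is exactly the trade-off described above.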

He supplied the second flaw himself: since my calculations ultimately resolve down to simple choices among a handful of options, I’m wasting energy calculating things to greater resolution than necessary. For example, consider this calculation:

3.209459870325 × 2.978386793638 ≈ 10

I multiply two numbers carried to twelve decimal places and report the result to just one significant figure. Why bother with all those digits when only one of them survives into the final result? This is the nature of Michael’s objection, and it’s a solid one -- but it’s not quite as bad a problem as this example suggests. You see, many of the variables in my system can be used multiple times, and the more times a variable is reused, the more valuable those extra digits of precision become. There’s an even more serious problem arising from using too few digits. Suppose that smiling at somebody makes them like you a little bit more. In the UCSC system, the smallest increment is +1 -- but big increments, like breaking up with somebody, aren’t that much bigger -- maybe 10 or 20. Is laughing in the face of somebody when you break up with them really only 20 times more significant than smiling at somebody as you pass each other in the hall?
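
A toy calculation illustrates the point about reuse. The update rule and the numbers below are invented purely for illustration; the only point is that a value rounded hard at every step drifts steadily away from the same value carried at full precision, and the drift compounds with each reuse.

```python
# Carry a quantity through ten updates at full precision, then repeat the
# process with the value rounded to one decimal place after every use.
# The rule and numbers are arbitrary; only the drift matters.

full = coarse = 0.37                            # some computed relationship value
for _ in range(10):
    full = full * 1.23 + 0.05                   # full precision throughout
    coarse = round(coarse * 1.23 + 0.05, 1)     # re-rounded after every use

print(full, coarse)                             # full ends near 4.44, coarse at 4.6:
                                                # the rounding error compounds with reuse
```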

This is a variation of the "round-off" error problem so common in the early days of computing. You run into serious problems if the dynamic range of your activity exceeds your word size. If you want to include two behaviors that are wildly different in the magnitude of their effects -- such as, say, smiling at somebody versus having sex with somebody -- then you must have a word size large enough to express that difference.
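
Here is a tiny sketch of that squeeze, with invented magnitudes: suppose the most dramatic act should move a relationship about five hundred times as much as a smile, but the scale can only hold values up to 20.

```python
# Dynamic range versus word size, with invented magnitudes. If the biggest
# effect must fit in a scale capped at 20, the smallest effects round away
# to nothing.

SCALE_MAX = 20                  # the largest value the coarse scale can hold
BIGGEST_REAL_EFFECT = 500.0     # how big the most dramatic act "really" is, relative to a smile

def quantize(effect):
    """Map a real-valued effect onto the coarse integer scale."""
    return round(effect * SCALE_MAX / BIGGEST_REAL_EFFECT)

print(quantize(500.0))          # the most dramatic act -> 20
print(quantize(1.0))            # a smile               -> 0 (it vanishes entirely)
```

With a 32-bit floating-point value there is no such squeeze: both the smile and the grand gesture survive at their proper relative magnitudes.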

My conclusion from all this is that Michael is better positioned for smooth evolutionary progress. His system can slowly add more arithmetic calculations as it grows, whereas mine must regress to less reliance on arithmetic calculations before it reaches a level of facility where I can actually get something done.

Snipes
After one of the lectures, a member of the audience challenged the speaker on the matter of “procedural content generation” – the use of algorithms to generate things that had previously been crafted explicitly by creative people. The questioner did not have the time to fully map out his objection, but it seemed to me that he considered procedural content generation to be but one small tool in a large toolkit full of many other tools. Perhaps he does not think about computing the way I do, but I see “procedural” as something that involves processes, and the “Central Processing Unit” is the very heart of a computer. If he doesn’t want to use the computer to compute, what does he want to do with it?

One of the speakers started off with the observation that I had urged her to put more energy into the listening portion of her designs. This is part and parcel of my long-standing point that interactivity requires the computer to listen to the user every bit as well as it speaks to the user. The lecturer thought it would be interesting to try a counter-example experiment: what if we built something that deliberately minimized the listening function? She proceeded to build such a product, and demonstrated it. It was certainly a clever experiment. Oddly, I think her example led the two of us to opposing conclusions. I sensed that she felt her experiment demonstrated that “unlistening” products could in fact be built and really did work. I concluded that her experiment demonstrated that such products are boring. I have no idea what the rest of the audience thought.

One of the lecturers discussed personality modeling. I was a bit bemused that this professor had carefully researched the personality models used in psychology yet knew nothing of my work, including my chapter on personality modeling in Chris Crawford on Interactive Storytelling. Instead, she liked the Big Five model from psychology (the five dimensions of personality being “Openness to Experience”, “Conscientiousness”, “Extraversion”, “Agreeableness”, and “Neuroticism”). She also liked the three-dimensional PEN model (Psychoticism, Extraversion, and Neuroticism). I approached her afterwards and lectured at her about my experiences with personality modeling in interactive storytelling. I emphasized that there’s a huge difference between modeling people and modeling characters in stories. She acknowledged the difference, but I don’t think the distinction will be incorporated into her work; she argued that the results from psychology are useful, while the results from the arts are too vague to be of computational value. Her argument reminds me of the drunk who searched for his lost keys under the streetlight because that was the only place where there was light. I also lectured at her about the importance of designing a language system that is symmetric for both user and computer – this to a professor who is a world expert in computational linguistics!
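
For readers unfamiliar with it, the computational reading of the Big Five is simply a character reduced to five graded traits, which behavior code can then consult. The sketch below is a generic illustration with invented values and an invented decision rule; it is neither the professor’s model nor mine.

```python
# A character as a vector of the five graded traits, plus one example of
# behavior code consulting those traits. Values are invented for illustration.

priscilla = {
    "openness":          0.8,   # openness to experience
    "conscientiousness": 0.3,
    "extraversion":      0.6,
    "agreeableness":     0.2,
    "neuroticism":       0.7,
}

def will_confront(character, provocation):
    """Hot-headed, disagreeable characters confront over smaller provocations."""
    threshold = 0.5 * character["agreeableness"] + 0.3 * (1.0 - character["neuroticism"])
    return provocation > threshold

print(will_confront(priscilla, 0.25))   # True: the threshold works out to 0.19 for this character
```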

Students
I derived my greatest pleasure from my conversations with students. At first they seemed rather shy, as if talking to Zee Divine Crawford was a privilege confined to demigods, but somehow we connected and had some great conversations. The most difficult of these were my responses to their questions about becoming professional game designers. On the one hand, the brutal truth is that damn few graduates even get a job in the industry, and those that do are worked like dogs until they burn out and depart for more humane industries. How can I encourage them in such self-destructive behavior? On the other hand, it is a greater sin to quash a young person’s dream than to allow them to fail. I related these considerations to them, and concluded with this recommendation: roll the dice. Take your shot at becoming a big-shot game designer. Realize that the odds are against you, and be ready to bounce back from failure should it strike. Game design is still a good entry point into other fields of programming; other businesses now realize that people who work on games are not dilettantes and often boast impressive skills.

I also urged them to avoid the big publishers and concentrate on small startups or indie operations. These are the companies that represent the future of games. The big studios are for “company men”, not those whose hearts burn with dreams.

Tidbits
The place was packed with iPads and MacBooks. There were also a few Windows machines. Speaking of Windows machines, nobody seems to have noticed the snide little joke I built into a slide explaining how important users groups on the web have become to the success of authoring tools:



Perhaps it’s too small.

Will Wright’s Lecture
The conference concluded with a keynote by Will Wright. I was very impressed with his lecture; Will is one of just three outright geniuses I have known, the other two being Alan Kay and Randall Smith. (No, I do not consider myself a genius, just a well-trained and disciplined thinker.) I won’t attempt to convey Will’s theme, as there really wasn’t a central theme; the lecture was a hodgepodge of loosely connected insights about the future. They were mostly brilliant insights, although I think that Will knows far more about technology than about history. The lecture benefited from a beautiful collection of slides nicely illustrating and complementing his points. Will definitely understands the right way to use a slide set in a presentation. My primary emotional reaction to the lecture was jealousy at the large budget Will was able to apply to that slide show. I’ve built a number of slide shows, and each slide can easily require an hour’s time searching the Web for the perfect non-copyrighted images. There was also some custom artwork – boy, I’d sure like to have an artist on call to whip up custom imagery for me! Anyway, it was a great lecture!

Conclusions
At the beginning of this report, I wrote that the world of games is growing bigger and bigger, faster and faster. My overall impression is that there is simply too much going on out there to keep up with. For most of the last decade, I would occasionally dip my toe into the water to see what was happening in the games biz. But now the world of games has become so vast that it’s like dipping my toe at an Oregon beach to estimate what the water’s like all over the Pacific. The effort required to stay on top of developments in the world of games has become so great that I must choose between devoting much of my day to learning the latest and giving up entirely.

This raises a larger question for me: how much of my time should I devote to learning from others, and how much to learning by my own direct efforts? My experience at the UCSC conference showed that there is something to learn from others; I was intrigued by some of the ideas that arose there. But all in all, I did not find the effort worthwhile as a learning experience. It was very worthwhile for other reasons: talking to the students, presenting my own thinking, showing my work to others. I admit, challenging and being challenged by people like Michael Mateas, Emily Short, and Ian Bogost certainly stimulated my thinking. Still, had I been given the opportunity to attend for free, but only as a member of the audience, I would not have bothered.

This does not reflect vainglory or stubbornness on my part -- I think it reflects how far I have traveled down my own path. I now live in an intellectual bubble of my own creation. My way of thinking about interactivity and game design and so many other things is so odd that it simply doesn’t fit into the rest of the intellectual universe. Some of my thinking, if sufficiently twisted around, can be appreciated by others, and I’m certainly happy to explain my weird views to anyone who will listen, but there remain important areas of thought for me to develop, areas so alien to the current conventions that there is nothing in the outside world that can shed light on them. Once again, in yet another way, I am striking out on my own, traveling a path nobody else can see, making mistakes that are obvious to all and discoveries hidden from all. I will continue my efforts to make my weird little intellectual bubble comprehensible to others. Given that some of my radical ideas from 20 years ago are now conventional wisdom, I expect that some of my current ideas will eventually weasel their way into the community mind. I once subtitled Entertainment Software Design “Today’s Heresy, Tomorrow’s Dogma”. Perhaps I should start using that slogan again.