CGDC Survey Statistics

by Evan Robinson


Survey Methodology
Seven hundred copies of the 1991 CGDC Survey were sent out at the end of January 1991. As of 6 March 1991, 57 had been returned (8%). Included in those 57 surveys were 162 individual evaluations of 54 separate publishers. We divided the data into two groups: developer data and publisher evaluations. At the 1991 DevCon, Stephen Friedman presented the developer data and Evan Robinson the publisher evaluations. This follow-up article is intended to acquaint those who did not attend the two lectures with the basic data and to address issues raised during those lectures.

How Good Is The Data?
This was the main question raised at the lecture on publisher evaluations. There is no question that there are problems with the sampling regarding publisher evaluations. First, the sample is entirely self-selected. Second, with the exception of Electronic Arts (and perhaps Broderbund), the number of responses for any given publisher was so small as to make any projections from the data suspect. These two factors alone should make us suspicious about drawing broad conclusions from this data. However, the raw data itself is useful in the same way that asking other members of the game community is useful: it provides data points. So long as you view the data in that light (as though you’d talked to a bunch of people and listed their responses), the data is meaningful.

With 57 surveys returned, the sample for the developer data is much larger than for any single publisher. The sample remains self-selected, however, and we must therefore be concerned that the respondents may not be representative of the development population as a whole. It’s entirely possible that this data is skewed toward successful or unsuccessful developers, or by some much less obvious trait that could affect the data.

How Can We Improve The Data?
An obvious improvement for the developer data would be to eliminate the self-selection process. I plan to accomplish this by calling a randomly selected sample of the population for next year’s survey instead of relying upon people to respond via mail. This will help to remove concern about the developer data.

Improving the publisher data will be considerably more difficult. Because every member of the population hasn’t worked with every publisher, getting reliable data by random sampling would be problematic. I believe that our current method is close to the best we can achieve considering our limited resources. One possible improvement would be to find a way to encourage more respondents to the publisher survey.

So Is It Worth Reading?
Absolutely -- so long as you recognize the limitations of the process. Statistical analysis is an attempt to predict the characteristics of a large population by examining a small sample. Generally speaking, the better selected the sample, the better the prediction. Our sampling on developer data is reasonably good, and can be improved. Our sampling on publisher data is not as good, and probably cannot be greatly improved. So I suggest you consider the publisher data as though you had talked to a whole lot of people and asked them what they thought of the publishers they had worked with.
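The claim that a better-selected sample yields a better prediction can be made concrete. Here is a rough sketch (assuming, counterfactually, that the 57 responses had been a true random sample of the 700 developers surveyed; the function name and the use of a normal approximation are my own illustration, not part of the survey):

```python
import math

def margin_of_error(n, population, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion estimated
    from a simple random sample of size n, with a finite-population
    correction since the population (700) is small."""
    se = math.sqrt(p * (1 - p) / n)                       # standard error of a proportion
    fpc = math.sqrt((population - n) / (population - 1))  # finite-population correction
    return z * se * fpc

# 57 responses out of 700 surveys mailed
moe = margin_of_error(57, 700)
print(f"+/- {moe * 100:.0f} percentage points")  # prints "+/- 12 percentage points"
```

Even under ideal random sampling, 57 responses carry roughly a twelve-point uncertainty on any reported percentage; self-selection adds an unknown bias on top of that, which is why the data is best read as a collection of individual reports rather than a projection.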

Overall Publisher Impressions
The following are loose categorizations based purely upon visual examination of the distributions. As the general categories below show, the ratings skewed high overall.

Publishers scored above average on the following: Trustworthiness, Overall Quality of Software Publisher, Marketing & Sales, Testing & QA.

Publishers scored about average on the following: Developing In-House, Producers, Advances, Royalties, Accounting/Royalty Reporting, Speed of Payment (both Advances & Royalties), Overall Contract.

Publishers scored slightly below average on the following: Tools & Technical Support, Contract Negotiation, Willingness to Alter Contract.

It’s clear that the responding developers generally feel that publishers overall are doing a better than average job, with the small exceptions (remember that those are only slightly below average) of Tools & Technical Support, Contract Negotiation, and Willingness to Alter Contract. Collectively, I believe the publishers deserve applause for the current feelings of developers toward them. Of course, there is always room for improvement...

So What About Developers?
The following is collected from 57 responses representing something more than 57 people (some response sheets had data from more than one individual). In addition, it’s not always clear whether respondents completely understood some of the questions (it was not uncommon for the percentage totals on the two expense questions to total significantly less than or more than 100%). However, the sampling on this data is better than the publisher data, and our main concern is the self-selection.

What Do We Do and Who Are We?
Twenty-six respondents described themselves as programmers, 6 as artist/musicians, 29 as designers, 13 as managers, 2 as quality assurance, and 6 as other. Obviously, many people consider themselves to perform multiple functions. Twenty-three respondents described themselves as (co-)owners, 26 as independent contractors, and 3 as employees. Fifty respondents said they or their company developed original games, 23 said they did conversions of games developed by others, 10 said they published games, and 7 said they distributed games.

The mean age of respondents was 35, with a distribution covering a range from 23 to 57.

Twenty-six respondents said FRP games were their favorite, 16 combat simulations, 9 sports games, 20 action games, 19 non-combat simulations, 28 strategy, 10 other, and 1 person doesn’t like to play computer games.

The data regarding expense and income percentages is rather suspect, since some responses had total expenses ranging from 85-120% of the total, and income from 90-105% of the total. The distributions are widely variable, but the mean expense percentages were: 17% Design, 43% Programming, 18% Graphics, 6% Music, 8% QA, and 12% Management. Mean income divisions were: 17% Design, 39% Programming, 16% Graphics, 6% Music, 9% QA, and 28% Management.

Forty-six respondents said they distributed on floppy disks, 18 on ROM cartridges, 4 in Coin-Op games, 2 in Handhelds, 6 on CD-ROM, 11 Online, and 4 Other.

If there were a “mean” developer somewhere out there, s/he would have made about $35,000 from electronic entertainment in 1990. It would have taken 2 1/2 months to sign a contract with a publisher. That contract would have included about $42,000 in advances against an 11% royalty. Those advances would be paid out about 2 weeks after they were due. The royalties would be paid quarterly, between 30 and 45 days after the end of the quarter. It’s probable that our developer would get some royalty payments.
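To get a feel for what those contract terms imply, here is a back-of-the-envelope recoupment calculation (assuming, as a simplification of my own and not something the survey asked, that the royalty is computed on the publisher’s net receipts and the advance is fully recoupable against royalties):

```python
def revenue_to_earn_out(advance, royalty_rate):
    """Net receipts the publisher must collect before the advance
    is recouped and royalty checks start flowing to the developer."""
    return advance / royalty_rate

# The "mean" 1990 contract: $42,000 advance against an 11% royalty
breakeven = revenue_to_earn_out(42_000, 0.11)
print(f"${breakeven:,.0f}")  # prints "$381,818"
```

In other words, under these assumptions the mean contract earns out only after the publisher collects roughly $382,000 on the title, which puts the observation that our developer would probably see some royalty payments in a more encouraging light.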

The contract would probably specify that the developer got to keep the copyright, but the publisher would almost certainly not have to meet any sales quotas to keep the license to publish the product. Source code would probably be provided to the publisher.

If our hero(ine) were to sign a contract to convert a program written by someone else, s/he would receive about $24,000 to do the work, and would probably get a royalty.