This is a "back to basics" essay. I’d like to address one of those elementary concepts that so often gets overlooked in the stampede towards technological gee-whizziness: the need for computer interactions to proceed with celerity.
This point was brought home to me forcefully a few months back when I was consulting with a design team that was bringing a CD-ROM product up to alpha. During early development, they had of course kept everything on a big fast hard disk, and it worked just fine. But then they burned their first CD and overnight, their user interface turned into a disaster. It seems that the user interface went out to the drive for almost everything it did, and everything slowed to a glacial pace when the CD replaced the hard drive. The contrast between the old user interface and the new one was striking. The old one on the hard drive was a good design; everything worked just fine. The new interface on the CD was slow, clumsy, and confusing. You’d click on hotspots before they were ready to accept your click and nothing would happen. You’d get confused and click again somewhere else. By that time the first click was being processed, so now your actions would be completely decoupled from their consequences. As a result, the user interface was a confusing morass. Yet the actual design and programming were exactly the same in each of the two interfaces! The only difference between them was the delay time interposed by the CD drive.
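As a purely hypothetical sketch of that failure mode, consider the blocking event loop below: clicks queue up while the handler waits on the slow drive, so each one is finally processed against a screen that no longer shows what the user was aiming at. The names, timings, and structure here are mine for illustration, not the team's actual code.

```c
/* Illustrative only: a blocking event loop in the spirit of the CD-ROM
 * product described above. Names and timings are hypothetical.         */
#include <stdio.h>
#include <unistd.h>   /* sleep(); POSIX, used to simulate the CD delay */

static int screen_state = 0;          /* which "page" is currently drawn */

static void load_page_from_cd(int page)
{
    sleep(2);                         /* a CD seek-and-read, not a hard disk */
    screen_state = page;
    printf("page %d finally drawn\n", page);
}

int main(void)
{
    /* Two clicks arrive in quick succession; each was aimed at whatever
     * page the user was looking at when the click was made.            */
    int clicks[]   = { 1, 2 };
    int aimed_at[] = { 0, 0 };        /* both clicks made while page 0 showed */

    for (int i = 0; i < 2; i++) {
        if (aimed_at[i] != screen_state)
            printf("click %d was aimed at page %d, but page %d is up: "
                   "action decoupled from consequence\n",
                   i + 1, aimed_at[i], screen_state);
        load_page_from_cd(clicks[i]); /* blocks; later clicks go stale */
    }
    return 0;
}
```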
The moral of this story should be brutally clear: a good user interface depends critically on the simple issue of response time. You can have great graphic design, sneaky-clever programming, evocative feedback sounds, and all the other trappings, but if you fail the basic test of speed, your interface is garbage.
How fast is fast enough? I believe that the rule of thumb here is based on the user’s perception. If the user waits long enough to notice the delay, then the delay is too long. For me, that means about half a second, tops, but it depends on what I’m doing. For example, if I’m using my mouse in a fairly staccato fashion (say, opening and closing windows), then the half-second rule is just fine. My eyes are bouncing all over the screen, taking in information that’s changing. It takes me about half a second to recognize a large change in the display anyway, so a delay of that length won’t be noticeable. On the other hand, my fastest input is typing, and I peak at perhaps ten characters per second. If I’m typing one character every 100 milliseconds, and the screen lags my typing by 500 milliseconds, my touch-typing will be seriously screwed up. Ergo, response times for fast input such as touch-typing should be shorter.
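If it helps to see those numbers in one place, here is a minimal sketch of a perception-based latency budget. The categories and figures simply restate the half-second and 100-millisecond values above; they are not meant as canonical thresholds.

```c
/* A sketch of perception-based response-time limits, using the figures
 * from the text; the categories and exact values are illustrative.     */
#include <stdio.h>

enum input_kind { TYPING, MOUSE_ACTION };

/* Longest delay, in milliseconds, before the user notices it. */
static int max_response_ms(enum input_kind kind)
{
    switch (kind) {
    case TYPING:       return 100;  /* must keep up with ~10 chars/sec */
    case MOUSE_ACTION: return 500;  /* eyes need ~half a second anyway */
    }
    return 500;
}

int main(void)
{
    printf("typing: %d ms, mouse: %d ms\n",
           max_response_ms(TYPING), max_response_ms(MOUSE_ACTION));
    return 0;
}
```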
There are cases where a longer delay is acceptable. My rule of thumb is that I can keep the user waiting for up to n seconds every n minutes. Thus, five-second delays had better not occur more than once every five minutes. It’s a rough rule, but it expresses the basic idea that making the user wait is acceptable if such delays occur rarely.
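A toy illustration of that budget, assuming we simply remember when the last long pause happened; the function and variable names are invented for the example.

```c
/* A sketch of the "n seconds every n minutes" rule: a delay of d seconds
 * is tolerable only if at least d minutes have passed since the last
 * comparable delay. Purely illustrative.                                */
#include <stdio.h>

static double last_delay_min = -1e9;   /* time of last long delay, in minutes */

/* Is a delay of delay_sec seconds acceptable at time now_min (minutes)? */
static int delay_is_acceptable(double now_min, double delay_sec)
{
    return (now_min - last_delay_min) >= delay_sec;  /* n sec per n min */
}

int main(void)
{
    /* A five-second delay at t = 2 min, then another at t = 4 min. */
    printf("%d\n", delay_is_acceptable(2.0, 5.0));  /* 1: first one is fine   */
    last_delay_min = 2.0;
    printf("%d\n", delay_is_acceptable(4.0, 5.0));  /* 0: too soon after last */
    return 0;
}
```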
Looking at it from the other direction, how fast is too fast? At what point are we pushing the speed issue too hard? At the top end of the range of delays, the answer is simple: you can never be too fast. If it takes five seconds to load a big image, then trimming that down to one second is always preferable. At the other end of the speed range, though, we reach a point of diminishing returns. If my fastest typing takes 100 milliseconds per character, then getting response times down to 50 milliseconds doesn’t accomplish anything whatsoever.
Another point about speed: ofttimes increases in speed open up opportunities that simply aren’t feasible at lower speeds. For example, consider Doom running on an 8086. Sure, it could be done. But imagine what the game would be like with screen refresh rates of perhaps one frame every five seconds. The game would be unplayable. The ideas behind Doom didn’t become feasible until consumers had fast enough machines.
A slightly different example comes from the World Wide Web. The very concept of a graphical Internet makes no sense at, say, 2400 baud. WWW stuff really doesn’t work at all below 14.4 kbps and doesn’t come into its own until you reach 28.8 kbps.
This whole issue of speed brings up a hidden gotcha in interactive design. Software developers have the best and fastest systems; consumers tend to have slower machines. So the software developer creates an interface design that works fine on his own machine and then is surprised when consumers complain that it’s slow or clumsy. The software running on the slower system is not merely the same thing slowed down; there’s a qualitative difference arising from the difference in speeds. Any time there’s a delay in responding to the user, the user’s mind starts turning over. Did I do something wrong? Is it waiting for me to do something more? Perhaps I should abort this operation and try something else. If the user acts on these impulses, the flow of the interaction is disturbed and the user interface breaks down. It’s not just slower; it’s broken.
I leave you with a particularly horrifying possibility. Imagine a line-oriented disk operating system that puts up the message "Press ENTER" and then goes out to the hard drive to get the rest of the message, which is "to format disk". Imagine what happens if the delay between the first message and the second message is more than a few seconds.