The nature of programming is changing. The hardware on which we work has changed dramatically in the last few years, and we should expect the style with which we program to change as well. Yet I don’t see programming styles changing with anywhere near the speed that the hardware is changing.
First we had punched cards, batch processing, and FORTRAN. Then we had terminals, structured programming, and Pascal. Then came the personal computer. We used BASIC and assembly language because they fit nicely into tiny computers. But then our personal computers grew larger and we started getting more powerful languages: Forth, Pascal, and C. During this evolutionary process, our programming styles evolved along with our hardware.
The mainframe people have digested decades of programming experience to come up with an entire programming philosophy. This programming wisdom is available in books such as Code Complete (which I highly recommend). This is valuable information, but we must take care to realize that it represents a digestion of a different kind of programming environment. The personal computing programming environment is younger and has its own unique properties; the tried-and-true lessons of yesteryear are not completely applicable to the world in which we work.
Three factors strike me as crucial to appreciating the programming environment of the 1990s. First is the vast power of current systems compared with what we had just ten years ago. I remember when compilation times were around ten minutes. With a great deal of effort, I was able to trim that to five minutes. Nowadays compilation times are measured in seconds. Everything else is better, too. The development environment is integrated so that editing, compilation, and debugging are all tied together. A click here, a command there, and you can sail through the program with astounding speed. We’ve come a long, long way from the days of BASIC on the Apple II. Yet much of our programming style fails to reflect this. For example, how often do you still try to address multiple versions of one situation with a one-size-fits-all routine? Why not just cut and paste the code a half-dozen times, then twiddle each instance of the code to reflect the details of the particular instance? Sure, it wastes RAM; who cares when RAM is sold by the megabyte?
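To make the point concrete, here is a minimal sketch in C. The names and the little formatting task are hypothetical, invented for illustration; the contrast is what matters: one general-purpose routine steered by flags, versus the cut-and-paste alternative in which each copy is twiddled for its own case.

#include <stdio.h>

/* The "one-size-fits-all" style: one routine handles every case via flags. */
void print_amount_general(double amount, int show_sign, int show_cents)
{
    if (show_sign && show_cents)      printf("%+.2f\n", amount);
    else if (show_sign)               printf("%+.0f\n", amount);
    else if (show_cents)              printf("%.2f\n", amount);
    else                              printf("%.0f\n", amount);
}

/* The "cut and paste" style: duplicate the routine and twiddle each copy.
   It costs a little extra code space, but each copy is short and direct. */
void print_invoice_amount(double amount) { printf("%+.2f\n", amount); }
void print_summary_amount(double amount) { printf("%.0f\n", amount); }

int main(void)
{
    print_amount_general(12.5, 1, 1);   /* caller must remember what the flags mean */
    print_invoice_amount(12.5);         /* specialized copy: nothing to decode */
    print_summary_amount(12.5);
    return 0;
}

Each specialized copy wastes a few bytes, but none of them has to be read with a table of flag meanings in hand, and changing one case can never break the others.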
The second critical factor is the highly interactive nature of programming with a personal computer. Most of the structured programming concepts that the experts have polished so finely over the years emphasize extensive planning and "write it once" coding. But when you can interact with the compiler at today’s speeds, the value of writing it once diminishes. On several occasions when I was uncertain about the syntax of a command, I found it faster to simply try out three or four reasonable variations on the syntax than to take the time to look it up. Yes, it’s hacking; who cares? It works just as well and it’s faster.
Third, it has become increasingly important for non-programmers to have programming access to the computer. We have seen the tremendous success of tricycle languages: BASIC, then HyperCard, then MacroMind Director, and so on. This is an important development; in the future, professional programmers will write core code for semi-programmers, who will in turn create the programming content for consumers.