January 31st, 2013
My wife has a small tax-preparation business with a partner. They use PCs because the best professional tax-preparation software is available only for the PC. Besides, they told themselves, they’d need a number of computers, and PCs are cheaper than Macs, so they figured they’d save a little money.
Of course, PCs are not without their little problems, so they have been very careful in what they download and what they install. One machine carrying the most sensitive data is never connected to the Internet. However, the professional tax-preparation software requires an Internet connection, so the bulk of their work is carried out on three machines connected to the Internet. They installed only the absolute minimum of software on these machines; other machines are used for email, word processing, Web browsing, and so forth. After all, since they’re handling other people’s tax information, they have to maintain high security standards. Oh, and of course they use Norton anti-virus software.
Even so, problems crop up all the time; in such cases they call their handy-dandy PC geek, who runs a business providing just such services. He comes out, fixes their problem, and charges them a service fee. After several years of operation, they have learned that the “cheaper” PCs cost more overall than their Mac, which never needs servicing.
This tale serves as a prologue to a larger and more important story about the security of our computers. Just this morning I read a news story about how the New York Times was subjected to four months of attacks, most likely from China. Now, the NYT is a big company with an entire IT department to ensure that everything runs smoothly, and they were doing everything correctly, yet the hackers still got through. We read stories like this every day; it seems that hackers can do pretty much as they please on the Internet. Nobody is safe from them.
I’d like to remind the reader of a simple, absolute truth: there is no fundamental reason why computers must be insecure. Theoretically, it is a simple matter to build a computer system that can never be hacked. Yet, we never seem to achieve that theoretical ideal. The reason for this can only be understood by an excursion through the history of programming.
A History of Bugginess
In the earliest days of programming, you wrote a program in machine code, the 1’s and 0’s that the computer used internally. This was a tedious and slow task, and therefore computer software was small and simple. But then somebody invented a simple notation that replaced each machine instruction with a matching code word. Instead of writing “65 00”, you’d write “LDA #00”, which meant “load the accumulator with a zero”. That was easier to understand, easier to write, and easier to read, so this new concept, called assembly language, was quickly embraced throughout the world of computing.
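To make the idea concrete, here is a tiny sketch of what an assembler does: it looks up each code word in a table and emits the matching machine-code bytes. The opcode values below are invented for illustration and don’t correspond to any particular processor.

```python
# A toy assembler: translate mnemonic code words into machine-code bytes.
# The opcode table is made up for illustration, not any real instruction set.
OPCODES = {
    "LDA": 0x65,   # hypothetical "load accumulator" opcode
    "STA": 0x85,   # hypothetical "store accumulator" opcode
}

def assemble(line):
    """Turn one line such as 'LDA #00' into a list of machine-code bytes."""
    mnemonic, operand = line.split()
    value = int(operand.lstrip("#"), 16)
    return [OPCODES[mnemonic], value]

print(assemble("LDA #00"))   # prints [101, 0], i.e. the bytes 65 00 in hex
```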
But assembly language was still clumsy to use, and as programs grew bigger, it was difficult to keep track of everything they did, so bigger assembly language programs tended to be buggier and harder to debug. The solution to the problem was the high-level language, a programming language that expressed the core elements of programming in a more human-friendly manner. Throughout the 60s and 70s, designing high-level languages was all the rage, and a genuine cacophony of such languages crowded the stage. These high-level languages made it easier to write, understand, and debug programs, and so people could write bigger, more complex programs. Even so, eventually a new ceiling was reached: even with high-level languages, programs above a certain size grew harder to read, harder to understand, and harder to debug.
There were a number of attempts to address this problem, including go-to-less programming, structured programming, and so on. They helped, but the next big step was called object-oriented programming. The strategy here is to break up a program into something like a bureaucracy, with each bureau called an object, complete and self-contained. Supposedly the objects would pass data among themselves the same way that bureaus exchange filled-out forms. This would ensure that everything was neat and tidy.
And in fact, object-oriented programming worked – larger programs could be written, understood, and debugged. So now we could write bigger and more powerful software.
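To make the bureaucracy metaphor concrete, here’s a minimal sketch (in Python, with names I’ve invented for the example) of two self-contained objects that exchange a filled-out “form” rather than rummaging through each other’s internal records.

```python
# A minimal sketch of the "bureau" idea: each object keeps its own records
# private and communicates only by handing over a filled-out "form".
class PayrollBureau:
    def __init__(self):
        self._salaries = {"alice": 50000}   # internal records, not exposed directly

    def request_salary_report(self, employee):
        """Return a 'form' describing one employee rather than raw internal data."""
        return {"employee": employee, "salary": self._salaries.get(employee)}

class TaxBureau:
    def process(self, form):
        """Work only from the form handed over by the other bureau."""
        return form["salary"] * 0.2 if form["salary"] else 0

payroll = PayrollBureau()
taxes = TaxBureau()
print(taxes.process(payroll.request_salary_report("alice")))   # 10000.0
```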
Enter hackers, stage underneath
Now a new class of programmers appeared: hackers who sniffed out the tiniest flaws in programs and exploited those flaws for nefarious purposes. Just as our bodies teem with invasive bacteria and viruses, so too is every program riddled with microscopic bugs. Few of those invasive microbes are significantly harmful; they exploit tiny flaws in our immune systems to eke out a living under the radar. The same applies to the myriad tiny bugs in a program: they seldom if ever crop up, and when they do, the user never even notices. But hackers figured out how to take advantage of such tiny bugs to advance their own purposes. It’s as if somebody infected you with a genetically modified version of one of the harmless invasive bacteria in your body, only this microbe could take control of your mind!
Heretofore, the bug-free standard was very high, but not perfect. We could afford to build software that was 99.999% bug-free, because that last 0.001% was too small to cause any serious bother to anybody. But hackers seek out that 0.001% and use it to turn our computers against us.
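As one contrived illustration (a Python sketch of my own, not drawn from any real product), here is the sort of tiny flaw that honest users never trip over but that a hacker can exploit on purpose: a file-serving routine that forgets to check whether the requested name wanders outside its folder.

```python
import os

DOCUMENT_ROOT = "/srv/taxdocs"   # hypothetical folder of files we intend to serve

def read_document(requested_name):
    # BUG: the requested name is pasted onto the root with no validation.
    # Honest users ask for "return_2012.pdf" and never notice anything wrong;
    # an attacker asks for "../../etc/passwd" and walks right out of the folder.
    path = os.path.join(DOCUMENT_ROOT, requested_name)
    with open(path, "rb") as f:
        return f.read()

def read_document_safely(requested_name):
    # One possible fix: resolve the full path and refuse anything outside the root.
    path = os.path.realpath(os.path.join(DOCUMENT_ROOT, requested_name))
    if not path.startswith(DOCUMENT_ROOT + os.sep):
        raise ValueError("request escapes the document root")
    with open(path, "rb") as f:
        return f.read()
```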
The core problem
If we want to defeat hackers, we need to reach perfection in our designs: completely bug-free software. That may sound impossible, but in fact it is not. There is no fundamental reason why software must be buggy. Theoretically, we should be able to build iron-clad, bullet-proof software. Two causes obstruct our pursuit of secure software: the limitations of object-oriented programming and the weakness of demand for security.
Object-oriented programming is a great leap forward, but it only raises the ceiling of complexity. Were we to lower our sights and aim for less feature-laden software, we could certainly build secure software, but we just can’t resist the urge to add that next improvement that, unbeknownst to us, opens up a chink that hackers can exploit. Instead of staying out in the sunshine where we can clearly see exactly how the software operates, we insist on moving “just a little” into the dark forest, assuring ourselves that we’re still in a safe zone.

Partly this is just a matter of bringing the judgement of programmers up to date. A programmer must make dozens of judgements every day. By judgement, I mean a guess as to what will work, how fast it will run, how secure it will be, and so on. Experienced programmers have a solid base of judgement to rely on, but that base has not included the degree of emphasis on security that is now necessary. Programmers must learn to think more defensively as they work. That will come, but it will always be one step behind the times.
We need a new software methodology, something that goes well beyond object-oriented programming in its level of abstraction, something that permits us to design huge software systems in an organized, deliberate manner. We are only now breeding a class of master software designers who have the judgement to handle such tasks; now we need to organize their judgements into a new technology of software design. That will take a while.
None of these improvements, however, will arise without strong incentives – money. That means that software consumers must learn to be more vocal in demanding security in the software they buy. I would suggest that we establish perfection as the required standard. In particular, I’d love to see customers demand security guarantees from publishers. If a hacker uses your software to break into my system, you’ll pay for any damages he inflicts.
Right now, no publisher in his right mind would agree to such a guarantee; software is too buggy. But imagine the response to a publisher who offered such a guarantee: people would shower money all over such a publisher.
Unfortunately, even that scenario is a fantasy: the flaws that hackers exploit usually involve some interaction between one publisher’s program and another’s. In such a case, who’s to blame? Each one can argue that his own software would have worked perfectly without the flaw in the other guy’s program.
The Solution
What this means is that we must start over and build everything from the ground up with an eye to security. There are two ways to handle this: proprietary and open-source.
The proprietary approach would require some cash-heavy corporation to build a completely new operating system from the ground up. This is not so difficult a task as it might seem. When Java was developed in the 1990s, absolute security was one of its design specifications, and it appears that Java 1.0 met that requirement. However, Java has since mushroomed into a monster, and somewhere along the way flaws crept in. It is conceivable that Java could be patched to return it to its earlier pristine state; I think that such an effort could succeed. But I don’t think that Oracle has enough incentive to mount the kind of effort required to purge Java of its flaws.
It is conceivable that another corporation could design a bullet-proof operating system and offer it to the world, but such an enterprise would face enormous hurdles gaining acceptance. Most people nowadays have hundreds of dollars invested in software for their computers, and all that software would be useless on a new operating system. Worse, convincing software developers to port their programs to the new OS would be almost impossible. Even Java has had difficulties getting complete support.
Instead, I think our best bet would be to found our efforts on Linux, the open-source operating system. With a million eyes poring over it, I think we can develop bullet-proof software. We cannot expect the open-source community to do this without prodding. I suspect that there’s an opportunity for a well-funded corporation to initiate a “Bullet-Proof Linux” project, leading independent programmers in an effort to build a completely secure Linux.
This corporation could make money by developing applications that work on the new, secure Linux. It could advertise its products with “When it absolutely, positively MUST be secure” and provide some sort of guarantee. I believe it would be quite successful.
The Internet
One last issue remains: the structure of the Internet. The current design is insecure because the Internet was originally designed for government use, and later for university use; nobody ever thought that such users would sabotage the system. Thus, the path taken by a packet moving through the Internet can be rigged to hide its true source. This insecurity is unnecessary; it is simply a legacy of those trusting origins.
One approach would be to start populating the Internet with secure nodes that use a secured packet protocol. Such nodes would communicate ONLY with other secure nodes. This would be a completely new Internet, with no connections to the old Internet. Initially it would have a small number of nodes and would be used only by a few parties requiring high security. However, demand would rise rapidly, making it cost-effective to expand the network. Eventually, the old Internet would be completely bypassed, and someday we would simply turn it off.
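Purely as a sketch of how “secure nodes talk only to other secure nodes” might look in practice (the file names, and my choice of TLS with mandatory client certificates, are assumptions of mine rather than part of the proposal above), a node could simply refuse any peer that cannot present a certificate signed by the network’s authority:

```python
import socket, ssl

# Sketch of a node that accepts connections ONLY from peers holding a
# certificate signed by the secure network's authority (mutual TLS).
# The certificate file names are hypothetical placeholders.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="node_cert.pem", keyfile="node_key.pem")
context.load_verify_locations(cafile="secure_network_ca.pem")
context.verify_mode = ssl.CERT_REQUIRED   # peers without a valid certificate are rejected

with socket.create_server(("0.0.0.0", 8443)) as server:
    with context.wrap_socket(server, server_side=True) as secure_server:
        conn, addr = secure_server.accept()   # handshake fails for untrusted peers
        print("accepted verified peer from", addr)
        conn.close()
```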
It might not be necessary to abandon the old Internet; the use of digital certificates can provide security. The problem now is that certificates are still too clumsy a technology for universal use. I myself do not understand how to use them for this site, and few consumers understand their significance. Not until we have browsers that build certificate-handling directly into their operations, making the use of certificates transparent to consumers, will we be able to apply this technology universally.