In the early 1980s, President Ronald Reagan proposed the missile-defense program known formally as the Strategic Defense Initiative (SDI) — and informally as “Star Wars.” Then the Pentagon and its contractors began trying to build the thing. They’re still at it today.
In 1985, David Lorge Parnas — a well-reputed computer scientist best known for elucidating the principle of “information hiding” that underlies modern object-oriented techniques — publicly resigned from a government computing panel that had been convened to advise the Star Wars effort. Star Wars couldn’t be built according to the government’s requirements, Parnas declared, because the software simply couldn’t be built. Unlike many who weighed in on Star Wars, Parnas didn’t express opposition on moral or political grounds; he was, he said, a longtime toiler in the field of defense software engineering, and had no “political or policy judgments” to offer. He just thought that SDI’s engineering aspirations lay impossibly far outside the bounds of real-world capabilities.
In support of this judgment Parnas composed a series of eight short essays that the American Scientist — and later the Communications of the ACM — published together under the title, “Software Aspects of Strategic Defense Systems.” It’s a magnificently forthright piece of writing — it encapsulates the best of the engineering mindset in prose. And it explains some fairly complicated notions in terms anyone can follow.
For instance, it wasn’t until I read Parnas’s paper that I fully understood why digital systems are so much harder to prove reliable than analog ones. An analog system operates according to the principles of continuous functions; that is, if you know that a point at the left has one value and a point at the right has another, you can reliably interpolate all the behavior in between — as with the volume knob on a radio. Digital systems have a vast number of discrete and unpredictable states; knowing that “2” on the volume knob is soft and “10” is loud doesn’t help you know in advance what the behavior of each in-between spot will be. So every single state of a digital system is a potential “point of failure.”
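To make that distinction concrete, here is a toy sketch of my own (in Python, and not anything from Parnas’s paper): a continuous “gain” function whose behavior you can safely infer between two measurements, next to a discrete lookup table where every untested setting is its own gamble.

```python
# Toy contrast (my illustration, not Parnas's): continuous vs. discrete behavior.

def analog_gain(knob: float) -> float:
    """Continuous response: gain varies smoothly with the knob position,
    so two measurements let you predict everything in between."""
    return 2.0 * knob  # linear, so interpolating between known points is safe

def digital_gain(knob: int) -> float:
    """Discrete response: each setting is an independent case in a table.
    Knowing settings 2 and 10 tells you nothing certain about setting 7."""
    table = {2: 1.0, 7: 0.0, 10: 9.5}  # setting 7 is a (hypothetical) buggy dead spot
    return table.get(knob, 0.5)

# Testing knob=2 and knob=10 characterizes the whole analog range.
# The digital table has to be checked state by state; any untested state
# is a potential point of failure.
print(analog_gain(7.0))   # 14.0, predictable from the endpoints
print(digital_gain(7))    # 0.0, a surprise unless you tested exactly this state
```

Two probes pin down the analog curve; the digital table has to be checked point by point, which is Parnas’s argument in miniature.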
Parnas’s paper also contains a classic description of how real-world programmers actually do their work:
The easiest way to describe the programming method used in most projects today was given to me by a teacher who was explaining how he teaches programming. “Think like a computer,” he said. He instructed his students to begin by thinking about what the computer had to do first and to write that down. They would then think about what the computer had to do next and continue in that way until they had described the last thing the computer would do. This, in fact, is the way I was taught to program. Most of today’s textbooks demonstrate the same method, although it has been improved by allowing us to describe the computer’s “thoughts” in larger steps and later to refine those large steps to a sequence of smaller steps.
This crude method is fine for small projects, but, applied to large complex systems, it invariably leads to errors.
As we continue in our attempt to “think like a computer,” the amount we have to remember grows and grows. The simple rules defining how we got to certain points in a program become more complex as we branch there from other points. The simple rules defining what the data mean become more complex as we find other uses for existing variables and add new variables. Eventually, we make an error. Sometimes we note that error; sometimes it is not found until we test. Sometimes the error is not very important; it happens only on rare or unforeseen occasions. In that case, we find it when the program is in use. Often, because one needs to remember so much about the meaning of each label and each variable, new problems are created when old problems are corrected.
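Even a few lines are enough to feel this. Here is a contrived Python sketch of my own (the `try_fetch` helper is hypothetical, not anything from the paper) in which one variable quietly acquires a second meaning:

```python
from typing import Optional

# A contrived sketch (mine, not Parnas's): once a variable acquires a second
# meaning, the "simple rules defining what the data mean" stop being simple.

def try_fetch(url: str) -> Optional[str]:
    """Hypothetical stand-in for real work; pretend it sometimes fails."""
    return None  # always fails here, so the retry path below gets exercised

def fetch(url: str, attempts: int = 3) -> str:
    status = 0                  # meaning 1: zero means "no error yet"
    for _ in range(attempts):
        data = try_fetch(url)
        if data is not None:
            return data
        status += 1             # meaning 2 creeps in: a count of failed attempts
    # A maintainer touching either behavior now has to remember both meanings of
    # `status`; forgetting one while "fixing" the other is the error Parnas describes.
    raise RuntimeError(f"gave up on {url} after {status} failed attempts")
```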
Phenomena like concurrency (multiple simultaneous processes) and multiprocessing (multiple CPUs subdividing a task) only deepen the problem.
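Here is a small sketch of my own (plain Python threads, nothing SDI-specific) of why interleaving wrecks the “think like a computer” approach: two threads perform an unsynchronized read-modify-write on a shared counter, and the final value depends on how the scheduler happens to interleave them.

```python
import threading
import time

# A toy illustration (mine, not Parnas's): with two threads there is no single
# sequence of steps to walk through in your head; the outcome depends on how
# the interpreter interleaves them.

counter = 0

def bump(times: int) -> None:
    global counter
    for _ in range(times):
        value = counter      # read
        time.sleep(0)        # invite a context switch in the middle of the update
        counter = value + 1  # write back, possibly clobbering the other thread's work

threads = [threading.Thread(target=bump, args=(10_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# A correct program would print 20,000; this one typically prints far less, and a
# different number on each run. Wrapping the read-modify-write in a threading.Lock
# restores the expected answer.
print(counter)
```

Parnas’s state-counting argument applies with a vengeance here: every possible interleaving is, in effect, another discrete state to get right.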
How, then, do we end up with any big programs that work at all? “The answer is simple: Programming is a trial and error craft. People write programs without any expectation that they will be right the first time. They spend at least as much time testing and correcting errors as they spent writing the initial program.”
With these observations in mind, Parnas casts a cold eye on the SDI project, which aimed to produce working systems that could identify and target incoming enemy missiles in a matter of minutes. The system couldn’t be tested under real-world conditions; it would be expected to function effectively even when some of its pieces had been disabled by enemy attack; and it was intended to be foolproof (since, with incoming H-bomb-armed ICBMs, 90 percent wasn’t good enough). No such system had ever been built; Parnas maintained that no such system could be built within the next 20 years, using either existing methods or those (like AI or “automatic programming”) on the horizon.
“I am not a modest man,” he wrote. “I believe I have as sound and broad an understanding of the problems of software engineering as anyone that I know. If you gave me the job of building the system, and all the resources that I wanted, I could not do it. I don’t expect the next 20 years of research to change that fact.”
He was right about that, we can now state definitively. Nonetheless, the Bush administration has revived the missile defense initiative for the war-on-terror era. It’s true that the context has changed: today we might face not a considerable Soviet arsenal but, say, a handful of relatively low-tech North Korean missiles. Surely it would be nice to have some sort of defense in place. On the other hand, the record of building and testing the system to date has been fraught with failures and problems that would not have surprised Parnas, or anyone who’d read his paper. (Software isn’t the only problem; consider the saga of the massive missile-defense radar system on a converted oil rig, which the military has been unable to transport from Hawaii to its Aleutian destination for fear it wouldn’t survive the trip.)
“Software Aspects of Strategic Defense Systems” offers a wealth of pragmatic, experience-based insight into the complex challenges of large software projects. (Just look at Parnas’s critique of the applicability of the notion of “program verification” — mathematical proofs of program correctness — to big undertakings like SDI.) It’s a landmark of the literature that should be more widely circulated today. I’m tempted to write, “Send it to your congressman today,” only I doubt anyone in Washington would read it.
[tags]code reads, sdi, david parnas, software development, software engineering, star wars, missile defense[/tags]