In the early 1980s, President Ronald Reagan proposed the missile-defense program known formally as the Strategic Defense Initiative (SDI) — and informally as “Star Wars.” Then the Pentagon and its contractors began trying to build the thing. They’re still at it today.
In 1985, David Lorge Parnas — a well-reputed computer scientist best known for elucidating the principle of “information hiding” that underlies modern object-oriented techniques — publicly resigned from a government computing panel that had been convened to advise the Star Wars effort. Star Wars couldn’t be built according to the government’s requirements, Parnas declared, because the software simply couldn’t be built. Unlike many who weighed in on Star Wars, Parnas didn’t express opposition on moral or political grounds; he was, he said, a longtime toiler in the field of defense software engineering, and had no “political or policy judgments” to offer. He just thought that SDI’s engineering aspirations lay impossibly far outside the bounds of real-world capabilities.
In support of this judgment Parnas composed a series of eight short essays that the American Scientist — and later the Communications of the ACM — published together under the title, “Software Aspects of Strategic Defense Systems.” It’s a magnificently forthright piece of writing — it encapsulates the best of the engineering mindset in prose. And it explains some fairly complicated notions in terms anyone can follow.
For instance, it wasn’t until I read Parnas’s paper that I fully understood why digital systems are so much harder to prove reliable than analog ones. An analog system operates according to the principles of continuous functions; that is, if you know that a point at the left has one value and a point at the right has another, you can reliably interpolate all the behavior in between — as with the volume knob on a radio. Digital systems have a vast number of discrete and unpredictable states; knowing that “2” on the volume knob is soft and “10” is loud doesn’t help you know in advance what the behavior of each in-between spot will be. So every single state of a digital system is a potential “point of failure.”
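To make that concrete, here’s a toy sketch of my own (not from Parnas’s paper): the analog knob can be checked at its endpoints and bounded everywhere in between, while the digital table can be wrong at any single entry without the endpoints revealing it.

```python
# Toy illustration: testing the endpoints of a continuous (analog) response
# bounds everything in between, but a discrete (digital) response can hide
# a failure at any individual state.

def analog_volume(knob: float) -> float:
    """Continuous response: output is a smooth function of the knob."""
    return 10.0 * knob  # linear, so endpoint tests bound all interior values

# A "digital" response is just a table of discrete states; any entry can
# be wrong independently of its neighbors.
digital_volume = {k: 10.0 * k for k in range(11)}
digital_volume[7] = -3.0  # one corrupted state among many

# Endpoint tests pass for both systems...
assert analog_volume(0.0) == 0.0 and analog_volume(10.0) == 100.0
assert digital_volume[0] == 0.0 and digital_volume[10] == 100.0

# ...but only the discrete system can fail at an untested interior state.
assert all(0.0 <= analog_volume(k) <= 100.0 for k in [2.5, 5.0, 7.5])
broken = [k for k, v in digital_volume.items() if not (0.0 <= v <= 100.0)]
print(broken)  # the hidden point of failure
```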
Parnas’s paper also contains a classic description of how real-world programmers actually do their work:
The easiest way to describe the programming method used in most projects today was given to me by a teacher who was explaining how he teaches programming. “Think like a computer,” he said. He instructed his students to begin by thinking about what the computer had to do first and to write that down. They would then think about what the computer had to do next and continue in that way until they had described the last thing the computer would do. This, in fact, is the way I was taught to program. Most of today’s textbooks demonstrate the same method, although it has been improved by allowing us to describe the computer’s “thoughts” in larger steps and later to refine those large steps to a sequence of smaller steps.
This crude method is fine for small projects, but, applied to large complex systems, it invariably leads to errors.
As we continue in our attempt to “think like a computer,” the amount we have to remember grows and grows. The simple rules defining how we got to certain points in a program become more complex as we branch there from other points. The simple rules defining what the data mean become more complex as we find other uses for existing variables and add new variables. Eventually, we make an error. Sometimes we note that error; sometimes it is not found until we test. Sometimes the error is not very important; it happens only on rare or unforeseen occasions. In that case, we find it when the program is in use. Often, because one needs to remember so much about the meaning of each label and each variable, new problems are created when old problems are corrected.
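Parnas’s point about reused variables is easy to reproduce in miniature. The following toy example is mine, not his:

```python
# Parnas's point in miniature: as a program grows, reusing one variable
# for two meanings creates errors that surface only in testing (or later).

def count_pairs_buggy(items):
    """Intended to count all (i, j) pairs: should return len(items) ** 2."""
    n = 0
    i = 0
    while i < len(items):
        i = 0  # a later edit reused `i` for the inner scan too
        while i < len(items):
            n += 1
            i += 1
        i += 1  # the outer loop's meaning of `i` is now lost
    return n

def count_pairs(items):
    """One variable per meaning: the entanglement cannot occur."""
    n = 0
    for i in range(len(items)):
        for j in range(len(items)):
            n += 1
    return n

print(count_pairs_buggy([1, 2, 3]), count_pairs([1, 2, 3]))  # 3 9
```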
Phenomena like concurrency (multiple simultaneous processes) and multiprocessing (multiple CPUs subdividing a task) only deepen the problem.
How, then, do we end up with any big programs that work at all? “The answer is simple: Programming is a trial and error craft. People write programs without any expectation that they will be right the first time. They spend at least as much time testing and correcting errors as they spent writing the initial program.”
With these observations in mind, Parnas casts a cold eye on the SDI project, which aimed to produce working systems that could identify and target incoming enemy missiles in a matter of minutes. The system couldn’t be tested under real-world conditions; it would be expected to function effectively even when some of its pieces had been disabled by enemy attack; and it was intended to be foolproof (since, with incoming H-bomb-armed ICBMs, 90 percent wasn’t good enough). No such system had ever been built; Parnas maintained that no such system could be built within the next 20 years, using either existing methods or those (like AI or “automatic programming”) on the horizon.
“I am not a modest man,” he wrote. “I believe I have as sound and broad an understanding of the problems of software engineering as anyone that I know. If you gave me the job of building the system, and all the resources that I wanted, I could not do it. I don’t expect the next 20 years of research to change that fact.”
On that point, we can now definitively say, he was right. Nonetheless, the Bush administration has revived the missile defense initiative for the war-on-terror era. It’s true that the context has changed: today we might face not a considerable Soviet arsenal but, say, a handful of relatively low-tech North Korean missiles. Surely it would be nice to have some sort of defense in place. On the other hand, the record of building and testing the system to date has been fraught with failures and problems that would not have surprised Parnas, or anyone who’d read his paper. (Software isn’t the only problem; consider the saga of the massive missile-defense radar system on a converted oil rig that the military has been unable to transport from Hawaii to its Aleutian destination for fear it wouldn’t survive the trip.)
“Software Aspects of Strategic Defense Systems” offers a wealth of pragmatic, experience-based insight into the complex challenges of large software projects. (Just look at Parnas’s critique of the applicability of the notion of “program verification” — mathematical proofs of program correctness — to big undertakings like SDI.) It’s a landmark of the literature that should be more widely circulated today. I’m tempted to write, “Send it to your congressman today,” only I doubt anyone in Washington would read it.
I read the entire thing very slowly, and I was saddened by what I read. Not only because it offers a grim view of software development, but because I don’t share his views at all. Today I was reading an article in The Best Software Writing I book that you sent me (thanks again), “A Group Is Its Own Worst Enemy.” It says that each group has a set of “truths,” or a “set of religious tenets.” Programmers all fall into one of these groups according to the programming languages they use. More than that, all these languages follow the do-this-then-do-that paradigm that you quote above from the SAoSDS article. Even with OO, that step-by-step paradigm does not go away. My #1 problem with programming is related to this.
What makes the SAoSDS article noteworthy is that it attacks the very group that he’s in. But he doesn’t say why. He really doesn’t. He mentions symptoms and what he thinks are causes. But he can’t pinpoint any of them. I read the whole thing waiting for the punch line and never got it. The thing is that I don’t think he CAN say it. Not only would he be going against his “programming group,” but he’d be pinpointing a specific cause. And that brings me to another point. Bush & co. are experts at not saying anything specific so that they can’t be attacked. Reading this paper, I felt the same way. There’s no substance. Not that this is wrong exactly. Problems need to be pointed out. But saying that something has many states and that the only way we can get software to work is with testing is a little roundabout. In any other field, when something is wrong, it’s usually because you’re doing it wrong. But in the software industry, the blame goes to the industry itself and not to us. And that brings me back to my main point.
We are to blame. If there is no silver bullet, it is our failure. What is it, exactly, that is to blame anyhow? The software? That’s ridiculous. The building process? Well, think of something better. I am. So that can’t be it. No, the blame really falls on the programmers, but no one can bring themselves to say that, because it would mean insulting yourself.
I’ll end with this question. What are all these people going to say when the first person puts on his jacket and leaves this party by building software in a better way? If you think you’ll fail, you will. So the suggestion that failure is inherent is more dangerous than anyone can possibly imagine. Don’t let this be a self-fulfilling prophecy.
It’s fascinating to read a reaction so different from mine! Thank you.
I think Parnas felt that he was writing for the general public rather than for specialists, and I imagine he deliberately kept the papers short and relatively example-free and entirely code-free with that in mind.
Like I said, I found his explanations lucid and helpful. I don’t find the arguments saddening, but then I’m not a practicing software developer. But I didn’t take Parnas’s views as an indictment of programmers so much as an attempt to explain to the rest of the world just why this field has such a record of intractability — and why it is still more of a craft than an engineering discipline.
I think when someone comes along and “builds software in a better way,” Parnas, and everyone else, will cheer. But 20 years ago he predicted it wasn’t going to happen over the next 20 years, and that prediction held.
I don’t think Parnas was attacking programming as a discipline, just this one problem. I think he has the right idea. Due to the complexity of this problem the simple facts are:
– we want to build a system that will be magnitudes more complex than anything we’ve built before
– this system might be needed to spring into action only once in its lifetime
– the system can’t fail, there will be no second chance
– we cannot test this system in anything resembling realistic circumstances
Regardless of what methodology you use, it is still a near-impossible task to complete. But maybe it is possible if we pour $500 billion into it? Even that won’t guarantee that the system won’t fail – and that money may be a lot more effective somewhere else.
I found the following quote telling to demonstrate how vulnerable such a system would be to failure (from the linked article):
“And, ironically, the X-Band [radar system], considered one of the nation’s foremost technologies in defending against foreign missiles, has minimal security itself. Many critics speculate that it is vulnerable to attack by enemy nations or terrorist groups.”
So even a perfect system would have to operate in an intelligently hostile environment which is constantly trying to take it down, with technologies we may not be aware of.
I agree with Cleo that it’s the development team that is at fault – all software is built by people. However, if the project is much more complex than anything that has been done before, then it’s very likely to fail, even with the very best engineers on board.
As for the get-it-working problem: I find it funny that Parnas, the man who kind of *invented* information hiding (or at least was one of the key proponents), says we cannot get complex systems working. Of course in the day of machine registers, labels, and goto statements – not to mention rather immature programming languages compared to what we have now – things were a bit harder, but more high-level languages and approaches show that it can be done. Isolate a certain bunch of code and behavior in a module that has a number of parameters (i.e. no global shared state) and you can test the module *individually*, you can maybe even prove some of its characteristics. Of course Parnas and others knew these things in the 60s already.
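A minimal sketch of that idea in modern Python (the class and its smoothing algorithm are invented for illustration): everything lives behind a small interface, with no global shared state, so the module can be exercised entirely on its own.

```python
# Sketch of information hiding: isolate behavior behind a small interface
# with no global shared state, and the module can be tested (and reasoned
# about) individually. Names here are illustrative, not from any real system.

class TrajectoryFilter:
    """Hides its smoothing algorithm; callers see only update()."""

    def __init__(self, window: int):
        self._window = window        # hidden detail: moving-average window
        self._samples: list[float] = []

    def update(self, sample: float) -> float:
        """Accept one reading, return the smoothed value."""
        self._samples.append(sample)
        recent = self._samples[-self._window:]
        return sum(recent) / len(recent)

# Because all state is local to the instance, the module is testable
# in complete isolation, with no environment to set up.
f = TrajectoryFilter(window=2)
print(f.update(10.0), f.update(20.0), f.update(40.0))  # 10.0 15.0 30.0
```

Swapping in a different smoothing algorithm would not touch any caller, which is exactly the property Parnas argued for.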
So I think it isn’t development in general that is futile, but rather the SDI system as it was specified. It might be too much even today, but with yesterday’s tools the task was insurmountable.
Modern object systems or module systems, as well as more modern approaches to concurrency (think Erlang instead of Java’s shared memory) show that a suitable model of software can be pretty good at removing inessential complexity, that it can hide information quite well. And even if it doesn’t appear that way, when you see yet-another-scripting-language emerge, when you see most development taking place in something like Java – software technology does progress, however slowly. Maybe some day we might be able to build even better systems from individual components, maybe even the SDI system (though I doubt it would be too useful in its original form now).
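A rough Python sketch of the share-nothing style mentioned above (this only mimics the Erlang model with a thread and queues; real Erlang processes are far lighter and truly isolated):

```python
# Rough analogue of Erlang-style concurrency: the worker shares no state
# with its caller and communicates only through message channels, avoiding
# shared-memory hazards entirely.

import threading
import queue

def worker(inbox: queue.Queue, outbox: queue.Queue) -> None:
    # The worker owns its state; the only coupling is the message channel.
    while True:
        msg = inbox.get()
        if msg is None:  # sentinel: shut down
            break
        outbox.put(msg * msg)

inbox: queue.Queue = queue.Queue()
outbox: queue.Queue = queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()

for n in [1, 2, 3]:
    inbox.put(n)
inbox.put(None)
t.join()

results = [outbox.get() for _ in range(3)]
print(results)  # [1, 4, 9]
```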
Truly, this is a very important article in computer history. Call it the early renaissance of programming. Of course, Parnas doesn’t blame himself personally for the inability to create such a system. It isn’t an act of personal confession. It is an act of serious doubt about the human capacity to create such a system. Neither he nor any of his readers believed that he could build the system. This is a very important hypothesis of the article, compared, for example, to Dijkstra’s. Deep in his heart, Dijkstra believed that a problem of any complexity could be solved by him simply by applying “structured programming” concepts. Not surprisingly, Dijkstra never managed any project of a reasonable magnitude. The revolutionary thinking of Parnas boils down to the claim that none of the techniques we had at the moment the article was written could be applied, no matter how, or by whom, they were applied.
I don’t quite accept Parnas’s “testing argument,” simply because, if it were true, we would never have succeeded in landing on the moon for the first time. Look also at the development process of the “Hetz” Israeli anti-missile system (http://en.wikipedia.org/wiki/Arrow_missile). Isn’t it Star Wars in miniature? And I must say quite a successful one. Actually, a close testing environment can be built, for example by launching dummy missiles from the ocean from navy ships.
Taking a somewhat broader view, we must take “Star Wars” in its historical context. Remember that the Americans could not prove that their system worked, nor could the Soviets prove that it didn’t. The mere existence of such a system, even in its “untested” state, is better than having nothing at all.
I also wouldn’t agree with Parnas’s “research argument.” All these mega-projects, even in their mega-failures, always produce interesting and successful sub-projects. So even from the failures of Star Wars we surely could learn something. Just like the flying-to-Mars missions, building such a system would help the Americans accumulate invaluable knowledge. The Multics failure gave birth to UNIX, after all. However, in support of Parnas’s argument, I must say that building such a system for the army means facing an enormous bureaucratic process, which is certain suicide for the development process. The way the military works doesn’t suit modern agile development practices, which is why it may be doomed to fail.
In conclusion, I must say that I was personally involved in building military systems, and I am convinced that in modern warfare, IT infrastructure is a crucial advantage that any military system must have over its enemies. It has been proven more than once. Look at the last two American wars in Iraq. The inability to withstand an American army attack (and no… I am not talking about what happened after, but only about the blitz operation of taking over Baghdad) is convincing proof that smart weapons win. And smart means software, after all.
“But 20 years ago he predicted it wasn’t going to happen over the next 20 years, and that prediction held.” – Scott Rosenberg.
Is this a prediction, or a self-fulfilling prophecy? Especially considering that the No Silver Bullet paper came out around that time? For example… As a hockey fan, if I say it’s impossible to score on me and after shooting the puck in the corner for twenty years, no one has ever scored, am I right? Or should the tactics change? Maybe shoot at the net for a change? I believe we’re all shooting in the corner when it comes to programming.
I agree that SDI probably can’t be done. But not because of the software. This is where I agree with most of the other posts here. Most of the reasons given are related to logistics and non-defined requirements. Software doesn’t even come into it.
I agree that he was definitely writing for a different audience than programmers. My only complaint is that he lumped in programming as if that was part of the problem. I don’t believe it is. That’s all I’m saying. Concurrency has been solved for over 30 years for example. Use a flow based approach with a dynamic linkage and execution controller and you’re all set. However, he says he’s never heard of it. So it’s hard to know what to make of this. It’s not like concurrency is a new problem even several decades ago.
To Boris: I like the point you bring up with Dijkstra. Are we discussing whether or not complex systems are possible with current programming techniques (and it’s a human failure)? Or are we saying that there is no possible way to build these complex systems no matter what development practices we come up with? In short, is it a human failure or one that is inherent in software development?
I believe that current popular tools are inadequate (and also the problem) because of the slow-moving group mentality. The programming group has one of the slowest discussions ever. C was invented in 1972. It’s now 2007 and it’s still the most popular topic by far, along with C++, Java, D, C#, JS, etc. That’s a 35-year discussion. Scott talks about programmer time. Here we have computing-community time. It spans decades. So predicting the status quo is no prediction at all.
It took several hundred years after the time of Marco Polo before a stable sea route to India was established. During all that time, people thought it would never happen, yet people kept trying. I get the feeling that programmers have given up on finding a better route to software development. That’d be tragic IMO.
(Hope this post isn’t too long)
Cleo, I don’t like to discuss definitions, but predicting the status quo is a prediction, and sometimes a very courageous one. In Soviet times, predicting that the Soviet economy would stay at the same low level because of the inherent problems of the communist approach, instead of “blossoming and leading all the world,” would eventually land you in jail. This is similar to Parnas’s article – he pinpoints inherent problems in current approaches and predicts a very pessimistic future. On the contrary, predictions like “in three thousand years we will succeed in curing cancer” are really useless. Furthermore, with all my respect to Parnas, you are overestimating his influence on modern developers. Most developers have never read it. So his prophecy cannot fulfill itself, simply because no one has heard of it.
Well, I disagree with both of your points. The first point has to do with predicting the status quo concerning software. This is radically different from a situation where there are alternatives. In the software industry, there has only ever been one way to create software. There is nothing to compare to. The only differences that do exist would be like having one type of car with different paint jobs. They’re cosmetic changes only.
Let’s suppose there had only ever been one form of government in human history, say dictatorship. I too could say that government has inherent flaws. But I’d be commenting on dictatorship, not government. Because we’ve never seen anything else, how do we tell the difference? I’m saying the same thing is happening with software. There are other ways. What this paper and others like “No Silver Bullet” are commenting on is not software development, but the current and only way we’ve ever done software development.
I stand by my notion that predicting things that have never changed is no prediction. Is my prediction that the Sun will rise tomorrow really a prediction? That’s all I’ve ever seen, so where’s the “new” or unexpected information that makes it a prediction? Software development currently sucks, and has always sucked, for large systems. Where’s the “new” information there? Sorry, but without “new” information, it’s just not a prediction. I’d agree that predicting the status quo would be a prediction if things changed a lot. But that’s not the case.
Secondly, *popular* programming tools in general are not built by Johnny programmer. They are built by academics and people who work in R&D, where they have either time and/or money. These people will most certainly have read many of the papers like this one and “No Silver Bullet.” Now compound that with the fact that companies must try to make a profit, and the risk of trying something new, especially in a field wrought with failures, is not so wise. Add to this the fact that at least one person in these companies has surely read these papers. In any case, what programmers do you know who have built very large systems and think they are easy, anyhow? So yes, it definitely IS a self-fulfilling prophecy. But my main point was that this paper is yet another push in the wrong direction. Over time, do you think there will be more, or fewer, of these kinds of papers? That’s what worries me.
Now, you could argue that making a “prediction” (a statement) for the good of the people is a valid one and I would agree. But that’s another thing. I disagree with the point being made in this paper about software in general. I agree as it relates to current programming practices and the paper does explicitly state as much. Yet I worry that not many will make the distinction because there is nothing else out there.
OK, so you disagree with the “status quo” prediction of Parnas and Brooks. Tell us what you personally think about the future of software development, and how it will change in 20–30 years.
I’d recommend Dijkstra’s note,
“Why is software so expensive?” An explanation to the hardware designer.
That article clearly articulated the critical consequences of the discrete nature of software versus the analog nature of most physical systems with which we interact (and, specifically, which we humans design and test).
Please review the facts before making statements such as
“Not surprisingly, Dijkstra never managed any project of a reasonable magnitude.”
The development of operating systems (e.g. the EL X8 and the T.H.E. multiprogramming operating system) and compilers (e.g. the Algol 60 compiler for the Mathematical Centre in Amsterdam) certainly qualify as projects of “reasonable magnitude,” especially as they were on the cutting edge of computing for their day (well before the time when one could go take a class on “compiler theory” anywhere).
I would add to Slaven’s nice summary one other bullet point:
– the objectives of the system are stated only in vague, ill-defined terms,
a property that still applies (in some degree) to many software development projects. To see that I’m not simply being snide about business people or requirements writers, please read on.
In a 2003 article entitled “Why not just block the apps that rely on undocumented behavior?”, Raymond Chen described how compatibility can be even more hairy than we ordinarily think. (Thanks very much, Scott!) Although most of us aren’t writing operating systems, the point he makes scales out nicely, especially in the mashup world of Web 2.0 and web services.
I could develop a piece of software, A, which has certain advertised, documented behavior, and which fulfills those obligations completely. Someone else could develop a piece of software, B, which interacts with A to perform some function of interest to its developer or clients. It is in general very difficult to ensure that, no matter how B behaves in its interactions with A, no information about A can “leak out” except what is defined for the documented behavior. Joel Spolsky called this issue one of “Leaky Abstractions”
on his web site. (For example, I recently saw an article on how black hats can use high-resolution timing to infer information about a system under assault. I doubt that very many of us build world-facing production systems with random delay loops to thwart such snooping!)
Now here are the punch lines.
1) If I am now planning a new revision, A’, the documented specification will likely be the spec of A plus the new intended features. However, if some property of A has leaked out, and B has in ANY way become entangled with that leaked property, I may very well alter that property in A’, thus altering B. Some unintended, undocumented property of A has become a hidden part of the specification, which I would have to specify in A’ in order to satisfy the transitive closure of A’s users, B’s users, etc…
2) Someone who sets out to build C, which interacts with both A and B, may do an excellent job of specifying, documenting, developing, and testing C, and yet be bitten by the subtle dependence of B on A under peculiar sets of circumstances which would never appear from the published specs as corner cases requiring testing.
The consequences are that both A’ and C have “real” requirements (based on the mandate to meet the customers’ expectations) that are simply unknown, or only known in the vaguest way (don’t break compatibility with the existing client systems). Over time, this can become an enormous burden.
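The A/B/A’ entanglement can be sketched concretely (all names here are hypothetical, my own illustration): A’s documented contract says nothing about ordering, the ordering leaks out anyway, B entangles itself with it, and a contract-respecting revision A’ silently breaks B.

```python
# Hypothetical illustration of the leaky-abstraction entanglement above.
# service_a's documented contract: "returns the records for all users"
# (order unspecified). The sorted order is an accident that leaks out.

def service_a(user_db: dict) -> list:
    return [user_db[k] for k in sorted(user_db)]  # sorting is undocumented

def service_a_prime(user_db: dict) -> list:
    # A revision honoring the same documented contract; order now differs.
    return list(user_db.values())

def service_b(fetch, user_db):
    # B quietly assumes "first record" means "alphabetically first user".
    return fetch(user_db)[0]

db = {"bob": "record-b", "alice": "record-a"}
print(service_b(service_a, db))        # record-a (works, but by accident)
print(service_b(service_a_prime, db))  # record-b (B is silently broken)
```

Nothing in A’s published spec was violated, yet B’s users see a regression, which is exactly the hidden-requirement burden described above.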
Combine the above with the attempts in some parts of our economy to turn software development into mass-production, assembly-line work, and it will not be surprising that software-based systems still exhibit surprising behavior.
Thanks Cleo Saulnier for your interesting and insightful comments. That I don’t agree with everything you said matters little. :-) It’s rare to read something that sincere, considered and worthwhile on the net.
And thanks Scott, for digging out these references, and pointing us toward them!
I found this paper very well written, and the points made were clear and understandable. I especially liked the section “Is SDIO an efficient way to fund worthwhile research?”. Like Parnas, I have watched the formation and appointment of oversight committees placed in charge of these types of projects, and I have always been surprised by the people who wind up making up these groups. I thought the reasoning behind why these “technocrats” gain the positions they do (because the truly competent people are seen as too valuable to spare by their managers, for instance) was very well put. This quote especially hit home to me on the problem with committees composed of technocrats:
“The SDIO is a typical organization of technocrats. It is so involved in the advocacy of the program that it cannot judge the quality of the research involved. The SDIO panel on battle-management computing contains not one person who has built actual battle management software. It contains no experts on trajectory computations, pattern recognition, or other areas critical to this problem. All of its members stand to profit from continuation of the program.”
That last sentence, that the members of a committee all stand to profit from the continuation of the program, is a problem I see constantly, where people essentially become professional committee members.
Now, I can not help but comment on Cleo’s review of the paper, as my opinion stands directly at odds with his. This comment especially struck a nerve with me:
“We are to blame. If there is no silver bullet, it is our failure. What it is exactly that is to blame anyhow? The software? That’s ridiculous. The building process? Well, think of something better. I am. So that can’t be it. No, the blame is really the programmers, but no one can bring themselves to do that because it would be insulting yourself.”
People have been “thinking of something better” for the last 70 years. That is exactly Parnas’s point. And these are not stupid people; they are brilliant researchers, working on their own, working “outside the box,” yet they still have managed to deliver only incremental improvements. That is fact, and frankly, I find your statement that we are simply following herd mentality offensive. Once you’ve proven that you can overcome these fundamental limitations of software engineering, such bold claims will be your right, but until that time, your statements have no basis.
First, I’d like to say I appreciate those who support my posts, even if you/they disagree with them. I’d like to respond to a couple responses. Hopefully, I can keep it short.
“Tell us what do you personally think about software development future. And how will it change in 20-30 years.” – Boris
Be glad to. That’s what I’ve been doing for the last year. First, if you look at the raw data on processor execution speed, Moore’s Law is still alive and kicking. Individual core speed has gone linear, but overall, compound processor speed is still going up exponentially. By 2012, 64–128 core processors will be available, short of some other mechanism coming out to achieve the same speed, such as what AMD wants to do with custom cores for specific tasks. Either way, Moore’s Law will hold.
Now, what does this mean? By 2012, we’ll have to handle at least 64 cores. There won’t be computers coming out with less than 16 or 32 cores. And newer machines will have over 64. We won’t be able to program the same way anymore. Threading is not the answer. It can’t be. Anyone who’s done threading knows this isn’t for everyone. And we have enough papers that say current techniques aren’t working.
However, there is a trend emerging. It’s about removing one level of complexity from our software, called the execution point. We don’t need it. We’ve never needed it. It’s fine for single processors, but it’s really awkward for anything larger. Execution is the single largest cause of software complexity. Remove it and you’ll reduce complexity by a huge leap. Not incremental. That’s why I say that most complexity is caused by us programmers. We introduced complexity that was never needed. And many software packages are already working this way. For example, Lightwave 3D has a node editor. Other packages that must handle large amounts of data are already working this way. They just can’t be bothered with assigning sequential tasks to specific workers. Instead, you just lay out what should happen to the data, and the underlying software will figure out how to partition the tasks. That’s the way of the future.
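As a sketch of what “lay out what should happen to the data” might look like (my own minimal toy, nothing like a real node editor): the program describes the stages, and a runtime decides how to execute them.

```python
# Minimal dataflow sketch: the program is a graph of pure stages; the
# scheduling of execution is the runtime's business, not the program's.
# (This trivial runtime runs sequentially; a real engine could schedule
# independent stages across cores, since no stage owns an execution point.)

def run_pipeline(stages, data):
    """Apply each stage to every item, in stage order."""
    for stage in stages:
        data = [stage(x) for x in data]
    return data

# The "program" is just the graph description:
pipeline = [
    lambda x: x * 2,  # scale
    lambda x: x + 1,  # offset
]
print(run_pipeline(pipeline, [1, 2, 3]))  # [3, 5, 7]
```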
That’s happening right now and complexity is being reduced because we have non-programmers using this stuff. That’s a hard claim to dismiss. I’ve heard that Lisp has something called Cell. I know nothing about it other than it’s supposed to enable flow based programming. But it will get bigger and more popular. Other languages are starting to come out with similar packages. Too bad they’re not well thought out at this point. But it’s a move in the right direction.
And it’s not about software being designed like hardware. I don’t know why, but many programmers dismiss flow based techniques matter-of-factly just because it seems related to hardware. Personally, I don’t want un-needed complexity. So why would I use anything execution based instead of data based?
After 2012, there will be a clear definition between languages and development environments that support distributed computing and those that can’t. BTW, here’s a hint at the current state of development.
– Machine language.
– Assembly can produce machine language.
– Compilers (any current GP language) can produce either assembly and/or machine language.
– Flow based tools can produce GPL and/or assembly and/or machine language.
Each step can produce code at any lower level. Portability is achieved at the highest levels because the lower levels are usually platform or machine dependent. So VMs and interpreters are out for portability. Emulation? Sure. But they won’t be able to use the best features of each machine. This will become of prime importance as we move to more and more gadgets and different processors.
The next step up is interesting. In current programming languages, we write applications differently than we do libraries. So most of the world’s development time is not re-usable. It’s completely wasted. With data-based programming, this is not so. An application is itself a component that can be reused in larger systems. So the next level up will be massively complex systems that we can’t even imagine today. Once we start writing components that never need to be written again and the world begins to have a repository of stable and tested code, we’ll start to progress at an exponential rate. This is at least 20 years in the future. See the paper http://catb.org/~esr/writings/cathedral-bazaar/cathedral-bazaar/ linked by Scott. Now apply that to a project where you can use the entire world’s resources and where execution coupling is not a limiting factor.
To Andrew Chase: The above applies to your comment as well. Execution is extra complexity that isn’t needed. And the solution is happening all around us. It’s happening in fields that aren’t limited by the programmer mass mentality. This is the way it happens everywhere, not just with PL design. Someone who’s not trained in the field will come along and say, “Why don’t we use this?” just by applying a little common sense. And while most PL designers are scratching their heads, the rest of us are moving ahead.
My statements have basis. They’re very real and happening this very minute and being used by non-programmers.
I’m sorry that you find my comments offensive. What I find offensive is that we’re over 30 years behind the times and that we’ve been forced to use unnecessary complexity for all this time. What’s the fascination with this execution point anyhow? It’s a low-level hardware hack to enable the repeated use of logic gates that used to be expensive to produce. They’re no longer expensive, and it’s as low level as it gets. Same thing with threading. The function is probably the most detrimental software construct to hit the field. Not that a function is bad in and of itself. It’s that having it as the only construct to use makes for atrocious programming.
I’ve shown ONE way forward. ONE way that removes a complexity that was self-imposed. Now, what does that say about all these experts you speak of? I’m sorry, but they’ve failed. The lot of them. Brilliance that fails is no brilliance at all. Sorry, but I’m really upset at all the time these researchers have wasted. They get no brownie points from me is all I’m saying. ;) There is no one, nowhere in the computing field that I can’t hold my own against anymore. For too long, I thought that these people MUST know what they’re doing and there can’t possibly be anything they’ve missed. Imagine my surprise. Enough is enough already.
Beyond all that, here’s something else that will be a fascinating area of research. There was a discussion I saw a while back about using humans as processors. I forget who it was, but they had a game to classify pictures (to then be used in a search engine). Two people would see the same picture, but the players had no contact with each other. Each entered words they thought described the picture. If any words matched, the players were awarded points and, obviously, that word should be one valid description of the picture. Here, humans are providing computations that no computer in the world could do.
In flow based programming, you could do the same and use the user as a component. A user usually waits for responses (computations by the server). But a server could likewise send a response and wait for a subsequent request such as in web transactions. To the server, it’s the user that is processing information. So your algorithm suddenly becomes
UserRequestForm1(U) -> SendForm1(S) -> UserForm1(U) -> ServerForm1Validation(Send Form2)(S) -> UserForm2(U) -> ServerForm2Validation(S) -> ServerCompleteTransaction(S)
(S) = Server
(U) = User
Now you have an algorithm that includes the user. We don’t do that today. Today’s algorithms are ridiculous compared to the above. It can likewise be a very complex component on the user side. It doesn’t even need to be a user. It could be automated requests. It doesn’t matter.
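As a sketch of this idea in Python (the stage names simply mirror the flow above; they are illustrative, not anyone’s actual framework), the driver of such a pipeline can treat the user and the server identically — only the tag differs:

```python
# Sketch: the user is just another processing component. Each stage is
# tagged with who performs it ("U" = user, "S" = server); the driver
# neither knows nor cares which. In a real system the "U" stages would
# block on actual human input; here they are stubbed with lambdas.

def run_pipeline(stages, data):
    for who, stage in stages:
        data = stage(data)   # same treatment whether who is "U" or "S"
    return data

stages = [
    ("U", lambda d: {**d, "form1": "filled"}),     # UserForm1
    ("S", lambda d: {**d, "form1_valid": True}),   # ServerForm1Validation
    ("U", lambda d: {**d, "form2": "filled"}),     # UserForm2
    ("S", lambda d: {**d, "complete": True}),      # ServerCompleteTransaction
]

result = run_pipeline(stages, {})
print(result["complete"])   # -> True
```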
The point is that a Turing machine only deals with routines that have all of their input beforehand. Turing himself said that his thesis does not deal with interactive software. So why don’t we look at what happens when we cross that fence? Yes, I know it’s equivalent. But who cares? Let’s go see what awaits on the other side. We get into software that uses humans and other complex machines as parts of our software, where we don’t know how complex those components are or what mechanisms (or what machines or processes) are used within them. It opens up a whole new area of computing in a much simpler way than doing threading, locking, waiting and all that boring and complicated stuff. Just have an underlying execution handler that handles all this for us. You just need a granularity evaluator with Gaussian elimination and you’re done.
That’s the tip of the iceberg. I have so many more ideas on how to make software easier, it’s ridiculous. That’s why I get frustrated hearing people say that there’s no silver bullet. Well, not if you believe there isn’t one. First off, maybe we should stop shooting ourselves in the foot and remove all this self-imposed complexity. We put it there. We are to blame. My claims are not bold. They’re obvious. That’s what makes it even worse.
Sorry again for the long post. Hope my tone wasn’t too forward. It wasn’t meant to be. I’m just very interested in this area of research.
And since work intruded as usual, I’m late to the party.
A few points that I don’t think were well touched on. Boris states: “I don’t quite accept the ‘testing argument’ of Parnas, simply because; if it was true we would never succeed to land on the moon for the first time.” Actually, that isn’t true. The equipment used for the Apollo program was well tested, albeit in small stages. Parnas’s point was that you couldn’t really test SDIO in the same way. The only good test would be a live event, and that’s a one-shot deal. If it doesn’t work, game over.
I’d like to point people at the HBO series, “From The Earth To The Moon.” Specifically, episode 5 “Spider”. That brilliantly demonstrates the estimation and construction problems that Parnas points out when dealing with something brand new.
One of Parnas’s key points is the division between theory and practice, and their practitioners. There is a strong divide between the two camps with little communication between them. Lots of CS papers are written every day, unfortunately in abstruse language. They could contain useful information but the majority of the industry is unaware and/or unable to use the information. Practitioners tend to make do with what they have. Until mathematical/formal tools are developed that are easy to use, they won’t be used in practice.
A second key point he raises is that the industry needs new ways of thinking about programming and design. You can hear his frustration. He doesn’t have an answer and you can tell he would like one. In the interim 20 years, there have been new approaches (OO, AOP). It’s arguable if they’ve helped or not. I’d love to hear what Parnas thinks about this now.
Which brings me to Cleo. In your “prediction” post, you talk about removing complexity but not about how I would program. I presume in your system there would be no execution point, no functions. What metaphor would I use to code? How would I think about components, real time constraints, and concurrency? How would I test and deploy?
“I’ve shown ONE way forward. ONE way that removes a complexity that was self-imposed. Now, what does that say about all these experts you speak of? I’m sorry, but they’ve failed. The lot of them. Brilliance that fails is no brilliance at all. Sorry, but I’m really upset at all the time these researchers have wasted.”
Well, unfortunately, I don’t see what your way is. Frankly, illustrating paths not to take is never a waste. At the very least, you’ll avoid the quicksand. To steal a quote from Newton, you’ve stood on the shoulders of giants. I’m not sure you see that.
I’m a pragmatist. If you can get your schemes to work, then kudos to you. If it’s easy to use, then I’d use it. In the meantime, I’m just going to do my job.
Dave C: You’re not alone. The only thing I can say is to do research (go see the examples I’ve given) and that you should start by looking at what it means to specify what should happen to your data rather than to the processor. Programmers misinterpret this. Oddly enough, non-programmers understand it right away with a “DUH” reaction when given an example. This might explain its popularity amongst them. I know one thing for sure. If you use what you know about programming, you’ll never get it, because you’ll try to associate what you’ve been using with this, and that never works. You can use past experience, just not anything programming-related. The real world already has this solved. Why go against the grain?
To answer your question about concurrency, you get it for free. It’s implicit. And components are meeting points or locations for data to transform. It’s a way to be able to say “here is where my data X (and possibly Y, etc.) turns into Z”. You can assign different components to different processors although there are better ways to do this. This would of course be done by the compiler.
Am I standing on the shoulders of giants? Surely not! If I were, I’d be standing on their necks to keep them from causing more damage and from bringing out more crap that doesn’t work. See, I’m a pragmatist too.
OH, just thought of another example. Play the game Settlers 4 by BlueByte. That’s a perfect example of how you can get things done without an execution point. In case you’re not familiar with it, it’s a game where you can build huts that do different things, like bakeries, fishing, mining, wood cutting, weapon building, wheat farming, etc. Each one can produce something or absorb something, and most of the time both. So each hut has inputs and/or outputs. What one hut produces, another (or many) consumes. Bakeries, for example, need water and flour and produce bread, which is then used by any of the three or four types of miners. Materials are brought from one hut to another by currently idle people. They know what to transfer by looking at any items lying at the outputs and seeing if any other hut needs them. The huts themselves require workers and materials for their initial construction. The ultimate goal is to build a settlement that can produce and sustain an army to then crush your opponent. Isn’t it always?
All the “nodes” (huts) are processed in parallel. In that game, you don’t tell each individual what to do. Instead, you deal with the logistics of the materials and where they should go. When enough material arrives at a hut to produce one output item, the worker in the hut will process the input (as needed) and produce the output. This way, you get a first-hand view of what happens with data dependencies and how they work in this model (i.e. when you run out of a raw material, or have too much of another and need a second sawmill, for example). For tasks, whoever is available will do the work. So think of the people as processor cores. They get assigned automatically as needed, dictated by the incoming data. You don’t ever deal with that directly. And if you run out of workers, they should alternate jobs wherever needed (although you can’t do this in Settlers without manual intervention). In short, all huts operate in parallel.
The way we write software today is like the game of Settlers, but where we’d tell each individual person exactly what to do for the entirety of the running time. There are hundreds and sometimes thousands of people in your settlement. Yes, you can see them all individually on your screen. Yet real people play this game all the time. Although the game has a learning curve, obviously there’s a level of complexity that’s been done away with. There’s just no way we can handle thousands of workers individually and expect reliable results. And yet this is what we naively expect from programmers who, by the way, are not known for their socialising and interaction skills. Now it should be clear why software fails and why it’s so complex. You should also be able to see the alternative, as it actually exists in the game. This alternative is no different for general programming. That’s the blackest-and-whitest visual description I can give.
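The Settlers analogy can be sketched directly in Python (the hut names and recipes here are made up for illustration): each hut declares only what it consumes and produces, and a scheduler fires any hut whose inputs are on hand. Nobody scripts a worker individually; the arriving materials drive everything.

```python
# Toy version of the Settlers model: huts declare inputs/outputs and a
# scheduler fires any hut whose inputs are available in the shared stock.
from collections import Counter

class Hut:
    def __init__(self, name, consumes, produces):
        self.name = name
        self.consumes = Counter(consumes)   # materials needed per run
        self.produces = Counter(produces)   # materials yielded per run

def step(huts, stock):
    """One scheduler tick: fire every hut whose inputs are in stock."""
    for hut in huts:
        if all(stock[m] >= n for m, n in hut.consumes.items()):
            stock.subtract(hut.consumes)
            stock.update(hut.produces)

huts = [
    Hut("well",   {},                        {"water": 1}),
    Hut("farm",   {},                        {"flour": 1}),
    Hut("bakery", {"water": 1, "flour": 1},  {"bread": 1}),
]

stock = Counter()
for _ in range(3):          # three scheduler ticks
    step(huts, stock)
print(stock["bread"])        # -> 3: one loaf per tick, never scheduled by hand
```

Adding a second bakery or removing the well changes throughput without touching any control flow — the data dependencies alone decide what runs.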
Actually, I am familiar with environments similar to Lightwave 3D. In the telecom industry they are called Service Creation Environments. I’ve even written some of them. The problem is that such environments are extremely limited because they *have* to be built with specific assumptions. That also applies to the concurrency you may get for “free”. Turning one into a generic system, which you seem to propose, is most likely impossible. From my point of view, you’ve just traded one form of complexity off for another.
“Turning one into a generic system, which you seem to propose, is most likely impossible.” – Dave C
Then it seems I unknowingly created the impossible. I don’t think there’s anything that can boost my ego more than this. Dave, can you answer me this: how can you succeed if you believe something to be impossible? Forget software complexity. Just the mentality itself is unconscionable to me. Self-fulfilling prophecy indeed.
I’d comment on specifics that you mention, but you offer nothing to support why your claims would be true, therefore I am obligated to dismiss them all. They are statements without substance and thus cannot be argued against. Not even wrong.
Just to give another example in support of what I’m talking about, look here: http://www.aviationweek.com/avnow/news/channel_awst_story.jsp?id=news/CHI01177.xml
What does that say about the SASDS paper?
Two points. First, I said most likely impossible, which is not the same as totally impossible, hence I do allow for the possibility.
If you really have created such a system then sell it or put it out there. Get people to use it. As I’ve said before, if it is good and easy to use then I’d use it.
Second, the Chinese presumably tested an anti-satellite system, which is very different from SDIO. In the 80s, the US had successfully tested similar systems. This test does nothing to invalidate the SASDS paper.
To get back to Parnas’s paper…
On the 6th page (1331), he talks about “What should we do and what can we do?” A lot of work and improvement has been done in these areas. Unfortunately, many companies try to save money by skimping on proper development steps. Software development is as much an effort in politics and economics as it is in programming. Regardless of the language/environment/methodology in use, politics and economics can always destroy a good project. It’s interesting that Parnas believed SDIO to be infeasible solely on technical merits. SDIO’s fate is almost certainly cemented if you consider that the project was incredibly expensive and subjected to politics that most projects would not have to deal with.
On a later page (1333), he discusses his thoughts about automatic programming. I agree that this is a euphemism for programming in a higher-level language. As a result, I think Sapir-Whorf applies: It would be difficult to program certain concepts and structures that are not implicitly supported by the language. Even if you have a perfect high-level language little could be done to deal with the uncertainty that is inherent in the SDIO project constraints. So I can certainly see why he thought automatic programming would not help with SDIO either.
I’d like to think that at least some things have changed in the interim 20 years. And there are massive, robust systems in operation. The phone system is one of them. So, while SDIO could–but might not–be infeasible, many systems have benefited from our advancements.
Back to the subject, there are many topics here, most of which are not exclusive to software development:
1. Requirements.
2. Technical feasibility.
3. Actual construction of the software.
Which one are we talking about? I find the paper is more focused on #1 and #2, which are valid, but not particular to software development alone.
I don’t know what we can do about #1 and #2. All fields have the same problem. I’m sure Boeing would like planes that never crash and go at Mach 10 without a sonic boom. But that’s not very realistic or cost effective.
Once you have something realistic, or at least in the realm of the technically feasible, you can start thinking about building it. I’m worried that the SASDS paper mixes all this together and doesn’t make the appropriate distinctions.
Before continuing any sort of discussion, should we not qualify what is exclusive to software development and what is common with other fields? What part exactly is difficult and exclusive to software development? Requirements? No. Technical feasibility? No. Changing requirements? No. Vague user demands? No. The only thing exclusive to software development is the building of said software (code or whatever else you use). And there are ways to greatly simplify current programming practices. So personally, I’m concerned about what exactly it is we are discussing.
Actually, he talks about all 3 points in his paper. I don’t think it is relevant that neither points 1 nor 2, nor any other points for that matter, are limited to software development. Why should that matter?
What are we discussing? For me, I’m interested in looking at the papers posted with the hope and aim of gleaning more insight into developing software. Why were these papers written? What was the situation at that time? Have things changed? What have we learned?
I understand you believe a silver bullet is possible. But I’m not interested in discussing that in every post.
“Actually, he talks about all 3 points in his paper. I don’t think it is relevant that neither points 1 nor 2, nor any other points for that matter, are limited to software development. Why should that matter?” – Dave C.
Uhh… Maybe so we don’t reinvent the wheel, no?
“Those who cannot remember the past are condemned to repeat it.” — George Santayana
Discussing these topics is not the same as reinventing the wheel, imo.
A few years ago, I started up a forum for an artist friend of mine. It has since grown to be the largest online artist community. But even in the early days, we always had to deal with the social aspect of the forum. Being an admin, and not being one who cared much about social software, I had a very simple rule: if someone’s out of line, I contact them directly, and if it persists, I ban them. These were all active members, so there was a lot of resistance from other members and friends to the bannings. Being graphic artists, they had banners and all sorts of visual paraphernalia in order to mount what they called a “movement”. It was quite a sight. Eventually, it died down though. Martyrs online are soon forgotten. Besides, they were let back in after everything was back to normal, if they requested it.
What was curious, though, is that these were active members. You occasionally got the new member who wanted to cause a fuss just for kicks. But the longest-lived flame wars were by long-time members. It’s also been said that talking directly works well because it takes away the audience, but as soon as they go back to the forum they revert to the old behaviour in these flame wars. I’ve heard plenty of excuses like this from people researching social software, and I know it’s not the audience. Not for long-time members. They already have an audience.
Look at my conversation above with Dave C. We aren’t even talking about the same thing anymore. How did it get to this point? His last reply has nothing to do with what I was talking about. This has happened in the last three of his replies. Is Dave C. stupid? Why can he not understand simple stuff? How can I get through to him? Did he not see I already answered those questions? Why am I frustrated? Of course Dave C. is none of those things and the blame or problems have nothing to do with him. I get the very same questions and replies on my own blog and I have a completely different reaction there. So what gives?
Forums and conversations that are not meant to have a million replies are not conducive to discussions. They are conducive to the presentation of ideas, and maybe a few reactions to those ideas. But that’s it. Most conversations need small, incremental exchanges of information in order for both sides to get a frame of reference. If just one point is missed or misunderstood on a forum (or in a blog comment), you can’t correct it, because the floor belongs to the one writing the reply. So the discussion gets steered in a direction that may seem ridiculous. The frustration comes because there’s no corrective behaviour possible in these kinds of discussions. We are helpless to fix them because the communication channel is severely limited. We have our hands tied behind our backs, or we’re driving blind, when we reply. Even on the artist site I spoke of, 100% of problems were solved on IM. All of them. None could ever be solved on the forum itself, because it always got worse. We didn’t know why at the time, but we knew that IM enabled faster resolutions.
Why the long story? Well, the SASDS paper is a long list of frustrations. It’s a classic example of how flame wars start. Except in this case, there’s not really any other side. But all the classic signs are there. There’s clear frustration. There’s clear blaming. There’s no cohesiveness in the presentation of what the problem is. Everything is tossed out there seemingly at random and lumped together. On top of that, he’s getting the silent treatment. FYI, the other side is actually the machine(s) he’s trying to write software for.
Remember how frustrating those old “syntax errors” were? Those are caused by a lack of communication channels. At university, I’ve seen my share of people curse at the monitor. I’ve even seen people damage property because of this. Today, we’re getting “syntax errors” of a higher intellectual level. We don’t actually get a syntax error, but the error is still there.
What if the problem isn’t one of software only, but more specifically one of communication? When you have a discussion, you need the other side to be able to understand what you’re saying. If you want more complex systems, you need to enable a more complex communication channel: one that supports incremental and error-correcting exchanges of information, and one where you don’t need to provide the minute details of every compound operation or “word”.
No one expects a five-year-old to understand quantum physics, because he doesn’t have the vocabulary for it. We’d have to go through each and every piece of background information and explain it. Good luck! Yet we try to do exactly the same thing with computers. We’re still using the basic 7 primitive commands needed to be Turing complete when we program computers.
It’s somewhat like the chicken and the egg. You need a complex communication channel to get a complex system, but you need a complex system to understand the complex communication channel. The thing is that we only need one of them. Each one can build the other (or it should, unlike today). While everyone is saying that the system itself is impossible to build, I have my doubts about the communication channel. That’s where I’m focusing my attention. And I don’t mean programming languages. They’re all built on the same set of 7 “words”, where functions are just paragraphs, not new words. That may be the worst realisation of all. We thought the vocabulary was expanding, when all this time it was staying the same. Unfortunately, the OOP mentality of the last 20 years has completely corrupted any hope of understanding this.
A system can only be as complex as the communication channel, or at least some upper limit based on it. If you were to create a robot that could walk, talk and do tasks, would you still use loops, if/then/else and math to tell it what to do? Or would you like some kind of verbal recognition of certain common words? Computers today are incredibly powerful, but they are dirt stupid. We have to tell them everything. If we continue this trend of trying to fit watermelons through a water hose, we’re looking pretty stupid ourselves. Frustration and no silver bullet indeed.
I liked your book a lot.
Although you are mainly concerned with the software problems in SDI, there are other more serious ones. See http://www.commondreams.org/views/051100-101.htm
The larger problem is…we probably won’t survive much longer unless we figure out a way to get competent people into public office.
“What are all these people going to say when the first person puts on his jacket and leaves this party by building software in a better way? If you think you’ll fail, you will. So the suggestion that failure is inherent is more dangerous than anyone can possibly imagine. Don’t let this be a self-fulfilling prophecy.”
Yes, never say never. Without going too far, see Scott’s article:
“Anything You Can Do, I Can Do Meta”
Well, Simonyi does not sing the old “impossible” song; perhaps he is even your guy leaving the party. But, as the article puts it, he “has no target date or shipping deadline”.
Today there are already pretty high-level tools, domain-specific modelers if you will, that accomplish tasks formerly requiring custom software. I’m thinking along the lines of workflow apps, MATLAB’s toolboxes, even web apps that produce web sites.
These tools are data-centric; the user describes the flow of data instead of a linear sequence of instructions. The problem is that there is always some limit to the application that can be described in terms of pre-defined data flows. Most of these tools acknowledge this by building in some kind of custom action/script/external-function capability. Since businesses don’t typically spend money building functionality they don’t need, the minute business A needs to access data from business B, the domain-specific modeling tool has to be extended with a custom action.
Even if everyone has exposed their data with some common description format, say SOAP or what have you, what happens when I want to transform that data in ways not already built into the modeler?
SQL runs into this problem. It is wonderful for a wide range of operations, but for some operations procedural statements are necessary. Some of the more convoluted sets are much more easily described procedurally than declaratively.
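A standard illustration of that point is transitive closure — reachability over an edge table — which classic SQL could not express declaratively before recursive queries were added to the standard, but which falls out of a simple loop. A sketch in Python (the edge data is invented):

```python
# Transitive closure by iteration: a set that is awkward to describe
# declaratively in pre-recursive SQL, but trivial procedurally.

edges = {("a", "b"), ("b", "c"), ("c", "d")}

def reachable(start):
    seen = {start}
    frontier = {start}
    while frontier:  # keep expanding until no new nodes are found
        frontier = {y for (x, y) in edges if x in frontier} - seen
        seen |= frontier
    return seen - {start}

print(sorted(reachable("a")))   # -> ['b', 'c', 'd']
```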
To make matters worse, the more general a modeler becomes, the steeper the learning curve. Eventually the modeler becomes as difficult to use as, or more difficult than, any decent GPL.
The software crisis is still with us. Software production is nominally 50% efficient after five decades of experience. Whatever has been and is being done to address the crisis is: 1) not effective, or 2) if it works to some degree, not an efficient use of technical talent and labor hours. Software is not magic, it isn’t intelligent, and it isn’t even smart. Software implies Turing machines (TMs), which are (theoretical) step-by-step algorithm implementers and are not suitable for real-time, all-the-time operations such as those needed for process control in modern systems.
TM-type machines are the correct technology for performing arithmetic calculations, or logic operations on static and unchanging inputs, as Alan Turing theorized. TM-type machines are the wrong technology for complex systems having changing inputs. Discrete, static, frame-based controllers are inappropriate for controlling dynamic systems.
There can be no silver bullet for software as it is. In twenty more years, software will still be done similarly and have the same problems it has had from the beginning. There may be higher-level languages, and we have some now, but underneath it all—as the necessary and sufficient operations, the three primitive operators that can construct all the computers in existence—it is the same old triad: AND, NOT, and STORE.
The fundamental problems and limitations of software engineering thus lie within the discipline and are caused by the simplistic logic it attempts to manage. All of Boolean-sequential logic rests upon the two spatial relations of conjunction (AND) and negation (NOT), and one command, STORE. Three essential words. Software becomes overly complex because the words we use to compose it are too simple. It would be very difficult to write anything of significance using only three words. Having done so at great expense, it would be very difficult to understand, fix, or maintain. That is the situation.
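To see the “three words” problem in miniature, try composing even the everyday Boolean connectives from that vocabulary. A Python sketch (STORE is left implicit as ordinary variable assignment): everything beyond AND and NOT must be spelled out by hand.

```python
# With only AND and NOT as primitives, every other Boolean connective
# must be built by composition -- which is exactly why expressing
# anything of significance in three words gets verbose fast.

def AND(a, b): return a and b
def NOT(a):    return not a

# OR via De Morgan's law: a OR b == NOT(NOT a AND NOT b)
def OR(a, b):  return NOT(AND(NOT(a), NOT(b)))

# XOR needs several more primitive applications on top of that:
def XOR(a, b): return AND(OR(a, b), NOT(AND(a, b)))

print(XOR(True, False))   # -> True
print(XOR(True, True))    # -> False
```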