I listened to this interview yesterday with BP director Robert Dudley on the News Hour:
ROBERT DUDLEY: …The blowout preventers are something that are used on oil and gas wells all over the world, every well. They just are designed not to fail with multiple failsafe systems. That has failed. So, we have a crisis.
…JEFFREY BROWN: Excuse me, but the — the technology — the unexpected happened. And so the question that you keep hearing over and over again is, why wasn’t there a plan for a worst-case scenario, which appears to have happened?
ROBERT DUDLEY: Blowout preventers are designed not to fail. They have connections with the rig that can close them. When there’s a disconnection with the rig, they close, and they’re also designed to be able to manually go down with robots and intervene and close them. Those three steps, for whatever reason, failed in this case. It’s unprecedented. We need to understand why and how that happened.
The failsafe failed. It always does. “Designed not to fail” can never mean “certain not to fail.” There is no such thing as “failsafe” — just different degrees of risk management, different choices about how much money to spend to reduce the likelihood of disaster, which can never entirely be eliminated.
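To put rough numbers on that point, here is a minimal sketch, with entirely hypothetical failure rates, of why stacking independent failsafe layers shrinks the odds of total failure without ever driving them to zero. The independence assumption is itself optimistic; real failures, as the blowout preventer showed, tend to be correlated.

```python
# A back-of-the-envelope sketch (with made-up numbers) of why layered
# "failsafe" systems reduce risk but never eliminate it. It assumes each
# layer fails independently -- an optimistic assumption, since real
# failures (a damaged riser, a dead battery) are often correlated.

def combined_failure_probability(layer_failure_probs):
    """Probability that every protective layer fails at once,
    assuming the layers fail independently."""
    p = 1.0
    for layer_p in layer_failure_probs:
        p *= layer_p
    return p

# Three intervention paths, like the ones Dudley describes:
# rig-triggered close, automatic close on disconnect, ROV override.
# Give each a hypothetical 1% chance of failing on demand.
layers = [0.01, 0.01, 0.01]
print(combined_failure_probability(layers))  # ~1e-06: tiny, but never zero
```

Spend more money, add more layers, and that number gets smaller. It never becomes zero. That is risk management, not a guarantee.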
Two different social attitudes conspire to lead us to disasters like the Gulf spill. On the one hand, there is the understandable but naive demand on the part of the public and its proxies in the media for certainty: How can we be sure that this never happens again? Sorry, we can’t. If we want to drill for oil we should assume that there will be spills. If we don’t like spills, we should figure out other ways to supply our energy.
On the other side, there is what I’d call the arrogance of the engineering mindset: the willingness to push limits — to drill deeper, to dam higher — with a certain reckless confidence that our imperfect minds and hands can handle whatever failures they cause.
Put these two together and you have, rather than any sort of “failsafe,” a dynamic of guaranteed failure. The public demands the impossible: “failsafe” systems. The engineers claim to provide them, and everything is great until the inevitable failure. Each new failure inspires the engineers to redouble their efforts to achieve the elusive failsafe solution, which lulls the public into thinking that there will never be another disaster, until there is.
I wrote about these issues as they relate to software in Dreaming in Code. But at some point the need to understand this cycle demands a more artistic response.
May I suggest you give a listen to Frank Black’s “St. Francis Dam Disaster,” a great modern folksong about a colossal engineering failure of a different era.
Hey, I liked your post about Failsafe. We saw a similar dynamic in play with Toyota, and of course, Ralph Nader wrote UNSAFE AT ANY SPEED years ago, about cars exploding on impact.

With respect to our GREAT GULF OIL SPILL, 60 Minutes highlighted BP management’s override of safety systems. Negligence was allegedly involved when the blowout preventers failed and the management team decided to “continue drilling.”

I happen to live in an engineering-rich region with BOEING and MICROSOFT, and I am employed as a SOFTWARE ENGINEER. Safety for flying is an obvious engineering priority. However, our society is very tolerant of software failing. Maybe that is due to a gross misunderstanding of how software is used in everyday machines, like drilling rigs and automobiles. And maybe that is why TOYOTA tried so hard to protect its image.