Last week, as news of the Minneapolis bridge collapse hit, I was reviewing my Dreaming in Code slides and talk, which include a brief discussion of the old question, “Why can’t making software be more like building bridges?” In the book I used the long and painful stop-start process of the Bay Bridge replacement (a bridge that was halted for a redesign in mid-construction!) as one example of how bridge building may not be as reliable and predictable an undertaking as we think; the Minneapolis tragedy is another.
At least it’s getting people thinking. For those interested in further reading, there’s Stephen Wolfram’s fascinating post suggesting that future bridge designs may emerge from the sort of mathematical explorations his software has enabled:
What should the bridges of the future look like? Probably a lot less regular than today. Because I suspect the most robust structures will end up being ones with quite a lot of apparent randomness…. We’re going to end up being exposed to something really quite new. Something that exists in the abstract computational universe, but that we’re “mining” for the very first time to create structures we want.
Computerworld reports on new systems that place acoustic sensors on bridges, providing better feedback than today’s routine of periodic visual inspection.
I was also reminded of a thorough and informative paper from 1986 that I came across in my Dreaming in Code research: “Case Study: A Computer Science Perspective on Bridge Design,” by Alfred Spector and David Gifford. (There’s a PDF available here.) The paper outlines the mature and rigorous process of designing and specifying a new bridge and systematically compares it with the looser, less clearly defined processes we use in so much software engineering.