Fascinating piece by Herb Sutter, The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software, says that, with processor clock speeds plateauing short of 4 GHz (Moore's Law keeps delivering more transistors, but they no longer translate into faster single cores), and the processor universe moving to "multi-core" designs to squeeze better performance from chips, software developers are going to have to learn a whole new ballgame.
Predictions that Moore’s Law is going to hit a wall have regularly proven mistaken over the past decade or two, but that doesn’t mean that this time they’re wrong too, and the news from Intel et al. over the past year suggests that the stall in processor-speed increases is real. So the hardware firms’ “multi-core” plan means that the next generation of processor speedsters will try to gain their oomph not by running one processor’s queue of instructions faster — that’s become tough as higher speeds have meant more heat, more power use, and more energy leakage (all, obviously, connected phenomena) — but rather by running multiple queues.
In layman’s terms: If your corner store experiences huge growth in customer volume, it can keep its one cashier working harder and faster, but only up to a point. Once that person hits his limit, the only way you can move more customers out the door faster is by adding a second register. (Unless you completely change the rules, by, say, asking the customers to check themselves out — in this comparison, the technology equivalent of “invent a new processor paradigm” to bust open the Moore’s Law logjam once more.)
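The second-register setup maps directly onto threads: a shared line of customers drained by two concurrent cashiers. Here's a minimal sketch in Python (the names — cashier, customers, served — are mine, for illustration; they come from the analogy, not from Sutter's article):

```python
import queue
import threading

# One shared line of customers...
customers = queue.Queue()
for i in range(10):
    customers.put(f"customer-{i}")

# ...and a shared record of who got served, guarded by a lock
# because both cashiers append to it.
served = []
served_lock = threading.Lock()

def cashier(register):
    # Each cashier pulls from the same line until it's empty.
    while True:
        try:
            customer = customers.get_nowait()
        except queue.Empty:
            return
        with served_lock:
            served.append((register, customer))

threads = [
    threading.Thread(target=cashier, args=("register-1",)),
    threading.Thread(target=cashier, args=("register-2",)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(served))  # all 10 customers served, split between the registers
```

The queue and the lock are exactly the "coordination cost" of the analogy: cheap here, but they're the overhead you pay for the second register.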
In my everyday example, the “coordination cost” is fairly low — you just have to assume that the customers will figure out how to organize themselves into two separate lines. Or maybe if your store’s set up the right way you can have one line feed both registers. To adapt software to the multicore universe, though, Sutter’s analysis suggests, the costs are more complex, and programmers need to get good at thinking about a new set of problems — otherwise software won’t be able to take advantage of the new chips, and programs designed by developers who don’t really understand the new world will fall into new kinds of traps like “races” and “deadlocks.” Sutter writes that “The vast majority of programmers today don’t grok concurrency, just as the vast majority of programmers 15 years ago didn’t yet grok objects.” So maybe there’ll be work for programmers after all!
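The "races" Sutter mentions are easy to reproduce: when two threads do an unsynchronized read-modify-write on shared data, updates can be silently lost. A small Python sketch of the bug and its fix (the helper names are mine, not Sutter's):

```python
import threading

N = 100_000

def unsafe_worker(state):
    # "count += 1" is really three steps -- read, add, write back --
    # so two threads can interleave them and lose increments: a race.
    for _ in range(N):
        state["count"] += 1

def safe_worker(state, lock):
    # Serializing the read-modify-write with a lock removes the race,
    # at the price of the coordination cost discussed above.
    for _ in range(N):
        with lock:
            state["count"] += 1

def run(worker, *extra):
    state = {"count": 0}
    threads = [threading.Thread(target=worker, args=(state, *extra))
               for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return state["count"]

racy = run(unsafe_worker)    # often less than 200000, though the
                             # interpreter's scheduling can mask it on a run
locked = run(safe_worker, threading.Lock())
print(racy, locked)          # locked is always exactly 200000
```

A deadlock is the complementary trap: two threads each holding one lock while waiting forever for the other's. Both bugs are timing-dependent, which is why developers who "don't grok concurrency" can ship code that passes every test and still fails in the field.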
Meanwhile, when Intel decides that multi-core is what the public must buy, look for it to push software vendors to rewrite popular applications in new versions marketed under whatever ad-friendly moniker the new multi-core architecture is festooned with. (We went through this with MMX in the mid-’90s and again, on a smaller scale, with Centrino.) “Multi-core” and “hyperthreading” are sexless technical terms, so we can expect trademarks like “Maxium” or “CoreSwarm” and slogans like “Two is better than one!” or “The Power of Many” (no, wait, that’s taken).
The typical user will say, "Why do I need this stuff? My word processor is fast enough and my Web pages load fine." But within three years the new architecture will be standard anyway, and within ten years the world will actually find something to do with the new processor power — like, say, distribute the work of 23 million video mashup artists simultaneously to your desktop, then catalog their creations and re-edit them on the fly according to your preferences! And the Silicon Valley cycle will grind forward.