Wordyard

Hand-forged posts since 2002

Ballmer explains Windows delays — or, how Vista is like Iraq

October 16, 2006 by Scott Rosenberg

Steve Ballmer was interviewed in Saturday’s Times. Noted:

Q. What was the lesson learned in Windows Vista? After all, it wasn’t supposed to ship more than five years after Windows XP.

A. No. No, it wasn’t. We tried to re-engineer every piece of Windows in one big bang. That was the original post-Windows XP design philosophy. And it wasn’t misshapen. It wasn’t executed, but it wasn’t misshapen. We said, let’s try to give them a new file system and a new presentation system and a new user interface all at the same time. It’s not like we had them and were just trying to integrate them. We were trying to develop and integrate at the same time. And that was beyond the state of the art.

This is at once an unusually candid and an oddly defensive statement.

Ballmer is saying that, in 2001-2, as Microsoft pondered the next phase of Windows’ evolution post-XP, the company deliberately chose to “re-engineer every piece of Windows in one big bang.” It’s a telling choice of phrase. In the software development world, “big bang” (typically used in “big bang integration”) is used to describe a bet-it-all strategy that involves building lots of parts of a system separately and waiting until the end to hook them up and hope they play nicely together.

So Ballmer is essentially admitting that the “design philosophy” of the new Windows was founded on a risky, widely discredited approach. Then he turns around and says that it wasn’t “misshapen” — twice.

Misshapen? Is this a new buzzword I’ve somehow missed? Did the Times reporters mistranscribe “mistaken”? What is Ballmer talking about?

Then he says, “It wasn’t executed.” Note the passive voice; corrected, that reads: “We didn’t execute it.” Which means, “We didn’t do it.” That’s, you know, obvious, I’d think.

Then Ballmer closes the explanation by declaring that the problem wasn’t one of integration; it was even worse than that — it was that Microsoft, the largest and most successful software company in the world, set out to simultaneously “develop and integrate” new versions of all the core functions of its central product. Now, in 2006, the company understands that this was “beyond the state of the art.” But back in 2001-2, they didn’t see that.

This is a fascinating rationalization. I’m loath to draw too facile a comparison between the tribulations of a technology company and the drama of global conflict. But here, I think, there’s a clear and illuminating parallel between Microsoft’s hubris in this era and the Bush administration’s overreaching in Iraq — two phenomena that overlap almost precisely on the historical timeline.

And no, of course I don’t mean to suggest that there is any moral equivalence, or that the sad saga of a software product’s delay is in any way an event of equal import to the tragedy of an unnecessary war of choice resulting in hundreds of thousands of deaths. But there are some similarities, too, to wit:

Bush’s team — chests puffed large from its success in invading Afghanistan post-9/11 — ignored conventional wisdom and disregarded expert intelligence and invaded Iraq, only to discover that the effort to control and transform that country was beyond its means.

Gates’ team — surveying a decimated post-dotcom industry landscape as the “sole superpower” of technology — similarly ignored conventional wisdom and disregarded expert knowledge. Incremental development? Continuous integration? They are for mere mortals. Microsoft, with its mountain of cash and its armies of developers, could bring brute force to bear on the most intractable of large-systems development problems. The company would rip out the guts of all of Windows’ key subsystems and renovate them at the same time — because it was invincible!

The result was predictable. Now a more humbled Microsoft is limping to the finish line with a version of Windows that — whether users find it great or so-so or terrible — will always be overshadowed by the ambitious claims once made for it. In the context of that falling off, Ballmer’s statement is positively bizarre.

Filed Under: Business, Politics, Software, Technology

Code Reads #2: Dijkstra’s “Go To Statement Considered Harmful”

October 10, 2006 by Scott Rosenberg

This is the second edition of Code Reads, a weekly discussion of some of the central essays, documents and texts in the history of software. You can go straight to the comments and post something if you like. Here’s the full Code Reads archive.

The title of Edsger Dijkstra’s 1968 “Go To Statement Considered Harmful” is among the best-known phrases in the history of programming. Interestingly, the phrasing of the title — which has become so regular a cliché in the field that it inspired Eric Meyer to compose the waggish “‘Considered Harmful’ Essays Considered Harmful” — was not Dijkstra’s work at all. As Dijkstra explained it:

Finally a short story for the record. In 1968, the Communications of the ACM published a text of mine under the title “The goto statement considered harmful,” which in later years would be most frequently referenced, regrettably, however, often by authors who had seen no more of it than its title, which became a cornerstone of my fame by becoming a template: we would see all sorts of articles under the title “X considered harmful” for almost any X, including one titled “Dijkstra considered harmful.” But what had happened? I had submitted a paper under the title “A case against the goto statement”, which, in order to speed up its publication, the editor had changed into a “letter to the Editor”, and in the process he had given it a new title of his own invention! The editor was Niklaus Wirth.

How did Wirth come up with the odd phrase? My hunch is: some combination of English-as-a-second-language phrasing (though, since Wirth got his PhD here at Berkeley, that may be completely wrong) and the essential trait of radical concision drummed into the heads of programmers of that era. Fewer words! Fewer characters! Less space in memory!


Filed Under: Code Reads, Software

Free books to Code Reads contributors

October 3, 2006 by Scott Rosenberg

OK, the response to my invitation to a discussion of The Mythical Man-Month hasn’t been…overwhelming.

Maybe nobody’s read the book. Or those that have done so have nothing to say about it. Or I said too much myself and nobody felt like adding anything. Or everyone’s too busy wondering when Denny Hastert’s going to quit. Or everyone’s too busy writing code to actually stop and think much about writing code. Or everyone’s too busy, period. Or I just haven’t gotten that Slashdot or Digg link yet.

I’m not worried — I figured this Code Reads thing would take time to get rolling.

But I do have a little incentive to offer: Thanks to the kindness of Gary Cornell, the publisher of Apress, I’ve got five copies of Joel Spolsky’s excellent The Best Software Writing I to give away to Code Reads participants.

This great collection has 300 pages of entertaining and incisive writing by people like Clay Shirky, Eric Sink, Michael “Rands” Lopp, Paul Ford, Paul Graham, John Gruber, Cory Doctorow, Adam Bosworth, Raymond Chen, danah boyd, Aaron Swartz and many others. Each one of these pieces is worth a discussion in its own right. (Spolsky’s introduction and the full contents list are here.)

I’ll award these books at my discretion to contributors of value — substantive or simply diverting — to Code Reads discussions.

If The Mythical Man-Month didn’t ring your bell, next Monday I’ll be posting something about Edsger Dijkstra’s famous 1968 paper, “Go To Statement Considered Harmful.” Among other things, it has the virtue of being about 1/100th the length of The Mythical Man-Month.

Filed Under: Code Reads, Software

Code Reads #1: The Mythical Man-Month

October 2, 2006 by Scott Rosenberg

This is the inaugural edition of Code Reads, a weekly discussion of some of the central essays, documents and texts in the history of software. This week we’re talking about Frederick Brooks’s The Mythical Man-Month. (OK, let’s be honest: I’m talking about it. I’m hoping you, or you, or you, may want to, as well! If you don’t want to read my essay, you can just go straight to the comments and post something.)

Frederick Brooks’s The Mythical Man-Month came out in 1975, and I first heard its title later that decade. I’d already been a teenage programmer of sorts (of games in BASIC) for a few years, but I knew nothing of the world of large software projects that Brooks’s work addressed. I did know that the phrase “mythical man-month” grabbed my interest. It sounded less like the management-science term it was, more like a description of some prehistoric beast, heaving itself out of the primordial swamp to lumber across a desolate landscape.

Transmuting dry corporate-speak into evocative imagery? That was some trick.

When I finally did read Brooks’s book, more than two decades later, there, on its cover, were those beasts — struggling to pull their limbs free of the tar pit that served as Brooks’s starting metaphor for the awful dilemmas of software-project scheduling. By that time, I’d become more familiar with the sort of work Brooks’s book addressed. I’d knocked my head more than once against the wall of software development. And I found, as so many readers had before me, that this quarter-century-old book anticipated and analyzed most of the problems I’d encountered — and even offered some useful advice on how to avoid them.

Filed Under: Code Reads, Software

Announcing Code Reads — a weekly reading and discussion about making software

September 26, 2006 by Scott Rosenberg

During the years I spent researching Dreaming in Code, I accumulated a veritable mountain of reading material on the topic of software development, the history of programming, project management and so on. (I even read much, though certainly not all, of it!) There is, plainly, a core set of books, documents and texts that trace the evolution of this subject; I also gathered some unusual obscurities and overlooked offshoots.

Only a small fraction of this material made its way into Dreaming in Code itself, which is a narrative tale of the ups and downs of one project, set in the context of the longer history of the field. I’ve been trying to figure out a good way to share my discoveries, spark some interesting discussion and contribute a lasting resource to the Web based on the work I’ve already done and the reading I continue to do.

Here’s my plan: Every week I’m going to announce a topic — usually, a text or document, in many cases easily accessible online; a week later, I’ll post some thoughts, notes and ideas about the topic, and open the floor in comments for you to throw your two cents in. If all goes well, together we’ll build a handy annotated reading list for curious developers and interested outsiders — and maybe have some fun along the way.

I’m calling this impromptu, informal reading group Code Reads — mostly because we’re reading about code and coding, and also because I like the idea that the phrase induces the slightest hesitation in the reader’s mind (How do you pronounce it — like “code reeds” or “code reds”?), and I’m mischievously pleased to invoke that kind of ambiguity in a conversation about a field that abhors ambiguity.

So: Join me for Code Reads. Here, every Monday.

I’m planning to kick things off next week with some observations about Frederick Brooks’s The Mythical Man-Month — the book that, for me and I think many other students of this subject, really started it all. You’re invited. You don’t have to be a programmer (I’m not one, myself, though I’ve played at being one in previous phases of my life). You just have to be interested in the question that I ask in Dreaming in Code: Why is good software still so hard to make?

Joel Spolsky says that most programmers don’t read much at all: “The majority of developers don’t read books about software development, they don’t read Web sites about software development, they don’t even read Slashdot.”

He might be right. Then again, in my work I’ve encountered many, many developers who are fanatically curious about everything under the sun, emphatically including the history and nature of their own field. I’m thinking some of them might enjoy having this conversation with one another, and with the rest of us.

Filed Under: Code Reads, Software

Thoughts on a Thinkpad migration

September 22, 2006 by Scott Rosenberg

My old laptop, a trusty Thinkpad X30, began falling apart recently — literally, the plastic case developed cracks in the corners and pieces started to fall off. I don’t blame IBM: This machine got a lot of use during the years when I was working on my book — even fell off the table once or twice. It did good service. But I’m not going to trust my work to a computer that is shedding its protective casing like space shuttle tiles. So it was time to buy a new laptop.

I’ve been using ultracompact Thinkpads since 1998 or so and the days of the model 560. These computers have never failed me — never had a hard drive crash or other awful malfunction — despite years of abuse. (Mac folks, I love your operating system, but I don’t love Apple’s laptop hardware, so until there’s a Mac laptop that’s as lightweight and reliable as a Thinkpad, and that has a trackpoint-style pointer, it’s just not going to happen for me. Sorry.)

In ordering a new Thinkpad X60s, I wondered whether anything would have changed under the new Lenovo management. The good news, I’m happy to say, is that this Thinkpad continues to feel solid and behave well. The keyboard is if anything a little better than the X30’s (except I absolutely abhor the insertion of the “Windows” key and that funny other key on the right between “alt” and “ctrl” — what does it do, simulate the right-click? who needs it? why crowd the other keys? my fingers liked “alt” and “ctrl” right where they were, thank you!). It’s fascinating to put the new screen next to the old laptop’s LCD and see how 3-4 years of constant use have dimmed the display — something one doesn’t realize without this direct side-by-side comparison.

Thumbs down to Lenovo only for not offering a simple port replicator for the X60s — they make you spring for the fancier dock. Other than that, I’m pretty happy. And no, there was no way I’d wait to buy a new computer in order to graduate to Windows Vista. My philosophy is, never buy a 1.0 product. These ultracompact Thinkpads are so good because IBM has years of experience making them. Similarly, Windows XP (once it’s been upgraded and patched ad nauseam) has had most of its flaws beaten out of it in the years since its debut. Anyone who goes with Vista at launch has to be ready for a boatload of snags and bugs.

One eye-opening sidelight on globalization: the Lenovo web site sent me a UPS tracking number once my order shipped. When I plugged it in at UPS, I could follow my package’s progress all the way from Shanghai to the US. Not much more than a decade ago we were arguing about whether, you know, it was OK for advanced US computer technology to be made available to China. Now, we track the packages of advanced US-designed, China-manufactured computer technology from China’s ports to our doorsteps.

Anytime you move from computer to computer there is the hassle of migrating data (not too bad in the era of voluminous external drives — and migrating that way automatically leaves you with a convenient backup copy). The bigger hassle these days is installing your apps — assuming you haven’t gone completely over to the web-based model, which I certainly haven’t. Thankfully, Ecco Pro still installs nicely, from disk or download. Some of my other stalwart apps have gone free (like Opera) or free/ad-supported/paid (like Eudora), so it’s just a matter of download time plus digging up an old key. (If you have an OS problem, though, you have to deal with the horror of Microsoft activation — today Dave Winer reports one egregious example.)

But with the software installation comes the patching, and that is something of a nightmare. In the case of Windows XP, I dutifully installed a mountain of security patches, but declined the installation of the “Windows Malicious Software Removal Tool” (I’ve protected myself extremely well from malicious software without it). Once I turned down the current month’s edition of this tool, the auto-update wanted to install each previous monthly version, going back to its inception a couple of years ago. There was no way to defeat this that I could figure out, other than laboriously saying “no thanks” to the auto-update each time it turned the calendar back a month.

Then there was the Adobe Acrobat Reader. Once installed, it decided there were three critical patches I needed. But each one demanded that I install it, then reboot, separately. WTF? Three reboots for some lousy updates to a piece of software for reading a proprietary document format that I only use when people make me?

Adobe is full of smart engineers. Can’t they roll these things up, or at least set them up so the reboot only triggers once, after all the downloaded updates have installed? And gee, wouldn’t it be nice if they actually told us what these updates did, so we could decide for ourselves whether they actually matter?

Once again, we are asked to do things for the convenience of our software tools. The ostensible servant calls the shots.

Filed Under: Personal, Software, Technology

BASIC as mirror

September 14, 2006 by Scott Rosenberg

Programming pioneer Edsger Dijkstra once said, “Teaching BASIC should be a criminal offense.” (At least it’s attributed to him — he had a lot of snarky things to say about a lot of other computer languages, so at least it’s in character.)

David Brin disagrees — in fact, Brin is appalled that BASIC, which used to be pre-installed on every personal computer available for purchase, is now a rarity. His article recounting his and his school-age son’s search for a simple BASIC tool is in Salon today.

Brin likes to draw — and rile up — a crowd, as he did years ago in arguing the case that “Star Trek” is philosophically superior to “Star Wars.” And he’s succeeded again.

What I’m finding most interesting in the 150+ letters his article has already generated, whether they share Brin’s views or disagree, is the sheer passion on the part of the programmers responding. I guess the topic combines programmers’ near-religious intensity on the topic of languages with the deep-seated connection all creators have to the tools of their youth.

Filed Under: Dreaming in Code, Media, Software

Machine-readable data and human-memorable stories

September 12, 2006 by Scott Rosenberg

I spent much of the last few years immersing myself in the lore and culture of computer programmers, but not until today did I encounter the Lojban phenomenon. Lojban is an invented language (in the tradition of Esperanto, which I actually studied for a couple of months in seventh grade, thanks to Mr. Glidden). One Lojban enthusiast was profiled on the front page of today’s Wall Street Journal; the article was mostly about a German programmer who has led a campaign against software patents in Europe. But it mentioned in passing his interest in Lojban, “an artificial language…intended to eliminate ambiguity and promoted by some programmers.”

Eliminate ambiguity? No wonder programmers are driving the bandwagon.

The Journal’s shorthand description may not do full justice to Lojban, which turns out, according to Wikipedia, to be an evolution out of Loglan, a “logical language” intended to “test the Sapir-Whorf Hypothesis” (the idea that the structure and nature of language shapes human thought). There is much more here at Lojban.org and also here — including the idea that Lojban is structured to be more machine-readable (i.e., intelligible by computers) than naturally occurring human languages, making it well suited for “human-computer interaction and artificial intelligence research.”

I got to thinking about Lojban and the desire to smooth out all the fuzziness and overlap of our naturally evolving languages while reading Adrian Holovaty’s fascinating recent posting about the future of newspapers. Holovaty is a pioneering figure at the crossroads of the newspaper and technology industries; he started out working for newspapers in Lawrence, Kansas, where he and a group of Python developers created the content-management framework now known as Django; now he’s with the Washington Post.

Holovaty’s post argues that the “story” paradigm of newspaper journalism is a straitjacket the profession needs to shed if it expects to make full use of computers in its future. Stories are just “big blobs of text”; they’re not structured in ways that allow their data to be reused creatively. Newspapers are producing vast volumes of information each day, but because they don’t store the information in ways that allow it to be computer-readable in meaningful ways, they are failing to take real advantage of what technology can do with it all.

I think Holovaty is basically right, particularly when he points in the direction of information like weather data, sports scores, crime stats and the like — “news” that is essentially metric, information that arrives from day to day in a relatively predictable format and ought to be stored in ways that let you compare it and reuse it. And he’s smart enough to understand that the structured-data model he is advocating wouldn’t and shouldn’t replace real old-fashioned stories: “News articles are great for telling stories…The two forms of information dissemination can coexist and complement each other.” Amen.
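
The blob-versus-structure distinction is easy to see in a few lines of code. This is only an illustrative sketch — the dates and temperatures are invented — but it shows how the same fact, stored as structured records rather than as prose, becomes something a program can query and compare:

```python
from dataclasses import dataclass
from datetime import date

# The "big blob of text" version: fine for human readers, opaque to programs.
blob = "Tuesday was the hottest June 12 on record, hitting 98 degrees."

# The structured version: the same facts, now comparable and reusable.
@dataclass
class WeatherReading:
    day: date
    high_f: int  # daily high, degrees Fahrenheit

readings = [
    WeatherReading(date(2005, 6, 12), 91),
    WeatherReading(date(2006, 6, 12), 98),
]

# A machine can now answer questions the blob can't:
record_high = max(readings, key=lambda r: r.high_f)
print(record_high.day.year)  # prints 2006
```

The point isn’t the code itself; it’s that once the readings live in a schema, the “was this a record?” question — and a thousand others — can be asked automatically, year after year.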

But I’d also like to pause and reflect for a moment on the enduring value of the “story” as a tool for human memory compression.

Unstructured information, Holovaty complains, is information with a short shelf-life: “The information gets distilled into a big blob of text — a newspaper story — that has no chance of being repurposed.” That’s not quite true: It has no chance of being repurposed by machine. But the process whereby a writer distills a volume of data and detail into a coherent narrative that sticks in the memory, if done with lively care and skill, is one that very much promotes “repurposing” by other people. The story sticks in the mind. You repeat it to your friend at work or your spouse over dinner. They get interested and repeat it. In exceptional cases the story becomes a part of the collective memory.

The kind of “repurposing” that machines do with structured data isn’t often going to result in that kind of experience. It’s closer to the stuff that has always been looked down on in newsrooms as “service journalism” or “news you can use.” That condescension is regrettable, but it’s in part inspired by journalists’ awareness that this sort of work really can be done pretty well by machine.

The “repurposing” of structured information that Holovaty describes — say, the ability of someone looking at a Little League schedule to call up the weather forecast for that day and location — is highly useful. So far, as he points out, the newspaper industry has failed to offer such services, or even see them as part of its mission. And so Yahoo and similar online “portal” businesses have moved into the vacuum and turned them into businesses that newspapers now eye jealously.

But Holovaty’s post suggests a way that newspapers — and, really, any journalistic enterprise — can get back into the game. If newsrooms begin to build up storehouses of structured data, someone’s going to need to look through them for patterns and insights. Why are state corporate tax returns dropping in a booming economy? If twice as many restaurants opened this year as last year, why were there only half as many health citations? Are sunspots governing the fortunes of the local high school football team? (OK, so there’s also room for fun and nonsense.) This is the kind of work newsrooms remain uniquely well-situated to perform.

In other words, there’s still plenty of room for the old-fashioned journalistic roles of fact-finder, truth-teller, story-creator. The quest to make more information more useful to machines isn’t an end in itself; it’s a way-station on the road to telling new kinds of stories — lovingly mined out of machine-organized data and then composed in “big blobs of text” for human consumption.

I’ll be glad to read those blobs in a language that still leaves plenty of room for double meanings and for poetry. Lojban looks fascinating, but I’ll keep my ambiguity, thank you. Wordplay and nuance and music don’t fit easily into a database schema — but they’re how we encode data so it sticks with us long-term. They delight us, and that delight carves new pathways in our brains. Story is repurposed into memory. It’s our ancestral algorithm. Computers don’t really get it. But who says we have to change for them?

Filed Under: Media, Software

Kiko’s calendar auction and the old “incremental change” song

August 18, 2006 by Scott Rosenberg

Kiko is an Ajax-style Web-based calendar service. (It’s also the title of a fantastic album by Los Lobos.) Kiko’s developers, only a few months after unveiling it, have put it up for sale on eBay for $50,000. So far, despite wide linkage, no takers.

Robert Scoble says this presages a Web 2.0 shakeout: “There are simply too many companies chasing too few users…. Getting the cool kids to try your technology isn’t the same thing as having a long-term business proposition.”

Could be. With Google’s new calendar gobbling up mindshare in an already crowded space (haven’t you heard that “Google is the New Microsoft”?), Kiko didn’t seem to have much chance.

The problem is that, unlike photo-sharing or video-sharing or link-listing or news-rating, activities that have provided grist for successful Web 2.0 mills, calendaring doesn’t easily lend itself to large-scale social interaction and wisdom-of-crowds behavior. Calendars are either personal or apply to small, well-defined workgroups or personal circles. The piece of calendaring that’s most amenable to wide Web networking — the listing and sharing of information about public events — is already being pursued by several ambitious companies (Eventful, Zvents, etc.).

But even if calendars aren’t going to fuel the next Web 2.0 wunder-company, we still need them. The future for calendar software, as Scott Mace keeps reminding us, is more about interoperability than about snazzy Ajax features. Making sophisticated calendar-sharing work, and multi-authoring possible, and import-export painless — these are the things that will matter in this category (as the folks working on Chandler whose work I followed for Dreaming in Code understand so well).

Meanwhile, Justin Kan, a Kiko founder, lists his own set of lessons from the experience. They include the following: “Build incrementally. We tried to build the ultimate AJAX calendar all at once. It took a long time. We could have done it piece by piece. Nuff said.”

But it’s not nuff said, it’s never said ’nuff, it needs to be said over and over until you’re blue in the face and all your coworkers hate you and think you’re a monomaniac who has gotten this word “incremental” implanted in his neurons like some sort of development-process idée fixe. It is an important but counter-intuitive insight. It’s not how businesspeople want things to be. It’s not how developers are used to thinking. So if you actually understand that an incremental process for building an ambitious program or Web site is the best approach, you will have to be insufferable about it.

My friend Josh Kornbluth (who recently recounted some ancient tales from our collaboration 20 years ago on a low-rent radio drama show in the Boston area) once wrote a song titled “Incremental Change.” It was a cappella, it lasted all of 25 seconds and its entire lyric consisted of the following:

I think incremental change is a good thing
I think incremental change is a good thing
Incremental change: good thing!

Software development was almost certainly not on his mind at the time of writing. But the sentiment holds across a surprisingly broad range of fields.

POSTSCRIPT: Paul Graham, whose Y Combinator funded Kiko, says the company spent so little money the failure’s no big deal: “This is not an expensive, acrimonious flameout like used to happen during the Bubble. They tried hard; they made something good; they just happened to get hit by a stray bullet.”

Filed Under: Business, Dreaming in Code, Personal, Software, Technology

Firefox leak plugged — open browser tabs spared

August 14, 2006 by Scott Rosenberg

Opera is my primary browser, but I increasingly use Firefox because some Ajax-y sites work better in it, and because sometimes (for testing and such) you need multiple browsers. Anyway, my Firefox, I found, kept getting awfully slow, and sometimes would seem to put a drag on my system. That didn’t make sense.

It turned out not to be the fault of the browser itself but instead of a memory leak in a plug-in called Session Saver that I’d installed so I could shut down and restart Firefox with the same set of open browser tabs. Thanks to the invaluable Lifehacker I discovered that (a) Session Saver was the culprit, and (b) I could replace it with a different plugin called Tab Mix Plus that offered more options and no memory leak.

Of course, Opera is the original session-saving champion. Since Opera stabilized this feature several years ago I have never lost my open tab set to a program crash or system freeze. And I’m afraid my work habits involve some pretty serious open tabbing. At the moment, for instance, I’ve got seven separate Opera windows with a total of 79 open tabs. The open tabs represent my “to read” queue, my “maybe I’ll blog about this” pile, and sometimes just my “gee, forgot to close that search” residue. In other words, the current browser session is my work, in progress. Losing it is not an option. Thankfully, I never need to think about that any more.

Filed Under: Software, Technology
