Wordyard

Hand-forged posts since 2002

“Evidence-based” software scheduling a la FogBugz

October 11, 2007 by Scott Rosenberg

Yesterday afternoon I hopped over to Emeryville to hear Joel Spolsky talk. He’s on the road promoting the new, 6.0 version of Fog Creek Software’s bug-tracking product. I’d paid little attention to the evolution of this product — Salon’s team long ago chose the open-source Trac, OSAF used Bugzilla, and when I first looked over FogBugz ages ago it looked like a perfectly serviceable Windows-based bug-tracking tool, no more.

Well, in the intervening time, the thing has gone totally Web-based and AJAX-ified, and it’s pretty cool just on those terms. It’s also grown a wiki and become more of a full-product-lifecycle project management tool, with integration for stuff like customer service ticket management.

Still, what’s most interesting about the new FogBugz is what Spolsky and his team are calling “Evidence Based Scheduling” (or — because everything must have an acronym — EBS). Now, anyone who’s read Dreaming in Code knows that I devote considerable verbiage to the perennial problem software teams face in trying to estimate their schedules. This is in many ways the nub of the software problem, the gnarly irreducible core of the difficulty of making software.

With EBS, FogBugz keeps track of each individual developer’s estimates (i.e., guesses) for how long particular tasks are going to take, then compares those estimates with the actual time the task took to complete. Over time it develops a sense of how reliable a particular developer is, and how to compensate for that developer’s biases (i.e., “Ramona consistently guesses accurately except that things always take her 20 percent longer than she guesses”).

With this information in place — and yes, that’s right, to use this system the developers have to keep track of how much time they spend on each task — the software can turn around and provide managers with a graph of ship-date likelihoods. You can’t say for sure, “The product will ship by March 31,” but you could say, “We have a 70 percent likelihood of shipping by March 31,” and then you can fiddle with variables (like “Let’s only fix priority one bugs”) and test out different outcomes.

Spolsky explained how FogBugz uses a Monte Carlo simulation algorithm to calculate these charts. (He provided a cogent explanation that my brain has now partially scrambled, but I think it’s like running a large number of random test cases on the data to generate a probability curve.) In any case, while I’m sure many managers will be interested in the prospect of a reliable software-project estimation tool, what I find intriguing is the chance that any reasonably wide deployment of FogBugz might yield some really valuable field data on software schedules.
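The idea is simple enough to sketch in a few lines of code. This is not FogBugz’s actual algorithm — just a minimal illustration of the approach Spolsky described, with made-up developers and numbers: compute each developer’s historical “velocity” (estimate divided by actual), then run many random trials, each time dividing the remaining estimates by velocities sampled from that developer’s own track record, to build a probability distribution of total remaining work.

```python
import random

# Hypothetical history: (estimated_hours, actual_hours) for completed tasks,
# keyed by developer. A velocity of 0.8 means tasks take 25% longer than
# the developer guessed.
history = {
    "ramona": [(4, 5), (8, 10), (2, 2.5), (6, 7.5)],
    "sanjay": [(3, 3), (5, 6), (10, 9)],
}

# Open tasks still to be done: (developer, estimated_hours).
remaining = [("ramona", 8), ("sanjay", 5), ("ramona", 4)]

def velocities(pairs):
    """Historical estimate/actual ratios for one developer."""
    return [est / actual for est, actual in pairs]

def simulate_total_hours(trials=10_000):
    """Monte Carlo: in each trial, divide every remaining estimate by a
    velocity drawn at random from that developer's history, and sum."""
    vel = {dev: velocities(pairs) for dev, pairs in history.items()}
    totals = []
    for _ in range(trials):
        total = sum(est / random.choice(vel[dev]) for dev, est in remaining)
        totals.append(total)
    return sorted(totals)

totals = simulate_total_hours()
# The 70th-percentile total is the amount of remaining work you can claim,
# with 70 percent confidence, the team will finish within.
p70 = totals[int(len(totals) * 0.70)]
print(f"70% confidence: remaining work <= {p70:.1f} hours")
```

Mapping the hours onto a calendar then gives the ship-date probability curve; tightening the scope (dropping low-priority bugs from `remaining`) and re-running shows how the curve shifts.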

The sad truth is that there’s very little good data out there. As far as I understand it, the CHAOS report is all self-reported (i.e., CTOs filling out surveys). To the extent that users of FogBugz are working from the hosted service rather than on their own installations of the software, the product will gradually produce a fascinating data set on programmer productivity. If that’s the case, I hope Spolsky and his company will make the data available to researchers. Of course, you’d want all the individual info to be anonymized and so on.

As I said, all of this depends on developers actually inputting how they spend their time. They’ll resist, of course — time sheets are for lawyers! Spolsky said Fog Creek has tried to reduce the pain in several ways: the software makes it easy to enter the info; you don’t worry about short interruptions and “bio-breaks,” i.e., bathroom runs (hadn’t heard that term before!); you track tasks only at the hourly or daily level; and you chunk all big tasks into pieces of two days or smaller. Still, I imagine that if evidence-based scheduling doesn’t catch on, this will be its point of failure. Otherwise, it sounds pretty useful.

UPDATE: Rafe Colburn is starting to use FogBugz 6.0 and has more comments…
[tags]software development, project management, joel spolsky, fogbugz[/tags]

Filed Under: Dreaming in Code, Software

How hard is a simple web app?

September 20, 2007 by Scott Rosenberg

On the continuing subject of “just how hard / easy is it to create a Web application, anyway?”, Aaron Swartz offers some thoughts, centered on the launch of his new Jottit service. Swartz seems to be on the other side of the fence from the Joel Spolsky essay that I wrote about yesterday. (Although I bet there’s a lot these two agree on, as well.)

There are two ways I look at it. One is: It took us five months to do that? And the other is: We did that in only five months?

When you look at what the site does, it seems pretty simple. It has few features, no complex algorithms, little gee-whiz gadgetry. It just takes your text and puts it on the Web. And considering how often I do that every day, it seems a bit odd that it took so long to create yet another way. And then I check the todo list.

As I’ve said, this is a site I wanted to get every little detail right on. And when you start sweating the small stuff, it’s frankly incredible just how much of it there is. Even our trivial site is made up of over two dozen different screens. Each one of those screens has to be designed to look and work just right on a wide variety of browsers, with a wide variety of text in them.

And that’s just making things look good — making them work right is much harder…

Read the whole thing, and then recall it the next time someone tells you how simple it is to throw up a Web 2.0 site. Of course, Swartz is avowedly trying to “get every little detail right.” I gather he is not a Big Ball of Mud kind of guy.
[tags]aaron swartz, jottit, web 2.0, software development, web applications[/tags]

Filed Under: Dreaming in Code, Software

Chandler Preview: from dream toward reality

September 20, 2007 by Scott Rosenberg

It feels like only yesterday I was staring in disbelief at the first hardcover copies of Dreaming in Code, but now we’re getting the paperback edition ready (for release in early 2008). I’d always wanted the chance to write a new postscript to the book, bringing the Chandler story up to date. The timing turned out to be fortuitous: the Open Source Applications Foundation released what they’re calling the Preview edition of Chandler last week.

I wrote a little about the saga of Chandler Preview back in January, when the OSAF team hoped to have a release out in April. As that date slipped steadily, I glanced at the calendar nervously, because I knew that sooner or later my publisher would have to close the door on any additions to the paperback. But the timing worked out: OSAF got its Preview out just in time for me to see and use it before I wrote up the new material.

For those of you who have been following the work on Chandler, Preview is what OSAF formerly called Chandler 0.7. After 0.6 shipped near the end of 2005 Mitch Kapor and the OSAF developers decided that they would plan the next big release to be a fully usable, if not feature-complete, sharable calendar and task manager with limited e-mail. You can download the result and try it out yourself.

Over the years Chandler has expanded into a small constellation of products — the desktop application, a server (formerly called Cosmo, now known as Chandler Hub), and a web interface to the server. OSAF now offers free accounts on its own Chandler Hub that you can use to sync your desktop and Web data.

On the one hand, of course, Chandler is way later than even seemed possible back in 2002 when it was first announced. How and why that occurred is the heart of my book. So much has happened on the Web and in the software industry since then that people ask, reasonably, what Chandler can possibly do that they’re not able to do already with Google Calendar or any of the other calendar/e-mail/task management offerings out there.

One big tech-industry story this week was Yahoo’s $350 million acquisition of Zimbra — an open-source Outlook replacement that started well after Chandler and delivered working software a lot sooner. Zimbra is impressive and full of nifty features, and its focus on solving a lot of the cellphone-and-handheld coordination issues for people was smart. But it didn’t try to introduce a new way of managing one’s information.

For better and worse, Chandler did. In this area, it aimed higher than Zimbra or most of the other competition; and its grand reach plainly exceeded its grasp. The Preview edition’s Dashboard provides a glimpse of the different way of organizing one’s work that Kapor and the Chandler designers propose. I don’t think it’s either as accessible for newcomers or as tractable for initiates as it needs to be. But neither is it simply an Outlook retread.

Anyone who has tried to organize the work of a small group with software knows that — even with Web 2.0 and Ajax and the best stuff we can throw at the problem in 2007 — we’ve only barely begun to leverage what computers can do in this area. Chandler deserves credit for acknowledging this and setting out to do better. Its setbacks can be chalked up in part to the choices and mistakes its developers made along their long road; but they are also a sign of just how tough the problem really is.

I’m still not ready to adopt Chandler for my own everyday use. But I’m not especially happy with what I am using, either. That means there’s still room for the sort of program Chandler has always been intended to be. The Preview release isn’t yet that program. But for the first time it’s moved close enough for anyone to play with, and see what it might someday become.
[tags]chandler, osaf, open source applications foundation[/tags]

Filed Under: Dreaming in Code, Software

Spolsky on Web app development

September 19, 2007 by Scott Rosenberg

Joel Spolsky’s latest essay, “Strategy Letter VI,” offers a smart analogy between the desktop software wars of the 1980s — when companies like Lotus bet on producing code that could run on the slow, small-memory machines of the present, only to lose as PCs quickly got faster — and the Web-based software wars of today.

I think the following passage about Web-app development today could even be read as a (partial, qualified) endorsement of Big Ball of Mud:

The developers who put a lot of effort into optimizing things and making them tight and fast will wake up to discover that effort was, more or less, wasted, or, at the very least, you could say that it “conferred no long term competitive advantage,” if you’re the kind of person who talks like an economist.

The developers who ignored performance and blasted ahead adding cool features to their applications will, in the long run, have better applications.

[tags]joel spolsky, web development[/tags]

Filed Under: Code Reads, Dreaming in Code, Software

Code Reads #12: “Big Ball of Mud”

September 16, 2007 by Scott Rosenberg

This is the twelfth edition of Code Reads, a series of discussions of some of the central essays, documents and texts in the history of software. You can go straight to the comments and post something if you like. Here’s the full Code Reads archive.

“Big Ball of Mud,” a 1999 paper by Brian Foote and Joseph Yoder (pdf), sets out to anatomize what it calls “the enduring popularity” of the pattern of software construction named in its title, “this most frequently deployed of software architectures,” “the de-facto standard software architecture,” “the architecture that actually predominates in practice”: a “haphazardly structured, sprawling, sloppy, duct-tape and baling wire, spaghetti-code jungle.”

This is dire stuff, and when I first glanced at “Big Ball of Mud” I thought I was in for an amusing satire — perhaps a parody of the “software patterns” school. Instead — and what I found most fascinating about the paper — the authors actually walk a fine and narrow line between a Swiftian embrace of the mud-splat school of programming and the sort of “we know better than all those idiots” arrogance that’s found in a lot of the software literature.

Despite the best efforts of “best practices” advocates and methodology gurus, mud is everywhere you look in the software field. This cannot be a coincidence or represent mere laziness. The authors ask, “What are the people who are building [Big Balls of Mud] doing right?”

Their answer: “People build big balls of mud because they work. In many domains, they are the only things that have been shown to work.”

Filed Under: Code Reads, Dreaming in Code, Software

Nothing to fear but complexity itself

September 14, 2007 by Scott Rosenberg

Over my many years at Salon — in my role as the geekiest of our editorial management team — I found myself often being asked whether some particular problem we were having with our site or our email system or something else might be the result of “hackers.”

Most of the time, I spared my inquisitors the lecture on the history and proper use of that term. Except in a tiny number of cases where there was specific evidence suggesting at least the possibility of some sort of foul play, I’d simply remind everyone how many different things could go wrong on any digital network, argue that the odds favored the likelihood of some sort of malfunction rather than malfeasance, and suggest that everyone should relax (except for our sysadmins, of course, who were busy trying to diagnose the problem).

Bugs are many, break-ins are few. John Schwartz had a good piece in the Times earlier this week offering further reinforcement of that perspective, looking specifically at the transportation system and the slow-motion train wreck of the effort to computerize our voting systems.

…Problems arising from flawed systems, increasingly complex networks and even technology headaches from corporate mergers can make computer systems less reliable. Meanwhile, society as a whole is growing ever more dependent on computers and computer networks, as automated controls become the norm for air traffic, pipelines, dams, the electrical grid and more.

“We don’t need hackers to break the systems because they’re falling apart by themselves,” said Peter G. Neumann, an expert in computing risks and principal scientist at SRI International, a research institute in Menlo Park, Calif.

It was this tension between our social dependence on complex software systems and our continuing inability to produce software in a reliable way that motivated me to write Dreaming in Code.
[tags]complexity, john schwartz, software development, dreaming in code[/tags]

Filed Under: Dreaming in Code, Software

Chatting with Josh Kornbluth

September 6, 2007 by Scott Rosenberg

Last Saturday morning I visited the nearby KPFA studios to chat with my old friend Josh Kornbluth, who was guest hosting the “Morning Talkies” show. It was a lot of fun sharing a radio studio with Josh again — we’d collaborated many years ago, in Cambridge, on an ebullient but slapdash variety show that was plagued by all sorts of live-radio mishaps. Something about our on-air reunion seemed to summon that spirit; as we started our interview, someone barged into the studio and began hauling boxes out on a dolly. Before she was done she’d even tried to nab my tote bag. It threw us off track for a spell, but we managed to regain our composure and have a great chat about Dreaming in Code, blog-reading addiction, and how to manage one’s informational diet.

You can listen to the show here. The same hour features two other great guests: Gray Brechin, author of “Imperial San Francisco,” talking about the Living New Deal project; and Berkeley philosophy professor John Campbell on the nature of perception and questions like, do colors have any reality independent of our individual perceptions? (I’m a little late posting about this — busy round here right now — but better late than never!)
[tags]dreaming in code, kpfa, josh kornbluth[/tags]

Filed Under: Dreaming in Code, Personal

Web 2.0’s five-year development cycle

August 27, 2007 by Scott Rosenberg

As David Bowie once sang:

We’ve got five years — my brain hurts a lot
We’ve got five years — that’s all we’ve got

One of the arguments I often hear raised against Dreaming in Code’s contention that “software is hard” is what I call the “Web apps solve all our problems” stance. In this view, the Web 2.0 wave is not just about user convenience and nimble companies — it represents the final triumph over the beast of software-project delays and headaches, thanks to the ease of prototyping, the fast upgrade cycle and the tight feedback loop of user input characteristic of this approach.

No sane observer denies the importance of this trend. But I’m always a little skeptical of the pollyanna-ish view that moving our software onto the network and into the browser puts all of our old problems out to pasture.

Tonight as I caught up on my feeds I noticed two items from TechCrunch that resonated. First, Yahoo has taken its revised Web-mail interface out of beta, after years of development. (Farhad Manjoo at Salon’s Machinist has a good review.) Yahoo’s new mail system is based around that of Oddpost — a small startup that pioneered the “Ajax”-style Web interface back in 2002 before being acquired by Yahoo. I remember looking at it then and thinking, wow, this is a big deal. And it was, as the concept of updating data within a browser window without refreshing the entire page quickly spread over the next several years. But it took Oddpost from 2002 to 2007 to mature into Yahoo Mail.

Meanwhile, another key Web application that started up only a little after Oddpost, Bloglines, has introduced the first major upgrade to its interface since — well, since it began. Bloglines got acquired by Ask Jeeves years ago, and has had some problems keeping up with its masses of users and data. Even now, its new design — which looks very nice on first glance — is just entering a beta phase.

Put this together and it sounds like, after the phase of “gee whiz, we got a great idea, let’s buy a domain name and put it out there” — once reality kicks in — major Web applications have an upgrade cycle of once every five years or so. Small startups get acquired and face organizational integration challenges; small applications face the uphill struggle to scale for masses of users; and services sit through long “beta” periods to test interface choices, iron out bugs and see how they can handle running under load.

I’m not knocking Yahoo Mail or Bloglines here. But this is sobering data for those who argue that the advent of Web-based apps and services drives a silver bullet through the heart of software’s problems. Five years is no sprint. Funnily enough, it’s roughly the timespan of Windows Longhorn/Vista — or Chandler, the program I wrote about in Dreaming in Code.
[tags]software development, web 2.0, bloglines, oddpost, yahoo mail[/tags]

Filed Under: Dreaming in Code, Software, Technology

Bridges and code

August 24, 2007 by Scott Rosenberg

Dreams of improvement in the software development field often take the form of “engineering envy,” and are frequently expressed, as I wrote in Dreaming in Code, with a fist pounded on the table and a cry of, “Why can’t we build software the way we build bridges?” In the wake of the recent bridge collapse in Minnesota we’re reminded that this comparison cuts in two directions.

Bridge-building may be a mature field, but it, too, still has its pitfalls and failures. Programmers have to worry about bit rot and security holes and edge-case bugs; civil engineers must make sure their formulas account for corrosion from bird poop.

With all this in mind, I read David Billington’s op-ed piece in last week’s New York Times, “One Bridge Doesn’t Fit All,” with great interest. One of the ways many programmers wish they could be more like bridge-builders is in the way so much of the latter’s work now consists of reusable forms and designs. Programming, by contrast, has yet to achieve the “code is Lego” dream — or rather, though developers have greatly benefited from what Robert Glass calls “reuse in the small” (pulling small bits of code, libraries of routines or objects, off the shelf), “reuse in the large” remains mostly out of reach.

Billington’s argument is that design-by-committee procedures and reuse of one-size-fits-all plans have impoverished bridge-building in America. He wants to see bridge projects led by “one engineer who makes the conceptual design, understands construction and has a strong aesthetic motivation and personal attachment to the work.” While many programmers yearn to increase the ratio of science to art in their field, Billington is urging the bridge-builders in the opposite direction.

It’s not easy to balance demands for safety and beauty and thrift. “American bridge engineering largely overlooks that efficiency, economy and elegance can be mutually reinforcing ideals,” Billington writes. “This is largely because engineers are not taught outstanding examples that express these ideals.” He wants engineers to study great examples of bridge design just as Richard Gabriel wants programmers to study great classics of code. (The new book Beautiful Code opens some doors in that direction.)

While the civil engineers seek to learn from the Minnesota collapse, software developers can, for a moment at least, set aside their “bridge envy,” and think about some of the ways the two fields resemble each other. For instance: Disasters can never be eliminated. But at least we can keep improving our ability to postpone them.
[tags]bridges, software development, programming, dreaming in code, david billington[/tags]

Filed Under: Dreaming in Code, Software

Berkeley talk, Chandler, Barcamp, Citizen Josh

August 17, 2007 by Scott Rosenberg

I have been hunkered down getting my life (and a mountain of notes and research) in order. Here’s a grab-bag of items:

  • On Wednesday I spent the afternoon at UC/Berkeley at the kind invitation of Bill Allison, and talked with a thoughtful, interested group of faculty, administrators and IT people about Dreaming in Code and the wider topic of software’s innate difficulties. Berkeley, along with a number of other institutions, is about to kick off an ambitious project to build a new platform for much of its underlying digital infrastructure. Chandler, whose slow progress Dreaming in Code chronicled, has a university tie-in as well, and these folks are smart and foresightful enough to want to try to understand what pitfalls they might be facing.

    Too often, groups embark on big new software ventures as if they are the first pioneers ever to walk down their particular path, when in fact most of the field is full of well-worn roads (and the roads usually lead into one or another ditch). So hats off to my Berkeley neighbors for wanting to study an at least partial map of the terrain.

  • Speaking of Chandler, the folks at OSAF are closing in on a major release, called Preview, later this month. I’ll be writing more about it here as it unfolds.
  • Barcamp Block: This marks the second anniversary of Barcamp, a self-organizing conference for geeks, startup companies and related phenomena. It’s down in Palo Alto this coming weekend; it looks like great fun with interesting people, and I’m planning to be there, at least for the first day.
  • Also here in Berkeley, my friend Josh Kornbluth‘s great show “Citizen Josh” (I wrote about it when it opened) is settling in for a three-week run over at Berkeley Rep. Worth seeing if you missed it across the Bay when it played the Magic Theater earlier this year.

[tags]uc berkeley, chandler, barcamp, josh kornbluth[/tags]

Filed Under: Culture, Dreaming in Code, Events, Software
