Yesterday afternoon I hopped over to Emeryville to hear Joel Spolsky talk. He’s on the road promoting the new 6.0 version of Fog Creek Software’s bug-tracking product. I’d paid little attention to the evolution of this product — Salon’s team long ago chose the open-source Trac, OSAF used Bugzilla, and when I first looked over FogBugz ages ago it looked like a perfectly serviceable Windows-based bug-tracking tool, no more.
Well, in the intervening time, the thing has gone totally Web-based and AJAX-ified, and it’s pretty cool just on those terms. It’s also grown a wiki and become more of a full-product-lifecycle project management tool, with integration for stuff like customer service ticket management.
Still, what’s most interesting about the new FogBugz is what Spolsky and his team are calling “Evidence Based Scheduling” (or — because everything must have an acronym — EBS). Now, anyone who’s read Dreaming in Code knows that I devote considerable verbiage to the perennial problem software teams face in trying to estimate their schedules. This is in many ways the nub of the software problem, the gnarly irreducible core of the difficulty of making software.
With EBS, FogBugz keeps track of each individual developer’s estimates (i.e., guesses) for how long particular tasks are going to take, then compares those estimates with the actual time each task took to complete. Over time it develops a sense of how reliable a particular developer is, and how to compensate for that developer’s biases (e.g., “Ramona’s guesses are consistent, except that things always take her 20 percent longer than she estimates”).
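The bookkeeping behind that idea is simple enough to sketch. Here’s a rough Python illustration of my own (the data and the names are made up, and this is not Fog Creek’s code — just the general “track estimate versus actual, derive a correction factor per developer” notion):

```python
from collections import defaultdict

# Toy reconstruction of the EBS bookkeeping idea -- not FogBugz code.
# Each completed task records who did it, the original estimate, and the actual time.
history = [
    ("ramona", 8.0, 9.6),   # estimated 8 hours, took 9.6 (20 percent over)
    ("ramona", 4.0, 4.8),
    ("dave",   6.0, 12.0),  # chronically optimistic
    ("dave",   2.0, 5.0),
]

# "Velocity" here is estimate / actual, so someone who runs 20 percent over
# has a velocity of roughly 0.83.
velocities = defaultdict(list)
for developer, estimate, actual in history:
    velocities[developer].append(estimate / actual)

def corrected_estimate(developer, new_estimate):
    """Scale a new estimate by the developer's average past velocity."""
    past = velocities[developer]
    if not past:
        return new_estimate
    average_velocity = sum(past) / len(past)
    return new_estimate / average_velocity

print(corrected_estimate("ramona", 10.0))  # about 12 hours once her bias is factored in
```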
With this information in place — and yes, that’s right, to use this system the developers have to keep track of how much time they spend on each task — the software can turn around and provide managers with a graph of ship-date likelihoods. You can’t say for sure, “The product will ship by March 31,” but you could say, “We have a 70 percent likelihood of shipping by March 31,” and then you can fiddle with variables (like “Let’s only fix priority one bugs”) and test out different outcomes.
Spolsky explained how FogBugz uses a Monte Carlo simulation algorithm to calculate these charts. (He provided a cogent explanation that my brain has now partially scrambled, but I think it’s like running a large number of random test cases on the data to generate a probability curve.) In any case, while I’m sure many managers will be interested in the prospect of a reliable software-project estimation tool, what I find intriguing is the chance that any reasonably wide deployment of FogBugz might yield some really valuable field data on software schedules.
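To make that a little more concrete, here’s my own toy sketch of the general Monte Carlo technique as I understood it — emphatically a guess at the approach, not what FogBugz actually does internally. Each simulated future divides every remaining estimate by a randomly chosen velocity from that developer’s history; run thousands of futures, sort the totals, and you can read probabilities straight off the percentiles:

```python
import random

# Toy Monte Carlo sketch of the general technique -- my guess, not Fog Creek's code.
# Each developer's list holds velocities (estimate / actual) from past tasks.
velocities = {
    "ramona": [0.83, 0.84, 0.82],
    "dave":   [0.50, 0.40, 0.65],
}

remaining_tasks = [
    ("ramona", 16.0),  # developer, estimated hours left
    ("dave",   10.0),
    ("dave",    6.0),
]

def simulate_once():
    """One possible future: divide each estimate by a randomly chosen past velocity."""
    return sum(estimate / random.choice(velocities[developer])
               for developer, estimate in remaining_tasks)

# Run many random futures and sort the totals to read off percentiles.
runs = sorted(simulate_once() for _ in range(10_000))
p70 = runs[int(0.7 * len(runs))]
print(f"70 percent chance the remaining work fits within {p70:.1f} hours")
```

Map those hour totals onto the calendar and you get the kind of “70 percent likelihood of shipping by March 31” curve described above.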
The sad truth is that there’s very little good data out there. As far as I understand it, the CHAOS report is all self-reported (i.e., CTOs filling out surveys). To the extent that users of FogBugz are working from the hosted service rather than on their own installations of the software, the product will gradually produce a fascinating data set on programmer productivity. If that’s the case, I hope Spolsky and his company will make the data available to researchers. Of course, you’d want all the individual info to be anonymized and so on.
As I said, all of this depends on developers actually inputting how they spend their time. They’ll resist, of course — time sheets are for lawyers! Spolsky said Fog Creek has tried to reduce the pain in several ways: the software makes it easy to enter the info; you don’t worry about short interruptions and “bio-breaks,” i.e., bathroom runs (hadn’t heard that term before!); you track tasks only at the hourly or daily level; and you chunk all big tasks down into pieces of two days or less. Still, I imagine that if evidence-based scheduling doesn’t catch on, this will be its point of failure. Otherwise, it sounds pretty useful.
UPDATE: Rafe Colburn is starting to use FogBugz 6.0 and has more comments…