Wordyard

Hand-forged posts since 2002


Perfect software? More on Cantrill and Dreaming

November 13, 2007 by Scott Rosenberg

Bryan Cantrill has now fleshed out his critique of Dreaming in Code that I recently addressed. I want to thank him for returning to the topic and giving it serious consideration. I’m going to respond at length because I think, after clearing away a lot of smaller points, we’ve actually found an interesting point to dispute.

As far as I can tell, Cantrill is now saying that the problem with Dreaming is that it starts from the notion that “software is hard” and then explains why it’s hard by exploring what makes it unique — and he would have preferred a book that started from the notion that software is unique and then explored why its uniqueness makes it hard. At some point this becomes pretty abstruse, and I’ll leave it to those of you who want to compare what Cantrill says with what’s in the book to weigh our different perspectives.

I do want to correct his statement that I “picked a doomed project” from the start, as if that was my intention, in order to support a pessimistic view of software. At the time I began following Chandler, it had high hopes and a lot of enthusiastic support. I chose it because I cared about the problems it set out to solve, I thought the people involved were interesting, and I thought it had a reasonable chance of success. 20/20 hindsight leads Cantrill to dismiss Chandler as an ill-fated “garbage barge” from the start. But that’s hardly how it looked in 2002. It attracted people with considerable renown in the field, like Andy Hertzfeld and Lou Montulli, as well as other, equally smart and talented developers whose names are not as widely known. Nor, even at this late date, do I consider Chandler in any way to be definitively “doomed” — though certainly it has failed to live up to its initial dreams. There are too many examples of projects that took long slogs through dark years and then emerged to fill some vital need for anyone to smugly dismiss Chandler, even today.

Also, I need to say that my interest in the difficulty of software did not emerge as some ex post facto effort to justify the problems that OSAF and Chandler faced. In fact, as I thought I wrote pretty clearly, it emerged from my own experience in the field at Salon, where we had our own experience of a software disaster at the height of the dotcom boom.

Finally, in substantiating his criticism of what he calls my “tour de crank” of software thinkers, Cantrill still doesn’t make a lot of sense to me. In the two chapters near the end of the book that depart from Chandler to explore wider issues, I discuss the work and ideas of Edsger Dijkstra, Watts Humphrey, Frederick Brooks, Ward Cunningham, Kent Beck, Joel Spolsky, Jason Fried and Nick Carr (in Chapter 9), and Charles Simonyi, Alan Kay, Jaron Lanier, Richard Gabriel and Donald Knuth (in Chapter 10). (Marvin Minsky and Bill Joy, whom Cantrill mentions, are each cited only in passing.) Anyone might view some number of these figures skeptically — I do, too — but cranks, all?

Anyway, these are side issues. It’s later in Cantrill’s argument that we get to the heart of a real disagreement that’s worth digging into. Despite the evident pragmatism that’s on display in his Google talk, halfway through his post Cantrill reveals himself as a software idealist.

…software — unlike everything else that we build — can achieve a timeless and absolute perfection…. software’s ability to achieve perfection is indeed striking, for it makes software more like math than like traditional engineering domains.

And once software has achieved this kind of perfection, he continues, it “sediments into the information infrastructure,” becoming a lower-level abstraction for the next generation to build on.

Ahh! Now we’re onto something substantial. Cantrill’s view that aspects of software can achieve this state of “perfection” is tantalizing but, to me, plain wrong. He makes a big point of insisting that he’s not talking simply about “software that’s very, very good” or “software that’s dependable” or even “software that’s nearly flawless”; he really means absolutely perfect. To me, this insistence — which is not at all incidental, it’s the core of his disagreement with me — is perplexing.

Certainly, the process by which some complex breakthrough becomes a new foundational abstraction layer in software is real and vital; it’s how the field advances. (Dreaming in Code, I think, pays ample attention to this process, and at least tries to make it accessible to everyday readers.) But are these advances matters of perfection? Can they be created and then left to run themselves “in perpetuity” (Cantrill’s phrase)?

On its own, an algorithm is indeed pure math, like Cantrill says. But working software instantiates algorithms in functional systems. And those systems are not static. (Nor are the hardware foundations they lie upon — and even though we’re addressing software, not hardware, it’s unrealistic to assume that you will never need to alter your software to account for hardware changes.) Things change all the time. This mutability of the environment is not incidental; it’s an unavoidable and essential aspect of any piece of software’s existence. (We sometimes jokingly refer to this as “software rot.”) Some of those changes break the software — if not over a matter of weeks or months, then over years and decades. It is then, I think, no longer accurate to call it “perfect” — unless you want to take the pedantic position that the software itself remains “perfect,” it’s the accursed world-around-the-software that’s broken!

This, of course, is a version of Joel Spolsky’s Law of Leaky Abstractions argument, which I present at length in Dreaming in Code: “perfect” abstractions that you can ignore are wonderful until something happens that makes it impossible to keep ignoring them. Such things happen with predictable regularity in the software world I know. I don’t know how you can discuss software as if this issue does not lie at its heart.
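A minimal concrete instance of a leaky abstraction (my own illustration, not an example drawn from the book or from Spolsky’s essay): Python floats present themselves as ordinary decimal numbers, right up until binary floating point shows through.

```python
# Floating-point arithmetic looks like plain decimal math -- until
# the underlying binary (IEEE 754) representation leaks into a result.
total = 0.1 + 0.2
print(total == 0.3)  # False: the abstraction leaks
print(total)         # 0.30000000000000004
```

The abstraction works fine until a comparison fails, at which point you have to climb down a level and think about binary representation: exactly the kind of trip down the ladder of abstraction described above.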

The strange thing about this disagreement is that, as far as I can tell, Cantrill is — like the engineers I know a lot better than I know him — a hands-on kind of guy. And DTrace, the project he’s known for, is by most accounts a highly useful tool for diagnosing the myriad imperfections of real-world software systems — for navigating those trips down the ladder of abstraction that software developers must keep making.

All of which leaves me scratching my head, wondering where the world is in which Cantrill has found software of “absolute perfection,” and which programs it is that have achieved such a pristine state, and how they — unlike all other programs in existence — escape the creep of software rot.
[tags]bryan cantrill, software, software development, dreaming in code[/tags]

Filed Under: Dreaming in Code, Software

Does “Dreaming in Code” suck?

November 2, 2007 by Scott Rosenberg

Early in my online career — this goes back to around 1990 — I learned a basic principle about off-the-cuff criticism online: No flame flares in a vacuum. In other words, don’t be too glib with your put-downs — because before you know it, the person you’re putting down will find your comment and call you on it.

I recalled this today when I encountered (thanks to a mention on Sumana’s blog) a drive-by attack on Dreaming in Code. In the opening couple of minutes of a talk at Google this past summer, it seems, a Sun engineer named Bryan Cantrill declared, with some vociferousness, that my book — I quote — “sucks.” This judgment is now preserved in the perpetuity that is Google Video.

Now, Cantrill is one of the creators of DTrace, a popular, award-winning and innovative tool for troubleshooting Solaris, and my hat is instantly removed to anyone who bears responsibility for a successful piece of software. I am also not particularly shocked to hear that a smart programmer didn’t like my book; he’s neither the first nor the last in that group.

What’s just plain puzzling is exactly what Cantrill has to say in his handful of complaints about Dreaming in Code. Because every point he makes in explaining the basis for the book’s suckiness turns out to be a point that I have made at length in the book itself and in my talks this year about the book — including at Google, several months before Cantrill’s appearance there. Of course I’m not suggesting that he borrowed from me — he almost certainly hasn’t heard my presentation! But I am puzzled how he could so completely have missed my argument, and misrepresented my position, when it seems to be so close to his own.

As best I can make out, Cantrill believes that Dreaming in Code fails to acknowledge that software is uniquely different from other creative endeavors because (a) it’s not a physical entity; (b) we can’t see it; (c) it’s really an abstraction. These factors cause all the analogies that we draw to things like building bridges to break down. Cantrill describes himself as a “finisher” of books and I’ll take his word, but I’m flummoxed how anyone who has finished the book can knock it for failing to understand or express this view of software.

The critique gets sketchy from here on in; Cantrill draws some sort of analogy between Dreaming in Code and Apocalypse Now (a comparison I’ll gladly accept — it’s a reference I make in the book myself) and suggests that I got “hoodwinked” by “every long-known crank in software” (the lone “crank” cited is Alan Kay).

It’s true that the final section of the book surveys both the nuts-and-bolts methodologies that try to alleviate software’s practical difficulties and a whole gallery of software philosophers from both the mainstream and the fringes — people like Kay, Charles Simonyi, Donald Knuth and Jaron Lanier. If even discussing these people’s ideas constitutes “hoodwinking” I guess I’m guilty.

From here Cantrill wanders into his own case for software’s uniqueness, which as far as I can tell is nearly identical to the one I make in Dreaming in Code. “All the thinking around software engineering has come from the gentlemanly pursuit of civil engineering,” Cantrill says. “That’s not the way software is built.” Indeed.

So I’m not sure what the complaint is. Maybe analogies are so odious to Cantrill that he feels they should not even be discussed, even if the discussion is intended to expose their limitations. Maybe the notion of software’s uniqueness and its intractability to old-fashioned physical-world engineering principles seems so obvious to Cantrill that he is appalled anyone would even bother to explore it in a book. But there’s still an enormous amount of attention and money being applied to the effort to transform software development into a form of reliable engineering. I found thoughtful arguments on several different sides of the matter and thought it was worth the ink, although my own conclusion — that software is likely to remain “hard” and not become an easily predictable undertaking — is pretty clear.

Anyway: Go ahead and tell me my book sucks — I can take it! But don’t tell me that it sucks because it fails to acknowledge an argument that actually forms its very heart. Say that and, well, I’m just not going to be able to resist a retort.
[tags]dreaming in code, bryan cantrill, software engineering[/tags]

Filed Under: Dreaming in Code, Software

Code Reads #13: “The Inevitable Pain of Software Development”

October 31, 2007 by Scott Rosenberg

This is the thirteenth edition of Code Reads, a series of discussions of some of the central essays, documents and texts in the history of software. You can go straight to the comments and post something if you like. Here’s the full Code Reads archive.

This month’s paper, Daniel Berry’s “The Inevitable Pain of Software Development, Including of Extreme Programming, Caused by Requirements Volatility,” is a sort of update and latter-day restatement of Frederick Brooks’s classic “No Silver Bullet” argument — that the traits of software development work that make it difficult are inherent in the enterprise and extremely unlikely to be vanquished by some breakthrough innovation.

Berry is, as he admits, not the first to locate the source of software’s essential difficulty in “requirements volatility” — unpredictable fluctuations in the list of things that the software being built is expected to be able to do (including variations in user behavior scenarios, data types and all the other factors that a working piece of software must take into account). Read any development manual, listen in on any software team’s gripe session and you will hear curses directed at “changing requirements.”

Every new approach to improving the software development process includes a proposed method for taming this beast. These methods all fail, Berry maintains, leaving software development just as much of a “painful” exercise as it was before their application.

In each case, Berry locates this failure in some aspect of or practice dictated by a particular method that programmers find to be too much of a pain to actually perform.

Every time a new method that is intended to be a silver bullet is introduced, it does make many parts of the accidents easier. However, as soon as the method needs to deal with the essence or something affecting or affected by the essence, suddenly one part of the method becomes painful, distasteful, and difficult, so much so that this part of the method gets postponed, avoided and skipped….

Each method, if followed religiously, works… However, each method has a catch, a fatal flaw, at least one step that is a real pain to do, that people put off. People put off this painful step in their haste to get the software done and shipped out or to do more interesting things, like write more new code.

So that, for instance, the method of “requirements engineering” (exhaustively “anticipate all possible requirements and contingencies” before coding) offers many benefits, but “people seem to find haggling over requirements a royal pain.” Also, it demands that “people discover requirements by clairvoyance rather than by prototyping.”

Similarly, Extreme Programming (XP) depends on programmers writing test cases first. That’s a step that in itself seems to be painful for many developers. When requirements change, XP calls for frequent refactoring of existing code. “Refactoring itself is painful,” Berry notes. “Furthermore, it may mean throwing out perfectly good code whose only fault is that it no longer matches the architecture, something that is very painful to the authors of the code that is changed. Consequently, in the rush to get the next release out on time or early, refactoring is postponed and postponed, frequently to the point that it gets harder and harder.”

Berry goes right down the list and confirms that the pain he diagnoses is a condition universal in the field.

The situation with software engineering methods is not unlike that stubborn chest of drawers in the old slapstick movies; a shlimazel pushes in one drawer and out pops another one, usually right smack dab on the poor shlimazel’s knees or shins. If you find a new method that eliminates an old method’s pain, the new method will be found to have its own source of pain.

Berry’s paper concludes with an alternative version of the principle that in Dreaming in Code I dubbed, tongue-in-cheek, Rosenberg’s Law:

To the extent that we can know a domain so well that production of software for it becomes almost rote, as for compiler production these days, we can go the engineering route for that domain, to make building software for it as systematic as building a bridge or a building. However, for any new problem, where the excitement of innovation is, there is no hope of avoiding relentless change as we learn about the domain, the need for artistry, and the pain.

Berry writes about the creation of software as much from the vantage of psychology as from that of engineering, and that gives his observations a dimension of bracing realism. In “The Inevitable Pain of Software Development” I found a willingness to examine the actual behavior of working programmers that’s rare in the software-methodology literature.

Too many authors are all too eager to pronounce what developers should do without considering the odds that any particular developer actually will do these things. Berry is a realist, and he keeps asking us to consider the cascade of consequences that flows from each method’s weak spots.

His case against “pain” seems not to be a naive attitude of trying to take a process that’s fundamentally difficult and somehow conjure the hardness right out of it. Instead, he asks us to note carefully the location of the “pain points” in any particular approach to software creation — because, given human nature, these are its most likely points of failure.
[tags]code reads, software development, software methodologies, daniel berry[/tags]

Filed Under: Code Reads, Dreaming in Code, Software

“Evidence-based” software scheduling a la FogBugz

October 11, 2007 by Scott Rosenberg

Yesterday afternoon I hopped over to Emeryville to hear Joel Spolsky talk. He’s on the road promoting the new, 6.0 version of Fog Creek Software’s bug-tracking product. I’d paid little attention to the evolution of this product — Salon’s team long ago chose the open-source Trac, OSAF used Bugzilla, and when I first looked over FogBugz ages ago it looked like a perfectly serviceable Windows-based bug-tracking tool, no more.

Well, in the intervening time, the thing has gone totally Web-based and AJAX-ified, and it’s pretty cool just on those terms. It’s also grown a wiki and become more of a full-product-lifecycle project management tool, with integration for stuff like customer service ticket management.

Still, what’s most interesting about the new FogBugz is what Spolsky and his team are calling “Evidence Based Scheduling” (or — because everything must have an acronym — EBS). Now, anyone who’s read Dreaming in Code knows that I devote considerable verbiage to the perennial problem software teams face in trying to estimate their schedules. This is in many ways the nub of the software problem, the gnarly irreducible core of the difficulty of making software.

With EBS, FogBugz keeps track of each individual developer’s estimates (i.e., guesses) for how long particular tasks are going to take, then compares those estimates with the actual time the task took to complete. Over time it develops a sense of how reliable a particular developer is, and how to compensate for that developer’s biases (i.e., “Ramona consistently guesses accurately except that things always take her 20 percent longer than she guesses”).
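As a rough sketch of that bookkeeping (the data model, names and numbers here are my own hypothetical illustration, not FogBugz’s actual implementation):

```python
def velocity_history(completed_tasks):
    """Estimate/actual ratios for one developer's finished tasks.

    A ratio below 1.0 means the work took longer than estimated;
    the accumulated history is the "evidence" in Evidence Based
    Scheduling as described above.
    """
    return [est / actual for est, actual in completed_tasks if actual > 0]

# Ramona: accurate guesser, except everything takes her ~20 percent
# longer than she thinks -- so her ratios cluster just above 0.8.
ramona = [(10, 12.0), (5, 6.1), (8, 9.5)]  # (estimated, actual) hours
print([round(r, 2) for r in velocity_history(ramona)])  # [0.83, 0.82, 0.84]
```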

With this information in place — and yes, that’s right, to use this system the developers have to keep track of how much time they spend on each task — the software can turn around and provide managers with a graph of ship-date likelihoods. You can’t say for sure, “The product will ship by March 31,” but you could say, “We have a 70 percent likelihood of shipping by March 31,” and then you can fiddle with variables (like “Let’s only fix priority one bugs”) and test out different outcomes.

Spolsky explained how FogBugz uses a Monte Carlo simulation algorithm to calculate these charts. (He provided a cogent explanation that my brain has now partially scrambled, but I think it’s like running a large number of random test cases on the data to generate a probability curve.) In any case, while I’m sure many managers will be interested in the prospect of a reliable software-project estimation tool, what I find intriguing is the chance that any reasonably wide deployment of FogBugz might yield some really valuable field data on software schedules.
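The whole scheme can be sketched in a few lines — with the caveat that this is my reading of the talk, not FogBugz’s actual algorithm: each trial divides every remaining estimate by a velocity ratio drawn at random from the history, and the sorted trial totals become the probability curve.

```python
import random

def simulate_totals(estimates, ratios, trials=10_000):
    """Monte Carlo: each trial yields one plausible project total,
    in hours, by scaling every remaining estimate by a ratio drawn
    at random from the developer's past estimate/actual history."""
    return sorted(
        sum(est / random.choice(ratios) for est in estimates)
        for _ in range(trials)
    )

def confidence(totals, deadline_hours):
    """Fraction of simulated futures that finish by the deadline."""
    return sum(t <= deadline_hours for t in totals) / len(totals)

# Hypothetical history and backlog.
history = [0.85, 0.80, 0.84, 0.78, 0.90]  # past estimate/actual ratios
backlog = [10, 5, 8, 20]                  # estimated hours remaining
totals = simulate_totals(backlog, history)
print(f"70% confident of finishing within {totals[int(0.7 * len(totals))]:.1f} hours")
```

Fiddling with the variables — dropping a task from the backlog, say — and re-running the simulation is exactly the what-if exercise described above.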

The sad truth is that there’s very little good data out there. As far as I understand it, the CHAOS report is all self-reported (i.e., CTOs filling out surveys). To the extent that users of FogBugz are working from the hosted service rather than on their own installations of the software, the product will gradually produce a fascinating data set on programmer productivity. If that’s the case, I hope Spolsky and his company will make the data available to researchers. Of course, you’d want all the individual info to be anonymized and so on.

As I said, all of this depends on developers actually inputting how they spend their time. They’ll resist, of course — time sheets are for lawyers! Spolsky said Fog Creek has tried to reduce the pain in several ways: The software makes it easy to enter the info, you don’t worry about short interruptions and “bio-breaks,” i.e., bathroom runs (hadn’t heard that term before!), you just try to track tasks at the hourly or daily level, and you chunk all big tasks down to two-day or smaller size pieces. Still, I imagine that if evidence-based scheduling doesn’t catch on, this will be its point of failure. Otherwise, it sounds pretty useful.

UPDATE: Rafe Colburn is starting to use FogBugz 6.0 and has more comments…
[tags]software development, project management, joel spolsky, fogbugz[/tags]

Filed Under: Dreaming in Code, Software

How hard is a simple web app?

September 20, 2007 by Scott Rosenberg

On the continuing subject of “just how hard / easy is it to create a Web application, anyway?”, Aaron Swartz offers some thoughts, centered on the launch of his new Jottit service. Swartz seems to be on the other side of the fence from the Joel Spolsky essay that I wrote about yesterday. (Although I bet there’s a lot these two agree on, as well.)

There are two ways I look at it. One is: It took us five months to do that? And the other is: We did that in only five months?

When you look at what the site does, it seems pretty simple. It has few features, no complex algorithms, little gee-whiz gadgetry. It just takes your text and puts it on the Web. And considering how often I do that every day, it seems a bit odd that it took so long to create yet another way. And then I check the todo list.

As I’ve said, this is a site I wanted to get every little detail right on. And when you start sweating the small stuff, it’s frankly incredible just how much of it there is. Even our trivial site is made up of over two dozen different screens. Each one of those screens has to be designed to look and work just right on a wide variety of browsers, with a wide variety of text in them.

And that’s just making things look good — making them work right is much harder…

Read the whole thing, and then recall it the next time someone tells you how simple it is to throw up a Web 2.0 site. Of course, Swartz is proclaimedly trying to “get every little detail right.” I gather he is not a Big Ball of Mud kind of guy.
[tags]aaron swartz, jottit, web 2.0, software development, web applications[/tags]

Filed Under: Dreaming in Code, Software

Chandler Preview: from dream toward reality

September 20, 2007 by Scott Rosenberg

It feels like only yesterday I was staring in disbelief at the first hardcover copies of Dreaming in Code, but now we’re getting the paperback edition ready (for release in early 2008). I’d always wanted the chance to write a new postscript to the book, bringing the Chandler story up to date. The timing turned out to be fortuitous: the Open Source Applications Foundation released what they’re calling the Preview edition of Chandler last week.

I wrote a little about the saga of Chandler Preview back in January, when the OSAF team hoped to have a release out in April. As that date slipped steadily, I glanced at the calendar nervously, because I knew that sooner or later my publisher would have to close the door on any additions to the paperback. But the timing worked out: OSAF got its Preview out just in time for me to see and use it before I wrote up the new material.

For those of you who have been following the work on Chandler, Preview is what OSAF formerly called Chandler 0.7. After 0.6 shipped near the end of 2005 Mitch Kapor and the OSAF developers decided that they would plan the next big release to be a fully usable, if not feature-complete, sharable calendar and task manager with limited e-mail. You can download the result and try it out yourself.

Over the years Chandler has expanded into a small constellation of products — the desktop application, a server (formerly called Cosmo, now known as Chandler Hub), and a web interface to the server. OSAF now offers free accounts on its own Chandler Hub that you can use to sync your desktop and Web data.

On the one hand, of course, Chandler is way later than even seemed possible back in 2002 when it was first announced. How and why that occurred is the heart of my book. So much has happened on the Web and in the software industry since then that people ask, reasonably, what Chandler can possibly do that they’re not able to do already with Google Calendar or any of the other calendar/e-mail/task management offerings out there.

One big tech-industry story this week was Yahoo’s $350 million acquisition of Zimbra — an open-source Outlook replacement that started well after Chandler and delivered working software a lot sooner. Zimbra is impressive and full of nifty features, and its focus on solving a lot of the cellphone-and-handheld coordination issues for people was smart. But it didn’t try to introduce a new way of managing one’s information.

For better and worse, Chandler did. In this area, it aimed higher than Zimbra or most of the other competition; and its grand reach plainly exceeded its grasp. The Preview edition’s Dashboard provides a glimpse of the different way of organizing one’s work that Kapor and the Chandler designers propose. I don’t think it’s either as accessible for newcomers or as tractable for initiates as it needs to be. But neither is it simply an Outlook retread.

Anyone who has tried to organize the work of a small group with software knows that — even with Web 2.0 and Ajax and the best stuff we can throw at the problem in 2007 — we’ve only barely begun to leverage what computers can do in this area. Chandler deserves credit for acknowledging this and setting out to do better. Its setbacks can be chalked up in part to the choices and mistakes its developers made along their long road; but they are also a sign of just how tough the problem really is.

I’m still not ready to adopt Chandler for my own everyday use. But I’m not especially happy with what I am using, either. That means there’s still room for the sort of program Chandler has always been intended to be. The Preview release isn’t yet that program. But for the first time it’s moved close enough for anyone to play with, and see what it might someday become.
[tags]chandler, osaf, open source applications foundation[/tags]

Filed Under: Dreaming in Code, Software

Spolsky on Web app development

September 19, 2007 by Scott Rosenberg

Joel Spolsky’s latest essay, “Strategy Letter VI,” offers a smart analogy between the desktop software wars of the 1980s — when companies like Lotus bet on producing code that could run on the slow, small-memory machines of the present, only to lose as PCs quickly got faster — and the Web-based software wars of today.

I think the following passage about Web-app development today could even be read as a (partial, qualified) endorsement of Big Ball of Mud:

The developers who put a lot of effort into optimizing things and making them tight and fast will wake up to discover that effort was, more or less, wasted, or, at the very least, you could say that it “conferred no long term competitive advantage,” if you’re the kind of person who talks like an economist.

The developers who ignored performance and blasted ahead adding cool features to their applications will, in the long run, have better applications.

[tags]joel spolsky, web development[/tags]

Filed Under: Code Reads, Dreaming in Code, Software

Code Reads #12: “Big Ball of Mud”

September 16, 2007 by Scott Rosenberg

This is the twelfth edition of Code Reads, a series of discussions of some of the central essays, documents and texts in the history of software. You can go straight to the comments and post something if you like. Here’s the full Code Reads archive.

“Big Ball of Mud,” a 1999 paper by Brian Foote and Joseph Yoder (pdf), sets out to anatomize what it calls “the enduring popularity” of the pattern of software construction named in its title, “this most frequently deployed of software architectures,” “the de-facto standard software architecture,” “the architecture that actually predominates in practice”: a “haphazardly structured, sprawling, sloppy, duct-tape and baling wire, spaghetti-code jungle.”

This is dire stuff, and when I first glanced at “Big Ball of Mud” I thought I was in for an amusing satire — perhaps a parody of the “software patterns” school. Instead — and what I found most fascinating about the paper — the authors actually walk a fine and narrow line between a Swiftian embrace of the mud-splat school of programming and the sort of “we know better than all those idiots” arrogance that’s found in a lot of the software literature.

Despite the best efforts of “best practices” advocates and methodology gurus, mud is everywhere you look in the software field. This cannot be a coincidence or represent mere laziness. The authors ask, “What are the people who are building [Big Balls of Mud] doing right?”

Their answer: “People build big balls of mud because they work. In many domains, they are the only things that have been shown to work.”

Filed Under: Code Reads, Dreaming in Code, Software

Nothing to fear but complexity itself

September 14, 2007 by Scott Rosenberg

Over my many years at Salon — in my role as the geekiest of our editorial management team — I found myself often being asked whether some particular problem we were having with our site or our email system or something else might be the result of “hackers.”

Most of the time, I spared my inquisitors the lecture on the history and proper use of that term. Except in a tiny number of cases where there was specific evidence suggesting at least the possibility of some sort of foul play, I’d simply remind everyone how many different things could go wrong on any digital network, argue that the odds favored the likelihood of some sort of malfunction rather than malfeasance, and suggest that everyone should relax (except for our sysadmins, of course, who were busy trying to diagnose the problem).

Bugs are many, break-ins are few. John Schwartz had a good piece in the Times earlier this week offering further reinforcement of that perspective, looking specifically at the transportation system and the slow-motion train wreck of the effort to computerize our voting systems.

…Problems arising from flawed systems, increasingly complex networks and even technology headaches from corporate mergers can make computer systems less reliable. Meanwhile, society as a whole is growing ever more dependent on computers and computer networks, as automated controls become the norm for air traffic, pipelines, dams, the electrical grid and more.

“We don’t need hackers to break the systems because they’re falling apart by themselves,” said Peter G. Neumann, an expert in computing risks and principal scientist at SRI International, a research institute in Menlo Park, Calif.

It was this tension between our social dependence on complex software systems and our continuing inability to produce software in a reliable way that motivated me to write Dreaming in Code.
[tags]complexity, john schwartz, software development, dreaming in code[/tags]

Filed Under: Dreaming in Code, Software

Ecco Pro — back from the dead, again

September 4, 2007 by Scott Rosenberg

Longtime readers here know of my interest in the subject of outliners and in particular my dedication to an old program called Ecco Pro. I used it as my main organizer for my first book, and now, as I begin work on a new one, I find myself turning to it once again. (If you want to understand why, Andrew Brown’s recent piece in the Guardian offers a thorough explanation.)

Ecco devotees long hoped the program might be open-sourced, but the hopes never materialized. Nonetheless, in one of those twists and turns that keep the software world interesting, there has been much movement in the Ecco world in recent months — and, even without the code being open-sourced, there’s the first significant new work on the program in years.

Here, as far as I can tell, is what happened: A programmer who goes by the handle “slangmgh” posted a message to the Yahoo Group “ecco_pro” on April 16th: “I write little utility, have upload to the files directory! It’s only work for EccoPro v4.01.”

The file was called “EccoPro extension.” It included a half-dozen significant fixes and upgrades to the program. A day later, he’d uploaded a 1.1 version of his “little utility.” Today, he is on 3.6 or so. His furious pace of development has involved, if I understand correctly, the incorporation of the Lua scripting language into the extension. It’s all made possible by the essential solidity of the original program and the API hooks its creators provided — so that, even though the original Ecco code can’t be changed, it can still be built upon.

The only downside to the whole thing is that “slangmgh” is plainly not a native English speaker and so his explanations of the changes and features are sometimes difficult to follow. In recent weeks, other members of the Ecco support group have stepped forward to provide better documentation.

There you have it: an orphaned program that hasn’t been touched in a decade but that still has a devoted community of users suddenly starts evolving again in the hands of an energetic programmer. I don’t know where the Ecco story will ultimately lead but I’m delighted to see it still unfolding.
[tags]ecco pro, pims, outliners[/tags]

Filed Under: Software, Technology
