Wordyard

Hand-forged posts since 2002

Scott Rosenberg

Archives

Dear publishers: When you want to switch platforms and “redesign” too? Don’t

April 9, 2014 by Scott Rosenberg 11 Comments

In my work at Grist, I had a rare experience: We moved an entire publishing operation — with a decade of legacy content, in tens of thousands of posts — from one software platform to another. And yet, basically, nothing broke. Given the scars I bear from previous efforts of this kind, this was an exhilarating relief.

I promised my former colleague Matt Perry (then technical lead at Grist, who bears much responsibility for our success in that move, along with my other former colleague Nathan Letsinger) that I’d share notes with the world on what we learned in this process. It’s taken me forever, but here they are.

Say you run a website that’s been around the block a few times already. You’re going to move your operation from one content management platform to another. Maybe you’ve decided it’s time to go with WordPress. Or some other fine system. Or you’re lucky enough, or crazy enough, to have a developer or a team of coders who’ve built you a custom system.

Then you look at your site’s design: the templates, the CSS, the interface, the structure and navigation, all the stuff that makes it look a certain way and behave a certain way. You think, boy, that’s looking old. Wouldn’t it be great to spiff everything up? And while you’re at it, that new platform offers so many exciting new capabilities — time to show them off!

It seems so obvious, doesn’t it? You’re already taking the time away from publishing, or community-building, or advocacy, or monetizing eyeballs, or whatever it is you do with your site, to shore up its technical underpinnings. Now is surely the perfect moment to improve its public face, too.

This is where I am going to grab you by the shoulders and tell you, sadly but firmly and clearly: NO. Do not go there.

Redesigning your site at the same time you’re changing the software it runs on is a recipe for disaster. Here Be Train Wrecks.

Don’t believe me? Go ahead then; do your redesign and your platform move at the same time! Here’s what you may find.

You’ve just split your team’s focus and energy. Unless you have a lot of excess capacity on the technical side — and every online publisher has, like, technical folks sitting around with nothing to do, right? — your developers and designers are already stretched to the limit putting out everyday fires. Any major project is ambitious. Two major projects at once is foolhardy.

You’re now stuck creating a big new design in the dark. That new platform isn’t live yet, so you can’t take the sane route of implementing the new design in bits and pieces in front of real live users. Your team is free to sit in a room and crank out work, sans feedback! Good luck with that.

You’re now working against the clock. Back-end platform changes are full of unpredictable gotchas, and almost always take longer than you think. That doesn’t have to matter a great deal. But the moment you tie the move to a big redesign project, you’re in a different situation. More often than not, the redesign is something that everyone in your company or organization has an investment in. Editors and creators have work with deadlines and must-publish-by dates. Business people have announcements and sales deals and marketing pushes that they need to schedule. The stakes are in the ground; your small-bore back-end upgrade is now a major public event. This is where the worst train wrecks (like that one at Salon over a decade ago that still haunts my dreams) happen.

Painful as it may be, and demanding of enormous self-restraint, the intelligent approach is to move all your data over on the back end first, while duplicating your current design on the new platform. Ideally, users won’t notice anything different.

I’m fully aware that this recommendation won’t come as news to many of you. It’s simple science, really: Fiddle with only one variable at a time so you can understand and fix problems as they arise. I’m happy to report that this approach not only makes sense in the abstract, but actually works in the field, too.

(Of course, you may wish to go even further, and eliminate the whole concept of the site redesign as a discrete event. The best websites are continuously evolving. “Always be redesigning.”)

Filed Under: Media, Personal, Software, Technology

New bridge, old book: the shape of software progress

September 2, 2013 by Scott Rosenberg 3 Comments

The Bay Bridge’s new eastern span is about to open. When they started building it over a decade ago, I was beginning work on my book Dreaming in Code. As I began digging into the history of software development projects and their myriad delays and disasters, I kept encountering the same line: Managers and executives and team leaders and programmers all kept asking, “Why can’t we build software the way we build bridges?”

The notion, of course, was that somehow, we’d licked building bridges. We knew how to plan their design, how to organize their construction, how to bring them in safely and on time and within budget. Software, by contrast, was a mess. Its creators regularly resorted to phrases like “train wreck” and “death march.”

As I began my research, I could hear, rattling the windows of my Berkeley home, the deep clank of giant pylons being driven into the bed of San Francisco Bay — the first steps down the road that ends today with the opening of this gleaming new span. I wrote the tale of the new bridge into my text as an intriguing potential contrast to the abstract issues that beset programmers.

As it turned out, of course, this mammoth project proved an ironic case in any argument involving bridge-building and software. The bridge took way longer than planned; cost orders of magnitude more than expected; got hung up in bureaucratic delays, political infighting, and disputes among engineers and inspectors; and finally encountered an alarming last-minute “bug” in the form of snapped earthquake bolts.

So much for having bridges down. All that the Bay Bridge project had to teach software developers, really, was some simple lessons: Be humble. Ask questions. Plan for failure as well as success.

Discouraging as that example may be, I’m far more optimistic these days about the software world than I would ever have expected to become while working on Dreaming. Most software gets developed faster and in closer touch with users than ever before. We’ve turned “waterfall development” into a term of disparagement. No one wants to work that way: devising elaborate blueprints after exhaustive “requirements discovery” phases, then cranking out code according to schedules of unmeetable precision — all in isolation from actual users and their changing needs. In the best shops today, working code gets deployed regularly and efficiently, and there’s often a tight feedback loop for fixing errors and improving features.

My own recent experiences working closely with small teams of great developers, both with MediaBugs and now at Grist, have left me feeling more confident about our ability to wrestle code into useful forms while preserving our sanity. Software disasters are still going to happen, but I think collectively the industry has grown better at avoiding them or limiting their damage.

While I was chronicling the quixotic travails of OSAF’s Chandler team for my book, Microsoft was leading legions of programmers down a dead-end path named Longhorn — the ambitious, cursed soufflé of an operating system upgrade that collapsed into the mess known as Windows Vista. At the time, this saga served to remind me that the kinds of delays and dilemmas the open-source coders at OSAF confronted were just as likely in the big corporate software world. Evidently, the pain still lingers: When Steve Ballmer announced his retirement recently, he cited “the loopedy-loo that we did that was sort of Longhorn to Vista” as his biggest regret.

But Longhorn might well have been the last of the old-school “death marches.” Partly that’s because we’ve learned from past mistakes; but partly, too, it’s because our computing environments continue to evolve.

Our digital lives now rest on a combination of small devices and vast platforms. The tech world is in the middle of one of the long pendulum swings between client and server, and right now the burden of software complexity is borne most heavily on the server side. The teeming hive-like cloud systems operated by Google, Facebook, Amazon and their ilk, housed in energy-sucking server farms and protected by redundancy and resilient thinking, are among the wonders of our world. Their software is run from datacenters, patched at will and constantly evolving. Such systems are beginning to feel almost biological in their characteristics and complexities.

Meanwhile, the data these services accumulate and the speed with which they can extract useful information from it leave us awe-struck. When we contemplate this kind of system, we can’t help beginning to think of it as a kind of crowdsourced artificial intelligence.

Things are different over on the device side. There, programmers are still coping with limited resources, struggling with issues like load speed and processor limits, and arguing over hoary arcana like memory management and garbage collection.

The developers at the cloud platform vendors are, for the most part, too smart and too independent-minded to sign up for death marches. Also, their companies’ successes have shielded them so far from the kind of desperate business pressures that can fuel reckless over-commitment and crazy gambles.

But the tech universe moves in cycles, not arcs. The client/server pendulum will swing back. Platform vendors will turn the screws on users to extract more investor return and comply with increasingly intrusive government orders. Meanwhile, the power and speed of those little handheld computers we have embraced will keep expanding. And the sort of programmers whose work I celebrated in Dreaming in Code will keep inventing new ways to unlock those devices’ power. It’s already beginning to happen. Personal clouds, anyone?

Just as the mainframe priesthood had to give way to the personal-computing rebels, and the walled-garden networks fell to the open Internet, the centralized, controlled platforms of today will be challenged by a new generation of innovators who prefer a more distributed, self-directed approach.

I don’t know exactly how it will play out, but I can’t wait to see!

Filed Under: Dreaming in Code, Software

Some software use notes

October 5, 2010 by Scott Rosenberg 9 Comments

A miscellany today: Amazon’s Kindle for the Web, WordPress’s new Offsite Redirects feature, and a little complaint about iTunes.

  • Kindle for the Web
    Kindle for the Web lets you embed a chunk of a book onto a Web page. I thought it would be a fun thing to experiment with here, and played with it a bit this morning, but it turns out to look lousy in a narrow column — it really needs a full-page width, which is hard on any page with a sidebar (i.e., gazillions of Web pages). So either I’m doing it wrong or it needs some tweaking.
  • WordPress Offsite Redirects
    One of the toughest choices you make as you step out onto the Web is where to put your writing. Lots of choices today, sure, from self-hosted to free or paid hosted services. But what happens if you need to move? People still need to find you, your stuff is embedded in the Web with tons of links, you’ve got some rank in Google… you don’t want to throw any of that away.

    This is called lock-in, and it’s how too many Web and software businesses hold onto customers — not, in other words, by real loyalty, but by inertia and inconvenience.

    So super kudos to the WordPress.com team for offering a new feature that lets you move away from WordPress.com and point your incoming traffic forward to your new home. It’s not a free service (I don’t know how much it costs). But the most common scenario is someone who started a free blog at WordPress.com and is now planning to operate it as more of a business, and who needs the freedom and versatility of hosting her own site. That kind of user isn’t going to mind paying a small fee, whatever it is, to hold onto the links and traffic she’s already accumulated.

    As WordPress’s Matt Mullenweg said on his blog, quoting Dave Winer: “The easier you make it for people to go, the more likely they are to stay.” Indeed!

  • Irksome iTunes
    iTunes is now an almost-decade-old tool, one that supports an ever-wider array of Apple products, and that groans beneath the weight. What I don’t understand is why, in all this time, they haven’t fixed what I find to be the single most annoying problem with the interface, one that still trips me up nearly every day. It’s the way the search box works.

    Here’s the scenario:

    1. I type a search in the box at the upper right of the window — say, “Mountain Goats.”
    2. I realize I’m not finding what I’m after because the left-hand column selector is not on my “music library” but on some playlist.
    3. I click “music library” at the top of the left column.
    4. The search term disappears from the box and so I HAVE TO TYPE IT AGAIN.

    This is a recurring irritation. Surely it’s possible to keep the search term loaded and apply it to the new choice in the left-hand column? I mean, I don’t know, maybe it’s not a really simple problem, maybe it’s even a big hairy problem. But Apple has now had how many years to fix it?

    Maybe there is some logical basis for viewing this as a feature and not a bug. If so, I certainly can’t see it!

Filed Under: Blogging, Software

Change is good, but show your work: Here’s a WordPress revisions plugin

August 3, 2010 by Scott Rosenberg 27 Comments

A couple of weeks ago I posted a manifesto. I said Web publishers should let themselves change published articles and posts whenever they need to — and make each superseded version accessible to readers, the way Wikipedians and software developers do.

This one simple addition to the content-management arsenal, known as versioning, would allow us to use the Web as the flexible medium it ought to be, without worrying about confusing or deceiving readers.

Why not adopt [versioning] for every story we publish? Let readers see the older versions of stories. Let them see the diffs. Toss no text down the memory hole, and trigger no Orwell alarms.

Then I asked for help creating a WordPress plugin so I could show people what I was talking about. Now, thanks to some great work by Scott Carpenter, we have it. It’s working on this blog. (You can get it here.) Just go to the single-page form of any post here (the one that’s at its permalink URL, where you can see the comments), and if the post has been revised in any way since I published it, you can click back and see the earlier versions. You can also see the differences — diffs — highlighted, so you don’t have to hunt for them.
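
If you’re curious what those highlighted diffs actually involve, the core idea fits in a few lines. Here’s a sketch using Python’s standard difflib, purely as an illustration of the concept — the plugin itself is PHP, and I’m not claiming this is the library it uses:

```python
import difflib

# Two versions of a sentence, before and after a silent post-publication edit.
old = "Blacks make up only about 6 percent of the state population."
new = "Blacks make up less than 10 percent of the state population."

# SequenceMatcher labels each run of words as equal, replaced, inserted,
# or deleted; a display layer would highlight everything that isn't "equal".
a, b = old.split(), new.split()
matcher = difflib.SequenceMatcher(None, a, b)
for tag, i1, i2, j1, j2 in matcher.get_opcodes():
    if tag != "equal":
        print(tag, "| old:", " ".join(a[i1:i2]), "| new:", " ".join(b[j1:j2]))
```

Run on the example above, the only non-equal run is the changed statistic — which is exactly what a reader would want highlighted.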

The less than two weeks since my post have given us several examples of problems that this “show your work” approach would solve. One of them can be found in the story of this New York Times error report over at MediaBugs.

An anonymous bug filer noticed that the Times seemed to have changed a statistic in the online version of a front-page story about where California’s African Americans stood on pot legalization. As first published, the story said blacks made up “only” or “about 6 percent” of the state population; soon after it was posted, the number changed to “less than 10 percent.” There’s a full explanation of what happened over at MediaBugs; apparently, the reporter got additional information after the story went live, and it was conflicting information, so reporter and editor together decided to alter the story to reflect the new information.

There is nothing wrong with this. In fact, it’s good — the story isn’t etched in stone, and if it can be improved, hooray. The only problem is the poor confused reader, who saw a story that read one way before and now reads another way. The problem isn’t the change; it’s the failure to note it. Showing versions would solve that.

Another Times issue arose yesterday when the paper changed a headline on a published story. The original version of a piece about Tumblr, the blogging service, was headlined “Facebook and Twitter’s new rival.” Some observers felt this headline was hype. (Tumblr is successful but in a very different league from the vastness of Facebook and Twitter.) At some point the headline was rewritten to read “Media Companies Try Getting Social With Tumblr.” Though the article does sport a correction now fixing some other errors, it makes no note of the headline change.

I don’t know what official Times policy is on headline substitution. Certainly, Web publications often modify headlines, and online headlines often differ from print headlines. Still, any time there’s an issue about the substance of a headline, and the headline is changed, a responsible news organization should be forthright about noting the change. Versioning would let editors tinker with headlines all they want, while leaving the record of each change in plain view.

I do not mean to single out the Times, which is one of the most scrupulous newsrooms around when it comes to corrections. Practices are in a state of flux today. News organizations don’t want to append elaborate correction notices each time they make a small adjustment to a story. And if we expect them to, we rob ourselves of the chance to have them continuously improve their stories.

The versioning solution takes care of all of this. It frees writers and editors to keep making their work better, without seeming to be pulling a fast one on their readers. It’s a simple, concrete way to get beyond the old print-borne notion of news stories as immutable text. It moves us one decent-sized step toward the possibilities the Web opens up for “continuing stories,” iterative news, and open-ended journalism.

How the plugin happened: I got some initial help from Stephen Paul Weber, who responded to my initial request to modify the existing “post revision display” plugin so as to only list revisions made since publication. Weber modified the plugin for me soon thereafter (thank you!). Unfortunately, I failed to realize that that plugin, created by D’Arcy Norman, only provided access to version texts to site administrators, not regular site visitors.

Scott Carpenter, the developer who’d originally pointed out the existing plugin to me, stepped up to the plate, helped me work up a short set of requirements for the plugin I wanted, and set to work to create it. Here’s his full post on the subject, along with the download link for the plugin. We went back and forth a few times. He thought of some issues I hadn’t — and took care of them. I kept adding new little requirements and he knocked them off one by one. I think we both view the end-product as still “experimentally usable” rather than a polished product, but it’s working pretty well for me here.

As the author of a whole book on why making software is hard, I’m always stunned when things go really fast and well, as they did here. Thanks for making this real, Scott!

If you run WordPress and like the idea of showing your work, let us know how it goes.

Filed Under: Media, Mediabugs, Software

Help with a WordPress plugin for published versions

July 23, 2010 by Scott Rosenberg 10 Comments

My “versioning for all news stories!” manifesto inspired lots of feedback. A good amount of it was along the lines of, “What are you talking about? How would this work?” I’ve been pointing people to Wikipedia’s “view history” tabs, which are a great start. (I also notice that the Guardian UK now posts, on each article, a story history, which tells you that the article was modified, but doesn’t actually show you the different versions.)

What I’d like to do now is pursue this at the level of a live demo right here on this blog. So I put out a call on Twitter for help in creating a WordPress plugin that would let me expose every version of each post. I only want to show the versions since publication — a rough draft pre-publication should remain for the author’s (and editor’s, if there is any) eyes only.

Scott Carpenter helpfully pointed me to this existing plugin, which outputs a list of all versions of each post.

This is a great start. All I need now is to add a little code to the plugin that gets it to show only the post-publication versions.
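
To give a sense of how small the missing piece is: the filtering logic amounts to a timestamp comparison. Here it is sketched in Python — the real plugin would be PHP querying WordPress’s revision records, and the field names here are invented for illustration:

```python
from datetime import datetime

# Hypothetical revision records: (saved_at, text). A real WordPress plugin
# would fetch rows of post_type 'revision' for the post, in PHP.
revisions = [
    (datetime(2010, 7, 20, 9, 0), "rough draft"),
    (datetime(2010, 7, 23, 8, 0), "published text"),
    (datetime(2010, 7, 24, 14, 0), "post-publication fix"),
]
published_at = datetime(2010, 7, 23, 8, 0)

# Keep only revisions saved at or after the moment of publication;
# pre-publication drafts stay private to the author.
public_versions = [(t, text) for t, text in revisions if t >= published_at]
```

Everything saved before the publication timestamp simply never reaches readers.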

I know just enough about PHP to mess around with templates and cut-and-paste code snippets, but not enough, I think, to do this right. Anyone interested in helping out on this little project?

Someday, when this versioning thing catches on and becomes a universal practice, you’ll be able to say to yourself, with a little smile of satisfaction, “I was there when it all began.”

Filed Under: Blogging, Media, Software

A geeky problem with Mac scripting

November 22, 2009 by Scott Rosenberg 12 Comments

Here’s what turns out to be the most intractable problem I’ve encountered in my move to OSX as my primary work platform:

For years I used a programmers’ text editor in Windows called Ultraedit. It worked great and allowed me to record macros. The most indispensable one, which I used constantly, automated the creation of HTML links. I would store the link-to URL in a clipboard, select some link text and start the macro. The macro would magically surround the link text with the proper HTML code to link it to the URL in the clipboard.

I achieved this by
(a) copying the link text to a second clipboard;
(b) typing the opening <a href=";
(c) pasting in the URL from the first clipboard;
(d) closing the tag with ">;
(e) pasting the link text from clipboard #2;
(f) ending the link with </a>.

It sounds kinda complicated but it worked beautifully, and Ultraedit’s macro recorder simply “got it.” I created the macro years ago, and its keyboard shortcut became hardwired in my memory.
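
For the record, the whole macro boils down to a bit of string splicing. Here are the same six steps as a sketch in Python — a hypothetical function for illustration, obviously not Ultraedit’s actual macro format:

```python
def make_link(link_text: str, url: str) -> str:
    """Wrap selected text in an HTML anchor pointing at the clipboard URL.

    Collapses the six macro steps: open the tag, paste the URL,
    close the tag, restore the selected text, end the link.
    """
    return '<a href="' + url + '">' + link_text + '</a>'

# Example: the selection is "my book" and the clipboard holds the URL.
html = make_link("my book", "http://www.wordyard.com/")
# → <a href="http://www.wordyard.com/">my book</a>
```

The hard part was never the string handling; it was getting a macro recorder to capture the steps at the right level of abstraction.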

Now I’m using TextWrangler and, alas, AppleScript doesn’t seem to get it at all. The AppleScript recorder seems to grab the actions at too specific a level — i.e., it doesn’t capture “switch to next clipboard” but records the specific clipboard number; it doesn’t capture “current active document” but records the specific document name that I happen to be using while I’m recording the script.

I was gearing myself up to learn enough AppleScript to try to write the script (or edit a recorded script well enough to make it work). Then I discovered that, perhaps thanks to the Snow Leopard upgrade, the entire AppleScript recorder in TextWrangler doesn’t seem to work at all. When I record a script and try to save it I get the following error message: (MacOS Error code: -4960). As far as I can tell, I can’t save any scripts at all, making any AppleScript solution to this problem seem hopeless.

I know, I know, if I had learned emacs years ago I wouldn’t have any of these problems. But I didn’t. I welcome any tips/suggestions! Is there a text editor for Mac that will make my life easier? (I used to use the full version of BBEdit, and, back in those days, it wasn’t any easier to script than TextWrangler.) Is there some obvious solution I’m missing?

Filed Under: Software

Mac life after Ecco

November 9, 2009 by Scott Rosenberg 23 Comments

For years I organized my life with the wonderful, now-orphaned and somewhat antiquated Windows outliner Ecco Pro. For me Ecco was versatile enough to function effectively as both a todo-list manager and a repository for random information, scattered ideas and research. It really could do it all.

I’ve always used both Macs and PCs but this year I’ve migrated my main workspace over to OS X. There were many compelling reasons to do this, but I’ve had to struggle with finding an Ecco replacement. (Yes, I could run it on my Mac in a Windows virtual machine, but it’s a bit kludgy, and it’s time for me to move away from this program that, despite the efforts of many devotees, doesn’t look like it will ever be fully modernized.)

So far, it’s looking to me like there is no one Mac application that can serve in both roles (todo list and information organizer). OmniOutliner is a pretty good all-purpose outliner, and it has a companion, “Getting Things Done”-based todo list program called OmniFocus. Though I’ve made my peace with OmniOutliner, I have not fallen in love with OmniFocus. It follows the David Allen GTD approach a little too rigidly for me, it has various features I don’t need, and it’s missing some that I do want (as far as I’ve been able to tell, for instance, it lacks the ability to make an item vanish until a certain date, when it reappears: what I call the “out of my face” tool).

So I’ve begun exploring various combinations of other tools. Right now, it’s Evernote for research/information and Things for todo management. I’m also going to look into Tinderbox, Yojimbo and some other applications that look promising. I know the Mac ecosystem is full of great products that sometimes have only small followings, so if there’s one you’re especially enamored of, do let me know.

I’ve also been playing around with Thinklinkr, a new Web-based outliner. It has one huge plus: It’s got an absolutely top-notch browser interface (it’s the only browser-based outlining tool I’ve found that is as responsive and fast as Ecco on the desktop — bravo for that!). At the moment, though, it’s a somewhat rudimentary tool; it lacks various features one might want, and it looks like it’s being aimed at the (important but different) market for collaborative outlining rather than personal information management. But it’s definitely worth a look if you’re into outlining.

Filed Under: Personal, Software

iBank failure: reporting problems

June 1, 2009 by Scott Rosenberg 14 Comments

Besides Ecco, Quicken is really the last app that I still need Windows for. (Quicken for the Mac is way inferior.) So I thought I’d finally figure out which of the Mac personal-finance contenders would best suit my needs: simple budget and expense tracking on several checking accounts and a credit card or two. All evidence pointed to iBank. I downloaded the program on free trial and checked it out. The register worked nicely, the interface was smooth, and it seemed like importing my 12 years’ worth of Quicken data could be accomplished. So I plunked down the not inconsiderable charge for the program, spent an hour or two figuring out how to avoid having transfers appear twice after the import, and thought I’d solved my problem.

Then I tried to create a report. And the program that had until that moment seemed well-built and -designed turned to sand between my fingers. Report? iBank basically says. What’s that? Oh, you have to create a chart and then you can generate a report? That seems silly — I don’t need a pie chart, it doesn’t tell me what I need to know, but if I have to pay the pie chart tax before I can get to my report, OK! I’ll make some pies! So finally I click the button to make a report and wait for the program to ask me some questions about, you know, which categories and dates and accounts I want to include in the report. But there is no dialog box. The program grinds through its data and a minute later it spits out a clumsily formatted PDF. Wait a minute; I can customize the chart, and that should then change the report, right? But no, that would be too logical. Whatever I do to the chart, the report is still the same useless, largely unreadable junk.

This is a problem, because, really, the only point to the tedium of entering all these transactions is that at the end of the labor you can click a few buttons and actually gain some insight into where and how you are spending your money. iBank is like a financial-software roach motel: you can get your data in easily enough, but just try getting useful information out the other side!

My guess is that coding up a useful report generator must’ve fallen off the developers’ feature list somewhere along the way and keeps dropping off the upgrades list. Obviously I’m hugely disappointed, particularly since the trial version of iBank doesn’t let you enter more than a handful of transactions, so you never really have the chance to test out the report quality.

I think the next step is to give up on this category altogether and experiment with the online/cloud-based alternatives. Of the available choices, Wesabe, which I’ve begun playing with, and Mint appear to be the likeliest contenders. I’ll let you know how it goes, and welcome any tips and experiences you may have.

Filed Under: Business, Personal, Software, Technology

Ecco in the cloud with Amazon

March 24, 2009 by Scott Rosenberg 12 Comments

Late last night — because late night is the time to tinker with software! — I decided to test drive Dave Winer’s recent crib sheet on setting up an Amazon Web Services cloud-based server. Dave called it “EC2 for Poets” (EC2 is the name of Amazon’s service), and I’ve always been a fan of “Physics for Poets”-style course offerings, so — though I do not write poetry — he lured me in.

For the uninitiated, Amazon has set up a relatively simple way for anyone to purchase and operate a “virtual server” — a software-based computer system running in their datacenter that you access across the Net. It’s like your own Windows or Linux box except there’s no box, just code running at Amazon. If you’ve ever run one of those arcade video-game emulators on your home computer, you get the idea: it’s a machine-within-a-machine, like that, only it’s running somewhere else across the ether.

Dave provided crystal clear step-by-step instructions for setting up and running one of these virtual servers. (Writing instructions for nonprogrammers is, as they say in software-land, non-trivial. So a little applause here.) The how-to worked hitch-free; the whole thing took about a half-hour, and by far the longest part was waiting for Amazon to launch the server, which took a few minutes.

But what should one do with such a thing? Dave’s sample installation runs a version of his OPML editor, an outlining tool. That gave me an idea.

Regular readers here know of my dependence on and infatuation with an ancient application called Ecco Pro. It’s the outliner I have used to run my life and write my books for years now. It has been an orphaned program since 1997, but it still runs beautifully on any Win32 platform; it’s bulletproof and it’s fast. My one problem is that it doesn’t share or synchronize well across the Net (you need to use Windows networking to share it between machines, and I just don’t do that; it has never made sense to me as a one-man shop with no IT crew).

But what if I were running Ecco on an Amazon-based server? Then I could access the same Ecco document from any desktop anywhere — Macs too. So I downloaded the Ecco installer (using a browser running on the Amazon-server desktop, which you access via the standard Windows Remote Desktop Connection tool), ran it, and — poof! — there it was, a 12-year-old software dinosaur rearing its ancient head into the new Web clouds:

[Screenshot: Ecco Pro running inside a remote Windows desktop on Amazon EC2]

What you see here in the innermost window is Ecco itself (displaying some of the sample data it installs with). Around that is the window framing the remote desktop — everything in there represents Windows running in the cloud. The outermost frame is just my own Windows desktop.

This remains very much in Rube-Goldberg-land at this point. Accessing this remote server still requires a few more steps than you’d want to go through for frequent everyday use. (To me it felt like it was about at the level that setting up your own website was in 1994 when I followed similar cribsheets to accomplish that task.) And the current cost of running the Amazon server — which seems to be about 12.5 cents per hour, or $3 a day, or over $1000 a year — makes it prohibitive to actually keep this thing running all the time for everyday needs.

On the other hand, you have to figure that the cost will keep dropping, and the complexity will get ironed out. And then we can see one of many possible future uses for this sort of technology: this is where we’ll be able to run all sorts of outdated and legacy programs when we need to access data in different old formats. Yesterday’s machines will virtualize themselves into cloud-borne phantoms, helping us keep our digital memories intact.

Filed Under: Net Culture, Software, Technology

Chandler 1.0 ships

August 10, 2008 by Scott Rosenberg

When I first began reporting on Chandler for Dreaming in Code at the very start of 2003, there was talk of shipping a 1.0 version within a year. Then, in following years, the project got so bogged down that at times it was hard to imagine it ever arriving at such a milestone.

Well, on Friday, the OSAF team released a 1.0 version of Chandler. At the moment I am too deep in the swamps of blog history circa 2001 to do full justice to this news, but must take note nonetheless.

Chandler, of course, is the personal-information-management application whose story sat at the center of my first book. I last checked in on the project at the start of this year, when OSAF and Kapor parted ways.

It’s been close to six years since Mitch Kapor first announced plans for Chandler, and the application today is quite different from what was envisioned then. But it does fulfill at least a portion of the ambitious agenda Kapor set: It’s fully cross-platform, and, from the user side, it takes a very flexible approach to data. The program was once positioned as a calendar with email and task capabilities, and it’s still got those features, but it’s now presented as a notebook program — it’s “The Note-To-Self Organizer.” You store information free-form and then can organize it according to now/later/done triaging, turn items into tasks and schedule them on the calendar, group data in multiple collections, and share it across the web via the Hub server. I’m looking forward to experimenting more with it.

The OSAF blog post announcement includes some more detail. And James Fallows has a good post up at the Atlantic.

Filed Under: Dreaming in Code, Software
