We arrived today in Estes Park, Colorado. We’re here at the YMCA of the Rockies because my wife’s company puts on a big yoga conference here every year. This is the first time we’ve all gone, including the kids. We walked out from dinner and were greeted by a trio of elk, just hanging out by the side of the road. The two big guys were locking horns and obviously engaged in some sort of vying-for-the-female ritual. They were all oblivious to the gathering crowd of human gawkers. They behaved as if they owned the place. Which in a way, of course, they do.
Archives for September 2007
Blogging will be lighter over the next week as I’ll be on the road — family vacation (at least for me and the kids) in Colorado near Rocky Mountain National Park, then on to Dallas for a talk, hosted by the Society of Information Management.
In the meantime, I’m going to queue up the next Code Reads — one that I have not yet read, so it’ll be new to me as perhaps to some of you: Daniel Berry’s The Inevitable Pain of Software Development, Including of Extreme Programming, Caused by Requirements Volatility. Thanks to Will Sargent for the suggestion.
It’s now about a year that we’ve been doing this series and I’ve completed 12 installments, so the appropriate thing to do is to stop fighting the inevitable and accept that this is a monthly schedule! What I will try to do is keep that monthliness honest. So this paper will be the October edition. That should give me plenty of time…
Nick Carr on 8/31/07, writing about the effort to change how the Internet domain system’s “WHOIS” records work:
What makes the WHOIS deadlock interesting is that it reveals, in microcosm, the great and ever widening divide that lies at the net’s heart — the divide between the network as a platform for commerce and the network as a forum for personal communication. The way that tension is resolved — or not resolved — will go a long way toward determining the ultimate identity and role of the internet.
Carr’s succinct (and I think accurate) anatomy of the coeur d’Net caught my eye and echoed something just beyond my memory’s grasp. Then I realized, right, this is very much the same dichotomy that I wrote about a long time ago in one of the annual “state of the Web” pieces (from October 1996) that I used to write for Salon:
Two very different groups are emerging with different ideas of how to drive the Web forward: call them the information peddlers and the community builders. The former see the Web as a conduit to distribute information and sell products on a few-to-many pattern; the latter see it as a place to exchange information, many-to-many — to yak.
Not only does this tension between what Carr calls “a platform for commerce” vs. “a forum for personal communication,” or what I called “the information peddlers” vs. “the community builders,” remain prevalent; it is a fissure cutting right through the center of what we’ve come to call Web 2.0.
Here’s a link to the full piece, headlined “After the Gold Rush.” Yes, we were saying that the Web gold rush was behind us. In 1996.
- Kevin Kelly appears to be blogging, and, unsurprisingly, in just a few posts he’s providing considerable food for thought. In this post, he describes his (successful) effort at creating a sort of desktop memento mori:
I decided to take the idea of numbered days seriously, and to revisit my earlier experience of counting down my remaining time on this lovely mortal plane. My hope was that a reckoning of my numbered days would help me account for how I spend each precious 24 hours, and to focus my attention and energy on those few tasks and projects I deem most important to me. Indeed, it might help me decide which ones are most important, which is the harder assignment.
- David Edelstein, my favorite film critic (I’m biased, as we’re old friends and former colleagues), has begun a blog called The Projectionist for New York magazine’s Web site:
Cyberspace being infinite, at Slate I had license to write between 250 and 2,500 words on a movie, and no digression was too digressive. Now, there’s the horror, the horror of eliminating whole paragraphs to fit the page — in addition to changing, for example, “did not” to “didn’t” to pick up a line and removing anything in parentheses. I do not always want to use contractions, and I like parentheses. You never know where they might lead.
And who knows where this might lead? Movies connect with us on an unconscious level, and blogging is a pipeline to the id.
- Finally, Bill Wyman, who I worked with for many years at Salon, has a fine new blog on the entertainment industry — with a heavy emphasis on music — at Hitsville.
Jeff Jarvis reminds us that Moore’s Law is not: “Chips double in speed every 18 months.” Gordon Moore first predicted that the power of microprocessors (as measured by the number of transistors you could cram into a particular space on a chip) would double once every year; later he revised it to once every two years. Somehow — most likely, thanks to careless popular journalism — in the popular imagination this has become set in stone as an every-18-month prediction about chip speed.
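The gap between the two versions compounds quickly. As a rough sketch (the function name and time spans here are my own, just for illustration), here is what Moore's revised two-year doubling predicts over a decade, next to the popular 18-month misquote:

```python
# Moore's law, as revised by Gordon Moore himself: transistor counts
# double every two years. The popular misquote says every 18 months.
def growth_multiplier(years, doubling_period_years):
    """Multiplier on transistor count after `years` years,
    given a doubling period in years."""
    return 2 ** (years / doubling_period_years)

# Over one decade:
moore = growth_multiplier(10, 2.0)   # Moore's actual prediction: 2^5 = 32x
myth = growth_multiplier(10, 1.5)    # the 18-month misquote: 2^6.67 ~ 102x

print(round(moore))  # 32
print(round(myth))   # 102
```

Over just ten years, the careless version of the law promises roughly three times the progress the real one does, which is one reason the misquote matters.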
So I raise again the question of how we can better map content and corrections. How does Moore assure there is a definitive statement of his law? How do we know it comes from him? Once it’s acknowledged as correct, how do we notify those who got it wrong so they can correct it and start spreading the right meme? Truth is a game of whack-a-mole.
I’ve been playing that game for a decade. Here’s a Salon column from October 1997 that addresses it. Here’s a post from just this past spring.
Here are two pointers to good reference information on Moore’s Law: one from Greg Papadopoulos at Sun and the other from ExtremeTech.
If we all keep repeatedly linking to the good information maybe we can demonstrate that Gresham’s Law does not apply to information, and that good info can drive out bad.
But, you know, I won’t hold my breath.
[tags]jeff jarvis, moore’s law, gordon moore[/tags]
Not everyone who was spared in the Business 2.0 meltdown is going to Fortune.
Erick Schonfeld, who was an editor-at-large based in New York, has decided to end his 14-year career and jump to Michael Arrington’s influential blog, TechCrunch.
“It’s true,” said Schonfeld, “I’ve accepted a position to be co-editor at TechCrunch.”
“There was a ‘Schindler’s List’ [of Business 2.0 staffers who would be spared] at one point, but I took my name off it so I’d be eligible for a severance package,” he said.
Mr. Schonfeld, as someone who left the comforting rituals of the print world for the wilds of the Web many years ago, I can assure you that career continuation remains a possibility. But even at this late date, I guess, there remains the possibility that colleagues and peers will consider you to have fallen off the edge of the earth…
(Here’s Schonfeld’s post about his move.)
[tags]media, journalism, errors[/tags]
Because I am always behind reading my feeds (aren’t you?) I only just read this post by Doc Searls from a week ago. Coming from a slightly different angle, using his increasingly valuable VRM argument, Doc’s “Toward a New Ecology of Journalism” arrives at a similar place to where I ended up earlier this week in the Times Select discussion:
…The larger trend to watch over time is the inevitable decline in advertising support for journalistic work, and the growing need to find means for replacing that funding — or to face the fact that journalism will become largely an amateur calling, and to make the most of it.
This trend is hard to see. While rivers of advertising money flow away from old media and toward new ones, both the old and the new media crowds continue to assume that advertising money will flow forever. This is a mistake. Advertising remains an extremely inefficient and wasteful way for sellers to find buyers. I’m not saying advertising isn’t effective, by the way; just that massive inefficiency and waste have always been involved, and that this fact constitutes a problem we’ve long been waiting to solve, whether we know it or not.
Google has radically improved the advertising process, first by making advertising accountable (you pay only for click-throughs) and second by shifting advertising waste from ink and air time to pixels and server cycles. Yet even this success does not diminish the fact that advertising itself remains inefficient, wasteful and speculative. Even with advanced targeting and pay-per-click accountability, the ratio of ‘impressions’ to click-throughs still runs at lottery-odds levels.
…The result will be a combination of two things: 1) a new business model for much of journalism; or 2) no business model at all, because much of it will be done gratis, as its creators look for because effects — building reputations and making money because of one’s work, rather than with one’s work. Some bloggers, for example, have already experienced this….
Just don’t expect advertising to fund the new institutions in the way it funded the old.
I think this is right, though the long-term-ness of the vision will have most hard-headed business people smirking their disbelief as they point to corporate-media revenue numbers with long strings of zeroes dangling from them.
I also think that, frightening as it can look, this is ultimately a great opportunity for journalists. We have the chance to invent new ways to support our work — ways that don’t depend on the essential bait-and-switching of old-fashioned advertising.
We can also give up the contortions and distortions of the old-school “Chinese walls,” the barrier erected between the journalists who create the news reports that have value and the people who sell…other stuff that ends up paying the salaries of the journalists. In any case, I’ve long thought that this beloved wall — for all its ethical value, when it worked — had an insidious side-effect of allowing journalists to pretend that they weren’t working for businesses at all. This innocence (or naivete) has left many of them ill-equipped to do more than rend their garments as their industry undergoes slow-motion collapse.
[tags]vrm, doc searls, advertising, times select, future of journalism[/tags]
On the continuing subject of “just how hard / easy is it to create a Web application, anyway?”, Aaron Swartz offers some thoughts, centered on the launch of his new Jottit service. Swartz seems to be on the other side of the fence from the Joel Spolsky essay that I wrote about yesterday. (Although I bet there’s a lot these two agree on, as well.)
There are two ways I look at it. One is: It took us five months to do that? And the other is: We did that in only five months?
When you look at what the site does, it seems pretty simple. It has few features, no complex algorithms, little gee-whiz gadgetry. It just takes your text and puts it on the Web. And considering how often I do that every day, it seems a bit odd that it took so long to create yet another way. And then I check the todo list.
As I’ve said, this is a site I wanted to get every little detail right on. And when you start sweating the small stuff, it’s frankly incredible just how much of it there is. Even our trivial site is made up of over two dozen different screens. Each one of those screens has to be designed to look and work just right on a wide variety of browsers, with a wide variety of text in them.
And that’s just making things look good — making them work right is much harder…
Read the whole thing, and then recall it the next time someone tells you how simple it is to throw up a Web 2.0 site. Of course, Swartz is avowedly trying to “get every little detail right.” I gather he is not a Big Ball of Mud kind of guy.
[tags]aaron swartz, jottit, web 2.0, software development, web applications[/tags]
It feels like only yesterday I was staring in disbelief at the first hardcover copies of Dreaming in Code, but now we’re getting the paperback edition ready (for release in early 2008). I’d always wanted the chance to write a new postscript to the book, bringing the Chandler story up to date. The timing turned out to be fortuitous: the Open Source Applications Foundation released what they’re calling the Preview edition of Chandler last week.
I wrote a little about the saga of Chandler Preview back in January, when the OSAF team hoped to have a release out in April. As that date slipped steadily, I glanced at the calendar nervously, because I knew that sooner or later my publisher would have to close the door on any additions to the paperback. But the timing worked out: OSAF got its Preview out just in time for me to see and use it before I wrote up the new material.
For those of you who have been following the work on Chandler, Preview is what OSAF formerly called Chandler 0.7. After 0.6 shipped near the end of 2005 Mitch Kapor and the OSAF developers decided that they would plan the next big release to be a fully usable, if not feature-complete, sharable calendar and task manager with limited e-mail. You can download the result and try it out yourself.
Over the years Chandler has expanded into a small constellation of products — the desktop application, a server (formerly called Cosmo, now known as Chandler Hub), and a web interface to the server. OSAF now offers free accounts on its own Chandler Hub that you can use to sync your desktop and Web data.
On the one hand, of course, Chandler is way later than even seemed possible back in 2002 when it was first announced. How and why that occurred is the heart of my book. So much has happened on the Web and in the software industry since then that people ask, reasonably, what Chandler can possibly do that they’re not able to do already with Google Calendar or any of the other calendar/e-mail/task management offerings out there.
One big tech-industry story this week was Yahoo’s $350 million acquisition of Zimbra — an open-source Outlook replacement that started well after Chandler and delivered working software a lot sooner. Zimbra is impressive and full of nifty features, and its focus on solving a lot of the cellphone-and-handheld coordination issues for people was smart. But it didn’t try to introduce a new way of managing one’s information.
For better and worse, Chandler did. In this area, it aimed higher than Zimbra or most of the other competition; and its grand reach plainly exceeded its grasp. The Preview edition’s Dashboard provides a glimpse of the different way of organizing one’s work that Kapor and the Chandler designers propose. I don’t think it’s either as accessible for newcomers or as tractable for initiates as it needs to be. But neither is it simply an Outlook retread.
Anyone who has tried to organize the work of a small group with software knows that — even with Web 2.0 and Ajax and the best stuff we can throw at the problem in 2007 — we’ve only barely begun to leverage what computers can do in this area. Chandler deserves credit for acknowledging this and setting out to do better. Its setbacks can be chalked up in part to the choices and mistakes its developers made along their long road; but they are also a sign of just how tough the problem really is.
I’m still not ready to adopt Chandler for my own everyday use. But I’m not especially happy with what I am using, either. That means there’s still room for the sort of program Chandler has always been intended to be. The Preview release isn’t yet that program. But for the first time it’s moved close enough for anyone to play with, and see what it might someday become.
[tags]chandler, osaf, open source applications foundation[/tags]
Some experts suggest that human nature also just resists bad news. Dan Heath, coauthor of Made to Stick: Why Some Ideas Survive and Others Die, observed in an e-mail to me that columnists who inflict hard truths on readers
have to make deposits along with the withdrawals. Otherwise, if they cause us hurt twice a week, we instinctively look away, like smokers who don’t want to look at blackened-lung photos. Conversely, if Dave Barry took a stand on health care, I think it’d be fixed overnight … he’s made so many deposits and so few withdrawals that millions feel like they owe him something.
I imagine this principle applies even more heavily to bloggers.
[tags]washington monthly, bob herbert, dan heath[/tags]