Wordyard

Hand-forged posts since 2002

Most useful hardware tool no one tells you about

August 24, 2006 by Scott Rosenberg

Today I sing the praises of my IDE to USB adapter kit, a generic bit of hardware that makes life easier and cheaper for those of us in the PC world who have accumulated spare bits of hardware over the years.

This little tool allows you to take any IDE device (mostly hard drives and optical CD/DVD drives) designed for internal use in a PC and run it externally. It’s just a power supply with the right sort of four-pins-in-a-line connector for internal drives, and a cable with a standard IDE connector on one end and a USB connector on the other. Presto! That old hard drive you’ve still got lying around from your generic PC that died four years ago can now be used for backup or moving files or whatever you need. You don’t have to pay the considerable premium for an external hard drive.

This thing came to my rescue this week in another way: I use an ultralight Thinkpad laptop without a CD drive. Every now and then I use an external drive to load software. My drive is an old one; it plugs in via a PCMCIA card. Somehow, the card is now missing. The drive is useless. How do I load software? Ahh: just pop out an old internal CD drive from a dead computer and use the IDE to USB kit. Problem solved.

These things start at about $15. (Here are some from NewEgg.) You can spend more if you also want an enclosure or you want something that will work with newer SATA hard drives. For long-term use, an enclosure probably makes sense. But if you do occasional backups on loose hard drives, and you’re reasonably careful about static and handling, then the kit is all you need.

There are some gadgets you hear about because someone stands to make a big profit. This is one that, I think, you may not have heard about because, really, it just cuts into somebody’s profits.
[tags]tips, lifehacks, hardware, adapters, howto[/tags]

Filed Under: Technology

Kiko’s calendar auction and the old “incremental change” song

August 18, 2006 by Scott Rosenberg

Kiko is an Ajax-style Web-based calendar service. (It’s also the title of a fantastic album by Los Lobos.) Kiko’s developers, only a few months after unveiling it, have put it up for sale on eBay for $50,000. So far, despite wide linkage, no takers.

Robert Scoble says this presages a Web 2.0 shakeout: “There are simply too many companies chasing too few users…. Getting the cool kids to try your technology isn’t the same thing as having a long-term business proposition.”

Could be. With Google’s new calendar gobbling up mindshare in an already crowded space (haven’t you heard that “Google is the New Microsoft”?), Kiko didn’t seem to have much chance.

The problem is that, unlike photo-sharing or video-sharing or link-listing or news-rating, activities that have provided grist for successful Web 2.0 mills, calendaring doesn’t easily lend itself to large-scale social interaction and wisdom-of-crowds behavior. Calendars are either personal or apply to small, well-defined workgroups or personal circles. The piece of calendaring that’s most amenable to wide Web networking — the listing and sharing of information about public events — is already being pursued by several ambitious companies (Eventful, Zvents, etc.).

But even if calendars aren’t going to fuel the next Web 2.0 wunder-company, we still need them. The future for calendar software, as Scott Mace keeps reminding us, is more about interoperability than about snazzy Ajax features. Making sophisticated calendar-sharing work, and multi-authoring possible, and import-export painless — these are the things that will matter in this category (as the folks working on Chandler whose work I followed for Dreaming in Code understand so well).

Meanwhile, Justin Kan, a Kiko founder, lists his own set of lessons from the experience. They include the following: “Build incrementally. We tried to build the ultimate AJAX calendar all at once. It took a long time. We could have done it piece by piece. Nuff said.”

But it’s not nuff said, it’s never said ’nuff, it needs to be said over and over until you’re blue in the face and all your coworkers hate you and think you’re a monomaniac who has gotten this word “incremental” implanted in his neurons like some sort of development-process idée fixe. It is an important but counterintuitive insight. It’s not how businesspeople want things to be. It’s not how developers are used to thinking. So if you actually understand that an incremental process for building an ambitious program or Web site is the best approach, you will have to be insufferable about it.

My friend Josh Kornbluth (who recently recounted some ancient tales from our collaboration 20 years ago on a low-rent radio drama show in the Boston area) once wrote a song titled “Incremental Change.” It was a cappella, it lasted all of 25 seconds and its entire lyric consisted of the following:

I think incremental change is a good thing
I think incremental change is a good thing
Incremental change: good thing!

Software development was almost certainly not on his mind at the time of writing. But the sentiment holds across a surprisingly broad range of fields.

POSTSCRIPT: Paul Graham, whose Y Combinator funded Kiko, says the company spent so little money the failure’s no big deal: “This is not an expensive, acrimonious flameout like used to happen during the Bubble. They tried hard; they made something good; they just happened to get hit by a stray bullet.”
[tags]web 2.0, calendars, software development[/tags]

Filed Under: Business, Dreaming in Code, Personal, Software, Technology

Firefox leak plugged — open browser tabs spared

August 14, 2006 by Scott Rosenberg

Opera is my primary browser, but I increasingly use Firefox because some Ajax-y sites work better in it, and because sometimes (for testing and such) you need multiple browsers. Anyway, my Firefox, I found, kept getting awfully slow, and sometimes would seem to put a drag on my system. That didn’t make sense.

It turned out not to be the fault of the browser itself but of a memory leak in a plugin called Session Saver that I’d installed so I could shut down and restart Firefox with the same set of open browser tabs. Thanks to the invaluable Lifehacker I discovered that (a) Session Saver was the culprit, and (b) I could replace it with a different plugin, Tab Mix Plus, that offered more options and no memory leak.

Of course, Opera is the original session-saving champion. Since Opera stabilized this feature several years ago I have never lost my open tab set to a program crash or system freeze. And I’m afraid my work habits involve some pretty serious open tabbing. At the moment, for instance, I’ve got seven separate Opera windows with a total of 79 open tabs. The open tabs represent my “to read” queue, my “maybe I’ll blog about this” pile, and sometimes just my “gee, forgot to close that search” residue. In other words, the current browser session is my work, in progress. Losing it is not an option. Thankfully, I never need to think about that any more.
[tags]browsers, opera, firefox, open tabs[/tags]

Filed Under: Software, Technology

My ancient cellphone: All that is clunky eventually becomes cool again

August 10, 2006 by Scott Rosenberg

I’m not a serious cellphone user; it’s basically a necessity for certain mundane family management tasks, and that’s all I use it for. Email, that’s my bag. (Yes, I know that tags me as the fortysomething I am.)

So I’m still using this fairly clunky old Motorola V-60 that Verizon gave me way back when. No camera, not even color, but hey, it works.

And now, it turns out, this very phone has made Kevin Kelly’s excellent Cool Tools site:

Motorola V60

If 1960s cars can be fashionable in Hollywood, surely late-1990s phones must stage a comeback at some point. When people look with surprise at my “piece of junk,” I tell them I’m just ahead of my time.

Filed Under: Personal, Technology

Size of the blogosphere: 50 million or bust

August 9, 2006 by Scott Rosenberg

Kevin Burton questioned the logic behind Dave Sifry’s latest report on the size of the blogosphere, which is based on Technorati’s feed index, and now there’s a fascinating discussion going on in response to his post. Burton questions Sifry’s claim that there are 50 million blogs. But look over at Sifry’s report and you see that he’s careful enough to write, “On July 31, 2006, Technorati tracked its 50 millionth blog.”

So we’re back in 1997 or so when search sites would report on the exploding number of Web sites they had in their indexes and those of us in the industry actually building large sites would think, hmmm, things are growing like gangbusters, but are we really going to count every abandoned Geocities page as a bona fide Web site?

There’s no right or wrong here. What you count depends on why you’re counting it. As Kevin Marks points out to Burton, an “abandoned blog” — one that’s no longer being updated — isn’t necessarily a worthless blog. Sometimes, for instance, people post for a discrete period of time to record an event, then move on. On the other hand, that 50 million number probably includes the test blog I set up one day over on Blogger just to learn how the system works, and, you know, there’s nothing to see there. I assume that despite Technorati’s best efforts some significant portion of that 50 million number also includes spamblogs (“splogs”) and the like. Sifry discusses this at length (he says that over 70% of the pings his service receives are from “known spam sources” — sheesh!).

What I find interesting is the sense I get that people are crestfallen at the notion that, gee, there might be only, say, a couple million really active bloggers, and maybe twice that number of occasionally active bloggers. In the history of media and human expression, a couple million people regularly and actively publishing their writing to a globally accessible network is extraordinary, unprecedented and likely to have vast consequences we can’t foresee.

In other words, if Burton is right and the growth in the actual, active, committed blogosphere is linear rather than exponential, it doesn’t really matter. There’s still a revolution going on.

Filed Under: Blogging, Media, Technology

Business Week followup: Valuing assets

August 7, 2006 by Scott Rosenberg

Following up on Business Week’s bubble-logic cover story on Digg, Techdirt offers a good roundup, suggesting that the $60 million figure was the last-minute work of “higher-up” editors, and noting that it does not appear in the text of the print edition, only on the Web (suggesting a late edit).

That’s certainly possible. When I was Salon’s technology editor I had to do my share of reality-checking the direction that “higher-up” editors wanted to take when promoting my stories on the cover. If this is what happened at Business Week, though, it’s really no defense; it’s a sign of organizational dysfunction. Either the “not-so-higher-up” editor of the piece didn’t object to the misleading headline, in which case he is complicit, or he did object and was overruled by “higher-ups” who showed they don’t trust their own people. Neither scenario is to the publication’s credit.

Then there’s a half-hearted effort on the part of Business Week blogger Stephen Baker to defend the $60-million-out-of-a-hat headline itself. My mistaken idea, Baker writes, “shared by many, is that money is not ‘made’ until an asset is sold in one marketplace or another. But if you look at the rankings of everything from executive compensation to individual wealth, they’re based on valuations of diverse assets. Many are open to question and just as tenuous as the valuation of this New Jersey bubble-inflated split-level I’m typing in at this very moment.”

By that logic, then, Business Week is abandoning any attempt at mooring valuation to the reality of market exchange. Companies are worth whatever anyone says they’re worth so long as there is some fig-leaf of math involved. I can say that every visitor to my site is worth X, multiply X by my traffic, and — hooray! — I’ve “made” that amount of money. Why? Because I — excuse me, the phrase from the BW article is “people in the know” — said so. This is how the original Web bubble got blown up, and that’s why so many people who lived through it are appalled at Business Week’s gaffe.

Sober-minded businesspeople, analysts and journalists rely on more stringent standards of valuation. Baker and I might each own a “diverse” asset in our homes, but the bank will give us a loan based on that ownership, because there is a reasonable market for homes, even though it may fluctuate greatly. Stock options vary in actual value depending on the ups and downs of a stock price, and executives’ opportunity to exercise them is constrained in various ways, but they bear some relationship to an active equity market, so they’re not entirely vaporous. But an ownership stake in a small private company that’s had great success building Web traffic but little or no record of profitability doesn’t meet the “collateral” test; it’s certainly not something you can count on to buy a house or send kids to college (I don’t think Rose is worrying about that one yet).

Digg is a great site and a great service, and someday it may be worth a big pile of actual dollars, and many of those dollars may end up in Kevin Rose’s pocket. But until then he has simply not “made” millions of dollars. Until then, his share of the company is an asset, certainly, but not one anyone should hang a dollar figure on, and Business Week should never have tried, or taken a wild speculative guess and turned it into a sure-thing headline.

Now the magazine can either publish a correction, which I doubt it will ever do, or live with the diminished credibility it deserves. Ed Cone agrees: “BusinessWeek’s best bet is to say, ‘We goofed. We wrote an interesting article about an interesting subject, but we made a pretty bad mistake in the way we headlined the story.’ ” Let’s see if they really understand anything about “Web 2.0.”

UPDATE: At a different Business Week blog, Rob Hof takes a more nuanced stance: “Now, reasonable minds can disagree on the meaning of ‘made.’ …But unlike my colleague Steve Baker and some others on the magazine, I think the fact that a lot of intelligent people read ‘made’ to mean something different [from] what the magazine intended to convey is prima facie evidence that the cover language didn’t hit the mark… We hear the criticisms, even if not everyone here agrees with them. I also know that, contrary to the beliefs of some critics, the words on the cover are something that folks here take very seriously and debate vociferously.” Hof’s entry is a good example of how someone blogging from within an institution can tactfully criticize it without getting (figuratively) beheaded.
[tags]Digg, Web 2.0, bubble, businessweek[/tags]

Filed Under: Business, Media, Technology

Business Week on Digg: Smells like bubble spirit

August 4, 2006 by Scott Rosenberg

Kevin Rose on Business Week cover

Late last night I clicked on a link to the new Business Week cover story about Digg and its founder, Kevin Rose, and read the cover’s headline: “How this kid made $60 million in 18 months.” Gee, I thought, bleary-eyed, I guess I missed the story about how they sold the company. Good for them.

This morning I started reading the piece, and, after scanning quickly through it hunting for the graph about how Digg had sold out and to whom, realized that the $60 million figure was not the proceeds from a sale, and not even a valuation that a prospective buyer had offered, but an almost entirely fictional number.

Was it something that some irresponsible coverline writer had slapped on the piece, that the responsible writer was horrified to see? I don’t think so. The second paragraph of the article, referring to a recent redesign of the Digg site, reads: “At 29, Rose was on his way either to a cool $60 million or to total failure.”

The $60 million number is never explained in the piece; the only real numbers are contained in this sentence: “So far, Digg is breaking even on an estimated $3 million annually in revenues. Nonetheless, people in the know say Digg is easily worth $200 million.” Elsewhere the article says Rose owns 30 to 40 percent of the company. Hence, $60 million.

There is a word for this kind of business journalism, and it is: awful. The reader has no idea who these “people in the know” are; they could easily be people associated with the company who have an interest in inflating its worth.

There’s no question that Digg is a successful site that might be on its way to building a real business. It might be worth more than $200 million someday. I’m not slighting them in any way; I’ve been visiting the site almost since it started. But plastering imaginary dollar figures on its forehead is not the way to help Rose and his colleagues build a real business. “On paper” means just that. “People in the know” can say whatever they want, but your business, like your house, is only worth what someone is actually willing to pay for it.

The Business Week piece itself acknowledges this in places: “This time around, the entrepreneurs worry that, within a moment, the money — and their projects — could vanish… it’s still only paper wealth, which [Rose] and many others have learned can evaporate.”

Right. So why is Business Week insisting that Rose has made $60 million? If this callow 29-year-old understands that it’s “only paper,” why are the editors of one of our best-known business journals being so stupid about it?

Techdirt calls the article “the ultimate Web 2.0 hype piece,” but I think it’s not even that up to date; it’s the same old dotcom-bubble piece dragged from the attic and retrofitted for today’s Web. It is just as mindless about the nature and meaning of company valuations as the dumbest purchaser of TheGlobe.com IPO shares was.

POSTSCRIPT: Jason Fried of 37Signals comes at Business Week from the perspective of a successful entrepreneur who is also a member of the tech industry’s reality-based community.
[tags]digg, web2.0, bubble[/tags]

Filed Under: Business, Media, Technology

The Technorati dance

August 3, 2006 by Scott Rosenberg

I have been using Technorati since it was running on servers powered by Dave Sifry’s hamsters, and it remains an essential part of my blogging existence. The company recently rolled out a spiffy new design for its service. Hooray.

But: Why are the results still so…unstable? Since I am the perpetrator of a recent blog-address move I’ve been trying to keep an eye on how many, and which, other bloggers have updated the address that they link to me with. (I know it’s a pain; I’ve been guilty of plenty of blogroll-rot myself, though it’s an easier job keeping it up to date now that I’ve outsourced it to Bloglines’ widget.)

What I’m finding is that, depending on the hour of the day, sometimes I will get a list of results from T-rati that’s reasonably up to date and trustworthy, and sometimes I will get a list that’s just wacky — full of results that just don’t seem to have anything to do with my blog, no links evident, no overlapping subject matter, nothing. Furthermore, the results that I get from the T-rati site sometimes differ significantly from those that turn up in the RSS feed that represents that search.

Is this fallout from the monumental war I know Technorati must be waging on the depredations of blog-spammers and spam-blogs? Is it a symptom of some general structural problem with the service’s design, or just side-effects of the company’s constant scaling-up efforts to keep pace with the blogosphere’s exponential growth?

Or is there some deeper logical pattern hidden within the seemingly irrelevant pages T-rati is claiming point to my blog — some guy’s Nirvana playlist; a non-English-language page with a photo of Andrea Bocelli singing “Besame Mucho”; Debby’s World’s list of “34 things worth knowing” — and if only I could decipher that pattern, I could achieve perfect bliss, or at least a more rarefied Technorati ranking?
[tags]technorati, blogging[/tags]

Filed Under: Blogging, Personal, Technology

Standish’s CHAOS Report and the software crisis

August 2, 2006 by Scott Rosenberg

Whenever there is an article about software failure, there is a quotation from the venerable CHAOS Report — a survey by a Massachusetts-based consultancy called the Standish Group, first conducted in 1994 and regularly updated since. The CHAOS Report presented dire statistics about the high failure rate of software projects: 31.1 percent of projects cancelled, 52.7 percent “challenged” (completed only way over budget and/or behind schedule), and only 16.2 percent deemed a success.

There aren’t a whole lot of other statistics out there on this topic, so the numbers from Standish get big play. I used them myself in my book proposal, and returned to the report as I researched the book, interested in finding out more about the methodology the researchers used — and also curious about what “CHAOS” actually stood for: Combinatorial Heuristic Algorithm for the Observation of Software? Combine Honnete Ober Advancer…?

Nope. As far as I could tell, CHAOS is an acronym for nothing at all. I tried to contact Standish for more information by telephone and e-mail, but they never responded. It wasn’t essential to my work — there’s only a handful of sentences on the subject in Dreaming in Code — so I didn’t push hard. I figured maybe this was the sort of consultancy that is only interested in paying customers.

In the August issue of Communications of the ACM, Robert Glass has a column about Standish and the CHAOS Report that suggests my failure to get a response from this organization was hardly unique.

Several researchers, interested in pursuing the origins of this key data, have contacted Standish and asked for a description of their research process, a summary of their latest findings, and in general a scholarly discussion of the validity of the findings. They raise those issues because most research studies conducted by academic and industry researchers arrive at data largely inconsistent with the Standish findings….
Repeatedly, those researchers who have queried Standish have been rebuffed in their quest….

Glass is a widely known and respected authority on the software development process (I read, and can recommend, his book Facts and Fallacies of Software Engineering as part of my book research), and the Communications of the ACM is the centerpiece journal of the computing field’s main professional organization. So maybe the Standish group will respond to the plea with which Glass closes his column:

it is important to note that all attempts to contact Standish about this issue, to get to the heart of this critical matter, have been unsuccessful. Here, in this column, I would like to renew that line of inquiry. Standish, please tell us whether the data we have all been quoting for more than a decade really means what some have been saying it means. It is too important a topic to have such a high degree of uncertainty associated with it.

Indeed. The Standish numbers are precisely the sort of statistic that journalists in need of background “facts” — and scholars, too, for that matter — will quote in an endless loop of repetition, like Newsweek’s infamous stats showing that thirtysomething women were more likely to be killed by terrorists than to find husbands. The loop keeps repeating until someone provides a definitive debunking — and even then it doesn’t always stop.

So it’s ironic but hardly surprising to find the same magazine that contains Glass’s complaint also featuring a cover story on “The Changing Software Engineering Paradigm” that parrots the Standish numbers for the umpteenth time.
[tags]dreaming in code, software development, software failures, chaos report[/tags]

Filed Under: Dreaming in Code, Software, Technology

Cloudy Vista

August 1, 2006 by Scott Rosenberg

Windows Vista may not be ready on time after all, says one knowledgeable observer (and another agrees).

But then, it may not matter, because who is going to buy this thing while the bugs are still being squashed? Not I, said the CTO. Not I, said the home user.
[tags]microsoft, vista[/tags]

Filed Under: Software, Technology
