Wordyard

Hand-forged posts since 2002


Taggers vs. spammers

January 17, 2005 by Scott Rosenberg

Technorati’s new tag feature is the talk of the moment, and rightly so. First we had the Semantic Web, with its notion of using RDF metadata to organize the universe. But RDF’s complexity seemed to daunt even the uber-geeks, and it’s still not easy to find an RDF-based project in wide usage outside of research environments. As the Semantic Web’s formalisms failed to catch on, the human-readable simplicity of RSS and the informal folksonomy approach of Flickr and Del.icio.us took off like gangbusters. Now Technorati is trying to pull together various islands of simple, bottom-up metatagging into one big information pool.

It’s fun and interesting and worth following. (David Weinberger’s comments are valuable.) My big doubt arises from my memory of a previous metadata experiment. As the Web took off in the mid-90s, many of you will recall, Web publishers were encouraged to tag their own pages with keyword metadata to help search engines organize them. We dutifully did so, but the whole thing got polluted very quickly by metatag hijackers — the metadata equivalent of spammers — who tried to boost the visibility of their pages by appending high-profile metatags (inevitably, most of them were X-rated) to every page in sight. (I’m sorry to say that even Salon, under the prodding of a long-departed marketing executive, briefly participated in this self-destructive game, though that’s now thankfully ancient history.) Things got ugly so fast that the search engines quickly started ignoring metatags; finally Google came along with a better, harder-to-game system, which today legions are still hard at work trying to undermine.

What’s not clear to me is how the 2005 version of keyword metatags can avoid this fate. The moment financial value starts to be associated with the new folksonomies, won’t the spammers come out of the woodwork? If they can debase something as simple and seemingly non-commercial as blog comments, they can debase anything.

In pessimistic moments, I sometimes think that every online enterprise must sooner or later sink into the spamosphere. When I’m feeling sunnier, I simply conclude that any networked technology designed to be open enough to harness contributions from multitudes will inevitably also be open to spam-style manipulation, and that this struggle — what my colleague Andrew Leonard long ago labeled as “the techno-dialectic” — is simply as open-ended as life itself. The trick is to enjoy those parts of the cycle where legitimate users have gained a lap or two on the forces of spam evil. Now seems to be one of them.

POSTSCRIPT: After writing this, I see Technorati’s Dave Sifry offers some arguments for why tagging might be less prone to spam pollution than meta-keywords for Web pages. I hope he’s right…

Filed Under: Blogging, Technology

More core

January 13, 2005 by Scott Rosenberg

Thanks for all the thoughtful comments on my post about software keeping up with multi-core processing.

Jon Udell posts some more on the subject in response to a Register piece by Shahin Khan and a comment by Patrick Logan.

Systems with 7000 CPUs? Do we then need 7000 heatsinks? Or do we consider it a feature and throw away our heaters? (I know, he’s talking about a “miniaturized big-iron system” for users to share — but the hardware advances that go into the servers first usually end up on the desktop a couple of years later.)

Filed Under: Software, Technology

Multi-core competency

January 10, 2005 by Scott Rosenberg

Fascinating piece by Herb Sutter, The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software, says that, with processor clock speeds plateauing short of 4 GHz (even as Moore’s Law keeps delivering more transistors), and the processor universe moving to “multi-core” designs to squeeze better performance from chips, software developers are going to have to learn a whole new ballgame.

Predictions that Moore’s Law is going to hit a wall have regularly proven mistaken over the past decade or two, but that doesn’t mean that this time they’re wrong too, and the news from Intel et al. over the past year suggests that the stall in processor-speed increases is real. So the hardware firms’ “multi-core” plan means that the next generation of processor speedsters will try to gain their oomph not by running one processor’s queue of instructions faster — that’s become tough as higher speeds have meant more heat, more power use, and more energy leakage (all, obviously, connected phenomena) — but rather by running multiple queues.
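
To make that concrete, here is a toy sketch in Python (not from Sutter’s piece; the workload is invented): the same CPU-bound chore run first through a single instruction queue, then handed to however many cores the machine has. The second version is only faster because the software was written to split the job up.

  import multiprocessing as mp
  import time

  def busy_work(n):
      """A deliberately CPU-bound chore: sum of squares up to n."""
      return sum(i * i for i in range(n))

  if __name__ == "__main__":
      jobs = [2_000_000] * 8                     # eight identical chunks of work

      start = time.perf_counter()
      serial = [busy_work(n) for n in jobs]      # one instruction queue, one core
      print("one core:  %.2fs" % (time.perf_counter() - start))

      start = time.perf_counter()
      with mp.Pool() as pool:                    # one worker process per core
          parallel = pool.map(busy_work, jobs)
      print("all cores: %.2fs" % (time.perf_counter() - start))

      assert serial == parallel                  # same answers, different speed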

In layman’s terms: If your corner store experiences huge growth in customer volume, it can keep its one cashier working harder and faster, but only up to a point. Once that person hits his limit, the only way you can move more customers out the door faster is by adding a second register. (Unless you completely change the rules, by, say, asking the customers to check themselves out — in this comparison, the technology equivalent of “invent a new processor paradigm” to bust open the Moore’s Law logjam once more.)

In my everyday example, the “coordination cost” is fairly low — you just have to assume that the customers will figure out how to organize themselves into two separate lines. Or maybe if your store’s set up the right way you can have one line feed both registers. To adapt software to the multicore universe, though, Sutter’s analysis suggests, the costs are more complex, and programmers need to get good at thinking about a new set of problems — otherwise software won’t be able to take advantage of the new chips, and programs designed by developers who don’t really understand the new world will fall into new kinds of traps like “races” and “deadlocks.” Sutter writes that “The vast majority of programmers today don’t grok concurrency, just as the vast majority of programmers 15 years ago didn’t yet grok objects.” So maybe there’ll be work for programmers after all!
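
Here is about the smallest “race” you can cook up, sketched in Python (again, a toy illustration, not anything from Sutter’s article): several threads do a read-modify-write on a shared counter without a lock, and updates silently vanish. The sleep call just widens the window so the lost updates show up reliably.

  import threading
  import time

  counter = 0
  lock = threading.Lock()

  def unsafe_add(iterations):
      global counter
      for _ in range(iterations):
          tmp = counter        # read shared state
          time.sleep(0)        # yield; another thread may run right here
          counter = tmp + 1    # write back, possibly clobbering that thread's update

  def safe_add(iterations):
      global counter
      for _ in range(iterations):
          with lock:           # the fix: make the read-modify-write atomic
              counter += 1

  def run(target, iterations=1000, threads=4):
      global counter
      counter = 0
      workers = [threading.Thread(target=target, args=(iterations,))
                 for _ in range(threads)]
      for w in workers:
          w.start()
      for w in workers:
          w.join()
      return counter

  print("without lock:", run(unsafe_add), "(should be 4000)")
  print("with lock:   ", run(safe_add), "(should be 4000)")

A deadlock is the mirror-image trap: two threads each holding a lock the other one needs, both waiting forever.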

Meanwhile, when Intel decides that multi-core is what the public must buy, look for it to push software vendors to rewrite popular applications in new versions marketed under whatever ad-friendly moniker the new multi-core architecture is festooned with. (We went through this with MMX in the mid-’90s and again, on a smaller scale, with Centrino.) “Multi-core” and “hyperthreading” are sexless technical terms, so we can expect trademarks like “Maxium” or “CoreSwarm” and slogans like “Two is better than one!” or “The Power of Many” (no, wait, that’s taken).

The typical user will say, “Why do I need this stuff? My word processor is fast enough and my Web pages load fine.” But within three years the new architecture will be standard anyway, and within ten years the world will actually find something to do with the new processor power — like, say, distribute the work of 23 million video mashup artists simultaneously to your desktop, then catalog them and re-edit them according to your preferences on the fly! And the Silicon Valley cycle will grind forward.

Filed Under: Software, Technology

The folk tag game

January 10, 2005 by Scott Rosenberg

There’s a useful and engaging discussion unfolding about “folksonomies” — emergent, user-shaped taxonomies of metadata like those in Flickr and Delicious (Adam Mathes’ thorough and detailed paper is here, Lou Rosenfeld offers measured dissent here, Clay Shirky fires back here). This topic reminds me of discussions we had back in 1999 and 2000, when we were building the Salon Directory.

We needed a tagging scheme for Salon articles, and some of the software developers felt that we should just generate a list of categories and build drop-down menus into the content management system. We had hired a smart consultant who argued that we should instead just let our editors add tags to stories in a free-form way, and allow the resulting categories to shape the “back catalog” of stories. As long as we occasionally did some gardening of the resulting keyword list — combining duplicate categories and handling complex issues (which “president bush”, exactly?) as they arose — we’d have a flexible, expandable schema naturally emerging from our daily work flow.
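
In practice the workflow amounts to something like this toy sketch (Python, with invented story IDs and an invented merge table): free-form tags go in, and an occasional gardening pass folds the variants together.

  from collections import defaultdict

  # Free-form tags as editors typed them (invented examples).
  stories = {
      "story-101": ["George W. Bush", "iraq", "Foreign Policy"],
      "story-102": ["president bush", "Iraq"],
      "story-103": ["Bush, George W.", "economy"],
  }

  # Hand-maintained merge table, grown whenever duplicates turn up.
  canonical = {
      "president bush": "George W. Bush",
      "bush, george w.": "George W. Bush",
      "iraq": "Iraq",
      "foreign policy": "Foreign Policy",
      "economy": "Economy",
  }

  def garden(tag):
      """Fold a free-form tag into its canonical keyword, if one exists."""
      return canonical.get(tag.strip().lower(), tag.strip())

  index = defaultdict(set)    # keyword -> stories: the emergent back catalog
  for story, tags in stories.items():
      for tag in tags:
          index[garden(tag)].add(story)

  for keyword, hits in sorted(index.items()):
      print(keyword, "->", sorted(hits))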

Our consultant was plainly right, and whatever problems the Salon Directory has had over the years have been more the result of limited software development resources on our part than of any fundamental mistakes in its conception. So within the confines of the Salon staff we had our own little “folksonomy” growing.

The biggest problems have not been those of organization, classification or structure but the simpler ones of time and effort. We made it relatively easy on our staff to add keywords, and some are added automatically, but it’s still a constant struggle to make sure every story is well keyworded. Some editors are more conscientious than others; all are on deadline much of the time; and the pressures of an “all-the-time” publishing schedule mean that today’s good intention of going back and fixing yesterday’s metadata failure usually falls prey to the demands of tomorrow’s stories.

I think this is what Shirky is getting at when he talks about how expensive it is to “build, maintain and enforce a controlled vocabulary.” Here’s in-the-field evidence that it’s not so cheap or easy to “build, maintain and enforce” even a within-the-firewall folksonomy. And so, much as I love the approaches of Flickr and Delicious, I also worry that the value of the tagging ecosystems emerging on those services will grow for a while and then, sadly, decline. Early adopters are enthusiastic and willing to take the time to tag; as the services grow, people are less likely to devote that time and care.

Which of course does not mean that these aren’t great projects — I agree with Shirky that they are far better than the alternative because the alternative, most often, is nothing at all. But when Rosenfeld and others wonder about the scalability of folksonomies, I think the issue may be less the scale of individual tags (50 billion “cat” photos!) than the scale of human enthusiasm for doing the slog-work of classification. Geeks love tidying up their personal dataspaces because, obviously, they’re geeks. For the rest of the world, my hunch is that — even when they’re only classifying the tiny sliver of stuff that’s their own — most people would rather do almost anything else.

Filed Under: Salon, Technology

iPod fascism

January 9, 2005 by Scott Rosenberg

There, I’ve got your attention now!

Like so many of us, I love my iPod. It liberated my digital music collection from the desktop and moved it out to my BART commute and into my running routine; someday I’m sure it will plug into my car (but I wouldn’t buy a BMW just to do so even if I could afford to). It looks great and sounds far better than I would ever have expected. I can forgive my iPod its flaws (the well-known battery problems, the fact that my early-model version’s scroll wheel has now lost its bearings a bit so that the volume drifts unless I lock out the controls); nothing is perfect.

But the iPod’s roach-motel design — songs check in but they can’t check out! — is a small outrage, a deliberate crippling of a natural technical capacity for the sake of a legal-commercial agenda that runs directly counter to the interests of the device’s users.

Here’s what I mean. I run Windows for my day-to-day affairs (obligatory bow to the fact that I lived on a Mac for a decade until Apple dropped the operating-system ball in the mid-90s and my Macs started eating my work), and when I first bought my iPod two and a half years ago, there was no sanctioned “Windows version.” I bought a simple utility called Xplay that allowed me to connect my Windows 2000 filesystem to the iPod. It treats the iPod as what it is: a portable external hard disk. Put music on; take music off. And that’s the way I’ve grown used to thinking of my iPod.

My wife works on a Mac, and I got her an iPod recently as a birthday gift. She doesn’t have any digital music on her computer, and I do, so I filled up her iPod with stuff I thought she’d like. These are, I should be clear, all legally obtained MP3 files, most of them ripped from my own CD collection or purchased (as non-DRMed MP3s) from EMusic. A decade ago I shared music with her by making her mix tapes; today, I copy files to her iPod.

You iPod veterans know where I’m headed here. As far as I can tell — and I freely admit that I’m no OSX expert, so if I’m wrong, correct me! — there is no simple way to get that music off her iPod and onto her Mac. Yes, I can download one or another of a variety of little renegade utilities that Apple has been trying to stamp out (each time Apple upgrades iTunes it tweaks the code to break these programs). I tried a couple, but they were awkward and counter-intuitive.

The point is, I shouldn’t have to hack my way through the software jungle just to share music with my own spouse. As it is, if we sync the iPod to iTunes on her Mac, the files I’ve moved onto the portable device get overwritten. It’s clumsy and rude: the syncing should be two-way.
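
Here, roughly, is the behavior I mean, sketched naively in Python (hypothetical paths, and obviously not how iTunes works internally): copy whatever is missing on either side, in both directions, and never overwrite or delete anything.

  import shutil
  from pathlib import Path

  def two_way_sync(library: Path, device: Path) -> None:
      """Copy files missing on either side; never delete or overwrite."""
      for src, dst in ((library, device), (device, library)):
          for f in src.rglob("*.mp3"):
              target = dst / f.relative_to(src)
              if not target.exists():            # only fill in the gaps
                  target.parent.mkdir(parents=True, exist_ok=True)
                  shutil.copy2(f, target)

  # Hypothetical mount points:
  # two_way_sync(Path("~/Music").expanduser(), Path("/Volumes/IPOD/Music"))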

A hard drive is a read/write device; there is no technical or moral reason why the iTunes software should not be able to copy my files off an iPod and onto a Mac. But, apparently, in the world according to Apple — the particular deal-with-the-devil that Steve Jobs seems to have made in order to get the cooperation of the record companies in licensing songs to the iTunes music store — what we want to do, even though there is no conceivable ethical argument against it, is so wrong or dangerous that the technology has had to be deliberately broken in order to prevent it. The music industry is obviously afraid that a two-way-syncing iPod would become their worst file-trading nightmare. Memo to music execs: Your nightmare already exists. It is called the portable hard drive, and the college students who pass them around are your future customers.

So I’ll think twice before buying another iPod. Beautiful as the whole iPod/iTunes combo is in so many ways, it is flawed in a way that says to the user, “You are not in control here” — a message that directly contradicts the fundamental promise of personal digital technology. You’d expect precisely the opposite from Apple, which has long benefited from an image as the user’s champion; but the company’s dedication to usability broke down somewhere between the DRM scheme that limits your ability to use purchased iTunes music files and the roach-motel approach to the player’s hard-drive. The future only holds more, and worse, along these lines, as personal video recording becomes more commonplace.

Bonus links: Apple sues the media; users sue Apple. No one here is winning except the lawyers.

Filed Under: Technology

Tail gunning

January 3, 2005 by Scott Rosenberg

Wired editor Chris Anderson has started a good blog to follow up on his Long Tail essay and seed the ground for a book on the subject. Cory Doctorow takes Anderson to task for his “middle-of-the-road” stance on efforts to lock down intellectual property via increasingly desperate and continuingly futile technical schemes for digital rights management (DRM) — schemes that tip the balance between propertyholders and the public way too far.

Anderson is dead right in elucidating the way the Net economy restores market value to works that are not big hits. The story of the next few years will be one about whether that market in “long tail” intellectual goods (I wrote about its promise in October) thrives in the same open environment that allowed the Net itself to evolve and prosper — or shrivels under the furious weight of technical and legal efforts to squeeze every last dollar from every last little hair on the long tail. My money is on the former, happier outcome. But it won’t turn out that way without persistent and stubborn resistance — which we can thank Doctorow and the EFF for ringleading — to the “we control the horizontal, we control the vertical” paternalism and anti-consumerism of the DRM mafia.

(For a little example of what happens when rights holders hold too many cards, check out the sad saga of “Eyes on the Prize,” the documentary that is the “principal film account of the most important American social justice movement of the 20th century,” in a Stanford professor’s words from Wired News’ account. “Eyes on the Prize” can’t be publicly shown or distributed because “the filmmakers no longer have clearance rights to much of the archival footage used in the documentary.” You want your audiovisual history? Pay up first!)

Assuming the Long Tail isn’t clipped by DRMania, we face an ever-expanding banquet of media goods. The BBC sounds an alarm. We are coming face to face with the scourge of “digital obesity”:

  Gadget lovers are so hungry for digital data many are carrying the equivalent of 10 trucks full of paper in “weight”. Music, images, e-mails, and texts are being hoarded on mobiles, cameras, laptops and PDAs (Personal Digital Assistants), a Toshiba study found. It found that more than 60% kept 1,000 to 2,000 music files on their devices, making the UK “digitally fat”.

Or maybe not. The term is a ludicrous oversimplification and distortion; we keep all this stuff around precisely because we can now — because it doesn’t fill trucks, it fills infinitesimal chips and drives, and it’s easier to keep everything around than to worry about cleaning house. Carrying the stuff around? No problem. Finding it? Harder. Finding time to absorb it all? There’s our rub.

Obesity is simply the wrong metaphor. This post by Rajat Paharia hits closer to the mark:


I’m finding that the “digital photo effect” is starting to make its way into my music and video experiences as well. What’s the DPE? My ability to produce and acquire has far outstripped my ability to consume. Produce from my own digital camera. Acquire from friends, family, Flickr, etc. This has a couple of ramifications:

1. I feel behind all the time.
2. Because there is so much to consume, I don’t enjoy each individual photo as much as I did when they were physical prints. I click through fast.
3. Because of 1 and 2, sometimes I don’t even bother.

I first noticed this phenomenon back in the late ’80s, when I switched from buying music on vinyl to CDs, and noticed how quickly I stopped listening to an entire 50-60 minute CD if the first track or two didn’t grab me. Of course, this kind of impatience coincided with the speeding up of my professional life and my crossing the threshold into my 30s. Something tells me that the problems Paharia and I and perhaps you are facing in this realm of overload may not feel so dire to today’s teenagers and twenty-somethings, for whom this thick soup is a native muck.

Still, the “I feel behind all the time” phenomenon is real enough, as today’s RSS addicts know — and as indicated by the rising popularity among the geeknoscenti of David Allen’s “Getting Things Done” methodology, with its promise of liberation from uncomfortable behind feelings.

I’m not liberated yet. Behindness surrounds me on all sides. But finding stuff is getting easier. I’m slowly trying to teach myself the methodology that Doctorow has modeled for several years now: If you want to be able to find something in the future, don’t bury it in your files — blog about it, put it out on the Net, where Google will never lose it, and if for some reason you can’t find it, someone else will probably have picked it up and saved it for you.

So to hell with bookmarks, and long live the blogmark. Here’s a handful:

Lexis Nexis Alacarte: No longer the preserve of big-media newsrooms — now in handy personal-journalism size.

For years, I tuned my guitar with one of those little electronic tuners in a plastic box; but when they were two, my kids decided that it made a great toy and disembowelled it. Well, all that is solid melts into Net: Today you don’t need a physical object, all you need is a Net connection and a browser. Just Google “guitar tuner” for a bunch of options; I liked this one for its retro look.

Feel-good link of the day: First it was the beer and wine, now it’s spicy food! Curry may help block Alzheimer’s disease. (It’s the turmeric.)

Filed Under: Food for Thought, Media, Technology

Hyperlink hyperbole

December 23, 2004 by Scott Rosenberg

Jeremy Zawodny is scratching his head over an odd thread in the Slate/Washington Post coverage:


I’m catching up on e-mail as my flight is delayed in O’Hare and came across the following tidbit about Slate Magazine in the latest Edupage mailing:

“Although the magazine only recently achieved break-even status on revenue of about $6 million per year, Slate won a National Magazine Award for its editorial content, and mainstream news organizations frequently cite it. The publication is also given credit for shaping Web publishing and introducing the use of hyperlinks and Web logs.”

(Emphasis mine.)

Am I reading that right? Edupage wants me to believe that Slate is responsible for introducing hyperlinks to the world?

I’m having a very, very hard time believing that.

Am I alone?

No, Jeremy, you’re not alone. The source of this odd statement is almost certainly David Carr’s New York Times piece, which included the following passage: “Although Slate has never achieved steady profitability, it is credited with helping to shape Web publishing as well as pioneering the use of hyperlinks and Web logs.”

Carr’s “pioneering” was marginally closer to reality than Edupage’s feeble substitution of “introducing.” But neither is particularly correct.

I sincerely doubt anyone at Slate would have claimed to have introduced either hyperlinks or blogs to the world. Slate was in fact rather shy of linking for the longest time — in the early days, the links in each article were typically segregated in a little afterword section. As for blogs, Slate gave Mickey Kaus’s blog a home at a time when, quite possibly, only three people in the Washington Post newsroom knew what a blog was; but at the same time, blogs were already a widespread format, and widely known to the web-aware world.

Slate deserves tons of credit for many things; after a lot of false starts in the first few years, it became quite adept at devising creative Web-native formats for writers (like the e-mail exchanges). But “pioneering the use of hyperlinks and Web logs” is just not an accurate statement.

I imagine Carr meant to write something more like “The publication is also given credit for raising the profile of hyperlinks and blogs in the media and government circles that constitute some of its core readership.” Or if he didn’t, he should have.

Filed Under: Media, Technology

The browser war in the rear-view mirror

December 20, 2004 by Scott Rosenberg

Randall Stross’s piece on Firefox in the Sunday Times business section, with its comical quotes from a Microsoft spokesman who suggests that unhappy users buy themselves new computers, brought a little wisp of browser-war nostalgia to mind.

It’s undeniable that, today, if you want to protect your computing life and you run Windows, you’re insane to continue running basic Microsoft applications like Internet Explorer and Outlook. (Firefox and Thunderbird are great alternatives in the open source world. I’m still wedded to Opera and Eudora out of years-long habits. Opera does a great job of saving multiple open windows with multiple open tabs from session to session, even when you suffer a system freeze.) These programs function together in a variety of ways that Microsoft presented as good ideas at the time they were written. Hey, integration means everything works seamlessly, and everyone knows how highly the business world prizes the word “seamless.”

Today it is precisely the same integration — the way, for instance, that ActiveX controls and other code pass freely across the borders of these applications, allowing them to work together in potentially useful but hugely insecure ways — that makes IE and Outlook such free-fire zones for viruses and other mischief. (It’s certainly true that the Microsoft universe is targeted by virus authors because it’s where the most users are; but it’s also true that Microsoft’s products are sitting ducks in a way that its competitors in the Apple and open source worlds simply are not.) If you’re willing to turn on Microsoft’s auto-update to keep up with the operating system patches, and to abandon Outlook and IE for your day-to-day work, you can rest relatively easy. But you never know when some other application is calling on that “embedded browser functionality,” or when you’re using that Outlook code without even realizing it.

Stross is strangely mum on the antitrust background of these matters. It’s the ultimate, though not entirely unforeseen, irony of the Microsoft saga that the very integration-with-the-operating-system that enabled Microsoft to “cut off the air supply” of its Netscape competition is now looking more and more like the franchise’s Achilles heel. Microsoft fought a tedious, embarrassing and costly legal war with the government to defend its right to embed Web browser functionality in the heart of the operating system. “Our operating system is whatever we say it is! How dare government bureaucrats meddle with our technology!” was the company’s war cry.

Now it turns out that if Gates and company had paid a little more heed to the government they might have done their users, and their business, a favor. Microsoft’s tight browser/operating system integration helped spell Netscape’s corporate doom; today it is one of the biggest gaping holes in Windows security, and a legion of hostile viruses swarms through it.

Stross writes, “Stuck with code from a bygone era when the need for protection against bad guys was little considered, Microsoft cannot do much. It does not offer a new stand-alone version of Internet Explorer. Instead, the loyal customer must download and install the newest version of Service Pack 2. That, in turn, requires Windows XP. Those who have an earlier version of Windows are out of luck if they wish to stick with Internet Explorer.”

But it’s not quite that simple. Microsoft’s reluctance to invest in browser development has stemmed only partly from the kind of inertia that comes from having won a war in a previous generation (“The browser? We own that space, we don’t have to keep improving it”). Even more deeply, Microsoft has been reluctant to make the browser better — more reliable, more secure, more flexible as an interface for more kinds of applications — because its leaders understood very well what that would mean: The better the browser is, the less dependent people are on the operating system’s features — as today’s users of well-designed Web applications like Gmail, Flickr and Basecamp demonstrate every day. This is not where Microsoft wants to see the computing world go, so why, once it gained a stranglehold on the browser market, would it help the process along?

In other words, what happened once Microsoft left the courtroom was precisely and exactly what the government’s antitrust lawyers said would happen: Microsoft’s goal in integrating the browser was not to serve the public and the users, but to shut down further innovation and development. Netscape argued that Microsoft wanted to control browsers because it wanted to make sure they did not emerge as a platform for applications that would undermine Windows’ importance. Netscape, the record now shows, was right.

We lost three or four years of Internet time (from the collapse of the bubble to this year’s Renaissance of Web applications) thanks to Microsoft’s stonewalling and the Bush administration’s unwillingness to represent the public interest in this matter. The next time a worm comes crawling through your Windows, curse the Justice Department’s settlement — and go download Firefox.

Filed Under: Business, Technology

Ecco unchained

December 14, 2004 by Scott Rosenberg

Ecco Pro — the outliner/PIM that I have written about periodically and am still using today, despite the fact that it has been orphaned by its owners and not modified since 1997 or so — looks like it may be released as open source. (Thanks to Andrew Brown for the link.) Whether this means that the heart of Ecco will be transplanted by enterprising programmers into some newer, modern body — or just that Ecco devotees will have an opportunity to tweak and debug the trusty application — it’s wonderful news, if it actually happens.

Filed Under: Software, Technology

Google and the public good

December 14, 2004 by Scott Rosenberg

For those of us who are still consumers of those bundles of printed content known as books, the importance of today’s news of Google’s library deal is almost impossible to overstate. It’s just huge.

While the Web has represented an enormous leap in the availability of human knowledge and the ease of human communication, its status as a sort of modern-day Library of Alexandria remains suspect as long as nearly the entire corpus of pre-Web human knowledge stays locked away off-line between bound covers. “All human knowledge except what’s in books” is sort of like saying “All human music except what’s in scores.” There’s lots of good stuff there, but not the heart of things. Your Library of Alexandria is sort of a joke without, you know, the books.

Now Google, in partnership with some of the world’s leading university libraries (including Stanford and Harvard), is undertaking the vast — but not, as Brewster Kahle reminded us at Web 2.0, limitless — project of scanning, digitizing and rendering searchable the world of books.

Google’s leaders are demonstrating that their corporate mission statement — “to organize the world’s information and make it universally accessible and useful” — is not just empty words. If you’re serious about organizing the world’s information, you’d better have a plan for dealing with the legacy matter of the human species’ nearly three millennia of written material. So, simply, bravo for the ambition and know-how of a company that’s willing to say, “Sure, we can do it.”

Amazon’s “look inside the book” feature provides a limited subset of this sort of data. But where Amazon has seemed mostly interested in providing limited “browsability” as a marketing tool, Google has its eye on the more universal picture. And so the first books that will be fully searchable and readable through this new project are books that are old enough to be out of copyright. The public domain just got a lot more public. (And presumably, as John Battelle suggests, we’ll see a new business ecosystem spring up around providing print-on-demand physical copies of these newly digitized, previously unavailable public-domain texts.)

This is all such a Good Thing for the public itself that we may be inclined to overlook some of the more troubling aspects of the Google project. Google is making clear that, as it digitizes the holdings of university libraries, it’s handing the universities their own copies of the data, to do with as they please. But apparently the Google copies of this information will be made widely available in an advertising-supported model.

For the moment, that seems fine: Google’s approach to advertising is the least intrusive and most user-respectful you can find online today; if anyone can make advertising attractive and desirable, Google can.

But Google is a public company. The people leading it today will not be leading it forever. It’s not inconceivable that in some future downturn Google will find itself under pressure to “monetize” its trove of books more ruthlessly.

Today’s Google represents an extremely benign face of capitalism, and it may be that the only way to get a project of this magnitude done efficiently is in the private sector. But capitalism has its own dynamic, and ad-supported businesses tend to move in one direction — towards more and more aggressive advertising.

Since we are, after all, talking about digitizing the entire body of published human knowledge, I can’t help thinking that a public-sector effort — whether government-backed or non-profit or both — is more likely to serve the long-term public good. I know that’s an unfashionable position in this market-driven era. It’s also an unrealistic one given the current U.S. government’s priorities.

But public investment has a pretty enviable track record: Think of the public goods that Americans enjoy today because the government chose to seed them and ensure their universality — from the still-essential Social Security program to the interstate highway system to the Internet itself. In an ideal world, it seems to me, Google would be a technology contractor for an institution like the Library of Congress. I’d rather see the company that builds the tools of access to information be an enabler of universal access than a gatekeeper or toll-taker.

The public has a big interest in making sure that no one business has a chokehold on the flow of human knowledge. As long as Google’s amazing project puts more knowledge in more hands and heads, who could object? But in this area, taking the long view is not just smart — it’s ethically essential. So as details of Google’s project emerge, it will be important not just to rely on Google’s assurances but to keep an eye out for public guarantees of access, freedom of expression and limits to censorship.

Filed Under: Media, Technology
