Wordyard

Hand-forged posts since 2002

Saying everything in Albany

October 31, 2010 by Scott Rosenberg

A week ago Wednesday I traveled to Albany, N.Y., at the kind invitation of some professors at the College of St. Rose. (Thanks, Cailin Brown and Dan Nester!)

Turns out that Say Everything is being used as a text in a half-dozen classes at that school. As part of the college’s participation in the National Council of Teachers of English’s National Day on Writing, and also part of a cool writers’ series called Frequency North, St. Rose asked me to come talk about blogging and writing. Which I always love to do.

The poster for the talk, reproduced above, caused me to do a double take, which I’m sure was the point. I’m slow, sometimes, so it took me a minute before I registered the “hanging your laundry in public” concept. Nice work, and probably the one and only time my name will share a billboard with panties.

Anyway, I had a great day at St. Rose talking with students and faculty and chatting on the local public radio affiliate.

The college has posted a complete video of the talk. (Or here’s just a three-minute taste of the audio, with some optimistic observations on the concept of information overload.) Also, I worked from pretty extensive notes, and I’ve cleaned them up, filled them out a bit and posted them on a separate page. Here it is — Large Blocks of Uninterrupted Text: A Talk on Blogging and ‘Say Everything.’

This is a pretty extensive update on the blogging talk that I was giving back when Say Everything first came out. I start with the Onion, proceed to the death of culture, and discuss the rise of blogging just a bit. Then I use the remarkable saga of Joey DeVilla the Accordion Guy and his New Girl — a story that didn’t make it into Say Everything — as a way to discuss a whole series of critiques of blogging and online discourse along some familiar vectors: truth and trust; anonymity and civility; serendipity; narcissism; shallowness and substance; attention and overload.

Filed Under: Blogging, Events, Say Everything

E-book Links Oct. 18-29: Zimmer goes indie, Negroponte buries print, Nook goes color, Kindle goes on loan

October 29, 2010 by Scott Rosenberg

  • Carl Zimmer on “Brain Cuttings” and the Future of Books [Steve Silberman, NeuroTribes]: “I saw people eating up books with their Kindles and iPads. I looked at the numbers and realized that there’s a real ecosystem taking root. I saw other writers saying, ‘If I don’t have to deal with paper and glue and binding, I’ll just write something and sell it.’ There’s a lot of writing that we all do that could be read by more people.”
  • Will physical books be gone in five years? [CNN.com]: Nicholas Negroponte, founder of One Laptop per Child, said the physical book's days are numbered. "It will be in five years," said Negroponte. "The physical medium cannot be distributed to enough people. When you go to Africa, half a million people want books … you can't send the physical thing."
  • Barnes & Noble Updates Nook E-Reader [Wall Street Journal]: B&N's new $250 1-lb Nook uses Android, aims for niche between Kindle and iPad.
  • Amazon to Introduce Lending for Kindle [Jason Boog, GalleyCat]: “Later this year, we will be introducing lending for Kindle, a new feature that lets you loan your Kindle books to other Kindle device or Kindle app users.”
  • Ebook Go-To Guide [Eric Griffith, PC Magazine]: Useful overview of state of commercial ebook world, late 2010.
  • iPad Week: E-Books [Nicholas Jackson, The Atlantic]: "You can get a variety of e-book reader apps for your iPad, including Apple's iBooks, Amazon's Kindle, Barnes & Noble's eReader, and Lexcycle's Stanza. Here's the rub: Except for Stanza, each app is tied to one specific online bookstore."
  • Part Two of My TOC Frankfurt “Ignite” Session [Joe Wikert]: “What if we could turn this model upside down and enable students to resell their textbooks for more than what they paid? How? By including all their notes in them as e-textbooks…. What I’m suggesting is a reseller model where the student can package all their notes together with their version of the ebook and sell it at whatever price they feel is appropriate. The key here is to include the publisher and author in the revenue stream; neither of them share in the proceeds of the used book market today but there’s no reason they couldn’t in the future.”

Filed Under: Books, Links

What if the future of media is no “dominant players” at all?

October 28, 2010 by Scott Rosenberg

The New Yorker’s John Cassidy recently concluded a skeptical review of the finances of Gawker Media (which I caught up with late) — a piece somewhat ludicrously headlined “Is Nick Denton Really the New Rupert Murdoch?” — by asking the following question:

Can Gawker Media (and other blogging outfits such as the Huffington Post) translate their rapid audience growth into big streams of revenue and profits, thereby becoming dominant players in the news-media business? Or will the established players, which now have sizeable online arms as well as other sources of income (and costs), ultimately come out on top? Therein lies the future.

“The future” has been lying “therein” over and over for the last 15 years, yet it never seems to turn out that way. This kind of thinking drives me nuts — it’s always a zero-sum battle for dominance. (Can the scrappy little new guys grow so powerful that they’ll replace the big old guys? Or will the lumbering big old guys survive and “ultimately come out on top”?) And it always misses the point.

There are many other imaginable scenarios. Here’s the one I think is most likely.

Denton’s Gawker, Huffington Post, and similar-scale ventures won’t “become dominant players.” But those that husband their resources and play their cards smartly will survive, continuing to grow and to figure out the contours of the new media we are all building. They’ll be active, important players, without “dominating” the way the winners of previous eras’ media wars did.

Meanwhile, “the established players” will fall into two groups. Many will collapse under the weight of their legacy costs and dwindling revenues, as so many are already starting to. Others will survive by figuring out, in time, how to cut costs while expanding their online reach.

The survivors in the second group will find that they can be profitable and do good work, but they will hardly have “come out on top.” In fact, as companies, they will come out looking much more like Gawker Media and Huffington Post than today’s Time Inc. or New York Times Company.

(The other factor here is that new “dominant players” may enter from other quarters — just look at the investments AOL and Yahoo are making in content. But I think they’ll find dominance elusive, too.)

In other words, this is a future with no small group of “dominant players,” but maybe a much broader spectrum of modestly successful players. This is because, in a world awash in content, the media business is never going to be as profitable as it was in a world of scarce content. It will be sustainable, but it won’t support the sort of monopoly profits that made it so attractive for seekers after dominance.

It is also a world where there are no more Rupert Murdochs, which would come as a relief.

This outcome is almost entirely inconceivable to New York media insiders and to the reporters whose job, like Cassidy’s, is to cover their world.

The rest of us should cross our fingers and hope that…therein lies the future.

Filed Under: Business, Media

Chris Gulker, 1951-2010

October 28, 2010 by Scott Rosenberg

I know some of you have been following, as I have, the posts by Web pioneer Chris Gulker about his illness over the past couple years. Over the summer, Chris told us that there was nothing more to be done about his brain tumor, and he proceeded to settle his online affairs in the same thoughtful and careful way he seemed to approach everything he did. He died last night. Of course, you can read about it on his blog.

It’s a trip we’ll all take sooner or later, but few of us venture to do so as publicly as Chris did. His posts chronicling his state of mind and health in recent months and weeks have been graceful and courageous.

I wasn’t a close friend of Chris’s, but I tried to keep up with him over the years, because I owed him a great debt (which I talked with him about last year): he is more responsible than any other individual for turning me on to the Web fairly early, and the Web has been at the center of my work ever since.

In September or October of 1994 Chris showed me the Electric Examiner, the SF Examiner’s Web playground that he ran off a Sun server sitting in an empty hallway behind the Examiner’s press room. I said, “This is cool. I’ve heard HTML isn’t that hard — can I, like, do something here?” He told me that, if I knew FTP, I could just download an HTML guide and be off and running. I already had Internet access through the WELL, so that’s what I did. And he was right: It really was easy! Anyone could do it. I got excited about that, and I’m still excited. A few weeks later the Examiner staff went on strike and I had the chance to use that HTML knowledge as part of the San Francisco Free Press effort. Soon after that I built my first personal website, and within a year I’d left the Examiner (as Chris had) and moved to the Web.

Chris went on to a long career at Apple and Adobe. He was also a top-notch photographer and one of the very early bloggers; Rudolf Ammann’s article traces that significant role. Rudolf (with a tiny assist from me and some others) also built a Wikipedia page about Chris. He will be missed by me and many, many others.

Here’s Dave Winer’s post about Chris’s passing.

UPDATE: Here’s a full obit for Chris at InMenlo. And Rudolf Ammann has a page with lots of other links to reminiscences and articles about Chris.

Filed Under: People, Personal

MediaBugs — now available in 50 states!

October 27, 2010 by Scott Rosenberg

When MediaBugs.org went live earlier this year it explicitly served only the San Francisco Bay Area community. This was partly because we wanted to test our model and our technology out in a manageable area, and also because our Knight Foundation funders emphasized serving specific geographical communities.

This worked out well for us in some ways. We got to introduce and refine our idea in a place where we could meet in person with a lot of newsroom managers and present the project at small meetings and face-to-face gatherings.

But it was also limiting. I found that a lot of the exchanges I had with people once I explained MediaBugs to them went something like this. My listener would say, “What a great idea! You know, just the other day I saw this really unfortunate error in the X News about Y” — where both X and Y lie outside the Bay Area. And I’d have to say, “That’s really interesting, but unfortunately we are only covering the Bay Area right now.” Both of us would look glum and the conversation would move on.

Now, instead, we can say: Go for it — file that bug!

We’re excited to announce that, as of today, MediaBugs is a nationwide service. Wherever you are in the U.S., and wherever in the country you find a media organization that you think has made a correctable error, you can now use MediaBugs to try to get it corrected. You file an error report; we’ll make sure the media outlet knows about it, and try to get someone to respond.

We’ve also made a bunch of serious improvements to the MediaBugs site and service. We’re incorporating a lot more data about each news organization in our database and presenting it in a new format. Check out the Browse by Media Outlet page to see more.

Our Browse Bugs feature now highlights a map to group media outlets and error reports by region. The map pops up when you roll over the “Browse bugs” link on the navigation menu, along with a complete key to our status icons.

There’s also a nifty new bookmarklet that you can install in most browsers, so you’re never more than a click away from filing an error report (prepopulated with the headline and URL of the page you’re reading).

(This one’s just a picture of the real button, which sits on top of every MediaBugs page, and which you can drag onto your browser toolbar.)
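
For the curious, here is a minimal sketch of how a report-this-error bookmarklet of this kind typically works; this is not the actual MediaBugs code, and the form URL and query parameters below are invented for illustration.

```typescript
// A sketch only: not MediaBugs' real bookmarklet. The form URL and the
// query parameters (url, headline) are hypothetical placeholders.
const reportFormUrl = "https://mediabugs.example/bugs/new"; // hypothetical

// The bookmarklet is just a javascript: URL; clicking it grabs the current
// page's address and title and opens the report form with both filled in.
const bookmarklet =
  "javascript:(function(){" +
  "var u=encodeURIComponent(location.href);" +
  "var t=encodeURIComponent(document.title);" +
  `window.open('${reportFormUrl}?url='+u+'&headline='+t);` +
  "})();";

// Installing it amounts to dragging a link whose href is `bookmarklet`
// onto the browser's bookmarks toolbar.
console.log(bookmarklet);
```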

We’ve got all sorts of other stuff in the pipeline over the next few weeks to make MediaBugs a more useful and usable service. Give it a whirl if you haven’t already, and help us fix the news.

We’ve got more details over at the MediaBugs Blog.

Filed Under: Mediabugs

When campaign spending is anonymous, reality gets slippery

October 24, 2010 by Scott Rosenberg

I still get both the New York Times and the Wall Street Journal on paper, and every morning I have the opportunity to compare their front pages, and thereby, their world views. Increasingly, it looks like the US’s two weightiest national papers are presenting fundamentally different pictures of the world to their readers.

Friday offered a particularly striking contrast: Both papers led with stories about campaign finance.


If you read the Times, you came away with the impression that the US Chamber of Commerce, a business lobby, was blowing out the gaskets this cycle. The chart accompanying the Times’ lead story identified the Chamber as “The top non-party spender” in the election, having spent $21.1 million, an amount raised largely from “a relatively small collection of big corporate donors” who have been able to remain anonymous.

Meanwhile, over at the Wall Street Journal, the lead story painted a vastly different picture: “Public-Employees Union Now Leads All Groups in Independent Election Outlays,” the headline reads. “The American Federation of State, County and Municipal Employees is now the biggest outside spender of the 2010 elections,” according to the Journal, with a war chest of $87.5 million. The Times chart, by contrast, has AFSCME spending only $7.9 million.

There are any number of possible explanations for this discrepancy. I’m no campaign finance expert, but I assume it has to do with different sourcing; different definitions of “outside group” and “independent” or “non-party” status; different timespans aggregated in the totals; and no doubt other factors.

Observant readers will note that each paper’s version of this story neatly maps to the ideological positions their critics have assigned them. Blue-state liberals are outraged that the Supreme Court has allowed business to pour anonymous millions into this election cycle; red-state conservatives have long believed that business cash is only a necessary counterweight to the mighty electoral power of union dollars. The Times and the Journal are both playing the roles their opponents have cast them for in this partisan drama.

Still: campaign spending is one of those matters of fact that we ought to be able to nail. Somebody is the biggest “outside spender” in this cycle — either it’s a union, or it’s some conservative lobby like the Chamber of Commerce. Or it’s some anonymous group. Which raises the question of how either paper can claim to know who the top outside spender is, when it seems pretty clear that astroturf groups flush with unmarked bills are flooding these elections with sums so huge that no one has even begun to count them.

In order to argue about this picture with any confidence, you need data. You need to know who is spending what. And of course that is the problem with this election cycle: Thanks to the Supreme Court’s decision to overturn our already highly inadequate campaign finance rules, we voters don’t have even the most basic information about who is spending how much on the elections.

You can argue that “money is speech” from now till doomsday. We aren’t anywhere close to having the important discussion of how we might actually restrict this kind of spending. All we’re saying is: surely the American people have a right to know who is buying their lawmakers.

Right now this demand comes from the left, but I have a feeling we might hear a little more of it from the Tea Party types after this election, when they see how effectively all that corporate cash deep-sixes their hopes of dynamiting the status quo.

As Frank Rich points out in his column today, the Tea Party’s angry populists are in for a rude surprise when they discover just how completely the candidates they aim to elect are owned by deep-pocketed contributors:

Even as the G.O.P. benefits from unlimited corporate campaign money, it’s pulling off the remarkable feat of persuading a large swath of anxious voters that it will lead a populist charge against the rulers of our economic pyramid — the banks, energy companies, insurance giants and other special interests underwriting its own candidates.

Those candidates were bought with unmarked bills. This campaign money is now as hard to trace as the mortgage dollars that two years ago blew up the economy and that are now jamming the works of the foreclosure machine.

How can you even begin to claim to have fair elections or an honest government without transparency in political spending? Why should the right to free political speech also cover the right to anonymous political speech by the million-dollar load? Until we repair this colossal breakdown of our system, we’ll be stuck in the 2010 cycle’s banana-republic mode.

UPDATE: For another slice of campaign-finance reality, read Greg Sargent’s Washington Post piece:

According to data from the nonpartisan Sunlight Foundation, conservative groups that have spent significant sums have plowed nearly $75 million in undisclosed donations alone into this election. By contrast, liberal groups have spent under $10 million…

Filed Under: Media, Politics

Thanks for the memories! Why Facebook “download” rocks

October 19, 2010 by Scott Rosenberg

At Open Web Foo I led a small discussion of what I called the “Social Web memory hole” — the way that social networks suck in our contributions and then tend to bury them or make them inaccessible to their authors. It was a treat to share my ideas with this crowd of super-smart tech insiders, though I did have to spell out the Orwell reference (an ironic nod to 1984, not a joke about memory leaks in program code!).

What I heard was that this problem — which I continue to find upsetting — is most likely a temporary one. Twitter, I was assured, understands the issue and views it as a “bug.” Which is encouraging — except how many years do we wait before concluding that the bug is never going to be fixed?

Meanwhile, the same weekend, Facebook had just introduced its new “download your information” feature. Which is why, at this moment of Wall Street Journal-inspired anti-Facebook feeding frenzy, I want to offer a little counter-programming.

I do not intend to argue about whether Facebook apps passing user IDs in referrer headers is an evil violation of privacy rules, or just the way the Web works. There are some real issues buried in here, but unfortunately, the Journal’s “turn the alarms to 11” treatment has made thoughtful debate difficult. (This post over at Freedom to Tinker is a helpfully sober explanation of the controversy.)
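
To make the mechanism at issue concrete, here is a simplified, hypothetical illustration (not Facebook’s or any ad network’s actual code) of how a third-party server can read a user ID out of the Referer header when the embedding page carries that ID in its URL.

```typescript
// Hypothetical sketch of the referrer-leak mechanism, not any real platform's code.
// If an app or ad frame is loaded from a page like
//   https://social.example/profile.php?id=12345
// the browser sends that page's URL along in the Referer header, and the
// third party can pull the numeric ID out of it.
import { createServer } from "node:http";

createServer((req, res) => {
  const referer = req.headers.referer; // e.g. "https://social.example/profile.php?id=12345"
  if (referer) {
    const id = new URL(referer).searchParams.get("id");
    console.log(`request arrived from the page of user id: ${id ?? "unknown"}`);
  }
  res.end("ok");
}).listen(8080);
```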

So while the Murdoch media — which has its own axes to grind — bashes Facebook, I’m here today to praise it, because I finally had a chance to use Facebook’s “Download Your Information” tool, and it’s a sweet thing.

I have been a loud voice over the years complaining that Facebook expects us to give it the content of our lives and offers us no route to get that content back out. Facebook has now provided a tool that does precisely this. And it’s not just a crude programmer’s tool — some API that lets developers romp at will but leaves mere mortals up a creek. Facebook is giving real-people users their information in a simple, accessible format, tied up with a nice HTML bow. What you get in Facebook’s download file is a Web directory that you can navigate in your browser, with all your posts, photos and other contributions, well-presented and well-organized.

In my case, I don’t have vast quantities of stuff because I haven’t been a very active Facebook user. The main thing I do on Facebook, in fact, is automatically cross-post my Twitter messages so my friends who hang out on Facebook can see them too. Twitter, of course, still has that “bug” that makes it really hard for you to access your old messages. But now, I actually have an easily readable and searchable archive of my Twitter past — thanks to Facebook! Which, really, is both ironic and delicious.

Here’s what Facebook’s Mark Zuckerberg had to say about the Download feature in a TechCrunch interview:

I think that this is a pretty big step forward in terms of making it so that people can download all of their information, but it isn’t going to be all of what everyone wants. There are going to be questions about why you can’t download your friend’s information too. And it’s because it’s your friend’s and not yours. But you can see that information on Facebook, so maybe you should be able to download it… those are some of the grey areas.

So for this, what we decided to do was stick to content that was completely clear. You upload those photos and videos and wall posts and messages, and you can take them out and they’re yours, undisputed — that’s your profile. There’s going to be more that we need to think through over time. One of the things, we just couldn’t understand why people kept on saying there’s no way to export your information from Facebook because we have Connect, which is the single biggest large-scale way that people bring information from one site to another that I think has ever been built.

So it seems that Zuckerberg and his colleagues felt that they already let you export your information thanks to Facebook Connect. Again: True for developers but useless for everyday users, unless and until someone writes the code that lets you actually get your data — which is what Facebook itself has now done.

I think this means Facebook is beginning to take more seriously its aspiration to be the repository of our collective memory — a project that Zuckerberg lieutenant Christopher Cox has rapturously described but that Facebook has never seemed that serious about.

I still have questions and concerns about Facebook as the chokepoint-holder of a new social-network-based Web. I’d really rather see things go in the federation direction that people like Status.net, Identi.ca and Diaspora are all working on.

Still, Facebook isn’t going anywhere. It’s a fact of Web life today, and so its moves towards letting users take their data home with them deserve applause.

What I’d like to see next is an idea that came out of that Open Web Foo session: As we turn Facebook and other social services into the online equivalent of the family album, the scrapbook and the old shoebox full of photos, we’re going to need good, simple tools for people to work with them — to take the mountains of stuff we’re piling up inside these services and distill memorable stories from them.

The technologists in the room imagined an algorithmic way to do this — some version of Flickr’s “interestingness” rating, where the service could essentially do the work for you by figuring out which of your photos and posts had the most long-term value.
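
As a rough and purely hypothetical sketch, here is what that kind of scoring might look like over a downloaded archive; the field names, weights, and formula below are invented, not Flickr’s or Facebook’s actual metric.

```typescript
// Purely hypothetical scoring sketch; field names, weights, and the formula
// are invented for illustration, not any service's real "interestingness" rating.
interface ArchiveItem {
  text: string;
  postedAt: Date;
  comments: number;
  likes: number;
}

// Favor items that drew engagement, with a mild bonus for age, on the theory
// that older posts people still responded to are the ones worth resurfacing.
function interestingness(item: ArchiveItem, now = new Date()): number {
  const ageYears =
    (now.getTime() - item.postedAt.getTime()) / (1000 * 60 * 60 * 24 * 365);
  return (item.likes + 2 * item.comments) * Math.log1p(1 + ageYears);
}

function highlights(items: ArchiveItem[], count = 10): ArchiveItem[] {
  return [...items]
    .sort((a, b) => interestingness(b) - interestingness(a))
    .slice(0, count);
}
```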

I’m sure there’s a future in that. My vision, as a writer, is something simpler: a tool that would let us easily assemble photos and text and video from our Facebook troves and turn them into pages that tell stories from our own and our friends’ lives. Something like Storify, maybe. I think we’re going to need this, whether from Facebook itself or from a third-party app developer.

That “cloud” we’re seeding with our memories? Let’s make it rain the stories of our lives.

UPDATE: Om Malik has some insights into some of the other companies involved in the Facebook-shares-your-ID story. And if you want to play with FB’s “Download” tool, you’ll find it in Facebook under Account → Account settings → Download your information.

Filed Under: Events, Media, Net Culture, Technology

E-book Links for October 12-17: Kindle Singles, pricing insanity, eSuckers, iBookstore flopping?

October 17, 2010 by Scott Rosenberg

  • E-Books: No Friends of Free Expression [Ted Striphas, The Late Age of Print]: “I argue that however convenient a means Kindle may be for acquiring e-books and other types of digital content, the device nevertheless disposes reading to serve a host of inconvenient—indeed, illiberal—ends. Consequently, the technology underscores the growing importance of a new and fundamental right to counterbalance the illiberal tendencies that it embodies—a ‘right to read,’ which would complement the existing right to free expression.”
  • eBook Pricing Goes Outright Insane! [Mike Cane’s xBlog]: "Pay more and get less! Tell me how that isn’t having contempt for all of us eBook buyers! Never in the history of American business has one industry done so much to guarantee its own failure."
  • The iBookstore six months after launch: One big failure [David Winograd, TUAW]: "Unless Apple and Random House can make nice, there are a ton of books that won't be sold by Apple, and customer expectations of getting anything they want, when they want it, fade away."
  • This Way To The eGress eBook eSuckers [Mike Cane, the Digital Reader]: "Going with pay-for services such as these are just a sucker’s game. You lose control of proper book formatting, you lose control of your ISBN and metadata ownership, and you’re forever giving someone else a cut of your money for work you could have done yourself."
  • How Writers Can Turn Their Archives into eBooks [Carl Zimmer, The Atlantic]: "if you're an author with an ill-fitting piece of writing you think is good — good enough that people might want to buy it — you can just publish it yourself and put your hunch to the test. No warehouse required."
  • Authors and ebook problems: expanding the net of responsibility [Rich Adin, TeleRead]: "Too many ebooks are being released that are poorly formatted and rife with errors that could easily be corrected just by proofreading the converted version before releasing the ebook on the unsuspecting public. This should be of primary importance to authors."
  • Kindle Singles: A new potential home for in-depth news? [Josh Benton, Nieman Lab]: "Not many people are willing to read 15,000 words on a laptop screen, and it’s not surprising that many great newspaper series don’t get great traffic online. But shift that narrative to a Kindle or an iPad, and maybe more people are willing to invest the time. Maybe even the money, too."
  • Kindle Singles Will Bring Novellas, Chapbooks and Pamphlets to E-Readers [Tim Carmody, Wired]: "Individual writers may benefit the most from the program, as it makes it easier for them to self-publish works that precisely for reasons of length can’t find support from traditional publishers."
  • Amazon Introduces The Digital Pamphlet With ‘Kindle Singles’ [TechCrunch]: "A perfect, natural length to lay out a single killer idea, well researched, well argued and well illustrated."

Filed Under: Books, Links

How to turn a paper of record into a website of record

October 15, 2010 by Scott Rosenberg

Last week Arthur Brisbane, the new public editor of the New York Times, posted an illuminating exchange between a reader of the paper and one of its top editors.

The reader asked: What’s with the way stories change all the time on the website? “How does the newspaper of record handle this? I read something, and now poof, it’s gone without a trace.”

Jim Roberts, the paper’s associate managing editor, responded: “We are constantly refining what we publish online.” He added that the paper often “uses the final printed version as the final archived version that stays on the Web.” But not always! There are “many exceptions.”

The headline over the column reads “Revising the Newspaper of Record.” But what the exchange reveals is that, right now, there is no record of the newspaper of record. The Times is revising its copy online all the time. No doubt the vast majority of these “refinements” are trivial or uncontroversial. But some of them are likely substantial. (Here was one right on the edge that was filed at MediaBugs.) If I understand Times policy correctly, when a change fixes an outright error, it is supposed to be marked with a correction notice. But there’s no record of these changes, so the Times could be cutting corners here and we’d never know.

When I raise this issue I sometimes hear back some variation on “What’s the big deal? Wire services change their copy all the time. Newspapers have always revised stories from edition to edition. How’s the Web different?”

I’ll tell you how: When newspapers change a story from the early to the late edition, the early edition is still out there for people to read and compare. When you change a Web page, the older version disappears, unless you take active steps to save it.

That, of course, is precisely what the Times — along with every other news outlet that’s committed to accountability — ought to do. Whenever a published story is changed, the paper should make the previous versions available to its readers. (I’ve outlined this idea here, written about it more here, and there’s now a WordPress plugin to demonstrate it in action.)

Let the world see the changes. This is all published, public material; there’s nothing to hide. With this one change to its publishing practices online, the Times can make good on the promise of the old “paper of record” moniker and become a website of record — while giving itself real freedom to keep improving the stories it has already posted.
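
Here is a minimal sketch of that idea; it is not the Times’ system or the WordPress plugin mentioned above, and the data shapes are invented, but it shows the key move: an edit archives the prior version instead of overwriting it.

```typescript
// A minimal sketch of the "website of record" idea; not the Times' system or
// the WordPress plugin mentioned above. Data shapes here are invented.
interface Revision {
  body: string;
  publishedAt: Date;
  note?: string; // e.g. "correction" or "passage removed for legal reasons"
}

interface Story {
  slug: string;
  current: Revision;
  previous: Revision[]; // every earlier published version, oldest first
}

// Publishing a change never discards anything: the old version is archived
// and stays linkable (say, at /{slug}/revisions/{n}), Wikipedia-style.
function reviseStory(story: Story, newBody: string, note?: string): Story {
  return {
    ...story,
    previous: [...story.previous, story.current],
    current: { body: newBody, publishedAt: new Date(), note },
  };
}
```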

Here are some potential objections, and my responses:

(1) What about actual errors that you’ve corrected? Unless they’re libelous and there’s some legal need to take them down, you can leave the errors visible behind a “revised version link” — while clearly marking them as errors that have been corrected. This is the most foolproof way of keeping your correction process transparent and trustworthy. When material is removed for legal reasons, a note can indicate that.

(2) Why provide so much excess material to readers? Aren’t we all drowning in too much information already? You can hide the previous versions behind links, the way Wikipedia (or our WordPress plugin) does. Most readers will ignore them — except every now and then when they notice that something’s changed and they want to see why. The Web has more than enough “space” for this data; it’s all just files on disk drives or data in databases.

(3) What you ask for makes sense, but it’s a ton of work to make that sort of change on a website as complex as a major newspaper’s! Right. Sure. I don’t expect this to happen tomorrow. But it’s worth beginning to plan now. I’m firmly convinced that this is an essential “best practice” for trustworthy news publishing online. It will happen, eventually. Why not get the ball rolling?

UPDATE: Mahendra Palsule pointed me to his account of a situation last month where the Wall Street Journal’s modifications to live stories made it look as if it might have scrubbed a controversial quote from a story (though it hadn’t done that at all). In comments there, Zach Seward of the Journal’s online team mentions that the paper is discussing the revisions-display idea. Maybe a little healthy competition will get this practice adopted!

Filed Under: Media, Mediabugs

The Web Parenthesis: Is the “open Web” closing?

October 12, 2010 by Scott Rosenberg

Heard of the “Gutenberg parenthesis”? This is the intriguing proposition that the era of mass consumption of text ushered in by the printing press four centuries ago was a mere interlude between the previous era of predominantly oral culture and a new digital-oral era on whose threshold we may now sit.

That’s a fascinating debate in itself. For the moment I just want to borrow the “parenthesis” concept — the idea that an innovative development we are accustomed to viewing as a step up some progressive ladder may instead be simply a temporary break in some dominant norm.

What if the “open Web” were just this sort of parenthesis? What if the advent of a (near) universal publishing platform open to (nearly) all were not itself a transformative break with the past, but instead a brief transitional interlude between more closed informational regimes?

That’s the question I weighed last weekend at Open Web Foo Camp. I’d never been to one of O’Reilly’s Foo Camp events — informal “unconferences” at the publisher’s Sebastopol offices — but I had the pleasure of hanging out there with an extraordinary gang of smart people. Here’s what I came away with.

For starters, of course everyone has a different take on the meaning of “openness.” Tantek Celik’s post lays out some of the principles embraced by ardent technologists in this field:

  • open formats for freely publishing what you write, photograph, video and otherwise create, author, or code (e.g. HTML, CSS, Javascript, JPEG, PNG, Ogg, WebM etc.).
  • domain name registrars and web hosting services that, like phone companies, don’t judge your content.
  • cheap internet access that doesn’t discriminate based on domains

But for many users, these principles are distant, complex, and hard to fathom. They might think of the iPhone as a substantially “open” device because hey, you can extend its functionality by buying new apps — that’s a lot more open than your Plain Old Cellphone, right? In the ’80s Microsoft’s DOS-Windows platform was labeled “open” because, unlike Apple’s products, anyone could manufacture hardware for it.

“Open,” then, isn’t a category; it’s a spectrum. The spectrum runs from effectively locked-down platforms and services (think: broadcast TV) to those that are substantially unencumbered by technical or legal constraint. There is probably no such thing as a totally open system. But it’s fairly easy to figure out whether one system is more or less open than another.

The trend-line of today’s successful digital platforms is moving noticeably towards the closed end of this spectrum. We see this at work at many different levels of the layered stack of services that give us the networks we enjoy today — for instance:

  • the App Store — iPhone apps, unlike Web sites and services, must pass through Apple’s approval process before being available to users.
  • Facebook / Twitter — These phenomenally successful social networks, though permeable in several important ways, exist as centralized operations run by private companies, which set the rules for what developers and users can do on them.
  • Comcast — the cable company that provides much of the U.S.’s Internet service is merging with NBC and faces all sorts of temptations to manipulate its delivery of the open Web to favor its own content and services.
  • Google — the big company most vocal about “open Web” principles has arguably compromised its commitment to net neutrality, and Open Web Foo attendees raised questions about new wrinkles in Google Search that may subtly favor large services like Yelp or Google-owned YouTube over independent sites.

The picture is hardly all-or-nothing, and openness regularly has its innings — for instance, with developments like Facebook’s new download-your-data feature. But once you load everything on the scales, it’s hard not to conclude that today we’re seeing the strongest challenge to the open Web ideal since the Web itself began taking off in 1994-5.

Then the Web seemed to represent a fundamental break from the media and technology regimes that preceded it — a mutant offspring of the academy and fringe culture that had inexplicably gone mass market and eclipsed the closed online services of its day. Now we must ask, was this openness an anomaly — a parenthesis?

My heart tells me “no,” but my brain says the answer will be yes — unless we get busy. Openness is resilient and powerful in itself, but it can’t survive without friends, without people who understand it explaining it to the public and lobbying for it inside companies and in front of regulators and governments.

For me, one of the heartening aspects of the Foo weekend was seeing a whole generation of young developers and entrepreneurs who grew up with a relatively open Web as a fact of life begin to grapple with this question themselves. And one of the questions hanging over the event, which Anil Dash framed, was how these people can hang on to their ideals once they move inside the biggest companies, as many of them have.

What’s at stake here is not just a lofty abstraction. It’s whether the next generation of innovators on the Web — in technology, in services, or in news and publishing, where my passion lies — will be free to raise their next mutant offspring. As Steven Johnson reminds us in his new book, when you close anything — your company, your service, your mind — you pay an “innovation tax.” You make it harder for ideas to bump together productively and become fertile.

Each of the institutions taking a hop toward the closed end of the openness spectrum today has inherited advantages from the relatively open online environment of the past 15 years. Let’s hope their successors over the next 15 can have the same head start.

Filed Under: Business, Events, Media, Net Culture, Technology
