Wordyard

Hand-forged posts since 2002

Scott Rosenberg


Mutating books, evolving authors

October 1, 2010 by Scott Rosenberg 12 Comments

The Wall Street Journal ran a lengthy and sobering piece this week about how the rise of the e-book is altering the landscape of the publishing industry. It was not, on the surface, a happy picture for authors:

The digital revolution that is disrupting the economic model of the book industry is having an outsize impact on the careers of literary writers. Priced much lower than hardcovers, many e-books generate less income for publishers. And big retailers are buying fewer titles. As a result, the publishers who nurtured generations of America’s top literary-fiction writers are approving fewer book deals and signing fewer new writers. Most of those getting published are receiving smaller advances.

The Journal piece focused on fiction writers, but the implications are similar for nonfiction authors like me. Whenever a wave of change sweeps through an industry, the old ways of making money tend to dissipate faster than the new ways coalesce. There is much wringing of hands. People panic. As a veteran of the newspaper industry I feel like I know this movie pretty well by now.

I also know this: when you do creative work, you are not owed a living. Few things are more ludicrous than a writer with a sense of entitlement. It would be wonderful if the pie available to reward authors were growing rather than shrinking. But we live in an era blessed with an abundance of opportunities to publish — and a relative scarcity of time to consume the products of publishing. Gluts make prices collapse. There’s no way an e-book can or should cost anything like what a paper book costs. Maybe volume will make up some of the difference — but, plainly, not yet.

I don’t see the point in hand-wringing. But I still plan to write long-form non-fiction and hope to earn at least some portion of my living doing it. So I’m going to do my damnedest to try to understand the changing publishing environment and figure out the smartest way for an author to navigate it. I’d rather adapt and evolve than gripe my way to extinction.

To that end, I’m beginning a self-education program in the world of electronic book publishing. I know by some measures I’m coming to this absurdly late. Then again, I was worried when I started this blog in 2002 that I was late to that party, too.

So help me out. What are your favorite sources of information about e-books and e-readers? Do you just read about them as part of your wider intake of tech and gadget news? Or are there dedicated sites, publications and bloggers who you rely on?

I’m aware of the venerable Teleread. I’ve been enjoying Tim Carmody’s thoughtful posts at Wired and the Atlantic. I’ll read all the think pieces about “the future of the book” by writers like Steven Johnson and Kevin Kelly that come along. Any other useful sources out there I should know about?

I’ll collect my findings and report back!

Filed Under: Books, Business, Media, Personal

How will the App Store’s “new newsstand” be censored? We’ll know it when we see it

September 9, 2010 by Scott Rosenberg 9 Comments

For all of you out there in media-land who still think that the iPad represents salvation for old business models and who welcome the App Store as a new platform for distributing content, I recommend a reading of Apple’s new App Store Review Guidelines as helpfully summarized by Daring Fireball’s John Gruber. (It seems you have to be a registered Apple developer before you can actually read the guidelines in full, but they’re available at Gizmodo.)

Discussion of these guidelines in the tech press initially framed the move as a “relaxation” of Apple’s policies, because the company will now allow developers to use third-party frameworks and toolkits. But view the guidelines from the perspective of content publishing and “relaxation” is not the word that will spring to mind.

This item stands out:

We will reject Apps for any content or behavior that we believe is over the line. What line, you ask? Well, as a Supreme Court Justice once said, “I’ll know it when I see it”. And we think that you will also know it when you cross it.

(Gruber speculates that the my-way-or-the-highway tone of this and other passages suggests direct authorship by Steve Jobs, and that sounds plausible, but who knows?)

Now, the App Store guidelines are designed by software developers for other software developers. The thinking is, this is our device, we want to protect our users, this ain’t no free-for-all, we’re going to police the hell out of this environment. And Apple plainly has a right to do that. It’s not the only approach to structuring a software ecosystem, but it’s certainly a legitimate one.

Trouble is, the App Store is also being framed as the New Newsstand. The idea of Apple as the keeper of such a newsstand never sat right with me: I just don’t like the idea of my information diet being regulated by any company, let alone a company as tightly wound as Apple. Now Apple has made my unease explicit. In these high-handed words, the company is saying: We will ban whoever we want. And we won’t tell you what the exact standards are. You can guess; then we’ll decide.

The immediate retort here from Apple supporters — hey, I’m one, I love my Mac and my i-devices! — will be that I’m misunderstanding the purpose of the rules, they’re meant to bar wayward code, not wayward ideas.

But how, exactly, can anyone draw a line between code and ideas today? Who says where a software tool ends and a piece of “content” begins? We’re supposed to “know” this line “when we see it,” but I don’t see it at all.

Here are some quotes from the guidelines that Engadget highlighted:

“We have lots of serious developers who don’t want their quality Apps to be surrounded by amateur hour.”
“If your app is rejected, we have a Review Board that you can appeal to. If you run to the press and trash us, it never helps.”
“This is a living document, and new apps presenting new questions may result in new rules at any time. Perhaps your app will trigger this.”
“If it sounds like we’re control freaks, well, maybe it’s because we’re so committed to our users and making sure they have a quality experience with our products.”

Now read these passages from the perspective of a writer or journalist or publisher, not a software developer, and tell me they don’t give you the willies.

It’s always seemed to me that Apple seriously underestimates how impossible it will be to sit as censor and nanny over a thriving content marketplace, if that is what the App Store is going to become. Look at the trouble it had with political cartoonist Mark Fiore: he had to win a Pulitzer before Apple would let him use its platform to practice his art, which happened to involve poking fun at public figures, something the App Store didn’t like. Such controversies will only multiply if the App Store becomes more popular as a content mart.

Now Apple is saying, explicitly, that it intends to draw lines, and those lines won’t be drawn beforehand — but hey, don’t worry, because we’ll just know it when we cross them!

Apple loves to maintain tight control of things. That’s been a hugely successful approach for its hardware business. It’s even a defensible position applied to software. But it’s a lousy model for a newsstand.

UPDATE: Nieman Lab’s Josh Benton reviews the guidelines, finds the special loophole Apple created post-Fiore for “professional political satirists and humorists,” and points out how ridiculous it is:

So a professional columnist or cartoonist can say nasty things about Obama, but Joe Citizen can’t? Defining who is a “professional” when it comes to opinion-sharing is sketchy enough, but when it includes political speech and the defining is being done by overworked employees of a technology company, it’s odious.

Filed Under: Business, Media

In Defense of Links, Part Two: Money changes everything

August 31, 2010 by Scott Rosenberg 16 Comments

This is the second post in a three-part series. The first part was Nick Carr, hypertext and delinkification. The third part is In links we trust.

The Web is deep in many directions, yet it is also, undeniably, full of distractions. These distractions do not lie at the root of the Web’s nature. They’re out on its branches, where we find desperate businesses perched, struggling to eke out one more click of your mouse, one more view of their page.

Yesterday I distinguished the “informational linking” most of us use on today’s Web from the “artistic linking” of literary hypertext avant-gardists. The latter, it turns out, is what researchers were examining when they produced the studies that Nick Carr dragooned into service in his campaign to prove that the Web is dulling our brains.

Today I want to talk about another kind of linking: call it “corporate linking.” (Individuals and little-guy companies do it, too, but not on the same scale.) These are links placed on pages because they provide some tangible business value to the linker: they cookie a user for an affiliate program, or boost a target page’s Google rank, or aim to increase a site’s “stickiness” by getting the reader to click through to another page.

I think Nick Carr is wrong in arguing that linked text is in itself harder to read than unlinked text. But when he maintains that reading on the Web is too often an assault of blinking distractions, well, that’s hard to deny. The evidence is all around us. The question is, why? How did the Web, a tool to forge connections and deepen understanding, become, in the eyes of so many intelligent people, an attention-mangling machine?

Practices like splitting articles into multiple pages or delivering lists via pageview-mongering slideshows have been with us since the early Web. I figured they’d die out quickly, but they’ve shown great resilience — despite being crude, annoying, ineffective, hostile to users, and harmful to the long-term interests of their practitioners. There seems to be an inexhaustible supply of media executives who misunderstand how the Web works and think that they can somehow beat it into submission. Their tactics have produced an onslaught of distractions that are neither native to the Web’s technology nor inevitable byproducts of its design. The blinking, buzzing parade is, rather, a side-effect of business failure, a desperation move on the part of flailing commercial publishers.

For instance, Monday morning I was reading Howard Kurtz’s paean to the survival of Time magazine when the Washington Post decided that I might not be sufficiently engaged with its writer’s words. A black prompt box helpfully hovered in from the right page margin with a come-hither look and a “related story” link. How mean to Howie, I thought. (Over at the New York Times, at least they save these little fly-in suggestion boxes till you’ve reached the end of a story.)

If you’re on a web page that’s weighted down with cross-promotional hand-waving, revenue-squeezing ad overload and interstitial interruptions, odds are you’re on a newspaper or magazine site. For an egregiously awful example of how business linking can ruin the experience of reading on the Web, take a look at the current version of Time.com.

Filed Under: Business, Media, Net Culture

Dr. Laura, Associated Content and the Googledammerung

August 20, 2010 by Scott Rosenberg 6 Comments

I was on vacation for much of the last couple of weeks, so I missed a lot — including the self-immolation of Dr. Laura Schlessinger. Apparently Schlessinger was the last public figure in the U.S. who does not understand the simple rules of courtesy around racial/religious/ethnic slurs. (As an outsider you don’t get a free pass to use them — no matter how many times you hear them uttered by their targets.) She browbeat a caller with a self-righteous barrage of the “N-word” — and wrote her talk-show-host epitaph.

I shed no tears for Dr. Laura — why do we give so much air time to browbeaters, anyway? — and I don’t care much about this story. But after reading a post over at TPM about Sarah Palin’s hilariously syntax-challenged tweets defending Schlessinger, I wanted to learn just a bit more about what had happened. So of course I turned to Google.

Now, it may have been my choice of search term, or it may have been that the event is already more than a week old, but I was amazed to see, at the top of the Google News results, a story from Associated Content. AC, of course, is the “content farm” recently acquired by Yahoo; it pays writers a pittance to crank out brief items that are — as I’ve written — crafted not to beguile human readers but to charm Google’s algorithm.

AC’s appearance in the Google lead position surprised me. I’d always assumed that, inundated by content-farm-grown dross, Google would figure out how to keep the quality stuff at the top of its index. And this wasn’t Google’s general search index recommending AC, but the more rarefied Google News — which prides itself on maintaining a fairly narrow set of sources, qualified by some level of editorial scrutiny.

Gee, maybe Associated Content is getting better, I thought. Maybe it’s producing some decent stuff. Then I clicked through and began reading:

The Dr. Laura n-word backlash made her quit her radio show. It seems the Dr. Laura n-word controversy has made her pay the price, as the consequences of herbrought down her long-running program. But even if it ended her show, it may not end her career. Despite being labeled as a racist, and despite allegedly being tired of radio, the embattled doctor still seems set to fight on after she leaves. In fact, the Dr. Laura n-word scandal has made her more defiant than ever, despite quitting.

I have cut-and-pasted this quote to preserve all its multi-layered infelicities. The piece goes on in this vein, cobbled together with no care beyond an effortful — and, I guess, successful — determination to catch Google’s eye by repeating the phrase “Dr. Laura n-word” as many times as possible.

The tech press endlessly diverts itself with commentary about Google’s standing vis-a-vis Facebook, Google’s stock price, Google’s legal predicament vis-a-vis Oracle, and so forth — standard corporate who’s-up-who’s-down stuff. But this is different; this is consequential for all of us.

I was a fairly early endorser of Google back in 1998, when the company was a wee babe of a startup. Larry Page impatiently explained to me how PageRank worked, and I sang its deserved praises in my Salon column. For over a decade Google built its glittering empire on this simple reliability: It would always return the best links. You could count on it. You could even click on “I’m feeling lucky.”

I still feel lucky to be able to use Google a zillion times a day, and no, Bing is not much use as an alternative (Microsoft’s search engine kindly recommends two Associated Content stories in the first three results!). But when Google tells me that this drivel is the most relevant result, I can’t help thinking, the game’s up. The Wagner tubas are tuning up for Googledammerung: It’s the twilight of the bots.

As for Associated Content, it argues — as does its competition, like the IPO-bound Demand Media — that its articles are edited and its writers are paid and therefore its pages should be viewed as more professional than your run-of-the-mill blogger-in-pajamas. I think they’ve got it backwards. I’ll take Pajama Boy or Girl any day. Whatever their limitations, they are usually writing out of some passion. They say something because it matters to them — not because some formula told them that in order to top the index heap, they must jab hot search phrases into their prose until it becomes a bloody pulp.

Let me quote longtime digital-culture observer Mark Dery, from his scorcher of a farewell to the late True/Slant:

The mark of a real writer is that she cares deeply about literary joinery, about keeping the lines of her prose plumb. That’s what makes writers writers: to them, prose isn’t just some Platonic vessel for serving up content; they care about words.

The best bloggers know a thing or two about this “literary joinery.” And even bad bloggers “care about words.” But the writer of Associated Content’s Dr. Laura post is bypassing such unprofitable concerns. He chooses his words to please neither himself nor his readers. They’re strictly for Google’s algorithm. The algorithm is supposed to be able to see through this sort of manipulation, to spit out the worthless gruel so it can serve its human users something more savory. But it looks like the algorithm has lost its sense of taste.

[I should state for the record that in the course of my business work for Salon.com I had occasion to meet with folks from Associated Content. They were upright and sharp and understood things about the Web that we didn’t, then. They’ve built a successful business out of “content” seasoned to suit the Googlebot’s appetite. It’s just not what we think of when we think of “writing.” And if this piece is any indication, there isn’t an editor in sight.]

BONUS LINK: If you want to understand more fully the process by which “news” publishers watch Google for trending topics and then crank out crud to catch Google’s eye, you cannot do better than this post by Danny Sullivan of SearchEngineLand. Sullivan calls it “The Google Sewage Factory”:

The pollution within Google News is ridiculous. This is Google, where we’re supposed to have the gold standard of search quality. Instead, we get “news” sites that have been admitted — after meeting specific editorial criteria — just jumping on the Google Trends bandwagon…

Filed Under: Business, Media, Technology

Could Google’s neutrality backstab be a fake?

August 5, 2010 by Scott Rosenberg 8 Comments

News that Google and Verizon are negotiating a deal to “jump the Internet line,” as the New York Times put it in a great headline, shocked people who’ve been following the Net neutrality story and upset many of Google’s true believers. Google has long been one of Net neutrality’s most reliable big-company backers.

Net neutrality — the principle that information traveling across the Internet should be treated equally by the backbone carriers that keep the packets flowing — made sense for Google’s search-and-ad business: Keep the Internet a level playing field so it keeps growing and stays open to the Googlebot. It also helped keep people from snickering too loudly at the company’s “don’t be evil” mantra.

So why would Google turn around now, at a time when the FCC is weighing exactly how to shape the future of Net neutrality regulation, and signal a course-change toward, um, evil?

Here are the obvious explanations: Google wants to speed YouTube bits to your screen. Google is in bed with Verizon thanks to Android. Google figures neutrality is never going to remain in place, so it might as well get a jump on the competition.

None of these quite persuades me. But what if — here is where I pause to tell you this is total speculation on my part — it’s a fake-out? What if Google — or some portion of Google — is still basically behind the Net neutrality principle but realizes that very few people understand the issue or realize what’s at stake? Presumably Google and Verizon, which sells a ton of Android phones, talk all the time. Presumably they talk about Net neutrality-related stuff too.

Maybe someone inside Google who still believes in Net neutrality strategically leaked the fact that they’re negotiating this stuff — knowing the headlines and ruckus would follow. Knowing that this might be a perfect way to dramatize Net neutrality questions and mobilize support for strong Net neutrality rules from the public and for the FCC.

This scenario assumes a level of Machiavellian gameplaying skill on Google’s part that the company has not hitherto displayed. And if the whole story is a feint, it might well not be a strategic move on Google’s part but rather a sign of dissent inside Google, with one faction pushing the Verizon deal and another hoping to blow it up.

Still, worth pondering!

UPDATE: A tweet from Google’s Public Policy: “@NYTimes is wrong. We’ve not had any convos with VZN about paying for carriage of our traffic. We remain committed to an open internet.” [hat tip to Dan Lyke in comments]

Filed Under: Business, Politics, Technology

Careful with that ad headline!

August 5, 2010 by Scott Rosenberg Leave a Comment

Here is the front page of a flyer that recently dropped out of my newspaper. It is an ad for a certain very large PC maker whose name rhymes with that fiery place where bad people spend eternity.

We glance at ads very quickly, or we catch them from the corner of our eyes. And when this one passed through my eye and into my brainpan, what I saw was:

DOCK TO DOORSTOP IN ABOUT 48 HOURS.

Which really just doesn’t seem like enough time to enjoy your new laptop…

Filed Under: Business, Technology

You are not an eyeball: Why tracking is the ad biz’s last gasp

August 1, 2010 by Scott Rosenberg 21 Comments

Marketers are following you around on the Internet. They don’t know your name but they know what you do, what you buy, where you buy it, what you’re interested in, and more. The sites you visit collect this information on behalf of networks that then roll you up with other like-minded people in packages, as if you were a subprime mortgage, and sell your eyeballs to advertisers.

People inside the Web industry generally know all this and take it for granted. People outside mostly don’t. That explains some of the wide variation in reaction to a big package the Wall Street Journal published Saturday that chronicles how advertisers track users online.

I found it fascinating that two of the smarter Web veterans I know — Jeff Jarvis and Doc Searls — arrived at opposite perspectives on the Journal coverage. How did that happen? Let’s climb what I’ll call the ladder of reaction to this story, and we can see.

At the bottom rung, we have a simple everyday reader’s freakout. OMG They’re spying on us! This, it seems to me, is the level at which the Journal’s coverage was pitched. It’s full of loaded language: A headline that refers to “your secrets.” References to “surveillance” and “surreptitious” practices. Repeated use of the phrase “sophisticated software” to describe run-of-the-mill stuff that we’ve lived with for years, like the cookie files invented at the dawn of the Web by Lou Montulli (and that anyone can easily delete from their browser).

On the next rung up the ladder we have what I predict will be the response of the punditocracy, the editorial page writers and columnists. They will weigh in early this week, shake their heads in disapproval and demand that the government step in and pile more privacy regulations on the Internet advertising industry.

This will drive the Web industry insiders — up on the ladder’s third rung — even crazier than the Journal feature itself did. For them, the activities the Journal describes are simply old news. This is where we find Jeff Jarvis, who described the Journal feature as “the Reefer Madness of the digital age”: “I don’t understand how the Journal could be so breathlessly naive, unsophisticated, and anachronistic about the basics of the modern media business.” Similarly, Terry Heaton found the Journal’s coverage biased and behind the curve: “It’s like somebody at the paper had been sleeping for ten years and woke up to discover it’s the year 2010!”

Insiders will worry that an anti-tracking backlash might throttle the Web advertising industry at just the moment when big media institutions are praying that online ad revenue might help them make up for all the ad income they’re losing in their offline businesses.

Even more important, they will argue that tracking isn’t an invasion of privacy at all, since the advertisers mostly don’t know you by name or personal identity. Instead, they see you as a bundle of demographic traits and acquisitive tendencies. We owe the maintenance of this important distinction to an ad-tracking scare of a previous era, the great DoubleClick/Abacus controversy of 1999. Yes, this issue has been with us since 1999, which does make you wonder about the Journal’s breathless tone today.

The most important argument the insiders make is the very simple one that tracking, done right, actually performs a useful service: It helps reduce your exposure to ads you don’t care about and shows you more ads that you actually want to see.

This brings us up high to rung number four, where we meet Doc Searls, who is sitting on his own little platform that he’s built over the years, and inviting us to sit down with him and listen.

And he’s saying to the Web insiders: You guys are missing two points. The first is that “most real people are creeped out by this stuff,” even if it is old hat to you. The second is that you aren’t thinking big enough if you think that tracking users’ behavior is the best the Web can do.

You think the Web is all about making inefficient advertising more efficient, when it’s really about eliminating advertising as we have known it entirely, by giving us “better ways for demand and supply to meet — ways that don’t involve tracking or the guesswork called advertising.”

Searls has been elaborating this argument from the early days of the Cluetrain Manifesto to his current work at Project VRM. He’s saying: We know ourselves and our needs better than any third party’s guesswork. The Internet can enable us to speak directly to the marketplace about what we want. We can have a direct conversation with vendors of the things we are thinking about purchasing:

if I had exposed every possible action in my life this past week, including every word I wrote, every click I made, everything I ate and smelled and heard and looked at, the guesswork engine has not been built that can tell any seller the next thing I’ll actually want… Meanwhile I have money ready to spend on about eight things, right now, that I’d be glad to let the right sellers know, provided that information is confined to my relationship with those sellers, and that it doesn’t feed into anybody’s guesswork mill.

I find Searls’ vision appealing, even as I recognize the disruption it portends. The end of advertising also means the end of the business of delivering eyeballs to advertisers. It means that creative people and journalists and other “content creators” will need to abandon the old media’s compromised triangle trade (with creators ferrying consumers to advertisers) and learn how to fill public needs directly. That means we’ll need new ways to fund public-good information (foreign news, accountability journalism, investigations) once we can no longer pay for it with the overflow from advertising-monopoly profits.

That’s the future. Today, I actually think the Journal is doing a public service by writing about stuff industry insiders already know about — even if the paper went over the top in its intimations of dark marketing conspiracies. But it would be so much more of a service to look beyond the desperate thrashings of the badly wounded ad industry — and toward the better model that is struggling to be born.

Filed Under: Business, Media

Newspaper comments: Forget anonymity! The problem is management

April 13, 2010 by Scott Rosenberg 58 Comments

This New York Times piece Monday reflects a growing chorus of resentment among newspaper website managers against the “barroom brawl” atmosphere so many of them have ended up with in the comments sections on their sites.

They blame anonymity. If only they could make people “sign their real names,” surely the atmosphere would improve!

This wish is a pipe dream. They are misdiagnosing their problem, which has little to do with anonymity and everything to do with a failure to understand how online communities work.

It is one of the great tragedies of the past decade that so many media institutions have failed to learn from the now considerable historical record of success and failure in the creation of online conversation spaces. This stuff isn’t new any more. (Hell, this conversation itself isn’t new either — see this Kevin Marks post for a previous iteration.) There are people who have been hosting and running this sort of operation for decades now. They know a thing or two about how to do it right. (To name just a few off the top of my head — there are many more: Gail Williams of the Well. Derek Powazek of Fray.com. Mary Elizabeth Williams at Salon’s Table Talk. Caterina Fake and her (ex-)Flickr gang.)

The great mistake so many newspapers and media outlets made was to turn on the comments software and then walk out of the room. They seemed to believe that the discussions would magically take care of themselves.

If you opened a public cafe or a bar in the downtown of a city, failed to staff it, and left it untended for months on end, would you be surprised if it ended up as a rat-infested hellhole?

Comment spaces need supervision — call them hosts or moderators or tummlers or New Insect Overlords or whatever you want, but don’t neglect to hire them! These moderators need to be actual people with a presence in the conversation, not faceless wielders of the “delete” button. They welcome newcomers, enforce the local rules, and break up the occasional brawl — enlisting help from the more civic-minded regulars as needed.

Show me a newspaper website without a comments host or moderation plan and I’ll show you a nasty flamepit that no unenforceable “use your real name” policy can save. Telling Web users “Use your real name” isn’t bad in itself, but it won’t get you very far if your site has already degenerated into nasty mayhem. The Web has no identity system; the FBI can track you down if the provocation is dire enough, and editors can track you down if you make them mad enough, but most media companies aren’t going to waste the time and money. So you’ll stand there demanding “real names,” and your trolls will ignore you or make up names, and your more thoughtful potential contributors will survey your site and think, “You want me to use my real name in this cesspool? No thanks.”

No, anonymity isn’t the problem. (Wikipedia seems to have managed pretty well without requiring real names, because it has an effective system of persistent identity.) The problem is that once an online discussion space gets off to a bad start it’s very hard to change the tone. The early days of any online community are formative. The tone set by early participants provides cues for each new arrival. Your site will attract newcomers based on what they find already in place: people chatting amiably about their lives will draw others like themselves; similarly, people engaging in competitive displays of bile will entice other putdown artists to join the fun.

So turning things around isn’t easy. In fact, it’s often smarter to just shut down a comments space that’s gone bad, wait a while, and then reopen it when you’ve got a moderation plan ready and have hand-picked some early contributors to set the tone you want. If I were running a newspaper with a comments problem, that’s how I’d proceed. Don’t waste your time trying to force people to use their real names in hope that this will improve the tenor of your discussion area; build a discussion area that’s so appealing from the start that it makes people want to use their real names.

Why didn’t newspapers do this to begin with? I think part of the problem is that a lot of them had only the vaguest rationale for opening up comments in the first place. Maybe some consultant told them it was a good idea. Or it looked like the right thing to do to the young members of the Web team, and the front office said “Go ahead and play, kids, just don’t spend any money.” And the comments got turned on with no one minding the store and no clear goal in mind, either on the business side or in the newsroom.

So, media website operators, I suggest that you ask yourselves:

When you opened up comments, was it really about having a conversation with the readers? Then have that conversation! Get the editors and reporters in there mixing it up with the public. Sure, there will be problems and awkward moments; there will also be breakthroughs in understanding.

Maybe, though, no one was ever really serious about that conversation. Maybe the idea was to boost ad impressions with an abundance of verbiage supplied gratis by the readership. In that case, stop complaining about the flame wars and accept that the more abusive your commenters wax, the more your crass strategy will succeed.

Whatever you do, remember that as long as you’re thinking “What’s wrong with those people?” and “What did we do to deserve this?” you’re not taking responsibility for a problem that, I’m sorry to say, you created yourselves.

Filed Under: Business, Media, Net Culture

For the media biz, iPad 2010 = CDROM 1994

March 26, 2010 by Scott Rosenberg 44 Comments

I’m having flashbacks these days, and they’re not from drugs, they’re from the rising chorus of media-industry froth about how Apple’s forthcoming iPad is going to save the business of selling content.

Let me be clear: I love what I’ve seen of the iPad and I’ll probably end up with one. It’s a likely game-changer for the device market, a rethinking of the lightweight mobile platform that makes sense in many ways. I think it will be a big hit. In the realm of hardware design, interface design and hardware-software integration, Apple remains unmatched today. (The company’s single-point-of-failure approach to content and application distribution is another story — and it’s a problem that will only grow more acute the more successful the iPad becomes.)

But these flashbacks I’m getting as I read about the media business’s iPad excitement — man, they’re intense. Stories like this and this, about the magazine industry’s excitement over the iPad, or videos like these Wired iPad demos, take me back to the early ’90s — when media companies saw their future on a shiny aluminum disc.

If you weren’t following the tech news back then, let me offer you a quick recap. CD-ROMs were going to serve as the media industry’s digital lifeboat. A whole “multimedia industry” emerged around them, complete with high-end niche publishers and mass-market plays. In this world, “interactivity” meant the ability to click on hyperlinks and hybridize your information intake with text, images, sound and video. Wow!

There were, it’s true, a few problems. People weren’t actually that keen on buying CD-ROMs in any quantity. Partly this was because they didn’t work that well. But mostly it was because neither users nor producers ever had a solid handle on what the form was for. They plowed everything from encyclopedias to games to magazines onto the little discs, in a desperate effort to figure it out. They consoled themselves by reminding the world that every new medium goes through an infancy during which nobody really knows what they’re doing and everyone just reproduces the shape and style of existing media forms on the new platform.

You can hear exactly the same excuses in these iPad observations by Time editor Richard Stengel. Stengel says we’re still in the point-the-movie-camera-at-the-proscenium stage. We’re waiting for the new form’s Orson Welles. But we’re charging forward anyway! This future is too bright to be missed.

But it turned out the digital future didn’t need CD-ROM’s Orson Welles. It needed something else, something no disc could offer: an easy way for everyone to contribute their own voices. The moment the Web browser showed up on people’s desktops, something weird happened: people just stopped talking about CD-ROMs. An entire next-big-thing industry vanished with little trace. Today we recall the CD-ROM publishing era as at best a fascinating dead-end, a sandbox in which some talented people began to wrestle with digital change before moving on to the Internet.

It’s easy to see this today, but at the time it was very hard to accept. (My first personal Web project, in January 1995, was an online magazine to, er, review CD-ROMs.)

The Web triumphed over CD-ROM for a slew of reasons, not least its openness. But the central lesson of this most central media transition of our era, one whose implications we’re still digesting, is this: People like to interact with one another more than they like to engage with static information. Every step in the Web’s evolution demonstrates that connecting people with other people trumps giving them flashy, showy interfaces to flat data.

It’s no mystery why so many publishing companies are revved up about the iPad: they’re hoping the new gizmo will turn back the clock on their business model, allowing them to make consumers pay while delivering their eyeballs directly to advertisers via costly, eye-catching displays. Here’s consultant Ken Doctor, speaking on Marketplace yesterday:

DOCTOR: Essentially, it’s a do-over. With a new platform and a new way of thinking about it. Can you charge advertisers in a different way and can you say to readers, we’re going to need you to pay for it?

Many of the industry executives who are hyping iPad publishing are in the camp that views the decision publishers made in the early days of the Web not to charge for their publications as an original sin. The iPad, they imagine, will restore prelapsarian profit margins.

Good luck with that! The reason it’s tough to charge for content today is that there’s just too much of it. People are having a blast talking with each other online. And as long as the iPad has a good Web browser, it’s hard to imagine how gated content and costly content apps will beat that.

You ask, “What about the example of iPhone apps? Don’t they prove people will pay for convenience on a mobile device?” Maybe. To me they prove that the iPhone’s screen is still too small to really enjoy a standard browser experience. So users pay to avoid the navigation tax that browser use on the iPhone incurs. This is the chief value of the iPad: it brings the ease and power of the iPhone OS’s touch interface to a full-size Web-browser window.

I can’t wait to play around with this. But I don’t see myself rushing to pay for repurposed paper magazines and newspapers sprinkled with a few audio-visual doodads. That didn’t fly with CD-ROMs and it won’t fly on the iPad.

Apple’s new device may well prove an interesting market for a new generation of full-length creative works — books, movies, music, mashups of all of the above — works that people are likely to want to consume more than once. But for anything with a short shelf life — news and information and commentary — the iPad is unlikely to serve as a savior. For anyone who thinks otherwise, can I interest you in a carton of unopened CD-ROM magazines?

Filed Under: Blogging, Business, Media, Say Everything, Technology

SEO mills: That’s not fast food, it’s bot fodder

December 14, 2009 by Scott Rosenberg 11 Comments

Yesterday TechCrunch’s Mike Arrington denounced the rise of SEO-mill-driven content — the sort of business Associated Content and Demand Media are in, and AOL is going into — as “the rise of fast food content.”

This gave me a good laugh, since, of course, most journalists have long (and mostly wrongly) viewed Arrington’s own output, and that of all blog-driven enterprises, as “fast food journalism.” Arrington, rightly, I think, sees himself more as a “mom-and-pop” operation producing “hand-crafted content,” and he’s bemoaning “the rise of cheap, disposable content on a mass scale, force fed to us by the portals and search engines.”

Trouble is, Arrington’s metaphor is off. The articles produced by the SEO-driven content mills aren’t like fast food at all. Fast food works because it tastes good, even if it’s bad for us: it satisfies our junk cravings for sugar and salt and fat. We eat it, and we want more. The online-content equivalent to junk food might be a gossip blog, or photos of Oscar Night dresses, or whatever other material you read compulsively, knowing that you’re not really expanding your mind.

The stuff that Demand Media and Associated Content produce isn’t “junk-food content” because it’s not designed for human appetites at all: it’s targeted at the Googlebot. It’s content created about certain topics that are known to produce a Google-ad payoff; the articles are then doctored up to maximize exposure in the search engine. Individually they don’t make much money, but all they have to do is make a little more per page than they cost. Multiply that by some number with many zeros on the end and you’ve got a business.

These businesses aren’t preying on our addictive behaviors; they’re exploiting differentials and weaknesses in Google’s advertising-and-search ecosystem. As Farhad Manjoo pointed out recently in Slate, the actual articles produced by these enterprises tend to be of appallingly poor quality. McDonald’s food may not be good for you, but it’s consistent and, plainly, appealing to multitudes. But few sane readers would willingly choose to consume an SEO mill’s take on a topic over something that was written for human consumption.

That’s why I think Arrington’s off-base. The SEO arbitrageurs may make money manipulating the search-engine bots, but they can’t “force feed” their output to real people. Doc Searls’ idealism on this point is more persuasive than Arrington’s lament.

Filed Under: Business, Technology
