Wordyard

Hand-forged posts since 2002

Scott Rosenberg


When Google was that new thing with the funny name

July 7, 2013 by Scott Rosenberg

One little article I wrote 15 years ago for Salon has been making the rounds again recently (probably because Andrew Leonard linked to it — thanks, Andrew!).

This piece was notable because it introduced Salon’s readers to a new service with the unlikely name of Google. My enthusiastic endorsement was based entirely on my own happy experience as a user of the new search engine, and my great relief at finding a new Web tool that wasn’t larded up with a zillion spammy ad-driven come-ons, as so much of the dotcom-bubble-driven Web was at the time. The column was one of the earlier media hits for Google — it might’ve been the first mention outside the trade press, if this early Google “Press Mentions” page is complete.

Today I see a couple of important stories buried in this little ’90s time capsule. One is about money, the other about innovation.

First, the money: A commenter over at Hacker News expressed the kind but deluded wish that I had somehow invested in Google at that early stage. Even if I had been interested (and as a tech journalist, I wasn’t going to go down that road), the company had only recently incorporated and taken on its first private investment. You couldn’t just walk in off the street and buy the place. (Though that didn’t stop Salon’s CEO at the time from trying.)

In its earliest incarnation, and for several years thereafter, the big question businesspeople asked about Google was, “How will they ever make money?” But the service that was so ridiculously appealing at the start thanks to its minimalist, ad-free start page became the Gargantua of the Web advertising ecosystem. Despite its “Don’t be evil” mantra and its demonstrable dedication to good user experience, Google also became the chief driver of the Web’s pay-per-click corruption.

I love Google in many ways, and there’s little question that it remains the most user-friendly and interoperability-minded of the big Web firms. But over the years I’ve become increasingly convinced that, as Rich Skrenta wrote a long time ago, “PageRank Wrecked the Web.” Giving links a dollar value made them a commodity.
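For anyone who has forgotten what Skrenta and I are pointing at: PageRank treats every link as a vote, weighted by the rank of the voter. Here is a bare-bones sketch of that idea (my own illustration, nothing like Google's production system):

```python
# A bare-bones PageRank: every link is a vote, and a vote from a
# high-rank page counts for more. (My illustration of the concept,
# not Google's actual algorithm.)
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            if not outs:                       # dangling page: spread evenly
                for q in pages:
                    new[q] += damping * rank[p] / len(pages)
            else:                              # split this page's rank among its links
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

web = {"a": ["b"], "b": ["a", "c"], "c": ["a"]}
print(pagerank(web))  # "a" ends up with the highest rank
```

Once rank flows through links this way, and rank translates into traffic and ad revenue, a link from a well-ranked page is literally worth buying, which is exactly the commodification at issue.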

Maybe you’ve noticed that this keeps happening. Today, Facebook is making relationships a commodity. Twitter is doing the same to casual communication. For those of us who got excited about the Web in the early ’90s because — as some smart people once observed — nobody owned it, everyone could use it, and anyone could improve it, this is a tear-your-hair-out scenario.

Or would be, except: there’s an escape route. Ironically, it’s the same one that Larry Page and Sergey Brin mapped out for us all in 1998. Which brings us to the second story my 1998 column tells, the interesting one, the one about innovation.

To understand this one, you have to recall the Web scene that Google was born into. In 1998, search was over. It was a “solved problem”! AltaVista, Excite, Infoseek, Lycos, and the rest — all these sites provided an essential but fully understood service to Web users. All that was left was for the “portal” companies to build profitable businesses around them, and the Web would be complete.

Google burst onto this scene and said, “No, you don’t understand, there’s room to improve here.” That was correct. And it’s a universal insight that never stops being applicable: there’s an endless amount of room to improve, everywhere. There are no solved problems; as people’s needs change and their expectations evolve, problems keep unsolving themselves.

This is the context in which all the best work in the technology universe gets done. If you’re looking for opportunities to make a buck, you may well avoid markets where established players rule or entrenched systems dominate. But if you’re looking for better ways to think and live, if you’re inspired by ideals more than profits, there’s no such thing as a closed market.

This, I think, is the lesson that Doug Engelbart, RIP, kept trying to teach us: When it comes to “augmenting human intellect,” there’s no such thing as a stable final state. Opportunity is infinite. Every field is perpetually green.

[Video: introduction to Doug Engelbart’s Dec. 9, 1968 demo]

Filed Under: Net Culture, Technology

“A large universe of documents”

April 30, 2013 by Scott Rosenberg


“The WorldWideWeb (W3) is a wide-area hypermedia information retrieval initiative aiming to give universal access to a large universe of documents.”

That’s how the Web first defined itself to the world.

Today is apparently the 20th anniversary of the moment when Tim Berners-Lee and his colleagues at CERN, the advanced physics lab in Geneva, made the Web’s underlying code free and public. CERN has a big project up to document and celebrate. As part of that project, it has posted a reproduction of the home page of the first public website.

The definition above is the first sentence on that page. Let’s unpack it!

The WorldWideWeb

I’m guessing this odd treatment — one word with CamelCase capitalization — was an inheritance from the Unix programming world in which Tim Berners-Lee worked and the Web hatched. It’s been years since anyone wrote it this way (even the W3C adds spaces). Spaces don’t work in old-school file names and the Web was conceived as a direct way to interconnect the file systems on networked servers, so leaving out the spaces made sense. Today it’s a style-book fight just to keep people from lower-casing “the Web.”

wide-area

The Web was all about moving our conception of a network from the thing that let one computer talk to another (or a printer) in an office to the thing that connected people and data around the world. In those days networks were categorized as “LANs” — local-area networks — or “WANs” — wide-area networks. LANs occupied physically proximate spaces like large offices or, later, homes. WANs were bigger — computers connected first by phone lines and later by an alphabet soup of higher-speed connections like ISDN, DSL, T1, and so forth. But it wasn’t clear what one would do with a WAN until the Web came along and showed us.

hypermedia

The term emerged from Ted Nelson’s work on hypertext and was popularized by Apple’s HyperCard; it means texts and other media that are connected by crosslinks. The Web made links second nature for many of us, but we still haven’t fully digested all their possibilities — or stopped arguing about their pros and cons.

information retrieval

It’s fascinating to recall just how simple the Web’s bones are. Its underlying protocol, HTTP, provides a simple collection of action verbs — “get,” “post” and “put” — that describe sending and receiving information. That’s it. All the other stuff we do online today is built on that foundation.
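To see how spare that foundation really is, here’s a minimal sketch of those three verbs in action, using nothing but Python’s standard library. The host, paths, and form data are placeholders of my own, not real endpoints:

```python
# The Web's three basic verbs, spoken over a single connection.
# example.com and the paths below are stand-ins for illustration.
import http.client

def send(conn, method, path, body=None, headers=None):
    conn.request(method, path, body=body, headers=headers or {})
    resp = conn.getresponse()
    resp.read()  # drain the body so the connection can be reused
    print(method, path, "->", resp.status)

conn = http.client.HTTPSConnection("example.com")
send(conn, "GET", "/")                        # retrieve a document
send(conn, "POST", "/guestbook",              # send information to the server
     body="name=scott",
     headers={"Content-Type": "application/x-www-form-urlencoded"})
send(conn, "PUT", "/notes/hello.txt",         # place a document at an address
     body="hello, web",
     headers={"Content-Type": "text/plain"})
conn.close()
```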

initiative

The Web was not a startup. It was a collaborative “initiative.” This caused many in the tech industry to dismiss it; how could it ever compete against the mighty, money-driven behemoths like CompuServe, Prodigy and AOL, or, later, MSN?

universal access

The Web would be “free” and “open,” as the CERN page now says. No tollgates or licensing fees or dues or rent. Of course there was money in the system; the rapid commercialization of the Internet (on which the Web still rests) lay in the future in 1993, but it was already in sight. But the piece of the system that made the Web the Web was going to be free of charge and free to tinker with.

With the right networking technology, it’s easy to make something universally available; it’s much harder to create something that the universe actually wants. That was the genius of the Web.

large universe of documents

This is the phrase that still excites and haunts me. The Web was originally about “documents,” not functional code. It was a publishing platform for the sharing of what we now refer to as “static files.” The phrase reminds us of the irresistible invitation the Web made to non-programmers: you too can contribute! You don’t need to code! HTML is a “markup language” and can be learned in minutes! (That was true, then.)
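For anyone who never took that minutes-long lesson, here is roughly the whole of it: a complete, publishable page in the early Web’s style. This is a generic example of my own, not the actual CERN page, though the link points at the real first website:

```html
<!-- A complete early-style Web page: one document, one link -->
<html>
  <head>
    <title>My corner of the universe</title>
  </head>
  <body>
    <h1>Hello, Web</h1>
    <p>This is a document. It points to
       <a href="http://info.cern.ch/">another document</a>.</p>
  </body>
</html>
```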

Today’s Web is infinitely more capable, and more complex. Over the past decade, modern browsers and JavaScript have turned it into an adaptable programming environment that first rendered the old MS Office-driven desktop world obsolete and now faces its own challenges in the mobile world.

That’s great! It’s where I live and work now. But there will always be a corner of my mind and heart set aside for the Web as that simpler enterprise — that thing that just lets anyone explore and expand a “large universe of documents.”

Filed Under: Net Culture, Uncategorized

‘How to Be Yourself’: My Ignite talk about authenticity

February 10, 2013 by Scott Rosenberg

Ignite talks are an exquisite form of self-torture for which you voluntarily stand in front of a crowd and give a five-minute talk timed to twenty slides that advance, inexorably, every 15 seconds.

At the end of last year I gave one of these talks at NewsFoo, and the kind folks who organized that event provided some great video.

My theme was a topic I’ve grown increasingly fascinated by — “reality hunger,” the “authenticity bind,” and the nature of personal identity in the digital age.

Here’s my five minutes:

What’s with the references to RuPaul? At the conference I had the good/bad fortune of immediately following Mark Luckie onstage. Luckie’s talk on “Why RuPaul is Better At Social Media Than You” was way more fabulous than mine could ever be, as you can see:

There’s some great stuff in nearly all of the other Ignite talks from NewsFoo. They’re all here.

Filed Under: Net Culture, Personal

Recent work: NY Times’ 9-year-old terror error; local news ethics; Wikipedia

July 21, 2011 by Scott Rosenberg

Sometimes your labor on a bunch of projects comes to fruition all at once. Here are some links to recently published stuff:

Corrections in the Web Age: The Case of the New York Times’ Terror Error — How did a 2002 error in the New York Times wreck a KQED interview in 2011 about John Walker Lindh, the “American Taliban”? And what does the incident tell us about how newsroom traditions of verification and correction must evolve in the digital age? MediaBugs’ Mark Follman and I put together this case study and it’s all here in the Atlantic’s fantastic Tech section. If you’re wondering what the point of MediaBugs is or why I’ve spent so much of the past two years working on it, this is a good summary!

Rules of the Road: Navigating the New Ethics of Local Journalism: I spent a considerable amount of time last winter and spring interviewing a whole passel of editors and proprietors of local news sites as part of this project for JLab, trying to find the tough questions and dilemmas they face as old-fashioned journalism ethics collide with the new shapes local journalism is taking online. It was a blast doing the interviews and fun assembling the results with Andy Pergam, Jan Schaffer and everyone else at JLab. It’s all on the website but it’s also available in PDF and print.

Whose point of view?: In the American Prospect, I used Wikipedia’s article on Social Security as an example to explore how Wikipedia’s principle of “neutral point of view” can break down. Here’s an excerpt:

Wikipedia says virtually nothing about the system’s role as a safety net, its baseline protections against poverty for the elderly and the disabled, its part in shoring up the battered foundations of the American middle class, or its defined-benefit stability as a bulwark against the violent oscillations of market-based retirement piggy banks.

This is a problem—not just for Social Security’s advocates but for Wikipedia itself, which has an extensive corpus of customs and practices intended to root out individual bias.

Filed Under: Media, Mediabugs, Net Culture, Personal, Politics

Circles: Facebook’s reality failure is Google+’s opportunity

June 30, 2011 by Scott Rosenberg

Way back when I joined Facebook I was under the impression that it was the social network where people play themselves. On Facebook, you were supposed to be “real.” So I figured, OK, this is where I don’t friend everyone indiscriminately; this is where I only connect with people I really know.

I stuck with that for a little while. But there were two big problems.

First, I was bombarded with friend requests from people I barely knew or didn’t know at all. Why? It soon became clear that large numbers of people weren’t approaching Facebook with the reality principle in mind. They were playing the usual online game of racking up big numbers to feel important. “Friend count” was the new “unique visitors.”

Then Facebook started to get massive. And consultants and authors started giving us advice about how to use Facebook to brand ourselves. And marketing people began advocating that we use Facebook to sell stuff and, in fact, sell ourselves.

So which was Facebook: a new space for authentic communication between real people — or a new arena for self-promotion?

I could probably have handled this existential dilemma. And I know it’s one that a lot of people simply don’t care about. It bugged me, but it was the other Facebook problem that made me not want to use the service at all.

Facebook flattens our social relationships into one undifferentiated blob. It’s almost impossible to organize friends into discrete groups like “family” and “work” and “school friends” and so forth. Facebook’s just not built that way. (This critique is hardly original to me. But it’s worth repeating.)

In theory Facebook advocates a strict “one person, one account” policy, because each account’s supposed to correlate to a “real” individual. But then sometimes Facebook recommends that we keep a personal profile for our private life and a “page” for our professional life. Which seems an awful lot like “one person, two accounts.”

In truth, Facebook started out with an oversimplified conception of social life, modeled on the artificial hothouse community of a college campus, and it has never succeeded in providing a usable or convenient method for dividing or organizing your life into its different contexts. This is a massive, ongoing failure. And it is precisely where Facebook’s competitors at Google have built the strength of their new service for networking and sharing, Google+.

Google+ opened a limited trial on Tuesday, and last night it hit some sort of critical mass in the land of tech-and-media early adopters. Invitations were flying, in an eerie and amusing echo of what happened in 2004, when Google opened its very first social network, Orkut, to the public, and the Silicon Valley elite flocked to it with glee.

Google+ represents Google’s fourth big bite at building a social network. Orkut never took off because Google stopped building it out; once you found your friends there was nothing to do there. Wave was a fascinating experiment in advanced technology that was incomprehensible to the average user, and Google abandoned it. Buzz was (and is) a Twitter-like effort that botched its launch by invading your Gmail inbox and raiding your contact list.

So far Google+ seems to be getting things right: It’s easy to figure out, it explains itself elegantly as you delve into its features, it’s fast (for now, at least, under a trial-size population) and it’s even a bit fun.

By far the most interesting and valuable feature of Google+ is the idea of “circles” that it’s built upon. You choose friends and organize them into different “circles,” or groups, based on any criteria you like — the obvious ones being “family,” “friends,” “work,” and so on.

The most important thing to know is that you use these circles to decide who you’ll share what with. So, if you don’t want your friends to be bugged by some tidbit from your workplace, you just share with your workplace circle. Google has conceived and executed this feature beautifully; it takes little time to be up and running.

The other key choice is that you see the composition of your circles but your friends don’t: It’s as if you’re organizing them on your desktop. Your contacts never see how you’re labeling them, but your labeling choices govern what they see of what you share.
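Here’s a toy sketch of that model as I understand it (my own illustration, not Google’s code): circles live privately on the owner’s side, and their only job is to compute the audience for each post.

```python
# A toy model of circles-style sharing. Circles are visible only to
# their owner; they determine who sees each post, nothing more.
# (My sketch of the concept, not Google's implementation.)
from dataclasses import dataclass, field

@dataclass
class Profile:
    name: str
    circles: dict = field(default_factory=dict)   # circle name -> set of contacts
    posts: list = field(default_factory=list)     # (item, audience) pairs

    def share(self, item, *circle_names):
        audience = set()
        for c in circle_names:
            audience |= self.circles.get(c, set())
        self.posts.append((item, audience))

    def feed_for(self, viewer):
        # A contact sees only what was shared with a circle they belong
        # to -- never the circle labels themselves.
        return [item for item, audience in self.posts if viewer in audience]

me = Profile("scott", circles={"work": {"andrew"}, "family": {"norah"}})
me.share("office tidbit", "work")
print(me.feed_for("andrew"))   # ['office tidbit']
print(me.feed_for("norah"))    # []
```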

I’m sure problems will surface with this model but so far it seems sound and useful, and it’s a cinch to get started with it. Of course, if you’re already living inside Facebook, Google has a tough sell to make. You’ve invested in one network, you’re connected there; why should you bother? But if, like me, you resisted Facebook, Google+ offers a useful alternative that’s worth exploring.

The ideal future of social networking is one that isn’t controlled by any single company. But social networks depend on scale, and right now it’s big companies that are providing that.

Lord knows Google’s record isn’t perfect. But in this realm I view it as the least of evils. Look at the competition: Facebook is being built by young engineers who don’t have lives, and I don’t trust it to understand the complexity of our lives. It’s also about to go public and faces enormous pressure to cash in on the vast network it’s built. Twitter is a great service for real-time public conversation but it’s no better at nuanced social interaction than Facebook. Apple is forging the One Ring to rule all media and technology, and it’s a beaut, but I’ll keep my personal relationships out of its hands as long as I can. Microsoft? Don’t even bother.

Of the technology giants, Google — despite its missteps — has the best record of helping build and expand the Web in useful ways. It’s full of brilliant engineers who have had a very hard time figuring out how to transfer their expertise from the realm of code to the world of human interaction. But it’s learning.

So I’ll embrace the open-source, distributed, nobody-owns-it social network when it arrives, as it inevitably will, whether we get it from the likes of Diaspora and Status.net or somebody else. In the meantime, Google+ is looking pretty good. (Except for that awful punctuation-mark-laden name.)

MORE READING:

Gina Trapani’s notes on “What Google+ Learned from Buzz and Wave”

Marshall Kirkpatrick’s First Night With Google+

Filed Under: Net Culture, Technology

Salon’s TableTalk shutdown: What we can learn from the story of a pioneering online community

May 12, 2011 by Scott Rosenberg

[Image: Table Talk home page, circa 1999]

Salon.com Wednesday announced plans to close Table Talk, the online discussion space and community that has operated continuously since Salon’s launch on Nov. 20, 1995. I was involved in Table Talk’s creation and management for its first several years, and when I read the news, I flashed back to my first day at Salon.

As the tech-savviest of a not-tech-savvy-at-all gang of newspaper refugees trying to build a web magazine, I got pulled over by our then-publisher. He’d been tearing his hair out trying to get a group of unruly Cornell students to write the software that would power Table Talk, which was going to be Salon’s big bid for being not just an online magazine but an “interactive” website worthy of the Salon name. Things weren’t going well. “I want you to project manage this,” the publisher said. I thought, “What do I know from ‘project manage’? I’m a critic!” Then I dove in, because, in a startup with six employees, that was what you did.

For me it was the start of a deepening engagement with and affection for the excitement, complexity and pitfalls of building software-powered websites. (Salon itself was lovingly hand-coded then and for several years after.) We got Table Talk launched, sort of, though within weeks we had to ditch the version those Cornell kids had built and start fresh. Said kids took their software and built TheGlobe.com with it, which went on to an impossibly successful IPO at the height of the dotcom bubble before a spectacular flameout.

The original idea was that every Salon article would have a link at the end to a Table Talk thread. The articles would serve, in part, as discussion-starters and then our community would kick the ideas around. It wasn’t a dumb plan — story comments are now a Web standard. But the way we built it, modeled on the experiences some of us had had as members of The WELL, Table Talk was a separate space with threaded discussions that anyone could add to. The conversations weren’t tied to the stories very well, and we quickly learned that the community members — who took to the project avidly — preferred to talk about what they wanted to talk about. Salon’s editors and writers rarely hung out in TT, and it didn’t take long before the TT members developed a dysfunctional relationship with Salon’s staff — simultaneously craving our attention and resenting our presence.

So TT went its somewhat separate way from Salon-the-magazine, which soon started running a simple, hand-coded letters to the editor page to highlight actual responses to our stories. Mary Elizabeth Williams, its original and longtime host, managed the discussion space with great love and devotion for years. We all learned a lot about dealing with anonymity and trolls, personal authenticity and online performance art, technical woes and social dynamics.

What we never managed to do was find a way to knit the energy and talent of Table Talk’s remarkable community with the skills and money being invested into Salon.com itself. Instead, Salon tried over and over to find different models for tying community together with journalism. In 1999 it acquired The WELL. In 2002 it launched a blog program. In 2005 it transformed Letters to the Editor into a more web-standard comments feature. In 2008 it launched Open Salon as a modern, social blogging platform.

As a result, Table Talk became, more and more, a separate entity. When we started Salon Premium in 2001 as a paid service that let users see an ad-free site and some premium content, we rolled Table Talk into it: its pages were readable by anyone, but you needed to pay to post. That ensured its survival but also assured its marginality. Over the years Salon’s management (which I was a part of until 2007) considered, over and over, whether to shut it down. It generated large numbers of page views from a relatively small number of users, and advertisers were not excited by that. Its WebCrossing software was increasingly out of step with the direction the Web was moving in. Yet TT’s community remained close-knit and vibrant. In the wake of this week’s announcement, its members, unsurprisingly, are already trying to figure out ways to continue their conversations after the site’s announced June 10 shutdown date.

I don’t second-guess Salon’s leadership for deciding to end TT today — I might well do the same in their shoes. I do think there’s a lesson here, though, not just for Salon but for all the other enterprises out there today that dream of doing what we tried for so long to do at Salon. (Hi, Arianna; hi, Tina.)

The lesson is simple: Don’t think of “conversation” and “community” as subsidiaries to “content.” They aren’t after-thoughts, add-ons, or sidebars. They are the point of the Web. Here’s how I put it in Say Everything:

[Interactivity] is just a clumsy word for communication. That communication — each reader’s ability to be a writer as well — was not some bell or whistle. It was the whole point of the Web, the defining trait of the new medium — like motion in movies, or sound in radio, or narrow columns of text in newspapers.

Editors and publishers keep crossing their fingers and hoping to find some new platform that reverses this principle and puts them back in the comfortable realm of piping content out to consumers. They think this stuff will finally settle down. But change keeps accelerating instead. Today we are feeding one another stories, passing links around, telling friends what we’re fascinated by or excited about or steamed over. My Flipboard is more useful and interesting to me than the front page of the New York Times (sorry, Bill Keller). The conversation isn’t an after-thought. It’s interesting in itself, and it’s how we inform one another.

So Table Talk is dead: RIP. But Table Talk is everywhere, too — on Facebook and Twitter, all over the blogosphere, and in a billion comment threads. Table talk is what we do online. It’s not what comes after a publication’s stories. It’s what comes before.

BONUS LINK: If you haven’t already, go read Paul Ford’s wonderful essay on the nature of the Web and its fundamental question — “Why wasn’t I consulted?”

Filed Under: Net Culture, Salon

Thanks for the memories! Why Facebook “download” rocks

October 19, 2010 by Scott Rosenberg

At Open Web Foo I led a small discussion of what I called the “Social Web memory hole” — the way that social networks suck in our contributions and then tend to bury them or make them inaccessible to their authors. It was a treat to share my ideas with this crowd of super-smart tech insiders, though I did have to spell out the Orwell reference (ironic nod to 1984, not joke about memory leaks in program code!).

What I heard was that this problem — which I continue to find upsetting — is most likely a temporary one. Twitter, I was assured, understands the issue and views it as a “bug.” Which is encouraging — except how many years do we wait before concluding that the bug is never going to be fixed?

Meanwhile, the same weekend, Facebook had just introduced its new “download your information” feature. Which is why, at this moment of Wall Street Journal-inspired anti-Facebook feeding frenzy, I want to offer a little counter-programming.

I do not intend to argue about whether Facebook apps passing user IDs in referrer headers is an evil violation of privacy rules, or just the way the Web works. There are some real issues buried in here, but unfortunately, the Journal’s “turn the alarms to 11” treatment has made thoughtful debate difficult. (This post over at Freedom to Tinker is a helpfully sober explanation of the controversy.)

So while the Murdoch media — which has its own axes to grind — bashes Facebook, I’m here today to praise it, because I finally had a chance to use Facebook’s “Download Your Information” tool, and it’s a sweet thing.

I have been a loud voice over the years complaining that Facebook expects us to give it the content of our lives and offers us no route to get that content back out. Facebook has now provided a tool that does precisely this. And it’s not just a crude programmer’s tool — some API that lets developers romp at will but leaves mere mortals up a creek. Facebook is giving real-people users their information in a simple, accessible format, tied up with a nice HTML bow. What you get in Facebook’s download file is a Web directory that you can navigate in your browser, with all your posts, photos and other contributions, well-presented and well-organized.

In my case, I don’t have vast quantities of stuff because I haven’t been a very active Facebook user. The main thing I do on Facebook, in fact, is automatically cross-post my Twitter messages so my friends who hang out on Facebook can see them too. Twitter, of course, still has that “bug” that makes it really hard for you to access your old messages. But now, I actually have an easily readable and searchable archive of my Twitter past — thanks to Facebook! Which, really, is both ironic and delicious.

Here’s what Facebook’s Mark Zuckerberg had to say about the Download feature in a TechCrunch interview:

I think that this is a pretty big step forward in terms of making it so that people can download all of their information, but it isn’t going to be all of what everyone wants. There are going to be questions about why you can’t download your friend’s information too. And it’s because it’s your friend’s and not yours. But you can see that information on Facebook, so maybe you should be able to download it… those are some of the grey areas.

So for this, what we decided to do was stick to content that was completely clear. You upload those photos and videos and wall posts and messages, and you can take them out and they’re yours, undisputed — that’s your profile. There’s going to be more that we need to think through over time. One of the things, we just couldn’t understand why people kept on saying there’s no way to export your information from Facebook because we have Connect, which is the single biggest large-scale way that people bring information from one site to another that I think has ever been built.

So it seems that Zuckerberg and his colleagues felt that they already let you export your information thanks to Facebook Connect. Again: True for developers but useless for everyday users, unless and until someone writes the code that lets you actually get your data — which is what Facebook itself has now done.

I think this means Facebook is beginning to take more seriously its aspiration to be the repository of our collective memory — a project that Zuckerberg lieutenant Christopher Cox has rapturously described but that Facebook has never seemed that serious about.

I still have questions and concerns about Facebook as the chokepoint-holder of a new social-network-based Web. I’d really rather see things go in the federation direction that people like Status.net, Identi.ca and Diaspora are all working on.

Still, Facebook isn’t going anywhere. It’s a fact of Web life today, and so its moves towards letting users take their data home with them deserve applause.

What I’d like to see next is an idea that came out of that Open Web Foo session: As we turn Facebook and other social services into the online equivalent of the family album, the scrapbook and the old shoebox full of photos, we’re going to need good, simple tools for people to work with them — to take the mountains of stuff we’re piling up inside these services and distill memorable stories from them.

The technologists in the room imagined an algorithmic way to do this — some version of Flickr’s “interestingness” rating, where the service could essentially do the work for you by figuring out which of your photos and posts had the most long-term value.
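To make the idea concrete, here’s a toy version of such a scorer. It is entirely my own invention, not Flickr’s or Facebook’s actual algorithm: it weights reactions by how long after posting they arrive, so items with lasting appeal float up.

```python
# A toy "interestingness" scorer. Reactions that keep arriving long
# after posting count for more than a burst on day one, a crude proxy
# for long-term value. (My illustration only, not any real service's
# ranking algorithm.)
from datetime import datetime, timedelta

def interestingness(posted, reactions):
    """reactions: datetimes at which someone liked/commented on the item."""
    score = 0.0
    for t in reactions:
        age_days = (t - posted).days
        score += 1.0 + age_days / 30.0   # later reactions weigh more
    return score

posted = datetime(2010, 1, 1)
burst = [posted + timedelta(days=d) for d in (0, 0, 1)]
slow_burn = [posted + timedelta(days=d) for d in (5, 90, 300)]
print(interestingness(posted, burst))      # ~3.0
print(interestingness(posted, slow_burn))  # ~16.2: ranks higher
```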

I’m sure there’s a future in that. My vision, as a writer, is something simpler: a tool that would let us easily assemble photos and text and video from our Facebook troves and turn them into pages that tell stories from our own and our friends’ lives. Something like Storify, maybe. I think we’re going to need this, whether from Facebook itself or from a third-party app developer.

That “cloud” we’re seeding with our memories? Let’s make it rain the stories of our lives.

UPDATE: Om Malik has some insights into some of the other companies involved in the Facebook-shares-your-ID story. And if you want to play with FB’s “Download” tool, you’ll find it in Facebook under Account > Account settings > Download your information.

Filed Under: Events, Media, Net Culture, Technology

The Web Parenthesis: Is the “open Web” closing?

October 12, 2010 by Scott Rosenberg

Heard of the “Gutenberg parenthesis”? This is the intriguing proposition that the era of mass consumption of text ushered in by the printing press more than five centuries ago was a mere interlude between the previous era of predominantly oral culture and a new digital-oral era on whose threshold we may now sit.

That’s a fascinating debate in itself. For the moment I just want to borrow the “parenthesis” concept — the idea that an innovative development we are accustomed to viewing as a step up some progressive ladder may instead be simply a temporary break in some dominant norm.

What if the “open Web” were just this sort of parenthesis? What if the advent of a (near) universal publishing platform open to (nearly) all were not itself a transformative break with the past, but instead a brief transitional interlude between more closed informational regimes?

That’s the question I weighed last weekend at Open Web Foo Camp. I’d never been to one of O’Reilly’s Foo Camp events — informal “unconferences” at the publisher’s Sebastopol offices — and it was a pleasure to hang out with an extraordinary gang of smart people there. Here’s what I came away with.

For starters, of course everyone has a different take on the meaning of “openness.” Tantek Celik’s post lays out some of the principles embraced by ardent technologists in this field:

  • open formats for freely publishing what you write, photograph, video and otherwise create, author, or code (e.g. HTML, CSS, Javascript, JPEG, PNG, Ogg, WebM etc.).
  • domain name registrars and web hosting services that, like phone companies, don’t judge your content.
  • cheap internet access that doesn’t discriminate based on domains

But for many users, these principles are distant, complex, and hard to fathom. They might think of the iPhone as a substantially “open” device because hey, you can extend its functionality by buying new apps — that’s a lot more open than your Plain Old Cellphone, right? In the ’80s Microsoft’s DOS-Windows platform was labeled “open” because, unlike Apple’s products, anyone could manufacture hardware for it.

“Open,” then, isn’t a category; it’s a spectrum. The spectrum runs from effectively locked-down platforms and services (think: broadcast TV) to those that are substantially unencumbered by technical or legal constraint. There is probably no such thing as a totally open system. But it’s fairly easy to figure out whether one system is more or less open than another.

The trend-line of today’s successful digital platforms is moving noticeably towards the closed end of this spectrum. We see this at work at many different levels of the layered stack of services that give us the networks we enjoy today — for instance:

  • the App Store — iPhone apps, unlike Web sites and services, must pass through Apple’s approval process before being available to users.
  • Facebook / Twitter — These phenomenally successful social networks, though permeable in several important ways, exist as centralized operations run by private companies, which set the rules for what developers and users can do on them.
  • Comcast — the cable company that provides much of the U.S.’s Internet service is merging with NBC and faces all sorts of temptations to manipulate its delivery of the open Web to favor its own content and services.
  • Google — the big company most vocal about “open Web” principles has arguably compromised its commitment to net neutrality, and Open Web Foo attendees raised questions about new wrinkles in Google Search that may subtly favor large services like Yelp or Google-owned YouTube over independent sites.

The picture is hardly all-or-nothing, and openness regularly has its innings — for instance, with developments like Facebook’s new download-your-data feature. But once you load everything on the scales, it’s hard not to conclude that today we’re seeing the strongest challenge to the open Web ideal since the Web itself began taking off in 1994-5.

Then the Web seemed to represent a fundamental break from the media and technology regimes that preceded it — a mutant offspring of the academy and fringe culture that had inexplicably gone mass market and eclipsed the closed online services of its day. Now we must ask, was this openness an anomaly — a parenthesis?

My heart tells me “no,” but my brain says the answer will be yes — unless we get busy. Openness is resilient and powerful in itself, but it can’t survive without friends, without people who understand it explaining it to the public and lobbying for it inside companies and in front of regulators and governments.

For me, one of the heartening aspects of the Foo weekend was seeing a whole generation of young developers and entrepreneurs who grew up with a relatively open Web as a fact of life begin to grapple with this question themselves. And one of the questions hanging over the event, which Anil Dash framed, was how these people can hang on to their ideals once they move inside the biggest companies, as many of them have.

What’s at stake here is not just a lofty abstraction. It’s whether the next generation of innovators on the Web — in technology, in services, or in news and publishing, where my passion lies — will be free to raise their next mutant offspring. As Steven Johnson reminds us in his new book, when you close anything — your company, your service, your mind — you pay an “innovation tax.” You make it harder for ideas to bump together productively and become fertile.

Each of the institutions taking a hop toward the closed end of the openness spectrum today has inherited advantages from the relatively open online environment of the past 15 years. Let’s hope their successors over the next 15 can have the same head start.

Filed Under: Business, Events, Media, Net Culture, Technology

In Defense of Links, Part Two: Money changes everything

August 31, 2010 by Scott Rosenberg

This is the second post in a three-part series. The first part was Nick Carr, hypertext and delinkification. The third part is In links we trust.

The Web is deep in many directions, yet it is also, undeniably, full of distractions. These distractions do not lie at the root of the Web’s nature. They’re out on its branches, where we find desperate businesses perched, struggling to eke out one more click of your mouse, one more view of their page.

Yesterday I distinguished the “informational linking” most of us use on today’s Web from the “artistic linking” of literary hypertext avant-gardists. The latter, it turns out, is what researchers were examining when they produced the studies that Nick Carr dragooned into service in his campaign to prove that the Web is dulling our brains.

Today I want to talk about another kind of linking: call it “corporate linking.” (Individuals and little-guy companies do it, too, but not on the same scale.) These are links placed on pages because they provide some tangible business value to the linker: they cookie a user for an affiliate program, or boost a target page’s Google rank, or aim to increase a site’s “stickiness” by getting the reader to click through to another page.

I think Nick Carr is wrong in arguing that linked text is in itself harder to read than unlinked text. But when he maintains that reading on the Web is too often an assault of blinking distractions, well, that’s hard to deny. The evidence is all around us. The question is, why? How did the Web, a tool to forge connections and deepen understanding, become, in the eyes of so many intelligent people, an attention-mangling machine?

Practices like splitting articles into multiple pages or delivering lists via pageview-mongering slideshows have been with us since the early Web. I figured they’d die out quickly, but they’ve shown great resilience — despite being crude, annoying, ineffective, hostile to users, and harmful to the long-term interests of their practitioners. There seems to be an inexhaustible supply of media executives who misunderstand how the Web works and think that they can somehow beat it into submission. Their tactics have produced an onslaught of distractions that are neither native to the Web’s technology nor inevitable byproducts of its design. The blinking, buzzing parade is, rather, a side-effect of business failure, a desperation move on the part of flailing commercial publishers.

For instance, Monday morning I was reading Howard Kurtz’s paean to the survival of Time magazine when the Washington Post decided that I might not be sufficiently engaged with its writer’s words. A black prompt box helpfully hovered in from the right page margin with a come-hither look and a “related story” link. How mean to Howie, I thought. (Over at the New York Times, at least they save these little fly-in suggestion boxes till you’ve reached the end of a story.)

If you’re on a web page that’s weighted down with cross-promotional hand-waving, revenue-squeezing ad overload and interstitial interruptions, odds are you’re on a newspaper or magazine site. For an egregiously awful example of how business linking can ruin the experience of reading on the Web, take a look at the current version of Time.com.

Filed Under: Business, Media, Net Culture

In Defense of Links, Part One: Nick Carr, hypertext and delinkification

August 30, 2010 by Scott Rosenberg

For 15 years, I’ve been doing most of my writing — aside from my two books — on the Web. When I do switch back to writing an article for print, I find myself feeling stymied. I can’t link!

Links have become an essential part of how I write, and also part of how I read. Given a choice between reading something on paper and reading it online, I much prefer reading online: I can follow up on an article’s links to explore source material, gain a deeper understanding of a complex point, or just look up some term of art with which I’m unfamiliar.

There is, I think, nothing unusual about this today. So I was flummoxed earlier this year when Nicholas Carr started a campaign against the humble link, and found at least partial support from some other estimable writers (among them Laura Miller, Marshall Kirkpatrick, Jason Fry and Ryan Chittum). Carr’s “delinkification” critique is part of a larger argument contained in his book The Shallows. I read the book this summer and plan to write about it more. But for now let’s zero in on Carr’s case against links, on pages 126-129 of his book as well as in his “delinkification” post.

The nub of Carr’s argument is that every link in a text imposes “a little cognitive load” that makes reading less efficient. Each link forces us to ask, “Should I click?” As a result, Carr wrote in the “delinkification” post, “People who read hypertext comprehend and learn less, studies show, than those who read the same material in printed form.”

This appearance of the word “hypertext” is a tipoff to one of the big problems with Carr’s argument: it mixes up two quite different visions of linking.

“Hypertext” is the term invented by Ted Nelson in 1965 to describe text that, unlike traditional linear writing, spreads out in a network of nodes and links. Nelson’s idea hearkened back to Vannevar Bush’s celebrated “As We May Think,” paralleled Douglas Engelbart’s pioneering work on networked knowledge systems, and looked forward to today’s Web.

This original conception of hypertext fathered two lines of descent. One adopted hypertext as a practical tool for organizing and cross-associating information; the other embraced it as an experimental art form, which might transform the essentially linear nature of our reading into a branching game, puzzle or poem, in which the reader collaborates with the author. The pragmatists use links to try to enhance comprehension or add context, to say “here’s where I got this” or “here’s where you can learn more”; the hypertext artists deploy them as part of a larger experiment in expanding (or blowing up) the structure of traditional narrative.

These are fundamentally different endeavors. The pragmatic linkers have thrived in the Web era; the literary linkers have so far largely failed to reach anyone outside the academy. The Web has given us a hypertext world in which links providing useful pointers outnumber links with artistic intent a million to one. If we are going to study the impact of hypertext on our brains and our culture, surely we should look at the reality of the Web, not the dream of the hypertext artists and theorists.

The other big problem with Carr’s case against links lies in that ever-suspect phrase, “studies show.” Any time you hear those words your brain-alarm should sound: What studies? By whom? What do they show? What were they actually studying? How’d they design the study? Who paid for it?

To my surprise, as far as I can tell, not one of the many other writers who weighed in on delinkification earlier this year took the time to do so. I did, and here’s what I found.

Filed Under: Culture, Media, Net Culture
