Wordyard

Hand-forged posts since 2002

Scott Rosenberg


Archives

Where did 2017 go? Into Backchannel

December 5, 2017 by Scott Rosenberg 1 Comment

From July through Thanksgiving this year I worked with Steven Levy, Jessi Hempel, and their team at Backchannel as an editor — an opportunity that became available while executive editor Sandra Upson was on leave. I’ve been around enough online media shops to recognize the fine hum of a great editorial outfit, and this was definitely the feeling at Backchannel, which now operates as a sort of magazine-within-a-magazine at Wired.

I’ve actually been writing for Backchannel since Steven and Sandra started it at Medium — and I’m proud of early pieces I wrote for them on the nature of the blockchain and corporate programming languages. But this was a chance to edit and write at the same time, which has always been my favorite mode of work. As this gig concludes, I want to share a brief recap of the stories I wrote this year — if for nothing else, to help me find them again!

The Fashion App Founder With a Pocket Full of Visas
Purva Gupta talks about what it’s like to be an immigrant leading a startup. (11/22/17)

Taxes on Tech Need an Overhaul — But Not Like This
Taxing stock options on vesting is probably a dumb idea, but let’s talk about ways tax policy could encourage companies to make good on options’ original spread-the-wealth promise. (11/15/17)

The Lean Startup Pioneer Wants Everyone to Think Like a Founder
A conversation with Eric Ries about his new book, The Startup Way. (11/8/17)

The End of the Cult of the Founder
The Silicon Valley founder is uniquely ill-prepared to deal with complex political, social, and economic problems. (11/8/17)

Why Artificial Intelligence Is Still Waiting For Its Ethics Transplant
A conversation with Kate Crawford. (11/1/17)

Burning Memories: Rethinking Digital Archives After the Napa Fire
What artifacts survive, what information endures, and what can you do? (10/25/17)

This Techie Is Using Blockchain to Monetize His Time
Does charging people for your time using a personal digital currency make any sense? Evan Prodromou talks about his “EvanCoin” project. (10/18/17)

Google Home, Alexa, and Siri Are Forcing Us to Make a Serious Decision
Be careful which digital assistant you hire — because firing them isn’t easy. (10/11/17)

Silicon Valley’s Trillion-Dollar Numbers Game
Why do so many startups tout their “total addressable market” when it’s a largely fictional metric? (10/4/17)

Firewalls Don’t Stop Hackers. AI Might
A conversation with DarkTrace CEO Nicole Eagan. (9/27/17)

The Unbearable Irony of Meditation Apps
Can your smartphone possibly help you focus and breathe? (8/30/17)

Bitcoin Makes Even Smart People Feel Dumb
The ’90s web was easy to fathom and participants flocked. Cryptocurrencies, not so much. (8/9/17)

Artificial Intelligence at Salesforce: An Inside Look
Salesforce’s goal is “AI for everyone” — or at least every company. (8/2/17)

Silicon Valley’s First Founder Was Its Worst
What today’s startup world can learn from the (bad) example of William Shockley. (7/19/17)

How Google Book Search Got Lost
Google’s first “moonshot” project ended up way more mundane than anyone expected. (4/11/17)

Inside Dropbox’s Identity Overhaul
How an innovator in cloud storage designed and developed its new collaborative document authoring system. (1/30/17)

Filed Under: Personal, Technology

Lies our bubbles taught us

April 30, 2014 by Scott Rosenberg Leave a Comment


Of course it’s a bubble! Let’s not waste any time on that one.

The fact of a bubble doesn’t mean that all the stuff people in tech are building today is worthless, nor will it all vanish when (not “if”) the bubble deflates.

It does mean that a certain set of reality distortions infects our conversations and our coverage of the industry. If you are old enough to have lived through one or more previous turns of this wheel, you will recognize them. If they’re not familiar to you, herewith, a primer: three lies you will most commonly hear during a tech bubble.

(1) Valuation is reality

When one company buys another and pays cash, it makes sense to say that the purchased company is “worth” what the buyer paid. But that’s not how most tech-industry transactions work. Mostly, we’re dealing in all-stock trades, maybe with a little cash sweetener.

Stock prices are volatile, marginal and retrospective: they represent what the most recent buyer and seller agreed to pay. They offer no guarantee that the next buyer and seller will pay the same thing. The myth of valuation is the idea that you can take this momentary-snapshot stock price and apply it to the entire company — as if the whole market will freeze and let you do so.

Even more important: Stock is speculative — literally. When I buy your company with stock in my company, I’m not handing you money I earned; I’m giving you a piece of the future of my company (the assumption that someday there will be a flow of profits). There’s nothing wrong with that, but it’s not remotely the same as giving you cash you can walk away with.

When one company buys another with stock, the entire transaction is performed with hopes and dreams. This aspect of the market is like capitalism’s version of quantum uncertainty: No one actually knows what the acquired company is “worth” until the sell-or-hold choices that people will make play out over time. Some people might get rich; some might get screwed.

Too often, our headlines and stories and discussions of deals closed and fortunes made ignore all this. Maintaining the blur is in the interests of the deal-makers and the fortune-winners. Which is why it persists, and spreads every time the markets go bananas. Then young journalists who have never seen a tech bubble before sally forth, wide-eyed, to gape at the astronomical valuations of itty-bitty startups, without an asterisk in sight.

This distorted understanding of valuation takes slightly different forms depending on the status of the companies involved.

With a publicly traded company, it’s easy to determine a valuation: Just multiply price per share times the total number of outstanding shares, right? But the more shares anyone tries to sell at any one time, the less likely it is that he will be able to get the same price. The more shares you shovel into the market, most of the time, the further the price will drop. (Similarly, if you decide you want to buy a company by buying a majority of its stock, you’ll probably drive the price up.) So stock valuations are elusive moving targets.

For private companies, though, it’s even worse. Basically, the founders/owners and potential investors sit down and agree on any price they like. They look for rationales anywhere and everywhere, from comparable companies and transactions to history to media coverage and rumor to “the back of this envelope looks too empty, let’s make up some numbers to fill it.” If other investors are already in the game, they’re typically involved too; they’re happy if their original investment is now worth more, unhappy if their stake (the percentage ownership of the company their investment originally bought them) is in any way diluted.

There are all sorts of creative ways of cutting this pie. The main thing to know is, it’s entirely up to owners and investors how to approach placing a value on the company. All that private investment transactions (like Series A rounds) tell you is what these people think the company is worth — or what they want you to think it’s worth.

Also: it’s ridiculous to take a small transaction — like, “I just decided to buy 2 percent of this company for $20 million because I think they’re gonna be huge” — and extrapolate to the full valuation of the company: “He just bought 2 percent of us for $20 million, so we’re now a $1 billion company, yay!”
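
To make the arithmetic concrete, here is a minimal sketch of both versions of the move: the private-round extrapolation and the public market-cap multiplication. Every number is hypothetical (the 2 percent and $20 million come from the example above; the share figures are invented).

```python
# All figures are hypothetical; nothing here is real deal data.
stake_fraction = 0.02        # an investor buys 2 percent of the company...
price_paid = 20_000_000      # ...and pays $20 million for it

# The "implied" valuation simply scales that one price up to 100 percent:
implied_valuation = price_paid / stake_fraction
print(f"Implied valuation: ${implied_valuation:,.0f}")   # $1,000,000,000

# The public-company version is the same move: take the latest marginal
# trade and multiply it across every outstanding share.
price_per_share = 25.00
shares_outstanding = 40_000_000
market_cap = price_per_share * shares_outstanding
print(f"Market cap: ${market_cap:,.0f}")                  # $1,000,000,000
```

Either way, the headline number is built from one marginal transaction and then multiplied across ownership that never actually changed hands at that price.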

If you can keep persuading lots of people to keep paying $20 million for those 2 percent stakes, and you sell the whole company, then you’re a $1 billion company. Until then, you’re just a company that was able to persuade a person that it might be worth $1 billion someday. During a bubble, many such people are born every minute, but their money tends to disappear quickly, and they vanish from the landscape at the first sign of bust.

(Here’s a classic “X is worth Y” fib from the last bubble of the mid-2000s. Note that Digg was ultimately acquired, years later, for a small fraction of the price BusinessWeek floated in 2006.)

(2) “We will always put our users first”

Many startup founders are passionate about their dedication to their users, in what is really Silicon Valley’s modern twist on the age-old adage that “the customer always comes first.” One of the side-effects of a bubble is that small companies can defer the messy business of extracting cash profits from customers and devote their energy to pampering users. Hooray! That makes life good for everyone for a while.

The trouble arises when founders forget that there is a sharp and usually irreconcilable conflict between “putting the user first” and delivering profits to investors. The best companies find creative ways to delay this reckoning, but no one escapes it. It’s most pronounced in advertising-based industries, where the user isn’t the real paying customer. But even tech outfits that have paying users face painful dilemmas: Software vendors still put cranky licensing schemes on their products, or create awkward tie-ins to try to push new products and services, or block interoperability with competitors even when it would make users’ lives easier. Even hardware makers will pad their bottom lines by putting nasty expiration codes on ink cartridges or charging ridiculous prices for tiny adapters.

But the ultimate bubble delusion is the founder’s pipe-dream that she can sell her company yet still retain control and keep “putting the user first” forever. In the public-company version of this phenomenon, the founder tells the world that nothing will change after the IPO, the company’s mission is just as lofty as ever, its dedication to the user just as fanatical. This certainty lasts as long as the stock price holds up. Legally and practically, however, the company now exists to “deliver value to shareholders,” not to deliver value to users. Any time those goals conflict — and they will — the people in the room arguing the shareholders’ cause will always hold the trump card.

In the private company version, the founder of some just-acquired startup proudly tells the world that, even though he has just sold his company off at a gazillion-dollar valuation, nothing will change, users will benefit, move right along. Listen, for instance, to WhatsApp founder Jan Koum, a privacy advocate and online-advertising skeptic, as he “sets the record straight” after Facebook acquired his company:

If partnering with Facebook meant that we had to change our values, we wouldn’t have done it. Instead, we are forming a partnership that would allow us to continue operating independently and autonomously. Our fundamental values and beliefs will not change. Our principles will not change. Everything that has made WhatsApp the leader in personal messaging will still be in place. Speculation to the contrary isn’t just baseless and unfounded, it’s irresponsible. It has the effect of scaring people into thinking we’re suddenly collecting all kinds of new data. That’s just not true, and it’s important to us that you know that.

Koum sounds like a fine guy, and I imagine he totally believes these words. But he’s deluding himself and his users by making promises in perpetuity that he can no longer keep. It’s not his company any more. He can say “partnership” as often as he likes; he has still sold his company. Facebook today may honor Koum’s privacy pledges, but who can say what Facebook tomorrow will decide?

This isn’t conspiracy thinking; it’s capitalism 101. Yet it’s remarkable how deeply a bubble-intoxicated industry can fool itself into believing it has transcended such inconvenient facts.

(3) This time, it’s different

If you’re 25 today then you were a college freshman when the global economy collapsed in 2007-8. If you went into tech, you’ve never lived through a full-on bust. Maybe it’s understandable for you to look at today’s welter of IPOs and acquisitions and stock millionaires and think that it’s just the natural order of things.

If you’re much older than that, though, no excuses! You know, as you should, that bubbles aren’t forever. Markets that go up go down, too. (Eventually, they go back up again.) Volatile as the tech economy is, it is also predictably cyclical.

Most recently, our bubbles have coincided with periods — like the late ’90s and the present — when the Federal Reserve kept interest rates low, flooding the markets with cash looking for a return. (Today, also, growing inequality has fattened the “play money” pockets of the very investors who are most likely to take risky bets.)

These bubbles end when there’s a sudden outbreak of sanity and sobriety, or when geopolitical trouble casts a pall on market exuberance. (Both of these happened in quick succession in 2000-2001 to end the original dotcom bubble.) Or a bubble can pop when the gears of the financial system itself jam up, as happened in 2007-8 to squelch the incipient Web 2.0 bubble. My guess is that today’s bubble will pop the moment interest rates begin to head north again — a reckoning that keeps failing to materialize, but must someday arrive.

It might be this year or next, it might be in three years or five, but sooner or later, this bubble will end too. Never mind what you read about the real differences between this bubble and that one in the ’90s. Big tech companies have revenue today! There are billions of users! Investors are smarter! All true. But none of these factors will stop today’s bubble from someday popping. Just watch.

Filed Under: Business, Technology

Dear publishers: When you want to switch platforms and “redesign” too? Don’t

April 9, 2014 by Scott Rosenberg 11 Comments


In my work at Grist, I had a rare experience: We moved an entire publishing operation — with a decade of legacy content, in tens of thousands of posts — from one software platform to another. And yet, basically, nothing broke. Given the scars I bear from previous efforts of this kind, this was an exhilarating relief.

I promised my former colleague Matt Perry (then technical lead at Grist, who bears much responsibility for our success in that move, along with my other former colleague Nathan Letsinger) that I’d share notes with the world on what we learned in this process. It’s taken me forever, but here they are.

Say you run a website that’s been around the block a few times already. You’re going to move your operation from one content management platform to another. Maybe you’ve decided it’s time to go with WordPress. Or some other fine system. Or you’re lucky enough, or crazy enough, to have a developer or a team of coders who’ve built you a custom system.

Then you look at your site’s design: the templates, the CSS, the interface, the structure and navigation, all the stuff that makes it look a certain way and behave a certain way. You think, boy, that’s looking old. Wouldn’t it be great to spiff everything up? And while you’re at it, that new platform offers so many exciting new capabilities — time to show them off!

It seems so obvious, doesn’t it? You’re already taking the time away from publishing, or community-building, or advocacy, or monetizing eyeballs, or whatever it is you do with your site, to shore up its technical underpinnings. Now is surely the perfect moment to improve its public face, too.

This is where I am going to grab you by the shoulders and tell you, sadly but firmly and clearly: NO. Do not go there.

Redesigning your site at the same time you’re changing the software it runs on is a recipe for disaster. Here Be Train Wrecks.

Don’t believe me? Go ahead then; do your redesign and your platform move at the same time! Here’s what you may find.

You’ve just split your team’s focus and energy. Unless you have a lot of excess capacity on the technical side — and every online publisher has, like, technical folks sitting around with nothing to do, right? — your developers and designers are already stretched to the limit putting out everyday fires. Any major project is ambitious. Two major projects at once is foolhardy.

You’re now stuck creating a big new design in the dark. That new platform isn’t live yet, so you can’t take the sane route of implementing the new design in bits and pieces in front of real live users. Your team is free to sit in a room and crank out work, sans feedback! Good luck with that.

You’re now working against the clock. Back-end platform changes are full of unpredictable gotchas, and almost always take longer than you think. That doesn’t have to matter a great deal. But the moment you tie the move to a big redesign project, you’re in a different situation. More often than not, the redesign is something that everyone in your company or organization has an investment in. Editors and creators have work with deadlines and must-publish-by dates. Business people have announcements and sales deals and marketing pushes that they need to schedule. The stakes are in the ground; your small-bore back-end upgrade is now a major public event. This is where the worst train wrecks (like that one at Salon over a decade ago that still haunts my dreams) happen.

Painful as it may be, and demanding of enormous self-restraint, the intelligent approach is to move all your data over on the back end first, while duplicating your current design on the new platform. Ideally, users won’t notice anything different.

I’m fully aware that this recommendation won’t come as news to many of you. It’s simple science, really: Fiddle with only one variable at a time so you can understand and fix problems as they arise. I’m happy to report that this approach not only makes sense in the abstract, but actually works in the field, too.
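
One practical way to hold that single variable steady is a content-parity spot check: request the same URLs from the old site and from the new platform, and confirm they still render the same pages. The sketch below is purely illustrative; the hostnames, paths, and the choice of comparing page titles are assumptions, not anything from the Grist migration.

```python
# Minimal content-parity check: if only the back end changed, the same
# paths on the old and new hosts should produce the same page titles.
# Hostnames and paths are placeholders.
import re
import urllib.request

OLD_HOST = "https://www.example.org"        # current production site
NEW_HOST = "https://staging.example.org"    # same content on the new platform

SAMPLE_PATHS = [
    "/article/some-old-post",
    "/article/another-post-from-2005",
]

def page_title(url):
    with urllib.request.urlopen(url, timeout=30) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    match = re.search(r"<title>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
    return match.group(1).strip() if match else ""

for path in SAMPLE_PATHS:
    old_title = page_title(OLD_HOST + path)
    new_title = page_title(NEW_HOST + path)
    status = "OK  " if old_title == new_title else "DIFF"
    print(f"{status} {path}: {old_title!r} vs {new_title!r}")
```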

(Of course, you may wish to go even further, and eliminate the whole concept of the site redesign as a discrete event. The best websites are continuously evolving. “Always be redesigning.”)

Filed Under: Media, Personal, Software, Technology

When Google was that new thing with the funny name

July 7, 2013 by Scott Rosenberg 1 Comment

One little article I wrote 15 years ago for Salon has been making the rounds again recently (probably because Andrew Leonard recently linked to it — thanks, Andrew!).

This piece was notable because it introduced Salon’s readers to a new service with the unlikely name of Google. My enthusiastic endorsement was based entirely on my own happy experience as a user of the new search engine, and my great relief at finding a new Web tool that wasn’t larded up with a zillion spammy ad-driven come-ons, as so much of the dotcom-bubble-driven Web was at the time. The column was one of the earlier media hits for Google — it might’ve been the first mention outside the trade press, if this early Google “Press Mentions” page is complete.

Today I see a couple of important stories buried in this little ’90s time capsule. One is about money, the other about innovation.

First, the money: A commenter over at Hacker News expressed the kind but deluded wish that I had somehow invested in Google at that early stage. Even if I had been interested (and as a tech journalist, I wasn’t going to go down that road), the company had only recently incorporated and taken on its first private investment. You couldn’t just walk in off the street and buy the place. (Though that didn’t stop Salon’s CEO at the time from trying.)

In its earliest incarnation, and for several years thereafter, the big question businesspeople asked about Google was, “How will they ever make money?” But the service that was so ridiculously appealing at the start thanks to its minimalist, ad-free start page became the Gargantua of the Web advertising ecosystem. Despite its “Don’t be evil” mantra and its demonstrable dedication to good user experience, Google also became the chief driver of the Web’s pay-per-click corruption.

I love Google in many ways, and there’s little question that it remains the most user-friendly and interoperability-minded of the big Web firms. But over the years I’ve become increasingly convinced that, as Rich Skrenta wrote a long time ago, “PageRank Wrecked the Web.” Giving links a dollar value made them a commodity.

Maybe you’ve noticed that this keeps happening. Today, Facebook is making relationships a commodity. Twitter is doing the same to casual communication. For those of us who got excited about the Web in the early ’90s because — as some smart people once observed — nobody owned it, everyone could use it, and anyone could improve it, this is a tear-your-hair-out scenario.

Or would be, except: there’s an escape route. Ironically, it’s the same one that Larry Page and Sergey Brin mapped out for us all in 1998. Which brings us to the second story my 1998 column tells, the interesting one, the one about innovation.

To understand this one, you have to recall the Web scene that Google was born into. In 1998, search was over. It was a “solved problem”! Altavista, Excite, Infoseek, Lycos, and the rest — all these sites provided an essential but fully understood service to Web users. All that was left was for the “portal” companies to build profitable businesses around them, and the Web would be complete.

Google burst onto this scene and said, “No, you don’t understand, there’s room to improve here.” That was correct. And it’s a universal insight that never stops being applicable: there’s an endless amount of room to improve, everywhere. There are no solved problems; as people’s needs change and their expectations evolve, problems keep unsolving themselves.

This is the context in which all the best work in the technology universe gets done. If you’re looking for opportunities to make a buck, you may well avoid markets where established players rule or entrenched systems dominate. But if you’re looking for better ways to think and live, if you’re inspired by ideals more than profits, there’s no such thing as a closed market.

This, I think, is the lesson that Doug Engelbart, RIP, kept trying to teach us: When it comes to “augmenting human intellect,” there’s no such thing as a stable final state. Opportunity is infinite. Every field is perpetually green.


Filed Under: Net Culture, Technology

Demonetization

November 24, 2012 by Scott Rosenberg Leave a Comment

Buried near the end of John Markoff’s front-page feature in the Times today about “deep learning,” the neural-net-inspired software, is this tidbit, which I think requires no further elaboration but is worth noting, and noting again:

One of the most striking aspects of the research led by Dr. [Geoffrey] Hinton is that it has taken place largely without the patent restrictions and bitter infighting over intellectual property that characterize high-technology fields.

“We decided early on not to make money out of this, but just to sort of spread it to infect everybody,” he said. “These companies are terribly pleased with this.”

Said companies will (a) build a new industry on these openly shared ideas; (b) make fortunes; and then (c) dedicate themselves to locking those ideas up and extracting maximum profit from them.

That’s inevitable and nothing new. Let’s be glad, though, for the occasional Geoffrey Hintons and Tim Berners-Lees, who periodically rebalance the equation between open and closed systems and keep our cycle of technology evolution moving forward.

Filed Under: Business, Technology, Uncategorized

Steve Jobs, auteurs, and team-building

September 7, 2011 by Scott Rosenberg 6 Comments


If you look at my life, I’ve never gotten it right the first time. It always takes me twice.
  — Steve Jobs, in a 1992 Washington Post interview

I first wrote about Steve Jobs as a digital auteur in January 1999, in a profile for Salon that tried, in the near-term aftermath of Jobs’ return from exile to Apple, to sum up his career thus far:

The most useful way to understand what Jobs does best is to think of him as a personal-computer auteur. In the language of film criticism, an auteur is the person — usually a director — who wields the authority and imagination to place a personal stamp on the collective product that we call a movie. The computer industry used to be full of auteurs — entrepreneurs who put their names on a whole generation of mostly forgotten machines like the Morrow, the Osborne, the Kaypro. But today’s PCs are largely a colorless, look-alike bunch; it’s no coincidence that their ancestors were known as “clones” — knockoffs of IBM’s original PC. In such a market, Steve Jobs may well be the last of the personal-computer auteurs. He’s the only person left in the industry with the clout, the chutzpah and the recklessness to build a computer that has unique personality and quirks.

The Jobs-as-auteur meme has reemerged recently in the aftermath of his retirement as Apple CEO. John Gruber gave a smart talk at MacWorld a while back, introducing the auteur theory as a way of thinking about industrial design, and then Randall Stross contrasted Apple’s auteurial approach with Google’s data-driven philosophy for the New York Times.

(Here is where I must acknowledge that the version of the auteur theory presented in all these analyses, including mine, omits a lot. The theory originally emerged as a way for the artists of the French New Wave, led by Francois Truffaut, to square their enthusiasm for American pop-culture icons like Alfred Hitchcock with their devotion to cinema as an expressive form of art. In other words, it was how French intellectuals justified their love for stuff they were supposed to be rejecting as mass-market crap. So the parallels to the world of Apple are limited. We’re really talking about “the auteur theory as commonly understood and oversimplified.” But I digress.)

Auteurial design can lead you to take creative risks and make stunning breakthroughs. It can also lead to self-indulgent train wrecks that squander reputations and cash. Jobs has certainly had his share of both these extremes. They both follow from the same trait: the auteur’s certainty that he’s right and willingness (as Gruber notes) to act on that certainty.

Hubris or inspiration? Either way, this kind of auteur disdains market research. “It isn’t the consumers’ job to know what they want,” Jobs likes to say. Hah hah. Right. Only that, the democratic heart of our culture tells us with every beat, is precisely the consumer’s job. To embrace Jobs’ quip as a serious insight is to say that markets themselves don’t and can’t work — that democracy is impossible and capitalism one colossal fraud. (And while that’s an intriguing argument in its own right, I don’t think it’s what Jobs meant.)

I have to assume what Jobs really means here is that, while most of us know what we want when we’re operating on known territory, there are corners that we can’t always see around — particularly in a tumultuous industry like computing. Jobs has cultivated that round-the-corner periscopic vantage for his entire career. He’s really good at it. And so sometimes he knows what we want before we do.

I find nothing but delight in this. I take considerable pleasure in the Apple products I use. Still, it must be said: “I know best” is a lousy way to run a business (or a family, or a government). It broadcasts arrogance and courts disaster. It plugs into the same cult-of-the-lone-hero-artist mindset that Apple’s ad campaigns have celebrated. It reeks of Randian ressentiment and adolescent contempt for the little people.

Jobs’ approach, in Jobs’ hands, overcame this creepiness by sheer dint of taste and smarts. There isn’t anyone else in Apple’s industry or any other who is remotely likely to be able to pull it off. If what Jobs’ successors and competitors take away from all this is that “we know best” can be an acceptable business strategy, they will be in big trouble.

But there’s a different and more useful lesson to draw from the Jobs saga.

The salient fact about the arc of Jobs’ career is that his second bite at Apple was far more satisfying than his first. Jobs’ is a story that resoundingly contradicts Fitzgerald’s dictum about the absence of second acts in American life. In a notoriously youth-oriented industry, he founded a company as a kid, got kicked out, and returned in his 40s to lead it to previously unimaginable success. So the really interesting question about Jobs is not “How does he do it?” but rather, “How did he do it differently the second time around?”

By most accounts, Jobs is no less “brutal and unforgiving” a manager today than he was as a young man. His does not seem to be a story of age mellowing youth. But somehow, Jobs II has succeeded in a way Jobs I never did at building Apple into a stable institution.

I’m not privy to Apple-insider scuttlebutt and all I really have are some hunches as to why this might be. My best guess is that Jobs figured out how to share responsibility and authority effectively with an inner circle of key managers. Adam Lashinsky’s recent study of Apple’s management described a group of “top 100” employees whom Jobs invites to an annual think-a-thon retreat. Jobs famously retained “final cut” authority on every single product. But he seems to have made enough room for his key lieutenants that they feel, and behave, like a team. Somehow, on some level, they must feel that Apple’s success is not only Jobs’ but theirs, too.

Can this team extend Jobs’ winning streak with jaw-droppingly exciting new products long after Jobs himself is no longer calling the shots? And can an executive team that always seemed like a model of harmony avoid the power struggles that often follow a strong leader’s departure? For now, Jobs’ role as Apple chairman is going to delay these reckonings. But we’re going to find out, sooner or later. (And I hope Jobs’ health allows it to be way later!)

If Apple post-Jobs can perform on the same level as Apple-led-by-Jobs, then we will have to revise the Steve Jobs story yet again. Because it will no longer make sense to argue over whether his greatest achievement was the Apple II or the original Mac or Pixar or the iPod or the iPhone or the iPad. It will be clear that his most important legacy is not a product but an institution: Apple itself.

Filed Under: Business, Technology

Circles: Facebook’s reality failure is Google+’s opportunity

June 30, 2011 by Scott Rosenberg 13 Comments

Way back when I joined Facebook I was under the impression that it was the social network where people play themselves. On Facebook, you were supposed to be “real.” So I figured, OK, this is where I don’t friend everyone indiscriminately; this is where I only connect with people I really know.

I stuck with that for a little while. But there were two big problems.

First, I was bombarded with friend requests from people I barely knew or didn’t know at all. Why? It soon became clear that large numbers of people weren’t approaching Facebook with the reality principle in mind. They were playing the usual online game of racking up big numbers to feel important. “Friend count” was the new “unique visitors.”

Then Facebook started to get massive. And consultants and authors started giving us advice about how to use Facebook to brand ourselves. And marketing people began advocating that we use Facebook to sell stuff and, in fact, sell ourselves.

So which was Facebook: a new space for authentic communication between real people — or a new arena for self-promotion?

I could probably have handled this existential dilemma. And I know it’s one that a lot of people simply don’t care about. It bugged me, but it was the other Facebook problem that made me not want to use the service at all.

Facebook flattens our social relationships into one undifferentiated blob. It’s almost impossible to organize friends into discrete groups like “family” and “work” and “school friends” and so forth. Facebook’s just not built that way. (This critique is hardly original to me. But it’s worth repeating.)

In theory Facebook advocates a strict “one person, one account” policy, because each account’s supposed to correlate to a “real” individual. But then sometimes Facebook recommends that we keep a personal profile for our private life and a “page” for our professional life. Which seems an awful lot like “one person, two accounts.”

In truth, Facebook started out with an oversimplified conception of social life, modeled on the artificial hothouse community of a college campus, and it has never succeeded in providing a usable or convenient method for dividing or organizing your life into its different contexts. This is a massive, ongoing failure. And it is precisely where Facebook’s competitors at Google have built the strength of their new service for networking and sharing, Google+.

Google+ opened a limited trial on Tuesday, and last night it hit some sort of critical mass in the land of tech-and-media early adopters. Invitations were flying, in an eerie and amusing echo of what happened in 2004, when Google opened its very first social network, Orkut, to the public, and the Silicon Valley elite flocked to it with glee.

Google+ represents Google’s fourth big bite at building a social network. Orkut never took off because Google stopped building it out; once you found your friends there was nothing to do there. Wave was a fascinating experiment in advanced technology that was incomprehensible to the average user, and Google abandoned it. Buzz was (and is) a Twitter-like effort that botched its launch by invading your Gmail inbox and raiding your contact list.

So far Google+ seems to be getting things right: It’s easy to figure out, it explains itself elegantly as you delve into its features, it’s fast (for now, at least, under a trial-size population) and it’s even a bit fun.

By far the most interesting and valuable feature of Google+ is the idea of “circles” that it’s built upon. You choose friends and organize them into different “circles,” or groups, based on any criteria you like — the obvious ones being “family,” “friends,” “work,” and so on.

The most important thing to know is that you use these circles to decide who you’ll share what with. So, if you don’t want your friends to be bugged by some tidbit from your workplace, you just share with your workplace circle. Google has conceived and executed this feature beautifully; it takes little time to be up and running.

The other key choice is that you see the composition of your circles but your friends don’t: It’s as if you’re organizing them on your desktop. Your contacts never see how you’re labeling them, but your labeling choices govern what they see of what you share.
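
The model behind this is simple enough to sketch in a few lines. What follows is just a toy illustration of the asymmetry described above (your labels stay on your side, and they govern only what each contact sees), not a description of Google’s actual implementation.

```python
# A toy model of circles: labels are private to the owner, and each item
# is shared with a set of circles rather than with everyone.
circles = {
    "family": {"mom", "brother"},
    "work": {"boss", "coworker"},
    "friends": {"alice", "bob"},
}

posts = []  # each entry: (text, set of circle names it was shared with)

def share(text, circle_names):
    posts.append((text, set(circle_names)))

def visible_to(contact):
    # A contact sees every post shared with any circle that contains them;
    # they never see the circle names themselves.
    return [text for text, shared_with in posts
            if any(contact in circles[name] for name in shared_with)]

share("Office move next week", ["work"])
share("Vacation photos", ["family", "friends"])

print(visible_to("boss"))   # ['Office move next week']
print(visible_to("alice"))  # ['Vacation photos']
```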

I’m sure problems will surface with this model but so far it seems sound and useful, and it’s a cinch to get started with it. Of course, if you’re already living inside Facebook, Google has a tough sell to make. You’ve invested in one network, you’re connected there; why should you bother? But if, like me, you resisted Facebook, Google+ offers a useful alternative that’s worth exploring.

The ideal future of social networking is one that isn’t controlled by any single company. But social networks depend on scale, and right now it’s big companies that are providing that.

Lord knows Google’s record isn’t perfect. But in this realm I view it as the least of evils. Look at the competition: Facebook is being built by young engineers who don’t have lives, and I don’t trust it to understand the complexity of our lives. It’s also about to go public and faces enormous pressure to cash in on the vast network it’s built. Twitter is a great service for real-time public conversation but it’s no better at nuanced social interaction than Facebook. Apple is forging the One Ring to rule all media and technology, and it’s a beaut, but I’ll keep my personal relationships out of its hands as long as I can. Microsoft? Don’t even bother.

Of the technology giants, Google — despite its missteps — has the best record of helping build and expand the Web in useful ways. It’s full of brilliant engineers who have had a very hard time figuring out how to transfer their expertise from the realm of code to the world of human interaction. But it’s learning.

So I’ll embrace the open-source, distributed, nobody-owns-it social network when it arrives, as it inevitably will, whether we get it from the likes of Diaspora and Status.net or somebody else. In the meantime, Google+ is looking pretty good. (Except for that awful punctuation-mark-laden name.)

MORE READING:

Gina Trapani’s notes on “What Google+ Learned from Buzz and Wave”

Marshall Kirkpatrick’s First Night With Google+

Filed Under: Net Culture, Technology

Why journalists should think twice about Facebook

May 3, 2011 by Scott Rosenberg 33 Comments

Facebook's journalism panel: O'Brien, Milian, Zaleski, McClure (photo by George Kelly)


At Facebook last Wednesday night, a panel of four journalists — Laura McClure of Mother Jones, Katharine Zaleski of the Washington Post, Chris O’Brien of the San Jose Mercury News, and CNN tech writer Mark Milian — talked about how they use Facebook as a tool for journalism. What they said was smart. I’d probably do most of the same things were I in their shoes.

But I had a question for them, and I didn’t get called on to ask it, so I’m going to ask it here. The question goes like this: Everything that journalists are doing on Facebook today — engaging readers in conversation, soliciting sources, polling users, posting “behind the story” material — is stuff they could just as easily do on their own websites. So why are they doing it on Facebook?

One answer is obvious: That’s where the people are! Vadim Lavrusik, a journalist who recently joined Facebook to work on its outreach to the media world, said as much. And it’s true: there are millions of people on Facebook, and Facebook makes it convenient to communicate with them. What’s the problem?

I’ll get to that. But there are other answers to the question, too. Many publications find that their interactions with their readers on Facebook are more civil and valuable than those that take place on their own websites. That, they typically believe, is because Facebook makes users log in with their real names and identities. Finally, individual journalists increasingly find it valuable to build their social-media networks as a hedge against the collapse of the institutions they work for. (“Who owns the ‘social graph’ you build on company time — you or your employer?” is one of those fascinating questions that most newsrooms have barely begun to grapple with.)

We can accept that all these answers make solid sense and yet still feel a little uneasy with media companies’ rush to shovel energy and attention into Facebook’s vast human scrum. Here’s where my uneasiness comes from: Today Facebook is a private company that is almost certainly going to sell stock to the public before long. It will have quarterly earnings reports to make and pressure to deliver to investors. It is run by almost impossibly young people who have never had to deal with any business condition other than hockey-stick-curve growth. For the moment it appears to be trying hard to operate as a neutral and open public platform; its constant tinkering and rethinking of the design and functionality of its services can be maddening, but so far have tended to be driven by a serve-the-user impulse.

That won’t last forever. There are plenty of people waiting to cash in on Facebook’s success, and more in the wings, and they will expect the company to fulfill its inevitable destiny — and “monetize” the hell out of all the relationship-building we’re doing on its pages.

This is the landscape onto which today’s journalists are blithely dancing. I understand why they’re doing it, but I wish the larger companies and institutions would think a little harder about the future.

The web itself is the original social network. Why would you ask reporters to connect with your readers on Facebook if you aren’t already encouraging them to do the same thing in the comments on your own website? If your comments have become a free-fire zone, why don’t you do something about it? If you’ve hired a “social media manager,” great — but why didn’t you hire people to manage your own comments space?

By moving so much of the conversation away from their own websites and out to Facebook, media companies are basically saying, “We did a lousy job of engaging readers under our own roof, so we’re going to encourage it to happen on someone else’s turf.”

You could argue that what news organizations are doing is just like telling your friends, “I can’t invite you over for drinks because our place is such a mess. Let’s meet at a bar!” Maybe. Then again, it might be like saying, “We let our neighborhood go to hell and didn’t do anything about it. Time to move to the mall!”

Facebook is on a fantastic roll today. It’s positioned to dominate the next decade of online evolution the way Google and Microsoft respectively dominated the previous two. It can’t be ignored and I wouldn’t suggest doing so. But it’s not the public sphere, not in the way the Internet itself is. It’s just a company. I hope every editor, reporter and news executive remembers that as they try to get their conversations hopping and their links shared.

Photo by George Kelly

Filed Under: Media, Technology

“Your map’s wrong”: Zuckerberg lights out for the territories

November 17, 2010 by Scott Rosenberg 4 Comments

It’s hard to think of a more meaningful recent exchange in the tech-industry world than the moment onstage at Web 2.0 last night when Facebook’s Mark Zuckerberg turned to conference organizers John Battelle and Tim O’Reilly and told them, “Your map’s wrong.” (I was sorry not to be there in person! I went to the first several Web 2.0 conferences but have recently tried to reduce conference attendance in an effort to Get Things Done instead.)

Zuckerberg was referring to a big map on the wall behind him that charted the conference’s theme of “points of control.” Battelle and O’Reilly had aimed to provide a graphic display of all the different entities that shape and limit our experience online today. It’s a useful exercise in many ways. But Zuckerberg argued that it was wrong-headed in describing an essentially closed system.

Here’s the full exchange, which you can watch below:

ZUCKERBERG: “I like this map that you have up here, but my first instinct was, your map’s wrong.”

BATTELLE: “Of course it’s wrong, it’s version one.”

ZUCKERBERG: “I think that the biggest part of the map has got to be the uncharted territory. Right? One of the best things about the technology industry is that it’s not zero sum. This thing makes it seem like it’s zero sum. Right? In order to take territory you have to be taking territory from someone else. But I think one of the best things is, we’re building real value in the world, not just taking value from other companies.”

Now, of course it’s in Zuckerberg’s interest to make this argument. And it would be disingenuous to maintain that Facebook isn’t engaged in some real direct competition with the other big Net-industry players today. As Tim Wu’s new book reminds us, the cycle of communications-technology innovation runs in a regular pattern in which innovators become monopolists and monopolists exact their tolls. Facebook, like its predecessors, is likely to proceed accordingly.

Nonetheless, I think Zuckerberg’s larger point is profoundly right. He found a way to remind us of something that was true when I started creating websites 15 years ago and that’s still true today: It’s still early in this game, and the game itself continues to grow. The portion of the online realm that we’ve already invented is still a mere fraction of the total job of creation that we’ll collectively perform. There is more world to come than world already made.

I find that I regularly need to remind myself of this every time I’m thinking of starting something new. When I started the Salon Blogs program in 2002 I worried that we were late arrivals to that game. Blogs had been around forever — I’d been reading them for five years! We shouldn’t forget that at the time of Google’s founding in 1998, search was considered old hat, a “solved problem.” Similarly, Facebook itself could have seemed a johnny-come-lately five years ago, coming as it did on the heels of Friendster, Orkut and MySpace.

The Net is still young and what we do with it and on it remains an early work in progress. The “uncharted territory” still beckons those who enjoy exploring. And it may be that one secret of Zuckerberg’s and Facebook’s success is that they aren’t losing sight of this truth as they plunge into the technology industry’s crazy scrum.

Here’s TechCrunch on Zuckerberg’s interview. And here’s the full video, linked to start at the 52:30 mark where the map discussion occurred:

Filed Under: Blogging, Business, Technology

Thanks for the memories! Why Facebook “download” rocks

October 19, 2010 by Scott Rosenberg 3 Comments

At Open Web Foo I led a small discussion of what I called the “Social Web memory hole” — the way that social networks suck in our contributions and then tend to bury them or make them inaccessible to their authors. It was a treat to share my ideas with this crowd of super-smart tech insiders, though I did have to spell out the Orwell reference (ironic nod to 1984, not joke about memory leaks in program code!).

What I heard was that this problem — which I continue to find upsetting — is most likely a temporary one. Twitter, I was assured, understands the issue and views it as a “bug.” Which is encouraging — except how many years do we wait before concluding that the bug is never going to be fixed?

Meanwhile, the same weekend, Facebook had just introduced its new “download your information” feature. Which is why, at this moment of Wall Street Journal-inspired anti-Facebook feeding frenzy, I want to offer a little counter-programming.

I do not intend to argue about whether Facebook apps passing user IDs in referrer headers is an evil violation of privacy rules, or just the way the Web works. There are some real issues buried in here, but unfortunately, the Journal’s “turn the alarms to 11” treatment has made thoughtful debate difficult. (This post over at Freedom to Tinker is a helpfully sober explanation of the controversy.)

So while the Murdoch media — which has its own axes to grind — bashes Facebook, I’m here today to praise it, because I finally had a chance to use Facebook’s “Download Your Information” tool, and it’s a sweet thing.

I have been a loud voice over the years complaining that Facebook expects us to give it the content of our lives and offers us no route to get that content back out. Facebook has now provided a tool that does precisely this. And it’s not just a crude programmer’s tool — some API that lets developers romp at will but leaves mere mortals up a creek. Facebook is giving real-people users their information in a simple, accessible format, tied up with a nice HTML bow. What you get in Facebook’s download file is a Web directory that you can navigate in your browser, with all your posts, photos and other contributions, well-presented and well-organized.

In my case, I don’t have vast quantities of stuff because I haven’t been a very active Facebook user. The main thing I do on Facebook, in fact, is automatically cross-post my Twitter messages so my friends who hang out on Facebook can see them too. Twitter, of course, still has that “bug” that makes it really hard for you to access your old messages. But now, I actually have an easily readable and searchable archive of my Twitter past — thanks to Facebook! Which, really, is both ironic and delicious.
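
Because the export is just a directory of HTML pages, even a trivial script can search it. Here is a minimal sketch, assuming you have unzipped the download into a local folder; the folder name is a placeholder, and the real export’s layout may differ.

```python
# Search every HTML page in the downloaded archive for a keyword.
# "facebook-download" stands in for wherever you unzipped the export.
import pathlib

ARCHIVE = pathlib.Path("facebook-download")
KEYWORD = "memory hole"

for page in ARCHIVE.rglob("*.html"):
    text = page.read_text(encoding="utf-8", errors="replace")
    if KEYWORD.lower() in text.lower():
        print(page)
```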

Here’s what Facebook’s Mark Zuckerberg had to say about the Download feature in a TechCrunch interview:

I think that this is a pretty big step forward in terms of making it so that people can download all of their information, but it isn’t going to be all of what everyone wants. There are going to be questions about why you can’t download your friend’s information too. And it’s because it’s your friend’s and not yours. But you can see that information on Facebook, so maybe you should be able to download it… those are some of the grey areas.

So for this, what we decided to do was stick to content that was completely clear. You upload those photos and videos and wall posts and messages, and you can take them out and they’re yours, undisputed — that’s your profile. There’s going to be more that we need to think through over time. One of the things, we just couldn’t understand why people kept on saying there’s no way to export your information from Facebook because we have Connect, which is the single biggest large-scale way that people bring information from one site to another that I think has ever been built.

So it seems that Zuckerberg and his colleagues felt that they already let you export your information thanks to Facebook Connect. Again: True for developers but useless for everyday users, unless and until someone writes the code that lets you actually get your data — which is what Facebook itself has now done.

I think this means Facebook is beginning to take more seriously its aspiration to be the repository of our collective memory — a project that Zuckerberg lieutenant Christopher Cox has rapturously described but that Facebook has never seemed that serious about.

I still have questions and concerns about Facebook as the chokepoint-holder of a new social-network-based Web. I’d really rather see things go in the federation direction that people like Status.net, Identi.ca and Diaspora are all working on.

Still, Facebook isn’t going anywhere. It’s a fact of Web life today, and so its moves towards letting users take their data home with them deserve applause.

What I’d like to see next is an idea that came out of that Open Web Foo session: As we turn Facebook and other social services into the online equivalent of the family album, the scrapbook and the old shoebox full of photos, we’re going to need good, simple tools for people to work with them — to take the mountains of stuff we’re piling up inside these services and distill memorable stories from them.

The technologists in the room imagined an algorithmic way to do this — some version of Flickr’s “interestingness” rating, where the service could essentially do the work for you by figuring out which of your photos and posts had the most long-term value.
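
A crude version of that idea fits in a few lines: score each item by engagement and age, then keep the top handful. The field names and weights below are invented for illustration; Flickr has never published its actual “interestingness” formula.

```python
# Toy "interestingness" ranking: surface the items most likely to matter later.
# Field names and weights are made up for illustration.
from datetime import datetime

items = [
    {"text": "Wedding photos", "likes": 40, "comments": 12, "year": 2007},
    {"text": "Lunch update", "likes": 1, "comments": 0, "year": 2009},
    {"text": "New job announcement", "likes": 25, "comments": 30, "year": 2008},
]

def interestingness(item):
    # Weight conversation more heavily than passive likes; older items that
    # still drew engagement are probably the keepers.
    age = datetime.now().year - item["year"]
    return item["likes"] + 2 * item["comments"] + 0.5 * age

for item in sorted(items, key=interestingness, reverse=True)[:2]:
    print(item["text"])
```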

I’m sure there’s a future in that. My vision, as a writer, is something simpler: a tool that would let us easily assemble photos and text and video from our Facebook troves and turn them into pages that tell stories from our own and our friends’ lives. Something like Storify, maybe. I think we’re going to need this, whether from Facebook itself or from a third-party app developer.

That “cloud” we’re seeding with our memories? Let’s make it rain the stories of our lives.

UPDATE: Om Malik has some insights into some of the other companies involved in the Facebook-shares-your-ID story. And if you want to play with FB’s “Download” tool, you’ll find it in Facebook under Account –> Account settings –> Download your information.

Filed Under: Events, Media, Net Culture, Technology
