Wordyard

Hand-forged posts since 2002


Wordyard Project nuts and bolts: what I’ll do and how I’ll support it

May 29, 2014 by Scott Rosenberg


For the last couple of days I’ve laid out in broad strokes the areas I intend to write about — the map of my new beat, being ourselves in the post-social world, which falls into two main areas: life after Facebook, and personal authenticity online.

But what exactly am I going to do? How will I organize and support this project? Today, I need to get a little meta.

First, the structure:

  • I will post — reported pieces, interviews, essays and commentaries, and annotated links — regularly and frequently here, at least once a (week)day. So a blog, yes, but a focused and structured one.
  • Less frequently — maybe once a month, maybe more — I will produce something longer-form.
  • I will most likely crosspost some of this stuff on other sites and see what works. (The IndieWeb movement’s “POSSE” concept — post on your own site, syndicate elsewhere — makes a lot of sense to me.) I may be singing the “post-social” song, but social networks are how people find stuff to read today. A paradox, perhaps, but I’m not going to let that paralyze me.
  • I want to choose what I write about by combining my own instincts and hunches with what I hear back from you. I’ll experiment with different ways of opening the blog monologue into a discussion. I’ve got some fun ideas in mind.
  • I want to highlight the important work other people, publishers and organizations are doing in the areas I’m covering, and to spotlight the people, events and projects that have inspired what I’m doing here.

I am not suggesting that any of this is earth-shatteringly innovative, but it’s good to lay it out up front.

The “how do I support this” part is more interesting, more complex, and much more a work in progress. Here are my starting points:

  • I intend to work independently. I am not trying to grow a big enterprise. I am not building something to sell. I do not want to “scale up” (except in the broadest way, if my experiments prove useful to others). I have no exit strategy.

    I have had the fantastic experience of building a startup, riding the hockey-stick growth curve, and helping take the whole thing public in an IPO. I have also had the less fantastic experience of riding the other side of that curve, fighting for survival, and succeeding. I learned plenty from both experiences; I have no need to relive either.

  • Instead, I want to see how fully today’s technology and services can support what, for me, was the promise of the Web when I first encountered it in the mid-’90s: an independent voice, embedded in a broader conversation but not beholden to any single sponsor, funder, or boss. “Freelance” is a great word, and I’ve done that, but freelance means you’re a knight for hire. My model is more the self-directed professional: I think of it as being a writer in private practice.
  • I aim to connect as directly as I can with my readers/users/audience/friends/followers, given the tools and conventions available to online publishers in the mid-2010s. Too much of publishing is still about treating readers as numbers, objects, and targets; we say we want to “know” them but what we really mean is we want to know about them so we can sell them stuff. Can we entirely remove targeting from the picture and make the whole thing as real as feasible, as natural as meeting over a beer? Can we set out, not to know about readers, but actually to know them? (Right, it doesn’t scale. I know!)

    In how many different places and ways can we meet “the people formerly known as the audience” and make the encounter honest and valuable? I’m channel-agnostic, but with a bias toward putting stuff out there without someone else’s ads or terms of service slapped on it.

Today’s publishing environment pushes us in one of two directions: You can play in the big mad game of eyeball monetization, where you set out to gather a huge crowd and then pelt it with ads; or you can content yourself with reaching a few friends and family on your blog (where you’re in charge but people’s attention is hard to dragoon) or in your social network (where your readers are congregating today but where you are at the mercy of fickle platform owners).

I believe there’s room in between — an unexplored opening between the aggressively commercial and the ambitionlessly casual. I want to test the viability of this middle ground. Is there a space to work between the frenzy of the Chartbeat addict and the dependency of the social-media sharecropper? I hope so. I think so. I’m going to find out.

Money, obviously, will be crucial, as it always is.

How to support this work? Where does the revenue come from? This question haunts every online publishing effort, large or small. I don’t have a sure answer at this point, but I have some strong feelings.

Advertising is the most common approach, and the one I have the most experience with over two decades of work in online journalism. And I have to say: it sucks. It still sucks. It’s as bad today — as invasive, as inefficient, and as widely resented — as it was when HotWired unveiled the first 468×60 banner ad. Worse, in many ways.

Advertising pushes publishers in the direction of page-views above everything. It gets in the way of delivering a good experience to users. It forces site operators to implement technologies that cause engineers to cry out in pain. It introduces enormous overhead costs for both publisher and network. Directly or indirectly, it is responsible for nearly all of the things about the Web that irritate people — the page-view whoring, the attention-hijacking, the eyeball-hoarding, the endless tracking and privacy invasion and data appropriation.

Above all, advertising turns the simple two-way relationship between writer and reader, or publisher and user, into a treacherous triangle trade. Publishers have to pretend that the user is the customer, but everybody knows they’re actually under contract to capture those users and deliver them to the advertiser who is paying the bills.

I am not saying that all advertising-based publishing is evil. Plenty of publishers I admire — including influential sites like TPM, Slate, the Atlantic, Wired and BoingBoing, and blogging pros like Kottke and Gruber and Dooce, and tons of important local-news outlets — depend on ads. I have worked long and hard for businesses (like Salon) and nonprofits (like Grist) that relied on advertising and, who knows, I might do so again before my career is over.

So it’s not that ads are evil. But digital advertising today remains broken. It introduces endless complexity and compromise and it pushes us down roads I know too well. Right now, trying something different looks a lot more interesting.

At Salon I got the grand tour of internet-publishing business models. We tried them all, sometimes more than once: Advertising. Sponsorships. Custom content (now known as “native advertising”). Partial paywall. Full paywall. “Affinity-group”/membership program. Premium membership. IPO money. Foundation money. Desperate letters from our editor pleading for money.

One of these approaches, or some combination of them, might work — indeed, is working today — for some publishers. But none of them makes a lot of sense for what I want to do with Wordyard.

What I have in mind — in an indistinct, still-germinating way — is a simple, direct transaction: I will do this work, and if enough people like it enough to kick in a few bucks, I will be able to keep doing it. I’m not thinking “tip jar” or donations, exactly. (I’m not incorporating as a nonprofit.) I’m imagining something more like paying a small annual fee for a premium-grade Web service that you like and wish to keep around. (What the enticement might be at the higher service level — or whether there even needs to be one at all — I don’t know yet.)

Those of you who have been following this topic for a long time will recognize that this approach draws on some of Kevin Kelly’s “thousand true fans” concept (except I think of anyone reading this as a peer, not a fan) and some of Andrew Sullivan’s “Dish model.” I find both of these concepts inspiring.

Yes, this needs a lot more thinking through. It’s far too early to ask for support or money, anyway. I just want to be open about where I see this going — and also to forestall the inevitable shouts of “BUT WHERE’S THE BUSINESS MODEL?????”

Whatever course I choose, I’ll tell the story in real time here and share as much as I can of the data, the thinking behind my choices, and the outcomes. By chronicling the effort, I hope to help others benefit from any success I have — and learn from all the mistakes I know I’ll make.


When I published my first independent website (my god, this coming January will mark the 20-year anniversary for that), in gloriously crude hand-hewn HTML, I had the romantic notion that this amazing new platform would allow me to strike out on my own as a one-person-does-it-all writer/editor/publisher. But I didn’t really know how to make that work in 1995 (and the good ship Salon — then, hah, Salon1999.com! — looked a lot more inviting).

I never fully shook that dream, though, and now I think I’m ready to try again.

So this whole project about “being ourselves” is also — in a roundabout, recursive way — my own attempt to be myself, right here. Since you’ve read this far: Thanks for joining me. I’m going to do my damnedest to make it worth your while.

Filed Under: Meta, Project

Self-invention! Or: We do tech — tech doesn’t do us

May 28, 2014 by Scott Rosenberg

Rube Goldberg

Yesterday, I introduced my new project here, along with my new beat — being ourselves in the post-social world — and I talked about what I mean by that “post-social” thing.

Today, I’m going to talk about the “being ourselves” part.

I know it sounds a little…squishy. Identity is a gigantic topic — at one end, you’ve got big questions like “Who am I?”; at the other, you’ve got the everyday nuisance of authenticating yourself to your bank or your email provider.

I started paying attention to this subject a few years ago during my research on the history of blogs. I noticed that there was a contradiction at the root of blogging ideology — one that has only intensified in the social media age. On the one hand, digital platforms for the self, from blogs to Facebook, promise a direct shortcut to each user’s authentic being. Accept no imitations — here’s the Real Me! On the other, these same tools offer us boundless opportunities to experiment with alternative identities, to try on different “me”s for size and reinvent ourselves. As Groucho Marx used to say: “Don’t like those ideas? I got others.”

It seems obvious to me that both of these conceptions of How to Be Yourself are legitimate and valuable — and that technology has made both of them more available and more tantalizing, making it easier for each of us to find direct unmediated connections with others and also to play with alternative identities and self-reinvention.

Yet, mostly, the public debate on digital identity is stuck in a polarized argument. Advocates of transparency and single identity maintain that a one-person, one-name, one-identity world creates trust and holds us accountable to one another. Believers in anonymity and multiple identities argue that masks and veils can free our voices, liberate us to be playful and vulnerable, and let us speak truth to power.

Both camps urge us to “be ourselves.” But they arrive at opposite conclusions.

Any useful analysis of the nature of identity online needs to acknowledge that neither of these modes is natural or somehow baked into the technology independent of how we use it. Our digital platforms don’t include any inherent bias toward either end of this spectrum; if they push us in one direction or the other, it’s because someone built them that way — and someone else found that useful or attractive.

In other words: The Internet didn’t make me do it! The Internet doesn’t make anyone do anything. We made the Internet ourselves, and we remake it with every click and post and line of code.

Of course different technologies have different characteristics, and those traits fascinatingly affect our experience of those technologies. But they’re not innate, immutable, or inevitable; they’re there because we put them there, and they evolved through an intimate back-and-forth between the technology and the people who make and use it. We need to resist the most common fallacy we fall into in trying to understand communications technology — the assumption that the medium itself has some native will or force that imposes itself on us. This way of thinking turns us into passive receptors of technological imperatives; it denies us our freedom to act.

All “medium is the message” arguments aside, talking about technology’s “impacts” and “effects” is, as Claude Fischer wrote (in his magisterial history of the adoption of the telephone), the “wrong language, a mechanical language that implies that human actions are impelled by external forces when they are really the outcomes of actors making purposeful choices under constraints.”

“Actors making purposeful choices under constraints” — that’s you and me, out here on the net, putting on shows for one another, looking for truth and trying to be ourselves in a rich, perilous, disorienting landscape that has become our home. (For those of you who know that I spent the first, pre-Internet part of my career as a theater critic: Yes, these dots do connect.)

That’s what I’m gonna be writing about a lot here. More tomorrow about exactly what and how.


If this stuff intrigues you, here is a five-minute Ignite talk I gave at NewsFoo 2012 about it:

Filed Under: Meta, Project

The Wordyard Project: Being ourselves in a post-social world

May 27, 2014 by Scott Rosenberg


After leaving my full-time job at Grist a few weeks ago, I’ve been weighing my next act, and I’ve decided what I want to write about:

Being ourselves in a post-social world.

This is my new project, here. It falls into three parts: a tech-industry beat I will cover; a cultural investigation and conversation I will undertake; and a personal-publishing venture I am kicking off now.

So let me begin to lay all this out — starting, today, with the tech-industry part.

First thing you’re thinking is, what is this “post-social world” he speaks of?

There’s a lot to say here, but at heart, what I mean is: life after Facebook.

No, I don’t think Facebook is going anywhere. It will continue to dominate much of the digital landscape for some time. But I also think peak Facebook is now behind us.

Every era-defining tech company in recent history — Microsoft, Google, and now Facebook — has seized a moment in the industry’s evolution with a single idea. And for a brief period, that idea proves so powerful that it sucks everything else into its orbit. It seems to be the only game in town, and the only possible future. It also propels utopian visions, and the people responsible for it become filled with a sense of omnipotence — a belief that their magnificent technology can and will solve every imaginable human problem as readily as it has made them rich.

This is where the innovation that originally fed the company’s growth mutates into some world-changing ambition that proves tough to square with the practical demands of quarterly reports and margin-seeking investors. Microsoft’s operating system and office tools became “a computer on every desk and in every home”; Google’s efficient, streamlined search box became “organizing the world’s information”; Facebook’s friend-connecting toolkit turned into its current mission, which is to “give people the power to share and make the world more open and connected.”

But here’s what happens: The moment of corporate omnipotence passes. Always! Microsoft’s computers are still on plenty of desks and in many offices — but they are not in our pockets, where we now use digital technology the most. Google’s search model remains essential, but turns out not to be the only means by which we want to access the world’s information — sorry, Larry and Sergey. Similarly, today’s Facebook has introduced the world to the allure of friend networks and feeds, but it cannot possibly fulfill all of its ambitions or replace email, messaging, news, advertising, entertainment and everything else with its single closed “social graph” universe. The human environment and experience are far too vast to be encompassed by any one company’s data.

Just as the era of Microsoft’s leadership ended with the dotcom crash and the era of Google’s leadership ended with the financial meltdown of 2007-8, so Facebook’s mindshare dominance will end when the current tech bubble deflates. With it will end our mistaken assumption that social networking is the single paradigm that will rule the entire gamut of our Internet-borne behavior.

What comes after that? We don’t know yet. That keeps things interesting! But we have some clues, some sense of which way the pendulums are going to swing:


From the group back to the individual:

    The blogging movement celebrated individual voices. The social-media era’s customs, submerging the individual in a networked environment, privilege the group. We are overdue for a correction.

From centralized platforms to peer networks:

    Some systems concentrate power in one or more hubs. Others move power to the edge. Today’s Internet relies on both approaches, varying depending on which layer of the communications cake you’re talking about; but what defines it, historically and philosophically, is that it is a distributed network.

    Right now we’re experiencing a moment of maximum centralization. We have one company with a near lock on our online identities. Another with the keys to our access to information. Another with a huge chunk of the retail market. In the U.S., the network itself is coming to be dominated by a single provider.

    None of this bodes well. But none of it is irreversible. Technology is anything but static, and its movements and disruptions allow for regular resets of bad patterns and ingrown problems — particularly if we learn from our mistakes and nudge it. Fortunately, the Internet itself has created conditions that make it possible for us to do just that.


From “take my data” to “let me take my data”:

    The online publishing and marketing business today depends on our willingness to give up rights to our data. It’s been difficult to get the public too worked up — at least in the U.S. — as long as this has simply meant exploiting the tracks we leave in our digital clickstream.

    But increasingly, people are understanding that “my data” means everything from my medical information to my financial records to my physical travels. In the post-Snowden universe we’re more likely to question standard-issue “just relax” assurances from industry or government. Contrary to conventional-columnist wisdom, the younger cohorts of today’s Internet users take privacy more seriously, not less, than their elders. I think we’re going to spend much of the next decade rebuilding the technical, legal, and financial guts of our connected online world around a more secure, consensual approach to personal data. It will be messy and complex and fascinating.

So yes, “post-social” means “Life After Facebook,” but it’s a lot more than that. Laid out from a high altitude like this, it may sound a little abstract. Don’t worry; a lot of what I want to do here at Wordyard involves talking with people in the trenches, looking at specific ideas and projects. There are individuals and organizations and companies that are already busy trying to imagine and build this post-social world — to fix the mistakes of the past decade and figure out where we should go in the next one.

All of this is being covered in detail and in patches and shreds by the ambitious and lively tech press that has grown up with the Web. But I haven’t seen anyone out there try to put it all together.

That’s my new work! Or at least, the first part of it. Next, I’ll post at greater length about the second part — this business of “being ourselves.”

Filed Under: Meta, Project

Lies our bubbles taught us

April 30, 2014 by Scott Rosenberg


Of course it’s a bubble! Let’s not waste any time on that one.

The fact of a bubble doesn’t mean that all the stuff people in tech are building today is worthless, nor will it all vanish when (not “if”) the bubble deflates.

It does mean that a certain set of reality distortions infects our conversations and our coverage of the industry. If you are old enough to have lived through one or more previous turns of this wheel, you will recognize them. If they’re not familiar to you, herewith, a primer: three lies you will most commonly hear during a tech bubble.

(1) Valuation is reality

When one company buys another and pays cash, it makes sense to say that the purchased company is “worth” what the buyer paid. But that’s not how most tech-industry transactions work. Mostly, we’re dealing in all-stock trades, maybe with a little cash sweetener.

Stock prices are volatile, marginal and retrospective: they represent what the most recent buyer and seller agreed to pay. They offer no guarantee that the next buyer and seller will pay the same thing. The myth of valuation is the idea that you can take this momentary-snapshot stock price and apply it to the entire company — as if the whole market will freeze and let you do so.

Even more important: Stock is speculative — literally. When I buy your company with stock in my company, I’m not handing you money I earned; I’m giving you a piece of the future of my company (the assumption that someday there will be a flow of profits). There’s nothing wrong with that, but it’s not remotely the same as giving you cash you can walk away with.

When one company buys another with stock, the entire transaction is performed with hopes and dreams. This aspect of the market is like capitalism’s version of quantum uncertainty: No one actually knows what the acquired company is “worth” until the sell-or-hold choices that people will make play out over time. Some people might get rich; some might get screwed.

Too often, our headlines and stories and discussions of deals closed and fortunes made ignore all this. Maintaining the blur is in the interests of the deal-makers and the fortune-winners. Which is why it persists, and spreads every time the markets go bananas. Then young journalists who have never seen a tech bubble before sally forth, wide-eyed, to gape at the astronomical valuations of itty-bitty startups, without an asterisk in sight.

This distorted understanding of valuation takes slightly different forms depending on the status of the companies involved.

With a publicly traded company, it’s easy to determine a valuation: Just multiply price per share times the total number of outstanding shares, right? But the more shares anyone tries to sell at any one time, the less likely it is that they’ll get the same price. The more shares you shovel into the market, most of the time, the further the price will drop. (Similarly, if you decide you want to buy a company by buying a majority of its stock, you’ll probably drive the price up.) So stock valuations are elusive moving targets.
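The market-cap arithmetic is simple enough to write out in a few lines — which is part of why it misleads: the headline number assumes every share could be sold at the last traded price. A minimal sketch, with entirely hypothetical figures:

```python
# Naive "market cap" math: last trade price times shares outstanding.
# All numbers here are hypothetical, for illustration only.
price_per_share = 25.00          # last traded price, in dollars
shares_outstanding = 40_000_000  # total shares

market_cap = price_per_share * shares_outstanding
print(f"Headline valuation: ${market_cap:,.0f}")  # Headline valuation: $1,000,000,000

# The headline figure freezes the market at one price. In practice,
# selling a large block would push that price down well before the
# "valuation" could ever be realized in cash.
```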

For private companies, though, it’s even worse. Basically, the founders/owners and potential investors sit down and agree on any price they like. They look for rationales anywhere and everywhere, from comparable companies and transactions to history to media coverage and rumor to “the back of this envelope looks too empty, let’s make up some numbers to fill it.” If other investors are already in the game, they’re typically involved too; they’re happy if their original investment is now worth more, unhappy if their stake (the percentage ownership of the company their investment originally bought them) is in any way diluted.

There are all sorts of creative ways of cutting this pie. The main thing to know is, it’s entirely up to owners and investors how to approach placing a value on the company. All that private investment transactions (like Series A rounds) tell you is what these people think the company is worth — or what they want you to think it’s worth.

Also: it’s ridiculous to take a small transaction — like, “I just decided to buy 2 percent of this company for $20 million because I think they’re gonna be huge” — and extrapolate to the full valuation of the company: “He just bought 2 percent of us for $20 million, so we’re now a $1 billion company, yay!”

If you can keep persuading lots of people to keep paying $20 million for those 2 percent stakes, and you sell the whole company, then you’re a $1 billion company. Until then, you’re just a company that was able to persuade a person that it might be worth $1 billion someday. During a bubble, many such people are born every minute, but their money tends to disappear quickly, and they vanish from the landscape at the first sign of bust.
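The extrapolation in that example is nothing more than division — the size of the minority check divided by the fraction of the company it bought. A toy illustration, using the hypothetical numbers from the example above:

```python
# Implied valuation extrapolated from a minority investment
# (often called the "post-money" valuation). Hypothetical numbers
# matching the example in the text.
investment = 20_000_000  # dollars paid for the stake
stake = 0.02             # fraction of the company purchased (2 percent)

implied_valuation = investment / stake
print(f"Implied valuation: ${implied_valuation:,.0f}")  # Implied valuation: $1,000,000,000

# This says only what one investor paid for one small slice --
# not what anyone would pay for the whole company.
```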

(Here’s a classic “X is worth Y” fib from the last bubble of the mid-2000s. Note that Digg was ultimately acquired, years later, for a small fraction of the price BusinessWeek floated in 2006.)

(2) “We will always put our users first”

Many startup founders are passionate about their dedication to their users, in what is really Silicon Valley’s modern twist on the age-old adage that “the customer always comes first.” One of the side-effects of a bubble is that small companies can defer the messy business of extracting cash profits from customers and devote their energy to pampering users. Hooray! That makes life good for everyone for a while.

The trouble arises when founders forget that there is a sharp and usually irreconcilable conflict between “putting the user first” and delivering profits to investors. The best companies find creative ways to delay this reckoning, but no one escapes it. It’s most pronounced in advertising-based industries, where the user isn’t the real paying customer. But even tech outfits that have paying users face painful dilemmas: Software vendors still put cranky licensing schemes on their products, or create awkward tie-ins to try to push new products and services, or block interoperability with competitors even when it would make users’ lives easier. Even hardware makers will pad their bottom lines by putting nasty expiration codes on ink cartridges or charging ridiculous prices for tiny adapters.

But the ultimate bubble delusion is the founder’s pipe-dream that she can sell her company yet still retain control and keep “putting the user first” forever. In the public-company version of this phenomenon, the founder tells the world that nothing will change after the IPO, the company’s mission is just as lofty as ever, its dedication to the user just as fanatical. This certainty lasts as long as the stock price holds up. Legally and practically, however, the company now exists to “deliver value to shareholders,” not to deliver value to users. Any time those goals conflict — and they will — the people in the room arguing the shareholders’ cause will always hold the trump card.

In the private company version, the founder of some just-acquired startup proudly tells the world that, even though he has just sold his company off at a gazillion-dollar valuation, nothing will change, users will benefit, move right along. Listen, for instance, to WhatsApp founder Jan Koum, a privacy advocate and online-advertising skeptic, as he “sets the record straight” after Facebook acquired his company:

If partnering with Facebook meant that we had to change our values, we wouldn’t have done it. Instead, we are forming a partnership that would allow us to continue operating independently and autonomously. Our fundamental values and beliefs will not change. Our principles will not change. Everything that has made WhatsApp the leader in personal messaging will still be in place. Speculation to the contrary isn’t just baseless and unfounded, it’s irresponsible. It has the effect of scaring people into thinking we’re suddenly collecting all kinds of new data. That’s just not true, and it’s important to us that you know that.

Koum sounds like a fine guy, and I imagine he totally believes these words. But he’s deluding himself and his users by making promises in perpetuity that he can no longer keep. It’s not his company any more. He can say “partnership” as often as he likes; he has still sold his company. Facebook today may honor Koum’s privacy pledges, but who can say what Facebook tomorrow will decide?

This isn’t conspiracy thinking; it’s capitalism 101. Yet it’s remarkable how deeply a bubble-intoxicated industry can fool itself into believing it has transcended such inconvenient facts.

(3) This time, it’s different

If you’re 25 today, then you were a college freshman when the global economy collapsed in 2007-8. If you went into tech, you’ve never lived through a full-on bust. Maybe it’s understandable for you to look at today’s welter of IPOs and acquisitions and stock millionaires and think that it’s just the natural order of things.

If you’re much older than that, though, no excuses! You know, as you should, that bubbles aren’t forever. Markets that go up go down, too. (Eventually, they go back up again.) Volatile as the tech economy is, it is also predictably cyclical.

Most recently, our bubbles have coincided with periods — like the late ’90s and the present — when the Federal Reserve kept interest rates low, flooding the markets with cash looking for a return. (Today, also, growing inequality has fattened the “play money” pockets of the very investors who are most likely to take risky bets.)

These bubbles end when there’s a sudden outbreak of sanity and sobriety, or when geopolitical trouble casts a pall on market exuberance. (Both of these happened in quick succession in 2000-2001 to end the original dotcom bubble.) Or a bubble can pop when the gears of the financial system itself jam up, as happened in 2007-8 to squelch the incipient Web 2.0 bubble. My guess is that today’s bubble will pop the moment interest rates begin to head north again — a reckoning that keeps failing to materialize, but must someday arrive.

It might be this year or next, it might be in three years or five, but sooner or later, this bubble will end too. Never mind what you read about the real differences between this bubble and that one in the ’90s. Big tech companies have revenue today! There are billions of users! Investors are smarter! All true. But none of these factors will stop today’s bubble from someday popping. Just watch.

Filed Under: Business, Technology

Dear publishers: When you want to switch platforms and “redesign” too? Don’t

April 9, 2014 by Scott Rosenberg


In my work at Grist, I had a rare experience: We moved an entire publishing operation — with a decade of legacy content, in tens of thousands of posts — from one software platform to another. And yet, basically, nothing broke. Given the scars I bear from previous efforts of this kind, this was an exhilarating relief.

I promised my former colleague Matt Perry (then technical lead at Grist, who bears much responsibility for our success in that move, along with my other former colleague Nathan Letsinger) that I’d share notes with the world on what we learned in this process. It’s taken me forever, but here they are.

Say you run a website that’s been around the block a few times already. You’re going to move your operation from one content management platform to another. Maybe you’ve decided it’s time to go with WordPress. Or some other fine system. Or you’re lucky enough, or crazy enough, to have a developer or a team of coders who’ve built you a custom system.

Then you look at your site’s design: the templates, the CSS, the interface, the structure and navigation, all the stuff that makes it look a certain way and behave a certain way. You think, boy, that’s looking old. Wouldn’t it be great to spiff everything up? And while you’re at it, that new platform offers so many exciting new capabilities — time to show them off!

It seems so obvious, doesn’t it? You’re already taking the time away from publishing, or community-building, or advocacy, or monetizing eyeballs, or whatever it is you do with your site, to shore up its technical underpinnings. Now is surely the perfect moment to improve its public face, too.

This is where I am going to grab you by the shoulders and tell you, sadly but firmly and clearly: NO. Do not go there.

Redesigning your site at the same time you’re changing the software it runs on is a recipe for disaster. Here Be Train Wrecks.

Don’t believe me? Go ahead then; do your redesign and your platform move at the same time! Here’s what you may find.

You’ve just split your team’s focus and energy. Unless you have a lot of excess capacity on the technical side — and every online publisher has, like, technical folks sitting around with nothing to do, right? — your developers and designers are already stretched to the limit putting out everyday fires. Any major project is ambitious. Two major projects at once is foolhardy.

You’re now stuck creating a big new design in the dark. That new platform isn’t live yet, so you can’t take the sane route of implementing the new design in bits and pieces in front of real live users. Your team is free to sit in a room and crank out work, sans feedback! Good luck with that.

You’re now working against the clock. Back-end platform changes are full of unpredictable gotchas, and almost always take longer than you think. That doesn’t have to matter a great deal. But the moment you tie the move to a big redesign project, you’re in a different situation. More often than not, the redesign is something that everyone in your company or organization has an investment in. Editors and creators have work with deadlines and must-publish-by dates. Business people have announcements and sales deals and marketing pushes that they need to schedule. The stakes are in the ground; your small-bore back-end upgrade is now a major public event. This is where the worst train wrecks (like that one at Salon over a decade ago that still haunts my dreams) happen.

Painful as it may be, and demanding of enormous self-restraint, the intelligent approach is to move all your data over on the back end first, while duplicating your current design on the new platform. Ideally, users won’t notice anything different.

I’m fully aware that this recommendation won’t come as news to many of you. It’s simple science, really: Fiddle with only one variable at a time so you can understand and fix problems as they arise. I’m happy to report that this approach not only makes sense in the abstract, but actually works in the field, too.
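To make the one-variable-at-a-time idea concrete, here’s a minimal sketch (in Python, with hypothetical URL lists — in practice you’d pull them from each platform’s sitemap) of the kind of check a team can run after the back-end move but before any redesign: diff the URL inventories of the old and new platforms so that anything the migration broke surfaces on its own, uncontaminated by design changes.

```python
# Sketch: verify a platform migration preserved the site's URL inventory.
# The URL lists below are hypothetical stand-ins for what you'd extract
# from each platform's sitemap.xml or database.

def diff_url_inventories(old_urls, new_urls):
    """Return (missing, added): URLs lost in the move, and URLs new to it."""
    old_set, new_set = set(old_urls), set(new_urls)
    return old_set - new_set, new_set - old_set

old_site = ["/2004/01/hello-world", "/2010/06/big-story", "/about"]
new_site = ["/2004/01/hello-world", "/2010/06/big-story", "/about", "/feed"]

missing, added = diff_url_inventories(old_site, new_site)
assert not missing  # every legacy URL survived the move
print(sorted(added))  # → ['/feed']
```

If `missing` is ever non-empty, you know the culprit is the migration itself — not a template, not a stylesheet — because nothing else has changed yet.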

(Of course, you may wish to go even further, and eliminate the whole concept of the site redesign as a discrete event. The best websites are continuously evolving. “Always be redesigning.”)

Filed Under: Media, Personal, Software, Technology

And … we’re back

April 3, 2014 by Scott Rosenberg

A brief note here to acknowledge that this site has been mostly dormant for a couple of years while I worked a full-time-and-more job.

My time at Grist was happy indeed — but splitting my life between Seattle and the Bay Area finally became too much to handle. So as of the end of last month, I’ve stepped down from my job as executive editor, though I continue to work part time editing some great writers there.

Filed Under: Personal

New bridge, old book: the shape of software progress

September 2, 2013 by Scott Rosenberg

The Bay Bridge’s new eastern span is about to open. When they started building it over a decade ago, I was beginning work on my book Dreaming in Code. As I began digging into the history of software development projects and their myriad delays and disasters, I kept encountering the same line: Managers and executives and team leaders and programmers all kept asking, “Why can’t we build software the way we build bridges?”

The notion, of course, was that somehow, we’d licked building bridges. We knew how to plan their design, how to organize their construction, how to bring them in safely and on time and within budget. Software, by contrast, was a mess. Its creators regularly resorted to phrases like “train wreck” and “death march.”

As I began my research, I could hear, rattling the windows of my Berkeley home, the deep clank of giant pylons being driven into the bed of San Francisco Bay — the first steps down the road that ends today with the opening of this gleaming new span. I wrote the tale of the new bridge into my text as an intriguing potential contrast to the abstract issues that beset programmers.

As it turned out, of course, this mammoth project proved an ironic case in any argument involving bridge-building and software. The bridge took way longer than planned; cost orders of magnitude more than expected; got hung up in bureaucratic delays, political infighting, and disputes among engineers and inspectors; and finally encountered an alarming last-minute “bug” in the form of snapped earthquake bolts.

So much for having bridges down. All that the Bay Bridge project had to teach software developers, really, was some simple lessons: Be humble. Ask questions. Plan for failure as well as success.

Discouraging as that example may be, I’m far more optimistic these days about the software world than I would ever have expected to become while working on Dreaming. Most software gets developed faster and in closer touch with users than ever before. We’ve turned “waterfall development” into a term of disparagement. No one wants to work that way: devising elaborate blueprints after exhaustive “requirements discovery” phases, then cranking out code according to schedules of unmeetable precision — all in isolation from actual users and their changing needs. In the best shops today, working code gets deployed regularly and efficiently, and there’s often a tight feedback loop for fixing errors and improving features.

My own recent experiences working closely with small teams of great developers, both with MediaBugs and now at Grist, have left me feeling more confident about our ability to wrestle code into useful forms while preserving our sanity. Software disasters are still going to happen, but I think collectively the industry has grown better at avoiding them or limiting their damage.

While I was chronicling the quixotic travails of OSAF’s Chandler team for my book, Microsoft was leading legions of programmers down a dead-end path named Longhorn — the ambitious, cursed soufflé of an operating system upgrade that collapsed into the mess known as Windows Vista. At the time, this saga served to remind me that the kinds of delays and dilemmas the open-source coders at OSAF confronted were just as likely in the big corporate software world. Evidently, the pain still lingers: When Steve Ballmer announced his retirement recently, he cited “the loopedy-loo that we did that was sort of Longhorn to Vista” as his biggest regret.

But Longhorn might well have been the last of the old-school “death marches.” Partly that’s because we’ve learned from past mistakes; but partly, too, it’s because our computing environments continue to evolve.

Our digital lives now rest on a combination of small devices and vast platforms. The tech world is in the middle of one of the long pendulum swings between client and server, and right now the burden of software complexity is borne most heavily on the server side. The teeming hive-like cloud systems operated by Google, Facebook, Amazon and their ilk, housed in energy-sucking server farms and protected by redundancy and resilient thinking, are among the wonders of our world. Their software is run from datacenters, patched at will and constantly evolving. Such systems are beginning to feel almost biological in their characteristics and complexities.

Meanwhile, the data these services accumulate and the speed with which they can extract useful information from it leave us awe-struck. When we contemplate this kind of system, we can’t help beginning to think of it as a kind of crowdsourced artificial intelligence.

Things are different over on the device side. There, programmers are still coping with limited resources, struggling with issues like load speed and processor limits, and arguing over hoary arcana like memory management and garbage collection.

The developers at the cloud platform vendors are, for the most part, too smart and too independent-minded to sign up for death marches. Also, their companies’ successes have shielded them so far from the kind of desperate business pressures that can fuel reckless over-commitment and crazy gambles.

But the tech universe moves in cycles, not arcs. The client/server pendulum will swing back. Platform vendors will turn the screws on users to extract more investor return and comply with increasingly intrusive government orders. Meanwhile, the power and speed of those little handheld computers we have embraced will keep expanding. And the sort of programmers whose work I celebrated in Dreaming in Code will keep inventing new ways to unlock those devices’ power. It’s already beginning to happen. Personal clouds, anyone?

Just as the mainframe priesthood had to give way to the personal-computing rebels, and the walled-garden networks fell to the open Internet, the centralized, controlled platforms of today will be challenged by a new generation of innovators who prefer a more distributed, self-directed approach.

I don’t know exactly how it will play out, but I can’t wait to see!

Filed Under: Dreaming in Code, Software

When Google was that new thing with the funny name

July 7, 2013 by Scott Rosenberg

One little article I wrote 15 years ago for Salon has been making the rounds again recently (probably because Andrew Leonard recently linked to it — thanks, Andrew!).

This piece was notable because it introduced Salon’s readers to a new service with the unlikely name of Google. My enthusiastic endorsement was based entirely on my own happy experience as a user of the new search engine, and my great relief at finding a new Web tool that wasn’t larded up with a zillion spammy ad-driven come-ons, as so much of the dotcom-bubble-driven Web was at the time. The column was one of the earlier media hits for Google — it might’ve been the first mention outside the trade press, if this early Google “Press Mentions” page is complete.

Today I see a couple of important stories buried in this little ’90s time capsule. One is about money, the other about innovation.

First, the money: A commenter over at Hacker News expressed the kind but deluded wish that I had somehow invested in Google at that early stage. Even if I had been interested (and as a tech journalist, I wasn’t going to go down that road), the company had only recently incorporated and taken on its first private investment. You couldn’t just walk in off the street and buy the place. (Though that didn’t stop Salon’s CEO at the time from trying.)

In its earliest incarnation, and for several years thereafter, the big question businesspeople asked about Google was, “How will they ever make money?” But the service that was so ridiculously appealing at the start thanks to its minimalist, ad-free start page became the Gargantua of the Web advertising ecosystem. Despite its “Don’t be evil” mantra and its demonstrable dedication to good user experience, Google also became the chief driver of the Web’s pay-per-click corruption.

I love Google in many ways, and there’s little question that it remains the most user-friendly and interoperability-minded of the big Web firms. But over the years I’ve become increasingly convinced that, as Rich Skrenta wrote a long time ago, “PageRank Wrecked the Web.” Giving links a dollar value made them a commodity.

Maybe you’ve noticed that this keeps happening. Today, Facebook is making relationships a commodity. Twitter is doing the same to casual communication. For those of us who got excited about the Web in the early ’90s because — as some smart people once observed — nobody owned it, everyone could use it, and anyone could improve it, this is a tear-your-hair-out scenario.

Or would be, except: there’s an escape route. Ironically, it’s the same one that Larry Page and Sergey Brin mapped out for us all in 1998. Which brings us to the second story my 1998 column tells, the interesting one, the one about innovation.

To understand this one, you have to recall the Web scene that Google was born into. In 1998, search was over. It was a “solved problem”! Altavista, Excite, Infoseek, Lycos, and the rest — all these sites provided an essential but fully understood service to Web users. All that was left was for the “portal” companies to build profitable businesses around them, and the Web would be complete.

Google burst onto this scene and said, “No, you don’t understand, there’s room to improve here.” That was correct. And it’s a universal insight that never stops being applicable: there’s an endless amount of room to improve, everywhere. There are no solved problems; as people’s needs change and their expectations evolve, problems keep unsolving themselves.

This is the context in which all the best work in the technology universe gets done. If you’re looking for opportunities to make a buck, you may well avoid markets where established players rule or entrenched systems dominate. But if you’re looking for better ways to think and live, if you’re inspired by ideals more than profits, there’s no such thing as a closed market.

This, I think, is the lesson that Doug Engelbart, RIP, kept trying to teach us: When it comes to “augmenting human intellect,” there’s no such thing as a stable final state. Opportunity is infinite. Every field is perpetually green.


Filed Under: Net Culture, Technology

“A large universe of documents”

April 30, 2013 by Scott Rosenberg


“The WorldWideWeb (W3) is a wide-area hypermedia information retrieval initiative aiming to give universal access to a large universe of documents.”

That’s how the Web first defined itself to the world.

Today is apparently the 20th anniversary of the moment when Tim Berners-Lee and his colleagues at CERN, the advanced physics lab in Geneva, made the Web’s underlying code free and public. CERN has a big project up to document and celebrate. As part of that project, it has posted a reproduction of the home page of the first public website.

The definition above is the first sentence on that page. Let’s unpack it!

The WorldWideWeb

I’m guessing this odd treatment — one word with CamelCase capitalization — was an inheritance from the Unix programming world in which Tim Berners-Lee worked and the Web hatched. It’s been years since anyone wrote it this way (even the W3C adds spaces). Spaces don’t work in old-school file names, and the Web was conceived as a direct way to interconnect the file systems on networked servers, so leaving out the spaces made sense. Today it’s a style-book fight just to keep people from lower-casing “the Web.”

wide-area

The Web was all about moving our conception of a network from the thing that let one computer talk to another (or a printer) in an office to the thing that connected people and data around the world. In those days networks were classified as “LANs” — local-area networks — or “WANs” — wide-area networks. LANs occupied physically proximate spaces like large offices or, later, homes. WANs were bigger — computers connected first by phone lines and later by an alphabet-soup of higher-speed connections like ISDN, DSL, T1, and so forth. But it wasn’t clear what one would do with a WAN until the Web came along and showed us.

hypermedia

The term that emerged from Ted Nelson’s work on hypertext, popularized by Apple’s HyperCard, meaning texts and documents that are connected by crosslinks. The Web made links second nature for many of us, but we still haven’t fully digested all their possibilities — or stopped arguing about their pros and cons.

information retrieval

It’s fascinating to recall just how simple the Web’s bones are. Its underlying protocols provide a simple collection of action verbs — “get,” “post” and “put” — that describe sending and receiving information. That’s it. All the other stuff we do online today is built on that foundation.
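Just how simple those bones are is easy to demonstrate. Here’s a small sketch (in Python; the host and path are just the famous first CERN page, used for illustration) of the raw text a client sends for one of those verbs — an early-style HTTP GET request is only a couple of lines:

```python
# Sketch: a complete, minimal HTTP/1.0 GET request is just a few lines
# of plain text terminated by a blank line. That's the whole "get" verb.

def build_get_request(host, path="/"):
    """Build a minimal HTTP/1.0 GET request as raw bytes."""
    return (
        f"GET {path} HTTP/1.0\r\n"
        f"Host: {host}\r\n"
        "\r\n"  # blank line ends the request
    ).encode("ascii")

request = build_get_request("info.cern.ch", "/hypertext/WWW/TheProject.html")
print(request.decode("ascii"))
```

Everything else — cookies, caching, APIs, streaming video — is layered on top of exchanges this plain.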

initiative

The Web was not a startup. It was a collaborative “initiative.” This caused many in the tech industry to dismiss it; how could it ever compete against the mighty, money-driven behemoths like Compuserve, Prodigy and AOL, or, later, MSN?

universal access

The Web would be “free” and “open,” as the CERN page now says. No tollgates or licensing fees or dues or rent. Of course there was money in the system; the rapid commercialization of the Internet, on which the Web still rests, lay in the future in 1993, but it was already in sight. But the piece of the system that made the Web the Web was going to be free of charge and free to tinker with.

With the right networking technology, it’s easy to make something universally available; it’s much harder to create something that the universe actually wants. That was the genius of the Web.

large universe of documents

This is the phrase that still excites and haunts me. The Web was originally about “documents,” not functional code. It was a publishing platform for the sharing of what we now refer to as “static files.” The phrase reminds us of the irresistible invitation the Web made to non-programmers: you too can contribute! You don’t need to code! HTML is a “markup language” and can be learned in minutes! (That was true, then.)
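It really was learnable in minutes. A complete, publishable page of that era — the whole document, not a fragment — could look like this (a made-up example, with a link to the real first website):

```html
<html>
  <head>
    <title>My Corner of the Universe</title>
  </head>
  <body>
    <h1>My Corner of the Universe</h1>
    <p>A document, with a
       <a href="http://info.cern.ch/">link</a>
       to another document. That was the whole idea.</p>
  </body>
</html>
```

No build step, no scripts, no stylesheet required: save it as a file, put it on a server, and you had contributed to the universe of documents.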

Today’s Web is infinitely more capable, and more complex. Over the past decade, modern browsers and JavaScript have turned it into an adaptable programming environment that first rendered the old MSOffice-driven desktop world obsolete and now faces its own challenges in the mobile world.

That’s great! It’s where I live and work now. But there will always be a corner of my mind and heart set aside for the Web as that simpler enterprise — that thing that just lets anyone explore and expand a “large universe of documents.”

Filed Under: Net Culture, Uncategorized

‘How to Be Yourself’: My Ignite talk about authenticity

February 10, 2013 by Scott Rosenberg

Ignite talks are an exquisite form of self-torture for which you voluntarily stand in front of a crowd and give a five-minute talk timed to twenty slides that advance, inexorably, every 15 seconds.

At the end of last year I gave one of these talks at NewsFoo, and the kind folks who organized that event provided some great video.

My theme was a topic I’ve grown increasingly fascinated by — “reality hunger,” the “authenticity bind,” and the nature of personal identity in the digital age.

Here’s my five minutes:

What’s with the references to RuPaul? At the conference I had the good/bad fortune of immediately following Mark Luckie onstage. Luckie’s talk on “Why RuPaul is Better At Social Media Than You” was way more fabulous than mine could ever be, as you can see:

There’s some great stuff in nearly all of the other Ignite talks from NewsFoo. They’re all here.

Filed Under: Net Culture, Personal
