Wordyard

Hand-forged posts since 2002

Scott Rosenberg

Archives

Blogging, empowerment, and the “adjacent possible”

October 8, 2010 by Scott Rosenberg

Learning to make things changes how we understand and consume those things.

When I started reporting the news as a teenager, I read the newspaper differently. When I learned to play guitar in my 20s, I listened to songs differently. When I first played around with desktop video editing 15 years ago, I began watching movies and TV differently.

It’s the same with writing: Learning how to write changes how we read — and how we think. This is from Maryanne Wolf’s excellent Proust and the Squid:

As the twentieth-century psychologist Lev Vygotsky said, the act of putting spoken words and unspoken thoughts into written words releases and, in the process, changes the thoughts themselves… In his brief life Vygotsky observed that the very process of writing one’s thoughts leads individuals to refine those thoughts and to discover new ways of thinking. In this sense the process of writing can actually reenact within a single person the dialectic that Socrates described to Phaedrus. In other words, the writer’s efforts to capture the ideas with ever more precise written words contain within them an inner dialogue, which each of us who has struggled to articulate our thoughts knows from the experience of watching our ideas change shape through the sheer effort of writing. Socrates could never have experienced this dialogic capacity of written language, because writing was still too young. Had he lived only one generation later, he might have held a more generous view.

What Vygotsky and Wolf observed about writing, we can extend and expand to writing in public. Writing for an audience is a special and important sub-case: it’s writing with feedback and consequences. Doing it yourself changes how you think about it and how you evaluate others’ efforts. The now-unfashionable word “empowerment” describes a part of that change: writing is a way of discovering one’s voice and feeling its strength. But writing in public involves discovering the boundaries and limits of that power, too. We learn all the different ways in which we are not the center of the universe. That kind of discovery has a way of helping us grow up fast.

So when I hear the still-commonplace dismissal of blogging as a trivial pastime or an amateurish hobby, I think, hold on a second. Writing — making texts — changes how we read and think. Every blogger (at least every blogger who wasn’t already a writer) is someone who has learned to read the world differently.

I’m preparing for some public talks later this month about Say Everything, which is why I’m revisiting this ground. It seems to me that, in our current bedazzlement with the transformative powers of social networking, we routinely underestimate the practical social importance of change at this individual level.

Clay Shirky, for instance, has focused, with great verve and insight, on how the Web enables us to form groups quickly and easily, and how that in turn is reshaping society. In his book Cognitive Surplus, Shirky identifies a spectrum of values stretching from personal to communal to public to civic. The spectrum, he writes, “describes the degree of value created for participants versus nonparticipants. With personal sharing, most or all of the value goes to the participants, while at the other end of the spectrum, attempts at civic sharing are specifically designed to generate real change in the society the participants are embedded in.”

This is a useful framework for discussion. What I think it neglects is the way the act of personal sharing changes individuals in ways that make the other sorts of sharing more imaginable to them. In other words, the spectrum is also a natural progression. The person who has struggled to turn a thought into a blog post, and then seen how that post has been reflected back by readers and other bloggers, is someone who can think more creatively about how sharing might work at other scales and in other contexts. A mind that has changed is more likely to imagine a world that can change.

In his great new book Where Good Ideas Come From: The Natural History of Innovation, Steven Johnson describes the concept of “the adjacent possible.” This passage is from a recent excerpt in the Wall Street Journal, in which Johnson considers the improbable yet imaginable “primordial innovation of life itself”:

The scientist Stuart Kauffman has a suggestive name for the set of all those first-order combinations: “the adjacent possible.” The phrase captures both the limits and the creative potential of change and innovation. In the case of prebiotic chemistry, the adjacent possible defines all those molecular reactions that were directly achievable in the primordial soup. Sunflowers and mosquitoes and brains exist outside that circle of possibility. The adjacent possible is a kind of shadow future, hovering on the edges of the present state of things, a map of all the ways in which the present can reinvent itself.

The strange and beautiful truth about the adjacent possible is that its boundaries grow as you explore them. Each new combination opens up the possibility of other new combinations. Think of it as a house that magically expands with each door you open. You begin in a room with four doors, each leading to a new room that you haven’t visited yet. Once you open one of those doors and stroll into that room, three new doors appear, each leading to a brand-new room that you couldn’t have reached from your original starting point. Keep opening new doors and eventually you’ll have built a palace.

One way to assess the impact of blogging is to say that the number of people who have had the experience of writing in public has skyrocketed over the course of the last decade. Let’s say that, pre-Internet, the universe of people with experience writing in public — journalists, authors, scholars — was, perhaps, 100,000 people. And let’s say that, of the hundreds of millions of blogs reported to date, maybe 10 million of them are sustained enough efforts for us to say that their authors have gained real experience writing in public. I’m pulling these numbers out of a hat, trying to err on the conservative side. We still get an expansion of a hundredfold.
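The back-of-envelope arithmetic above can be made explicit. A minimal sketch (both figures are the deliberately rough guesses from the paragraph, not measured data):

```python
# Back-of-envelope arithmetic: both numbers are admitted guesses,
# pulled from a hat and erring on the conservative side.
pre_internet_public_writers = 100_000   # journalists, authors, scholars
sustained_blogs = 10_000_000            # blogs sustained enough to count

expansion = sustained_blogs / pre_internet_public_writers
print(f"Expansion: {expansion:.0f}x")   # prints "Expansion: 100x"
```

Even if either guess is off by a factor of a few, the conclusion — an expansion of orders of magnitude in the number of people with public-writing experience — survives.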

Each of these people now has an entirely new set of “adjacent possibilities” to explore. What they make of those opportunities will shape the next couple of decades in important, and still unpredictable, ways.

Filed Under: Blogging, Books, Culture

Hey Zuck! Hollywood just hacked your profile

October 4, 2010 by Scott Rosenberg 7 Comments


You know those Facebook phishing hacks — the ones where someone gets control of your account and sends phony messages to your friends? “I’m stuck in London! Send money quick!”

I kept thinking of that phenomenon as I watched The Social Network this weekend. Because what filmmakers Aaron Sorkin and David Fincher have done to their protagonist, Facebook founder Mark Zuckerberg, is the moral equivalent of this sort of identity theft.

They have appropriated Zuckerberg’s life story and, under the banner of fidelity to “storytelling” rather than simple documentary accuracy, twisted it into something mirroring their own obsessions rather than the truth. They transform Mark Zuckerberg’s biography from the messy tale of a dorm-room startup’s phenomenal success into a dark vision of a lonely geek’s descent into treachery.

The Social Network takes the labyrinthine and unique origins of Facebook at Harvard and turns them into a routine finger-wagger about how the road to the top is paved with bodies. Sorkin apparently isn’t interested in what makes his programmer-entrepreneur antihero tick, so he drops in cliches about class resentment and nerd estrangement.

In order to make it big, Sorkin’s Zuckerberg has to betray his business-partner friend (Eduardo Saverin). Why is he hungry for success? Sorkin has him wounded by two primal rejections — one by a girlfriend, the other by Harvard’s fraternity-ish old-money “final clubs.” The programming whiz-kid doesn’t know how to navigate the real-world “social network” — get it? — so he plots his revenge.

Many thoughtful pieces have already discussed the movie, and I don’t want to rehash them. I agree more with my friend David Edelstein’s take on the film’s cold triviality than with the enthusiastic raves from other quarters. Go read Lawrence Lessig and Jeff Jarvis for definitive critiques of the film’s failure to take even the most cursory measure of the real-world phenomenon it’s ostensibly about. Here’s Lessig: “This is like a film about the atomic bomb which never even introduces the idea that an explosion produced through atomic fission is importantly different from an explosion produced by dynamite.” Over in Slate, Zuckerberg’s classmate Nathan Heller outlines how far off the mark Sorkin wanders in his portrait of the Harvard social milieu. (Obsessive, brainy Jewish kids had stopped caring about whether they were excluded from the almost comically uncool final clubs at Harvard long before my own time there, and that was quite a long time ago by now.)

It’s Hollywood that remains clubby and status-conscious, far more dependent on a closed social network to get its work done than any Web company today. The movie diagrams the familiar and routine dynamic of a startup business, where founders’ stakes get diluted as money pours in to grow the company, as some sort of moral crime. (That may explain why — as David Carr lays it out — startup-friendly youngsters watch the film and don’t see the problem with Zuckerberg’s behavior, while their elders tut-tut.) Again, this is a Hollywood native’s critique of Silicon Valley; movie finance works in a more static way.

It’s strange to say this, since I am not a fan of Facebook itself — I prefer a more open Web ecology — but The Social Network made me feel sorry for the real Zuckerberg, notwithstanding the billionaire thing. He’s still a young guy with most of his life ahead of him, yet a version of his own life story that has plainly been shaped by the recollections of people who sued him is now being imprinted on the public imagination.

At least Orson Welles had the courtesy to rename William Randolph Hearst as “Charles Foster Kane.” This isn’t a legal issue (John Schwartz details why in today’s Times). But, for a movie that sets itself up as a graduate course in business ethics, it is most certainly a giant lapse of fairness.

In New York, Mark Harris described the film as “a well-aimed spitball thrown at new media by old media,” but I think it’s more than that — it’s a big lunging swat of the old-media dinosaur tail. The Web, of which Facebook is the latest popular manifestation, has begun to move us from a world in which you must rely on reporters and screenwriters and broadcasters to tell your story to one where you get to present your story yourself. (And everybody else gets to tell their own stories, and yours too, but on a reasonably equal footing.) The Social Network says to Zuckerberg, and by proxy, the rest of us who are exploring the new-media landscape: “Foolish little Net people, you only think you’re in control. We will define you forever — and you will have no say!”

In other words, The Social Network embodies the workings of the waning old order it is so thoroughly invested in. It can’t be bothered with aiming to tell the truth about Zuckerberg — yet it uses his real name and goes out of its way to affect documentary trappings, down to the concluding “where are they now?” text crawl.

The movie’s demolition job on the reputation of a living human being is far more ruthless than any prank Zuckerberg ever plotted from his dorm room. For what purpose? When a moviemaker says he owes his allegiance to “storytelling,” usually what he means is, he’s trying to sell the most tickets. I guess that to get where they wanted to go, Sorkin and Fincher just had to step on a few necks themselves.

Filed Under: Business, Culture, Media

Carr’s “The Shallows”: An Internet victim in search of lost depth

September 8, 2010 by Scott Rosenberg 7 Comments

One day, immersing myself in my reading was as simple as breathing. The next, it wasn’t. Once I had happily let books consume my days, with my head propped up against my pillow in bed or my body sprawled on the floor with the volume open in front of me. Now I felt restless after just a few pages, and my mind and body both refused to stay in one place. Instead of just reading, I would pause and ask, “Why am I reading this and not that? How will I ever read everything I want to or need to?”

I was 18. It would be years before I’d hear of the Internet.

Nicholas Carr had, it seems, a similar experience, quite a bit more recently. He describes it at the start of his book The Shallows: What the Internet Is Doing to Our Brains:

I used to find it easy to immerse myself in a book or a lengthy article. My mind would get caught up in the twists of the narrative or the turns of the argument, and I’d spend hours strolling through long stretches of prose. That’s rarely the case anymore. Now my concentration starts to drift after a page or two. I get fidgety, lose the thread, begin looking for something else to do. I feel like I’m always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle.

When I experienced this loss of focus, I simply blamed my new condition on my newly acquired adulthood. Carr, apparently, was lucky enough to retain his deep-reading endurance undisturbed from childhood well into his grownup years. By the time it began to slip away from him, we were all deep into the Web era. Carr decided that, whatever was going on, the Web was to blame. It wasn’t something that simply happened; it was something that the Internet was “doing to” his brain.

The Shallows has been received as a timely investigation of the danger that information overload, multitasking and the Web all pose to our culture and our individual psyches. There are serious and legitimate issues in this realm that we ignore at our peril. (Linda Stone is one important thinker in this area whose work I recommend.)

So I cannot fault Carr for asking what the Internet is doing to us. But that is only half of the picture. He fails to balance that question with its vital complement: What are we doing to, and with, the Internet? This imbalance leads him both to wildly overstate the power of the Internet to alter us, and to confuse traits that are inherent to the medium with those that are incidental.

Carr writes as a technological determinist. In asking what the Internet is “doing to” us he casts us as victims, not actors, and once that casting is in place, there’s only one way the drama can unfold. The necessary corrective to this perspective can be found in the opening chapter of Claude Fischer’s great history of the telephone, America Calling. Fischer admonishes us not to talk about technology’s “impacts” and “effects,” because such language “implies that human actions are impelled by external forces when they are really the outcomes of actors making purposeful choices under constraints.” (Emphasis mine.)
[Read more…]

Filed Under: Books, Culture, Media, Technology

Don’t save your links for the end — it’s more distracting!

September 7, 2010 by Scott Rosenberg 5 Comments

One of the humble yet essential uses of the link is to help us avoid having to repeat what others have already said. I make no great claim to novelty for my “Defense of Links” series; much of what I said, others had already expressed earlier this year when Carr first floated his “delinkification” meme. In particular, Jason Fry’s excellent post at Nieman Lab surveyed the ground well.

Fry talked about the role of links in three areas: credibility, readability and connectivity. “Readability” is plainly the area where Carr had the most provocative and defensible case against links. My motivation from the start was to examine that case closely and evaluate the studies it was based on — to follow the links, as it were.

I found that the studies Carr relied on really didn’t support his case. Just as interesting to me was the fact that a lengthy and in-depth discussion of Carr’s argument had unfolded on the Web without anyone actually looking up the research. Would that have happened had Carr provided links to these studies? (That’s possible on a blog but not, of course, in print. Still, one can publish endnotes online and activate the links, as I have for both of my books. Carr’s book site is quite the link desert, which I guess should not surprise.)

Fry asked a question that several respondents to my series echoed: “Is opening links in new tabs really so different from links at the end of the piece?” For me, it is: ironically, the end-linking style is, I think, far more distracting than simple inline linking.

If you’re reading along and feel the desire to dig deeper on some point and the link is right there, you can just open the link in a new tab. If it’s not, you don’t know whether the author has provided a link or not. You have an unhappy choice. You can file the question away in your brain to make sure you remember to check once you reach the end of the article (now there’s a cognitive load). Or you can stop reading and scroll down to the bottom of the story to look for the link, which involves reviewing the whole list, figuring out whether the link you seek is actually there, clicking on it if it is, and then scrolling back to the top to find where you were. All of which disrupts the deep reading Carr aims to protect far more thoroughly than a handful of highlighted link-words ever could.

For instance, when I read Carr’s “Delinkification” post and saw his references to the “cognitive penalty” of links, I wanted to know where the studies were that supported this claim. There are no links inline, but I knew the whole post was about the experiment of putting links at the end, so I went on a wild goose chase to the bottom of the post hoping to find the studies linked there. (They’re not.) How can this possibly serve the reader’s concentration?

Those with long memories will recall that the original incarnation of Slate, driven by Michael Kinsley’s naivete about the Web, actually employed links-at-the-end as a policy. The magazine gave it up some time later. Turns out Carr’s “experiment” already had some in-the-field results. (You can see what this looks like on this Internet Archive capture of a Jacob Weisberg piece from 1999.)

I got into some of this argument in the comments at Scott Esposito’s thoughtful response to my series. Mathew Ingram at GigaOm provided a nice summary of my lengthier musings. I would also recommend Brian Frank’s rich philosophical take.

Tomorrow, wider thoughts on The Shallows, which of course addresses far more than links!

Filed Under: Books, Culture, Media

Cheap art

September 3, 2010 by Scott Rosenberg 5 Comments

In the 1980s I worked as a theater critic. I spent a lot of time in expensive Broadway theaters and ambitious nonprofit repertory companies. But some of my most memorable experiences were at street theater events by groups like the San Francisco Mime Troupe and Vermont’s Bread and Puppet Theater. I first saw them in Boston at a time when the manifesto below was relatively new. It’s now a quarter century old but it hasn’t lost any of its truth.

For most of my writing life I’ve had a copy of this poster on my wall near where I work. When we rebuilt my basement office I lost track of it, but recently found it and rehung it. Here it is for you. (I got this image here.)

Happy long weekend, everyone. Make some cheap art!

Filed Under: Culture, Food for Thought, Personal

In Defense of Links, part three: In links we trust

September 2, 2010 by Scott Rosenberg 34 Comments

This is the third post in a three-part series. The first part was Nick Carr, hypertext and delinkification. The second part was Money changes everything.

Nick Carr, like the rest of the “Web rots our brains” contingent, views links as primarily subtractive and destructive. Links direct us away from where we are to somewhere else on the Web. They impede our concentration, degrade our comprehension, and erode our attention spans.

It’s important, first, to understand that every single one of these criticisms of links has been raised against every single new media form for the past 2500 years. (Rather than rehash this hoary tale, I’ll point you to Vaughan Bell’s excellent summary in Slate. For a full and fascinating account of the earliest episode in this saga — Socrates’ denunciation of the written word — I recommend the elaboration of it in Maryanne Wolf’s Proust and the Squid.)

Throughout history, the info-panic critique has been one size fits all. The media being criticized may change, but the indictments are remarkably similar. That tells us we’re in the presence of some ancestral predilection or prejudice. We involuntarily defend the media forms we grew up with as bastions of civilization, and denounce newcomers as barbaric threats to our children and our way of life.

That’s a lot to hang on the humble link, which — in today’s Flash-addled, widget-laden, real-time-streaming environment — seems more like an anchor of stability than a force for subversion. But even if we grant Carr his premise that links slow reading and hamper understanding (which I don’t believe his evidence proves at all), I’ll still take the linked version of an article over the unlinked.

I do so because I see links as primarily additive and creative. Even if it took me a little longer to read the text-with-links, even if I had to work a bit harder to get through it, I’d come out the other side with more meat and more juice.

Links, you see, do so much more than just whisk us from one Web page to another. They are not just textual tunnel-hops or narrative chutes-and-ladders. Links, properly used, don’t just pile one “And now this!” upon another. They tell us, “This relates to this, which relates to that.”

Links announce our presence. They show a writer’s work. They are badges of honesty, inviting readers to check that work. They demonstrate fairness. They can be simple gestures of communication; they can be complex signifiers of meaning. They make connections between things. They add coherence. They build context.

If I can get all that in return, why would I begrudge the link-wielding writer a few more seconds of my time, a little more of my mental effort?
[Read more…]

Filed Under: Blogging, Culture, Media

Miscellany: SAI, Crooked Timber, MediaBugs and “Inception”

September 1, 2010 by Scott Rosenberg 1 Comment

Part Three of “In Defense of Links” coming later this week! Some little stuff in between:

  • I have begun an experiment in crossposting some of my stuff over at Silicon Alley Insider/Business Insider. Same writing, grabbier headlines! As it is, my posts appear here, and then also at Open Salon (where Salon sometimes picks them up). And I pipe them into Facebook for my friends who hang out there. The folks at SAI have picked up some of my pieces before, and I’m curious about how my point of view goes over with this somewhat different crowd.
  • Henry Farrell was kind enough to post a bit about In Defense of Links over at the Crooked Timber blog, and the discussion in comments there is just humblingly good — as well as entertaining. Would every single person who has ever issued a blanket putdown of the worthlessness of blog comments please pay this estimable community of online scholars a visit, and then pipe down? Thank you.
  • At MediaBugs, we’re gearing up for some expansions and changes in about a month. In the meantime, we had an illuminating exchange with the Washington Post about a nonexistent intersection. I wrote about it over at MediaShift’s Idea Lab.
  • Just in time for the release of his new novel, Zero History, William Gibson has a great op-ed in the Times:

    Jeremy Bentham’s Panopticon prison design is a perennial metaphor in discussions of digital surveillance and data mining, but it doesn’t really suit an entity like Google. Bentham’s all-seeing eye looks down from a central viewpoint, the gaze of a Victorian warder. In Google, we are at once the surveilled and the individual retinal cells of the surveillant, however many millions of us, constantly if unconsciously participatory.

    In the ’90s I had the pleasure of interviewing Gibson a couple of times — here’s the 1994 edition, in which we discussed why the technology in his early novels never breaks down, and here’s part of the 1996 one, where he talks about building his first website and predicts the rise of people who “presurf” the Web for you.

    I recently caught up with Inception, and was amazed at how shot-through it is with Gibsonisms. Inception is to Neuromancer as The Matrix was to Philip K. Dick’s worlds: an adaptation in everything but formal reality.

Filed Under: Blogging, Books, Culture, Mediabugs, Personal

In Defense of Links, Part One: Nick Carr, hypertext and delinkification

August 30, 2010 by Scott Rosenberg

For 15 years, I’ve been doing most of my writing — aside from my two books — on the Web. When I do switch back to writing an article for print, I find myself feeling stymied. I can’t link!

Links have become an essential part of how I write, and also part of how I read. Given a choice between reading something on paper and reading it online, I much prefer reading online: I can follow up on an article’s links to explore source material, gain a deeper understanding of a complex point, or just look up some term of art with which I’m unfamiliar.

There is, I think, nothing unusual about this today. So I was flummoxed earlier this year when Nicholas Carr started a campaign against the humble link, and found at least partial support from some other estimable writers (among them Laura Miller, Marshall Kirkpatrick, Jason Fry and Ryan Chittum). Carr’s “delinkification” critique is part of a larger argument contained in his book The Shallows. I read the book this summer and plan to write about it more. But for now let’s zero in on Carr’s case against links, on pages 126-129 of his book as well as in his “delinkification” post.

The nub of Carr’s argument is that every link in a text imposes “a little cognitive load” that makes reading less efficient. Each link forces us to ask, “Should I click?” As a result, Carr wrote in the “delinkification” post, “People who read hypertext comprehend and learn less, studies show, than those who read the same material in printed form.”

This appearance of the word “hypertext” is a tipoff to one of the big problems with Carr’s argument: it mixes up two quite different visions of linking.

“Hypertext” is the term invented by Ted Nelson in 1965 to describe text that, unlike traditional linear writing, spreads out in a network of nodes and links. Nelson’s idea hearkened back to Vannevar Bush’s celebrated “As We May Think,” paralleled Douglas Engelbart’s pioneering work on networked knowledge systems, and looked forward to today’s Web.

This original conception of hypertext fathered two lines of descent. One adopted hypertext as a practical tool for organizing and cross-associating information; the other embraced it as an experimental art form, which might transform the essentially linear nature of our reading into a branching game, puzzle or poem, in which the reader collaborates with the author. The pragmatists use links to try to enhance comprehension or add context, to say “here’s where I got this” or “here’s where you can learn more”; the hypertext artists deploy them as part of a larger experiment in expanding (or blowing up) the structure of traditional narrative.

These are fundamentally different endeavors. The pragmatic linkers have thrived in the Web era; the literary linkers have so far largely failed to reach anyone outside the academy. The Web has given us a hypertext world in which links providing useful pointers outnumber links with artistic intent a million to one. If we are going to study the impact of hypertext on our brains and our culture, surely we should look at the reality of the Web, not the dream of the hypertext artists and theorists.

The other big problem with Carr’s case against links lies in that ever-suspect phrase, “studies show.” Any time you hear those words your brain-alarm should sound: What studies? By whom? What do they show? What were they actually studying? How’d they design the study? Who paid for it?

To my surprise, as far as I can tell, not one of the many other writers who weighed in on delinkification earlier this year took the time to do so. I did, and here’s what I found.
[Read more…]

Filed Under: Culture, Media, Net Culture

“Perfecting Sound Forever”: great book on history of recording

August 4, 2010 by Scott Rosenberg 1 Comment

I’ve written a bit here about the curse of over-compression in recorded music:

For those of us already unhappy with the music industry’s bungling of the transition to digital distribution, here’s another thing we can blame them for. Seeking to have their products “stand out,” they entered a sonic race to the bottom… The irony is that we can only perceive loudness through contrast, so the contemporary recordings sound miasmic, not punchy. When you crank up all the dials to, as Spinal Tap would say, 11, everything sounds the same, your ears get tired, and you wonder why music doesn’t sound as good as it did when you were younger.

So when I discovered, belatedly, that Greg Milner has written an entire book about the birth, history and present plight of recording, I grabbed it. It’s called Perfecting Sound Forever: An Aural History of Recorded Music. If, like me, you have always cared about sound quality but never had much of a vocabulary or structure for discussing or understanding it, it’s a wonderful read.

Milner’s tale starts with Edison’s famous “sound tests” (where they’d pit a live performer against a recording in front of an audience) and carries through to our MP3-muddled present. It’s fascinating to see how certain threads follow us from the days of sound cylinders up to the iPod era. Each successive generation of technology promises — and, for everyday listeners, seems to deliver — the utopia of perfect, lifelike sound, sound captured so well that you cannot distinguish the recording from reality. But you soon realize a truth that Milner elegantly excavates: this “reality” is a chimera — an unobtainium of the ear. Our norms for “realistic sound” are hopelessly subjective. If Victrola recordings that crackle in our ears today sounded like “reality” to 1920s listeners, what will music-lovers of the 2120s think about the over-compressed recordings our culture is now producing?

There’s so much that’s fun and unexpected in Perfecting Sound Forever: the early religious wars between the proponents of acoustic recording and believers in the electrical approach that won out (presaging today’s analog vs. digital argument); how the advent of recording tape began to move us from the notion of sound reproduction to the idea of composing in the studio; how competition between radio stations upped the compression ante until we reached the point where the Red Hot Chili Peppers became “the band that clipped itself to death”; and much more.

Music criticism has fallen on hard times today, what with the fragmentation of the audience and the collapse of the industry. But Milner’s book is one case where writing about music most certainly isn’t like dancing about architecture — it’s more like dancing with ideas. Here’s a taste:

We never fully agree on what perfect sound is, so we keep trying, defining our sonic ideals against those of others, playing the game to the best of our abilities, in whatever position we occupy on the field. We add more reverb, we pump up the bass, we boost the treble, we compress dynamic range, we send the band back into the studio because we don’t hear a single — and we then remix that single, we press the song on vinyl, on disc, as a ghostly collection of ones and zeros that we send around the world. We do what we can to make it sound right and then we hear the sound flow from the speakers and we call it perfect.


With this post I intend to begin more regularly reviewing the books I’m reading, right here on Wordyard. Because, as my friend Laura Miller keeps reminding us, readers are scarcer than writers — or, as Gary Shteyngart was just saying on Fresh Air, “Nobody wants to read but everybody wants to write.”

Well, I intend to keep doing both! And, just so you know, I will also be wiring up my links to Amazon with partner codes; these will funnel a tiny bit of change back to me so I can keep buying those books.

Filed Under: Books, Culture, Music

Does the Web remember too much — or too little?

July 26, 2010 by Scott Rosenberg 11 Comments

Jeffrey Rosen’s piece on “The End of Forgetting” was a big disappointment, I felt. He’s taking on important themes — how the nature of personal reputation is evolving in the Internet era, the dangers of a world in which social-network postings can get people fired, and the fuzzier prospect of a Web that prevents people from reinventing themselves or starting new lives.

But I’m afraid this New York Times Magazine cover story hangs from some very thin reeds. It offers few concrete examples of the problems it laments, resorts to vague generalizations and straw men, and lists some truly preposterous proposed remedies.

Rosen presents his premise — that information once posted to the Web is permanent and indelible — as a given. But it’s highly debatable. In the near future, we are, I’d argue, far more likely to find ourselves trying to cope with the opposite problem: the Web “forgets” far too easily.
[Read more…]

Filed Under: Blogging, Culture, Media, Net Culture
