Wordyard

Hand-forged posts since 2002


Perfect software? More on Cantrill and Dreaming

November 13, 2007 by Scott Rosenberg

Bryan Cantrill has now fleshed out his critique of Dreaming in Code that I recently addressed. I want to thank him for returning to the topic and giving it serious consideration. I’m going to respond at length because I think, after clearing away a lot of smaller points, we’ve actually found an interesting point to dispute.

As far as I can tell, Cantrill is now saying that the problem with Dreaming is that it starts from the notion that “software is hard” and then explains why it’s hard by exploring what makes it unique — and he would have preferred a book that started from the notion that software is unique and then explored why its uniqueness makes it hard. At some point this becomes pretty abstruse, and I’ll leave it to those of you who want to compare what Cantrill says with what’s in the book to weigh our different perspectives.

I do want to correct his statement that I “picked a doomed project” from the start, as if that was my intention, in order to support a pessimistic view of software. At the time I began following Chandler, it had high hopes and a lot of enthusiastic support. I chose it because I cared about the problems it set out to solve, I thought the people involved were interesting, and I thought it had a reasonable chance of success. 20/20 hindsight leads Cantrill to dismiss Chandler as an ill-fated “garbage barge” from the start. But that’s hardly how it looked in 2002. It attracted people with considerable renown in the field, like Andy Hertzfeld and Lou Montulli, as well as other, equally smart and talented developers whose names are not as widely known. Nor, even at this late date, do I consider Chandler in any way to be definitively “doomed” — though certainly it has failed to live up to its initial dreams. There are too many examples of projects that took long slogs through dark years and then emerged to fill some vital need for anyone to smugly dismiss Chandler, even today.

Also, I need to say that my interest in the difficulty of software did not emerge as some ex post facto effort to justify the problems that OSAF and Chandler faced. In fact, as I thought I wrote pretty clearly, it emerged from my own time in the field at Salon, where we lived through a software disaster of our own at the height of the dotcom boom.

Finally, in substantiating his criticism of what he calls my “tour de crank” of software thinkers, Cantrill still doesn’t make a lot of sense to me. In the two chapters near the end of the book that depart from Chandler to explore wider issues, I discuss the work and ideas of Edsger Dijkstra, Watts Humphrey, Frederick Brooks, Ward Cunningham, Kent Beck, Joel Spolsky, Jason Fried and Nick Carr (in Chapter 9), and Charles Simonyi, Alan Kay, Jaron Lanier, Richard Gabriel and Donald Knuth (in Chapter 10). (Marvin Minsky and Bill Joy, whom Cantrill mentions, are each cited only in passing.) Anyone might view some number of these figures skeptically — I do, too — but cranks, all?

Anyway, these are side issues. It’s later in Cantrill’s argument that we get to the heart of a real disagreement that’s worth digging into. Despite the evident pragmatism that’s on display in his Google talk, halfway through his post Cantrill reveals himself as a software idealist.

…software — unlike everything else that we build — can achieve a timeless and absolute perfection….software’s ability to achieve perfection is indeed striking, for it makes software more like math than like traditional engineering domains.

And once software has achieved this kind of perfection, he continues, it “sediments into the information infrastructure,” becoming a lower-level abstraction for the next generation to build on.

Ahh! Now we’re onto something substantial. Cantrill’s view that aspects of software can achieve this state of “perfection” is tantalizing but, to me, plain wrong. He makes a big point of insisting that he’s not talking simply about “software that’s very, very good” or “software that’s dependable” or even “software that’s nearly flawless”; he really means absolutely perfect. To me, this insistence — which is not at all incidental; it’s the core of his disagreement with me — is perplexing.

Certainly, the process by which some complex breakthrough becomes a new foundational abstraction layer in software is real and vital; it’s how the field advances. (Dreaming in Code, I think, pays ample attention to this process, and at least tries to make it accessible to everyday readers.) But are these advances matters of perfection? Can they be created and then left to run themselves “in perpetuity” (Cantrill’s phrase)?

On its own, an algorithm is indeed pure math, as Cantrill says. But working software instantiates algorithms in functional systems. And those systems are not static. (Nor are the hardware foundations they rest upon — and even though we’re addressing software, not hardware, it’s unrealistic to assume that you will never need to alter your software to account for hardware changes.) Things change all the time. This mutability of the environment is not incidental; it’s an unavoidable and essential aspect of any piece of software’s existence. (We sometimes jokingly refer to this as “software rot.”) Some of those changes break the software — if not over a matter of weeks or months, then over years and decades. It is then, I think, no longer accurate to call it “perfect” — unless you want to take the pedantic position that the software itself remains “perfect,” it’s the accursed world-around-the-software that’s broken!

This, of course, is a version of Joel Spolsky’s Law of Leaky Abstractions argument, which I present at length in Dreaming in Code: “perfect” abstractions that you can ignore are wonderful until something happens that makes it impossible to keep ignoring them. Such things happen with predictable regularity in the software world I know. I don’t know how you can discuss software as if this issue does not lie at its heart.

The strange thing about this disagreement is that, as far as I can tell, Cantrill is — like the engineers I know a lot better than I know him — a hands-on kind of guy. And DTrace, the project he’s known for, is by most accounts a highly useful tool for diagnosing the myriad imperfections of real-world software systems — for navigating those trips down the ladder of abstraction that software developers must keep making.

All of which leaves me scratching my head, wondering where the world is in which Cantrill has found software of “absolute perfection,” and which programs it is that have achieved such a pristine state, and how they — unlike all other programs in existence — escape the creep of software rot.
[tags]bryan cantrill, software, software development, dreaming in code[/tags]

Filed Under: Dreaming in Code, Software

Norman Mailer, 1923-2007

November 10, 2007 by Scott Rosenberg

I got my start in journalism-for-pay writing book reviews for the Village Voice and the Boston Phoenix. My editor at the Phoenix in those days (the early ’80s), Kit Rachlis, believed in giving young writers challenges — bless him. So one day I found myself staring at the forbidding 700-page mass of a new book by Norman Mailer titled Ancient Evenings — the celebrated novelist’s self-declared bid for literary immortality.

Somebody had to review it, and it really helped if that somebody didn’t have a day job.

The novel, set in ancient Egypt, is widely considered unreadable today — typically, by people who have not read it. And, to be honest, I don’t know if I’d have finished it had I not been paid to do so. But I was glad I did. The book, for all its mad excess, constituted a remarkable act of imaginative ambition — and even if Mailer only made good on a fraction of his self-dare, to see if he could get inside the world-view of a distant age, that was…something.

So — after immersing myself in Mailer’s voluminous body of work, reading his best, from The Naked and the Dead to Advertisements for Myself to Armies of the Night to The Executioner’s Song, along with a smattering of his not-best, of which there was plenty — I gave the book one of its few mixed reviews. And one day, in my infrequently visited freelance writer’s mailbox, I found a little note from the author — thanking me, graciously, not for whatever praise I might have offered, but for what must have been my evident effort to approach the book on its own terms.

Now, on the one hand, for Mailer to have sent such a note violated what I, in my morally prescriptive youth, thought of as the impenetrable barricade that must always separate Artist from Critic. On the other hand, I was an aspiring little nobody just out of college, and he was Norman Mailer. I let pride win out over any sense of impropriety, and took the note as a rare sign of encouragement from the universe that my decision to set forth on the road of a writing career had not been entirely foolhardy.

At the moment of Mailer’s passing it’s worth remembering how much of his work centered on the moment of death. Ancient Evenings begins at the moment immediately following its narrator’s death, and its story is told from the perspective of this post-mortem residue, a “Ka” in the Egyptian nomenclature. “In the disorienting lightning flash of the book’s first page,” I wrote back in 1983, “the reader has no idea who the narrator is, but the narrator’s worse off — he has no idea what he is.”

Ancient Evenings also turns out to be a sequel to Mailer’s last big book. Another death-haunted story, full of musings about reincarnation, The Executioner’s Song built up slowly to full volume at Gary Gilmore’s execution, then dropped into silence. Ancient Evenings picks up at the very next moment. Although the two books’ material couldn’t be more different (one is a collation of the mundane, the other a heap of the spectacular), they’re both written in blunt, hard monosyllables that show the author off more humbly and impressively than the assertive baroque extravagance he used to employ. The sentences of Ancient Evenings are like blocks of stone heaved laboriously into place, and if the strain occasionally shows, the sight almost always elicits awe.

Here’s to Mailer’s Ka, wherever it may be.
[tags]norman mailer, criticism, ancient evenings, obituaries[/tags]

Filed Under: Culture, Personal

Does “Dreaming in Code” suck?

November 2, 2007 by Scott Rosenberg

Early in my online career — this goes back to around 1990 — I learned a basic principle about off-the-cuff criticism online: No flame flares in a vacuum. In other words, don’t be too glib with your put-downs — because before you know it, the person you’re putting down will find your comment and call you on it.

I recalled this today when I encountered (thanks to a mention on Sumana’s blog) a drive-by attack on Dreaming in Code. In the opening couple of minutes of a talk at Google this past summer, it seems, a Sun engineer named Bryan Cantrill declared, with some vociferousness, that my book — I quote — “sucks.” This judgment is now preserved in the perpetuity that is Google Video.

Now, Cantrill is one of the creators of DTrace, a popular, award-winning and innovative dynamic-tracing tool for Solaris, and my hat is instantly removed to anyone who bears responsibility for a successful piece of software. I am also not particularly shocked to hear that a smart programmer didn’t like my book; he’s neither the first nor the last in that group.

What’s just plain puzzling is exactly what Cantrill has to say in his handful of complaints about Dreaming in Code. Because every point he makes in explaining the basis for the book’s suckiness turns out to be a point that I have made at length in the book itself and in my talks this year about the book — including at Google, several months before Cantrill’s appearance there. Of course I’m not suggesting that he borrowed from me — he almost certainly hadn’t heard my presentation! But I am puzzled how he could so completely have missed my argument, and misrepresented my position, when it seems to be so close to his own.

As best I can make out, Cantrill believes that Dreaming in Code fails to acknowledge that software is uniquely different from other creative endeavors because (a) it’s not a physical entity; (b) we can’t see it; (c) it’s really an abstraction. These factors cause all the analogies that we draw to things like building bridges to break down. Cantrill describes himself as a “finisher” of books and I’ll take his word, but I’m flummoxed how anyone who has finished the book can knock it for failing to understand or express this view of software.

The critique gets sketchy from here on in; Cantrill draws some sort of analogy between Dreaming in Code and Apocalypse Now (a comparison I’ll gladly accept — it’s a reference I make in the book myself) and suggests that I got “hoodwinked” by “every long-known crank in software” (the lone “crank” cited is Alan Kay).

It’s true that the final section of the book surveys both the nuts-and-bolts methodologies that try to alleviate software’s practical difficulties and a whole gallery of software philosophers from both the mainstream and the fringes — people like Kay, Charles Simonyi, Donald Knuth and Jaron Lanier. If even discussing these people’s ideas constitutes “hoodwinking” I guess I’m guilty.

From here Cantrill wanders into his own case for software’s uniqueness, which as far as I can tell is nearly identical to the one I make in Dreaming in Code. “All the thinking around software engineering has come from the gentlemanly pursuit of civil engineering,” Cantrill says. “That’s not the way software is built.” Indeed.

So I’m not sure what the complaint is. Maybe analogies are so odious to Cantrill that he feels they should not even be discussed, even if the discussion is intended to expose their limitations. Maybe the notion of software’s uniqueness and its intractability to old-fashioned physical-world engineering principles seems so obvious to Cantrill that he is appalled anyone would even bother to explore it in a book. But there’s still an enormous amount of attention and money being applied to the effort to transform software development into a form of reliable engineering. I found thoughtful arguments on several different sides of the matter and thought it was worth the ink, although my own conclusion — that software is likely to remain “hard” and not become an easily predictable undertaking — is pretty clear.

Anyway: Go ahead and tell me my book sucks — I can take it! But don’t tell me that it sucks because it fails to acknowledge an argument that actually forms its very heart. Say that and, well, I’m just not going to be able to resist a retort.
[tags]dreaming in code, bryan cantrill, software engineering[/tags]

Filed Under: Dreaming in Code, Software

Marshall McLuhan and the Web: Hot, cold, or ?

November 1, 2007 by Scott Rosenberg

Today Nick Carr — whose new book, The Big Switch, comes out in January — has an interesting piece about McLuhan and today’s Web. Although Wired hoisted the Canadian media theorist into the digital era as its “patron saint” (the company’s book imprint even republished a couple of his collaborations with Quentin Fiore), it’s always been difficult to figure out how, exactly, to apply McLuhan’s theories to the Web. I took a stab at it in 1995 (an effort to which Carr kindly links), suggesting that the Web was neither a “hot” medium nor a “cold” one but rather some weird new lukewarm hybrid:

It remains almost exclusively a medium that transmits and reproduces vast quantities of text at high speeds. McLuhan interpreted the evolution of writing from ideograms and stone tablets to alphabetic characters and print reproduction as a “hotting up” “to repeatable print intensity.” By that standard, the Net is boiling.

On the other hand, its functional characteristics match those McLuhan identified as cool. There’s no question that the Internet is among the most participatory media ever invented, like the cool telephone. And its cultural patterns — with its oral-tradition-style transmission of myth and its collective anarchy — match those of McLuhan’s tribal global village.

…McLuhan said that all media are tranquilizers, but these hot-and-cold media have an especially potent numbing effect: They seduce us into lengthy engagement, offer us a feeling of empowerment and then glut our senses till we become indifferent.

My view of the Web has probably grown more positive since then; my own experience over the past 12 years has been one of growing engagement rather than creeping indifference. I think I was too pessimistic about the downside of glut.

But I think McLuhan would probably have shared that pessimism. He’s usually remembered in his high-priest-of-the-’60s mode, as a critic all too willing to dance on the grave of print. What I found when I dug deeper into McLuhan’s writings in the course of reviewing his biography for the Washington Post in 1997 (that piece is no longer available online so I’ve posted it here) was considerably more complex. He was, it turned out, most decidedly a lover of print himself.

In a 1959 letter, decades before the popularization of the Internet, he predicted: “When the globe becomes a single electronic computer, with all its languages and cultures recorded on a single tribal drum, the fixed point of view of print culture becomes irrelevant and impossible, no matter how precious.”

Ultimately, McLuhan’s perspective remains valuable more as a provocation to critical thought than as a fully worked out critical framework. He overloaded so many meanings on terms like “hot” and “cold” media that they could come to mean whatever you wanted them to mean. But there remains lasting value in McLuhan’s grand challenge to us — that we step out of the media bath in order to understand its effects on our organisms. What we most remember is his descriptive writing that mapped the impact of new media forms. We forget his prescriptive goal, of “immunizing” us from the worst influences of those media.

Carr reminds us of this in recalling McLuhan’s prophetic warning about the manipulative power of corporate media: “Once we have surrendered our senses and nervous systems to the private manipulation of those who would try to benefit by taking a lease on our eyes and ears and nerves, we don’t really have any rights left.”
[tags]marshall mcluhan, media studies, nicholas carr[/tags]

Filed Under: Books, Culture, Media, Net Culture

Code Reads #13: “The Inevitable Pain of Software Development”

October 31, 2007 by Scott Rosenberg

This is the thirteenth edition of Code Reads, a series of discussions of some of the central essays, documents and texts in the history of software. You can go straight to the comments and post something if you like. Here’s the full Code Reads archive.

This month’s paper, Daniel Berry’s “The Inevitable Pain of Software Development, Including of Extreme Programming, Caused by Requirements Volatility,” is a sort of update and latter-day restatement of Frederick Brooks’s classic “No Silver Bullet” argument — that the traits of software development work that make it difficult are inherent in the enterprise and extremely unlikely to be vanquished by some breakthrough innovation.

Berry is, as he admits, not the first to locate the source of software’s essential difficulty in “requirements volatility” — unpredictable fluctuations in the list of things that the software being built is expected to be able to do (including variations in user behavior scenarios, data types and all the other factors that a working piece of software must take into account). Read any development manual, listen in on any software team’s gripe session and you will hear curses directed at “changing requirements.”

Every new approach to improving the software development process includes a proposed method for taming this beast. These methods all fail, Berry maintains, leaving software development just as much of a “painful” exercise as it was before their application.

In each case, Berry locates this failure in some aspect of or practice dictated by a particular method that programmers find to be too much of a pain to actually perform.

Every time a new method that is intended to be a silver bullet is introduced, it does make many parts of the accidents easier. However, as soon as the method needs to deal with the essence or something affecting or affected by the essence, suddenly one part of the method becomes painful, distasteful, and difficult, so much so that this part of the method gets postponed, avoided and skipped….

Each method, if followed religiously, works… However, each method has a catch, a fatal flaw, at least one step that is a real pain to do, that people put off. People put off this painful step in their haste to get the software done and shipped out or to do more interesting things, like write more new code.

So, for instance, the method of “requirements engineering” (exhaustively “anticipate all possible requirements and contingencies” before coding) offers many benefits, but “people seem to find haggling over requirements a royal pain.” Also, it demands that “people discover requirements by clairvoyance rather than by prototyping.”

Similarly, Extreme Programming (XP) depends on programmers writing test cases first. That’s a step that in itself seems to be painful for many developers. When requirements change, XP calls for frequent refactoring of existing code. “Refactoring itself is painful,” Berry notes. “Furthermore, it may mean throwing out perfectly good code whose only fault is that it no longer matches the architecture, something that is very painful to the authors of the code that is changed. Consequently, in the rush to get the next release out on time or early, refactoring is postponed and postponed, frequently to the point that it gets harder and harder.”

Berry goes right down the list and confirms that the pain he diagnoses is a condition universal in the field.

The situation with software engineering methods is not unlike that stubborn chest of drawers in the old slapstick movies; a shlimazel pushes in one drawer and out pops another one, usually right smack dab on the poor shlimazel’s knees or shins. If you find a new method that eliminates an old method’s pain, the new method will be found to have its own source of pain.

Berry’s paper concludes with an alternative version of the principle that in Dreaming in Code I dubbed, tongue-in-cheek, Rosenberg’s Law:

To the extent that we can know a domain so well that production of software for it becomes almost rote, as for compiler production these days, we can go the engineering route for that domain, to make building software for it as systematic as building a bridge or a building. However, for any new problem, where the excitement of innovation is, there is no hope of avoiding relentless change as we learn about the domain, the need for artistry, and the pain.

Berry writes about the creation of software as much from the vantage of psychology as from that of engineering, and that gives his observations a dimension of bracing realism. In “The Inevitable Pain of Software Development” I found a willingness to examine the actual behavior of working programmers that’s rare in the software-methodology literature.

Too many authors are all too eager to pronounce what developers should do without considering the odds that any particular developer actually will do these things. Berry is a realist, and he keeps asking us to consider the cascade of consequences that flows from each method’s weak spots.

His case against “pain” is not a naive attempt to take a process that’s fundamentally difficult and somehow conjure the hardness right out of it. Instead, he asks us to note carefully the location of the “pain points” in any particular approach to software creation — because, given human nature, these are its most likely points of failure.
[tags]code reads, software development, software methodologies, daniel berry[/tags]

Filed Under: Code Reads, Dreaming in Code, Software

Five-letter word, begins with “s”

October 24, 2007 by Scott Rosenberg

For as long as I can remember, people have gotten Salon and Slate confused. Maybe it’s that they are somewhat similar sites — at least if you compare both to, say, eBay or Flickr — and both have five-letter names beginning with “S.” I don’t know why. But here it is, 12 years after I joined Salon’s merry startup crew and several months after I finally left the place, and I’m still finding myself referred to as “Slate’s Scott Rosenberg” (this by tech blogger Robert Scoble a little while back, since corrected) or “Slate co-founder Scott Rosenberg” (this by Cyberjournalist.net, published by the Online News Association, which, you know, really ought to know better).

I’m always grateful for the links, but before Google starts to associate my name too closely with that of a publication I’ve never been connected to, I think it’s time to stop this train. So, once and for all, let me provide a simple disambiguation guide.

  • Salon is a fine publication that began publishing in November 1995. Slate is another fine publication that started about eight months later. Both were pioneers of the “webzine” format.
  • Salon has always been an independent company. Slate was funded first by Microsoft and is now owned by the Washington Post.
  • Salon was edited for many years by David Talbot and is now edited by Joan Walsh. Slate was edited for many years by Michael Kinsley and is now edited by Jacob Weisberg.
  • Salon has plenty of great commentary but prides itself on its independent investigative reporting. Slate occasionally breaks stories but seems more editorially centered on digest-style features like “Today’s Papers” and explanatory commentary.

I wrote a huge number of articles for Salon over the years, edited many more, and for much of that time helped run the publication and manage its site. I have never, that I can recall, written for Slate.

There. I feel better now.
[tags]web magazines, salon, slate[/tags]

Filed Under: Media

Remixing news: A river runs through it

October 22, 2007 by Scott Rosenberg

News organizations spend an extraordinary amount of time and effort deciding what “leads” — what goes on the front page; what goes in the newscast at the top of the hour; what’s important. This is how professional news organizations deploy the minds and time of some of their best-paid and most experienced employees: They sit down at daily meetings and argue this stuff out; sometimes they agonize over it.

In the era of scarce column-inches and broadcast time this made a lot of sense. But that era is fading. With the Web reshuffling how the most avid users of news get their information, editors’ roles are changing — not vanishing, but definitely being challenged.

These thoughts are occasioned by Dave Winer’s new experiments remixing the New York Times. A while back he offered us the Times River — a simple reverse-chronological list of “head-and-deck” links from the newspaper’s RSS feed that is perfect for scanning on mobile devices or just checking in to see what the latest Times stories are. In his latest rethinking of the flow of Times headlines, Winer has built an outline-style interface to the same set of headlines, built around the Times’ own keywords.

These pages are notable for their simplicity. There are no distracting ads, no complex navigational tools, no typographical elegance or design flourishes. It’s just the text and you. A part of me looks at this and thinks, “How crude.” Another part of me looks at it and sees the same spare utility as the original Google home page — and wonders if, a handful of years from now, I’m going to prefer keeping up with my Times this way over continuing to kill trees with my lifelong (but now imperiled) newspaper consumption habit.

Years ago, during the dotcom mania, as Salon’s home page got more and more festooned with stuff that Salon was playing around with to try to increase revenue, a software developer did something similar with our news flow — he “screen-scraped” our headlines and presented them in an ultra-simple list form. (His script still appears to be running but it no longer works properly — Salon’s home page has been redesigned a bunch of times since then.) This was a kind of proto-Salon River. Use of it never spread beyond a tiny handful of geeks. If it had — if hordes of Salon users essentially defected and said they preferred that version of our home page to our own — it would have presented us with a business dilemma.

But I think the real resistance to this new vision for news delivery will be less on the business end (business tends to extract some kind of value anywhere large numbers of people can be congregated) than in the newsroom itself. Because the whole “river of news” approach, like the “newest posts on top” design of all blogs, takes a big bite out of the editor’s job. The reader who looks at Times River and says “this is how I want my news” is a reader who is saying to the Times editors, “Don’t waste all that time figuring out what to tell me you think is important.”

As Winer put it, “They [editors] have a very powerful internal gravity driven by a philosophy that their job is to arrange our thinking.”

I think that there are still plenty of readers who like what editorial judgment adds to the arrangement of the news. Of course, they don’t always agree with it, and many like to argue with it. But they want their quick scans of the news to be ordered by something besides chronology, so they choose a publication to make a deal with, saying, in effect, “I’m giving you my attention and you tell me what you think is important. If I disagree often enough I’ll move on, but in the meantime, tell me what you think matters.”

The real question over the next decade or so will be, how many of those readers are there? Is it the vast majority — which is what most editors believe? Or is it a shrinking tribe of news consumers who grew up under the old dispensation?

Although most professional editors will immediately dismiss the scenario, I think it’s quite possible that the “editors’ cut” of the news will dwindle in importance until we hit some threshold where the majority of users decide they don’t want their thinking “arranged” for them.

At that point, the “river” will roll right across the front page. And some editors may need to find other outlets for their talents.
[tags] future of news, editors, dave winer, river of news, times river, new york times[/tags]

Filed Under: Blogging, Business, Media

A government of men, not laws

October 19, 2007 by Scott Rosenberg

Thoughts occasioned by the confirmation hearings for Michael Mukasey to become the next U.S. attorney general:

Apparently there have been some interesting changes in the whole notion of the constitutional balance of powers since I studied such matters. As most of us learned at some point in our schooling, there are three branches of government established in the U.S. constitution. Congress passes the laws, as defined by Article I. The president executes the laws and handles a bunch of other stuff as defined by Article II. And the supreme court interprets the laws, as defined by Article III. Yes, I’m aware that the whole judicial review thing evolved over time and wasn’t grounded that explicitly in the constitution’s language. On the other hand, it’s served us pretty well for over 200 years, and it has been a keystone of the checks-and-balances system that has proven so resilient over those centuries.

Under the Bush administration we have seen two fundamental assaults on this system. One, embodied in the idea of “signing statements” that the president makes when he signs congressional legislation, proposes that the president is himself equal to the supreme court in his power to review the constitutionality of legislation. According to this notion, the chief executive has the unilateral authority to say, “I don’t think this or that part of this law is constitutional, so I will reserve the right not to enforce or obey it.” He’s not saying, “I think this is an unconstitutional law, so I’m going to challenge it before the supreme court.” He’s saying, “I think this is an unconstitutional law, so I’m going to ignore it.”

The second assault centers on the notion of the “unitary executive.” This theory proposes that the entire executive branch is a sort of “off limits” zone for congress. To the extent that a congressional law or rule constrains the president’s authority over the executive branch in some way, he is free to ignore it, because it’s unconstitutional — and, right, he gets to ignore laws he believes are unconstitutional.

Put these two notions together and you have, I think it’s fair to say, a whole new game in the federal government town. Forget checks and balances, or “government by laws and not men.” Say hello to a new world in which the unitary executive claims supremacy over both the congress (whose laws he can ignore at will and whose powers cannot reach into the executive branch) and the supreme court (whose role as reviewer of the constitutionality of legislation the president is now quite able to assume himself).

Now, it’s true that, as we say here on the Internets, I am not a lawyer. But I’m a citizen. And I have to report that these new ideas about the constitution make me a little concerned for the future of our political system.

I know that we have a vice president who got, let’s just say, peeved that the congress reined in a criminal president back when he was a young man, and who has spent the rest of his life itching to redress that old grievance. But this isn’t a partisan matter. An autocratic view of the chief executive — which is what Bush’s lawyers have propounded, and Mukasey, for all his superior forthrightness compared with the henchman who preceded him, endorsed in his testimony today — is a time-bomb for both parties. Once precedents for unchecked authority are set, who is to say that a Democratic president might not avail himself (or herself) of them?

“Checks and balances” is a big fat cliche, but it’s also a foundation that has supported two centuries and more of American political stability. Long after the pathetic corruptions and petty inhumanities of the Bush administration have receded from view, we’re still going to be trying to patch together a constitution that the Bush/Cheney legal establishment has shredded.
[tags]u.s. constitution, unitary executive, judicial review, michael mukasey[/tags]

Filed Under: Media, Politics

Ecco on Mac, Gibson on books

October 19, 2007 by Scott Rosenberg

I’ve been lying low this week completing a draft of a new book proposal. More on that as we get closer to the finish line. This is the first year I’ve not attended the Web 2.0 conference, but, you know, I need to focus — and I think I wasn’t that eager to hear Rupert Murdoch, anyway.

In the meantime, I’m happy to report that I have successfully managed to get Ecco Pro running on a Mac via Parallels. I actually achieved this goal a decade ago using Virtual PC, but boy was it slow! The Parallels setup, by contrast, is snappy and, so far, foolproof. Thanks to all of you who advised me on this dilemma. Very exciting. (The “coherence” mode of Parallels is remarkable — it puts the Windows taskbar and WinXP program windows on an equal footing on the Mac screen with the OSX stuff, turning your display into a sort of operating-system hermaphrodite.)

As I close in on my next book-project goal, I would also like to draw your attention to this quotation from William Gibson (in a Washington Post interview from last month), musing on the persistence of the book:

It’s the oldest and the first mass medium. And it’s the one that requires the most training to access. Novels, particularly, require serious cultural training. But it’s still the same thing — I make black marks on a white surface and someone else in another location looks at them and interprets them and sees a spaceship or whatever. It’s magic. It’s a magical thing. It’s very old magic, but it’s very thorough. The book is very well worked out, somewhat in the way that the wheel is very well worked out.

[tags]books, william gibson, ecco, ecco pro, parallels[/tags]

Filed Under: Culture, Personal, Technology

Deborah Solomon and real-time quotations

October 15, 2007 by Scott Rosenberg

There’s an interesting dustup in the journalism world about the Q&A column in the Sunday New York Times magazine. A New York Press story about these interviews by Deborah Solomon included complaints from two of her subjects — one of them This American Life’s Ira Glass, himself no interviewing naif — that she misrepresented them and inserted questions in her own voice that she hadn’t actually asked them. Then on Sunday Times “public editor” Clark Hoyt devoted a whole column to the matter.

It always seemed hugely obvious to me that Solomon’s terse, one-page interviews were boiled-down and heavily edited. But I look at these things as an editor with some experience. What this controversy really reveals is the gulf between the reverence most newspaper reporters have for quotation marks and the relatively cavalier stance assumed by many magazine writers and editors. (Yes, of course these are gross generalizations, and the world has plenty of careless newspaper reporters and careful magazine journalists. But the patterns do exist, in my experience.) So it’s no wonder that the flashpoint for confusion here should be in the weekly magazine published by a daily newspaper. Times mag editor Gerald Marzorati told Hoyt, “This is an entertainment, not a newsmaker interview on ‘Meet the Press.'” But it’s also part of the New York Times, and that still carries a set of expectations about the reliability of everything between quote marks.

I got my start in journalism in newspapering, and what I learned was that anything between direct quotation marks ought to be a verbatim quote from your subject. If you took words out, you marked it with an ellipsis. If you were paraphrasing or otherwise changing words, you had to take the quote marks off — you then had an indirect quote.

When I started freelancing and experiencing the wide variety of editing standards at different publications I was appalled to discover that some significant portion of my editors were “fixing up” quotes in various ways. When I complained, they dismissed my objections. It seemed that they believed we had license to improve the statements of the subject for the benefit of the reader.

As with so many aspects of journalism, the rules here vary far more than just from publication to publication: there’s essentially a different set of rules for each journalist you meet.

Over the course of my career I came to the following set of practices: In news coverage of any kind, I stick to the verbatim quotation-mark reverence of my training. In my book, anything between quote marks represented words somebody said. In lengthy Q&A interviews where I’ve taped an interview, and where the purpose of the piece is to talk with a writer or artist about his or her work, I will take the liberty of tightening rambling answers and sharpening both questions and answers. I’ve done a lot of these and never heard an objection. If you’re helping interviewees explain their work or expound their ideas, they’re usually grateful for you to do a little editing, as long as it doesn’t alter the substance of their statements. But if you’re challenging them, it’s best to stick to the tape.

It looks like Solomon got into trouble because she frequently adopts a confrontational stance. That’s part of the appeal of her column, and there’s nothing wrong with it. But if you’re practicing “gotcha” journalism you can’t take liberties with the transcript. It’s inevitable that you’ll get called on it. And in today’s media environment, you’ll get called fast.
[tags]deborah solomon, new york times, quotations, journalism[/tags]

Filed Under: Media
