Wordyard

Hand-forged posts since 2002

Clash of the titanic business-press cliches

December 17, 2007 by Scott Rosenberg

My eight-year-old sons don’t pay much attention to the business pages, but yesterday’s New York Times Sunday Business cover — featuring three cartoon characters in a boxing ring — caught their eyes over breakfast.

“Who’s the big fat guy?”

That, I told them, was supposed to be Steve Ballmer, Microsoft’s CEO. “The one in glasses?” Bill Gates. They’ve heard of him. The third figure, I said, was a poor likeness of Google CEO Eric Schmidt.

As their interest dwindled, I explained the illustration. And it occurred to me what it was about the cover that bugged me. The headline, no joke, was “Clash of the Titans” (omitted from the Web edition, for some reason). And the whole tired frame for the story had been constructed with an eye to the sensibility of eight-year-olds.

It’s the oldest cliche in the business-journalism book: Corporations are led by warriors and market conflicts are military campaigns — “clashes of the titans.” The trouble is, it’s not only infantile, it distorts our understanding of reality.

Are Microsoft and Google in conflict? Of course. They have fundamentally different visions of where computing’s headed — visions that the Times article, by Steve Lohr and Miguel Helft, ably lays out. But it’s not as if they are feudal fiefdoms fighting over some fixed patch of ground. Their conflict will play out as each company builds its next generation of software and services, and the next one after that, and people make choices about what to buy and what to use.

Those choices are the key to the outcome. In a battle, civilians are mostly bystanders or casualties. In the software business, civilians — users — determine who wins.

Remember that the next time you see a business publication trot out the old corporate-battlefield cliches to talk about the software industry. And if you want to know where the software world is headed, watch your nearest eight-, ten- or twelve-year-old — they’ll be making decisions over the next couple of decades that, far more than any punches thrown by Ballmer or Gates or Schmidt, will determine which titans prosper.
[tags]software business, business press[/tags]

Filed Under: Business, Media, Technology

Deep packet inspection and the new ad targeting

December 10, 2007 by Scott Rosenberg

It’s not hard to understand why people got upset with Facebook over “Beacon,” the company’s effort to track what its users do on the Web and auto-transform those actions — like buying products or tickets to a movie — into messages broadcast over a personal network. Who wouldn’t be creeped out, at least sometimes, by this transmutation of private transactions into public statements?

But Facebook is just facing the same pressures all tech companies encounter when they find they have to deliver on sky-high valuations for investors and markets. Facebook, and the people pouring money into it, now claim the company is worth $15 billion. Expect plenty more “monetization” gambits.

I’ll remain wary, but I won’t be surprised. Instead, I’m keeping my eyes on a different, and far more troubling, violation of Web norms: it’s called “deep packet inspection.” That geeky phrase hides a world of potential ill.

All Internet messages travel as packets of data. Packets have headers; they’re like the addresses on envelopes, and service providers’ routing equipment uses the headers to make sure messages get where they’re going. Deep packet inspection (DPI) involves looking at the content of the packet as well — it’s the equivalent of the post office opening your envelope, or the phone company listening to your call. Internet service providers use DPI for security purposes. In the past it’s mostly been discussed as a tool that lets ISPs limit BitTorrent and other peer-to-peer filesharing; it’s also what would enable the various schemes being bandied about for creating “fast lanes” for privileged types of Internet communication. The debate over such schemes is well advanced.
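For the geeks in the audience, here’s a minimal sketch of the distinction in Python. The field offsets follow the standard IPv4 layout, but the function names and the keyword check are mine, purely for illustration:

```python
# Toy illustration of "shallow" vs. "deep" packet inspection.
# Assumes the buffer starts at the IPv4 header (no link-layer framing).
import socket
import struct

def parse_ipv4(packet: bytes):
    """Split a raw IPv4 packet into routing-relevant header fields and its payload."""
    header_len = (packet[0] & 0x0F) * 4    # IHL field, counted in 32-bit words
    src, dst = struct.unpack("!4s4s", packet[12:20])
    header = {
        "src": socket.inet_ntoa(src),      # the "envelope": all a router needs
        "dst": socket.inet_ntoa(dst),      # to deliver the packet
        "protocol": packet[9],
    }
    payload = packet[header_len:]          # the "letter" inside
    return header, payload

def deep_inspect(payload: bytes) -> bool:
    """What a DPI box adds: reading the contents, here via crude keyword matching."""
    return b"cheap pharmaceuticals" in payload.lower()
```

A router can do its job without ever calling anything like deep_inspect; an ad-targeting box has no reason to exist without it.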

But now, it seems, hardware companies have begun producing devices that enable service providers to use DPI to target ads. The Wall Street Journal covered this topic last week here. And that, to me, is just way over the line.

I don’t want my ISP looking at how I use the Internet to target ads to me, period, any more than I want the phone company listening in on my conversations in order to sell me stuff.

I’m sure we’ll hear that the DPI-based targeting schemes are a Big! New! Benefit! in providing us with more relevant ads. But I’d rather be the steward of my own personal information than let a service provider make decisions for me. We’ll also hear that privacy-minded users should just find a service provider that suits them. But how can we make an informed choice about service providers unless they are forthright about telling us exactly what they’re doing with DPI, in words everyone can understand? In many communities, high-speed Net service is a monopoly, anyway.

Then we’ll hear that this is no different from the way Google’s Gmail scans your messages to target text ads to you. But Gmail has tons of competition. And Google’s accumulation of personal data has begun to raise privacy concerns as well — so saying “Google does it too” doesn’t exactly provide full ethical cover.

This issue sits at the heart of the Net neutrality debate, and it comes at us in a form that is more easily understandable to the everyday user than its previous manifestations. “Packet inspection” may be unintelligible to non-geeks, but anyone can understand why you don’t want the post office opening your mail.
[tags]deep packet inspection, net neutrality, ISPs, targeted advertising[/tags]

Filed Under: Business, Technology

Becoming a cranky geek

December 5, 2007 by Scott Rosenberg

Back in the dotcom era I used to appear occasionally on ZDTV’s “Silicon Spin” and chew on the tech headlines with John Dvorak and other guests. Dvorak is doing pretty much the same thing once more, in somewhat less lavish circumstances but with a somewhat more honest name for the show — Cranky Geeks.

I joined the panel today for a lively discussion about Facebook’s Beacon ad-policy brouhaha; the mysterious firing of a GameSpot editor, apparently for panning an advertiser’s game; Google’s entry into the wireless spectrum auction; AMD’s CEO bad-mouthing Intel (which really doesn’t qualify as news, does it?); and more.

You can stream or download the Cranky Geeks episode from this page.
[tags]john dvorak, cranky geeks[/tags]

Filed Under: Media, Personal, Technology

WordPress footer follies

November 30, 2007 by Scott Rosenberg

I was all prepared to post a backlog of interesting stuff today when it came to my attention, thanks to alerts from Reinhard Handwerker and Vikram Thakur of Symantec, that some strange spammy stuff was happening on this site. I ended up spending the day rooting out bot droppings from my WordPress installation.

Yes, it’s true, I’d been lax about upgrading to the latest version. I was only a little behind, but perhaps that was enough. In any case, here are some details, which might be useful to others who fall victim to what I think of as the “wordpress footer exploit.” (I’ve already gotten email from a couple of other users who are battling the same problem. Al Gore, apparently, went through something similar.)

Skip the rest of this unless you’re a WordPress user in trouble looking for help!

Here are the gory details in my case. No doubt other cases will differ. I don’t have a clear sense of the starting point for the exploit — presumably some little chink in the WordPress armor that I can only hope is no longer open in the current version.

My HTML source revealed a long list of spammy links in the WordPress footer — hidden from view but presumably accessible to the Googlebot. The first step in defeating them was to remove the PHP call to the wp_footer function from the footer template. (If you need that function for other plugins or uses, you can add it back in once your code is cleaned up.)

That alone isn’t enough, alas. I also found two or three lines of code inserted into the main index.php file at the top level of the blog. The code that kept reinserting the spammy links into the footer even after they’d been deleted was located in a few lines added to the default-filters file in the wp-includes directory. Then I found that two entirely new files had been added to wp-includes: one called “class-mail” and the other bearing the deceptively innocuous name “apache.php,” which was a motherlode of mischief. (Thank you, though, oh hackers, for labeling your crud with ASCII art of a spider — when you’re scanning dozens of files, it’s really helpful to know that the malicious code comes with its very own Dark Mark.) “Classes.php” looked like it had been touched, too, based on the mod date; I replaced it with a clean version.
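If you’re facing a similar cleanup, a rough script along these lines can at least surface files worth a manual look. It’s just an automation of the hunt I did by hand (recently modified PHP files, plus the obfuscation calls this kind of injected code leans on); it will flag some legitimate files too, so treat its output as a to-inspect list rather than a verdict:

```python
# Flag WordPress PHP files that were modified recently or that contain the
# obfuscation calls injected spam code tends to rely on.
import os
import re
import time

SUSPICIOUS = re.compile(rb"base64_decode|eval\s*\(|gzinflate|str_rot13")
RECENT_DAYS = 30

def scan(wp_root: str) -> None:
    cutoff = time.time() - RECENT_DAYS * 86400
    for dirpath, _dirs, files in os.walk(wp_root):
        for name in files:
            if not name.endswith(".php"):
                continue
            path = os.path.join(dirpath, name)
            reasons = []
            if os.path.getmtime(path) > cutoff:
                reasons.append("recently modified")
            with open(path, "rb") as f:
                if SUSPICIOUS.search(f.read()):
                    reasons.append("suspicious PHP calls")
            if reasons:
                print(f"{path}: {', '.join(reasons)}")

scan("/path/to/your/wordpress")   # point this at your install's root directory
```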

I killed all this crud and succeeded in removing the spammy links, but I still had a problem: a bunch of pages apparently being served from my domain that did nothing but hawk, you know, those drugs spammers love. They weren’t my content, of course, but they’d somehow made their way into my WordPress installation — and they were being linked to from other compromised WordPress sites. The ways of the botnets are devious indeed! I couldn’t figure out exactly where the infection’s root lay, but — having removed all the malicious code I could find and then changed all my passwords — I overwrote the installation with a clean download of the WordPress code, and that appeared to do the trick.

If you suspect your site is compromised, I recommend proceeding in the following order: first root out the bad code, then change your passwords. If you change your passwords while your site is still compromised, you risk having your new passwords exposed via exactly the same route your old ones were — assuming they were exposed at all (I don’t know whether mine were, but when you start finding bad code in your directories, it’s time to change your passwords anyway).

May you never need this information! But if you do need it, may this be of some use to you.
[tags]wordpress, spam, bots, exploits[/tags]

Filed Under: Blogging, Personal, Technology

Kapor’s early bet on the Net

November 16, 2007 by Scott Rosenberg

This season, Mitch Kapor is delivering a trilogy of lectures on “Disruptive Innovations I Have Known and Loved” at the UC Berkeley School of Information. I missed number one, which covered Kapor’s role in the early years of the PC (podcast audio is here). Wednesday evening I made it to the second lecture, which focused on the rise of the Internet, and particularly the early, pre-Web Net era — the time when the Internet was considered a hopelessly geeky backwater for Unix heads and the golden road to the future lay with outfits like Prodigy and Compuserve and AOL. (The third lecture, on virtual worlds, is on Nov. 28.)

I first heard Kapor speak on this topic in the summer of 1993, at the Digital World conference in Beverly Hills, where, amid a throng of cable executives and Hollywood honchos and telco bureaucrats, he was the only speaker to make the then-fringe-y claim that the Internet offered a better model for the networked future than the “Information Highway” then being touted as an inevitability. His concern — one that resonated with me at the time — was that we try to create something better than “500 channels with the same crap that’s now on 50,” some system that would not only be open to corporate entertainment and commerce but would offer “a migration path for the weirdos and the outsiders to get into the system, mature and blossom.”

Kapor’s espousal of the Internet in those days was part of a wider activist portfolio; he’d cofounded the Electronic Frontier Foundation only a few years before. But it also stands as one of the more accurate acts of long-shot prophecy in technology history. At his talk Wednesday, Kapor looked back on that time and filled in some of the details of his own role.

He’d joined the Well early on, in the late ’80s (a year or two before I did), and “lost the next two weeks of my life” absorbed in the online conversation. A bit later the Well gave its members Internet access, making it one of the few ways members of the general public could connect to that network. Around then the National Science Foundation began an aggressive project to open the Internet up to the public and to private businesses. In 1992 Kapor was spending a lot of time in Washington doing EFF work and got to know the founders of UUNet, one of the first firms to resell Internet connections to other companies.

“Why,” Kapor asked, “wasn’t this an obvious investment?” He tried to interest John Doerr at Kleiner Perkins, but Doerr “wouldn’t take the meeting.” Kapor ended up putting some of his own money into the company. He provided no details, but it must have been a lucrative move: UUNet went public in 1995, shortly before Netscape, and was gobbled up in an accelerating series of acquisitions that made it part of Worldcom, in the days when people thought Worldcom was taking over the known universe. (Today what’s left of Worldcom — after a storied detour through the courts and various name changes — is part of Verizon. Full timeline here.)

My recollection of those days — when I was a recent immigrant to technology journalism from the arts — was that, much as I rooted for the Internet-style future as a healthier one for our culture, it was awfully hard to see how anyone was likely to make money via such a system. Kapor said he looked at the open network’s advantage in generating innovation and encouraging participation and concluded, “I think this is the one that’s going to win.” He was right.

It’s incredibly useful to keep that era in mind today, I think, because it provides not just a heartening saga of the triumph of free expression and open participation but also a clear case in which those ideals turned out to be the more practical course.

The Internet’s victory over the services we now derisively dismiss as “walled gardens” was an instance, within recent memory, when the idealists weren’t hopelessly outgunned by the cynics — when, in fact, the idealists turned out to be the realists, and the cynics took a bath. That’s worth keeping in mind as today’s tech industry — powered by Google’s success and enthralled by innovators like Facebook — races through yet another cycle of debate over what “open” really means.
[tags]mitch kapor, uunet, lectures, berkeley school of information, internet history[/tags]

Filed Under: Business, Technology

Ecco on Mac, Gibson on books

October 19, 2007 by Scott Rosenberg

I’ve been lying low this week, completing a draft of a new book proposal. More on that as we get closer to the finish line. This is the first year I haven’t attended the Web 2.0 conference, but, you know, I need to focus — and I don’t think I was that eager to hear Rupert Murdoch, anyway.

In the meantime, I’m happy to report that I have successfully managed to get Ecco Pro running on a Mac via Parallels. I actually achieved this goal a decade ago using Virtual PC, but boy was it slow! The Parallels setup, by contrast, is snappy and, so far, foolproof. Thanks to all of you who advised me on this dilemma. Very exciting. (The “coherence” mode of Parallels is remarkable — it puts the Windows taskbar and WinXP program windows on an equal footing with the OS X stuff on the Mac screen, turning your display into a sort of operating-system hermaphrodite.)

As I close in on my next book-project goal, I would also like to draw your attention to this quotation from William Gibson (in a Washington Post interview from last month), musing on the persistence of the book:

It’s the oldest and the first mass medium. And it’s the one that requires the most training to access. Novels, particularly, require serious cultural training. But it’s still the same thing — I make black marks on a white surface and someone else in another location looks at them and interprets them and sees a spaceship or whatever. It’s magic. It’s a magical thing. It’s very old magic, but it’s very thorough. The book is very well worked out, somewhat in the way that the wheel is very well worked out.

[tags]books, william gibson, ecco, ecco pro, parallels[/tags]

Filed Under: Culture, Personal, Technology

Terror of tinyurl

October 5, 2007 by Scott Rosenberg

From the earliest days of the Web to the present, there’s been a fundamental split between people who get the value of “human-readable URLs” and people who don’t. A human-readable URL is a Web address that tells you a lot of useful information about the page it represents. For instance, Salon URLs always tell you the date an article was posted, the section of the site the article appeared in, and a few words describing the subject matter of the article. By comparison, the typical URL at, say, CNET, looks like this: http://reviews.cnet.com/4520-10895_7-6782817-1.html. It is, essentially, human-unreadable.

In the old days, writers and editors who actually knew and used HTML always appreciated a good human-readable URL; and typically, for the ugly gibberish URLs, we had to thank (some) software architects and (some) publication managers who’d never hand-coded a link themselves. At Salon, we editors knew we’d be typing (and proofing) a zillion of those URLs ourselves; we insisted on something we could work with. (Our developers “got it” too.)

The cause of human-readable URLs got a great shot in the arm when sites began trying to optimize themselves for Google, because Google gives a little extra weight to text hints in URLs. So a lot of sites (like the New York Times) that had a history of human-unfriendly page addresses began to do better.

Today, though, we’re taking a step backwards, or at least sideways, in the cause of human readability, thanks to the growing popularity of the “tinyurl.”

When the tinyurl first crossed my radar I understood it to be a convenient way to tame unmanageably long Web addresses. (The Tinyurl site focuses in particular on the way long Web addresses break in email messages.)

That’s all fine. But the tinyurl giveth and the tinyurl taketh away. When you encode a Web address as a tinyurl you’re hiding its target. Normally, when I read an article on the Web that has a link, I’ll hover my cursor over the link to see where it points. Even on a site with human-unfriendly URLs like CNET’s, at least I can see that the link points to CNET.

With a tinyurl, I know nothing about the link except what the author chose to say about it. I can’t tell if it’s a reference to an article I’ve already read. If I want to find out, I have no choice but to click.
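Well, almost no choice: if you’re willing to do a little work, you can ask the shortener where a link points before you visit it. Here’s a quick Python sketch that follows the redirect and reports the final destination (the short link in the example is made up):

```python
# Peek at where a shortened URL actually points without loading the page.
import urllib.request

def resolve(short_url: str) -> str:
    """Follow redirects (via a HEAD request) and return the final destination URL."""
    req = urllib.request.Request(short_url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        return resp.geturl()      # urllib follows the 3xx redirects for us

print(resolve("http://tinyurl.com/example"))   # hypothetical short link
```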

My sense is that tinyurls have grown in popularity with the rise of Twitter (where the strict character limit on messages means you don’t want to fill up a whole message with a URL), as well as the growing use of mobile devices for posting to the Web. These are perfectly understandable reasons. But still, each time I see a tinyurl I think: there goes another tiny piece of the Web’s transparency.
[tags]tinyurl, urls, human-readable urls, web usability[/tags]

Filed Under: Salon, Technology

Moore’s Law, once more with feeling

September 24, 2007 by Scott Rosenberg

Jeff Jarvis reminds us that Moore’s Law is not: “Chips double in speed every 18 months.” Gordon Moore first predicted that the power of microprocessors (as measured by the number of transistors you could cram into a particular space on a chip) would double once every year; later he revised it to once every two years. Somehow — most likely thanks to careless journalism — this has become set in stone in the popular imagination as an every-18-month prediction about chip speed.
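The gap isn’t trivial, either. A quick back-of-the-envelope calculation (mine, not Moore’s) shows how far the two doubling periods diverge over a single decade:

```python
# How much the doubling period matters over ten years.
def growth(years: float, doubling_period: float) -> float:
    """Factor by which the quantity multiplies after `years`."""
    return 2 ** (years / doubling_period)

print(growth(10, 2.0))   # ~32x:  doubling every two years (Moore's revision)
print(growth(10, 1.5))   # ~102x: doubling every 18 months (the popular misquote)
```

Over ten years the 18-month misquote predicts more than three times the growth of the law Moore actually stated.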

Jeff asks:

So I raise again the question of how we can better map content and corrections. How does Moore assure there is a definitive statement of his law? How do we know it comes from him? Once it’s acknowledged as correct, how do we notify those who got it wrong so they can correct it and start spreading the right meme? Truth is a game of whack-a-mole.

I’ve been playing that game for a decade. Here’s a Salon column from October 1997 that addresses it. Here’s a post from just this past spring.
Here are two pointers to good reference information on Moore’s Law: one from Greg Papadopoulos at Sun and the other from ExtremeTech.

If we all keep repeatedly linking to the good information maybe we can demonstrate that Gresham’s Law does not apply to information, and that good info can drive out bad.

But, you know, I won’t hold my breath.
[tags]jeff jarvis, moore’s law, gordon moore[/tags]

Filed Under: Science, Technology

Computer reuse center needs help

September 16, 2007 by Scott Rosenberg

A brief note to express my support for the Alameda County Computer Resource Center, an outfit not far from my home that has done creative work over the years in restoring and finding new homes for donated or broken electronic equipment that would otherwise be headed to landfill. I’ve donated my share of stuff to the ACCRC over the years. Now it seems that the center has been targeted by a government inspector for technical infractions of the regulations regarding recycling centers.

I’m no expert in that area of the law. But I know a good organization when I see it, and ACCRC is such an outfit. If there are issues or problems with how ACCRC does its thing, the government should be helping it achieve compliance, not threatening to shut it down.

Dale Dougherty of O’Reilly has the scoop. You can also read ACCRC founder James Burgett’s version of the saga. I’ve already written to the state agency involved; Burgett’s blog has more info.
[tags]recycling, reuse, alameda county computer resource center[/tags]

Filed Under: Business, Technology

Quicktime: we own your desktop

September 11, 2007 by Scott Rosenberg

Apologies for slow blogging. Been working in parallel on a number of important tracks. Some interesting stuff coming up shortly. In the meantime, I will trouble you with this rant on a trivial annoyance.

If you use a Windows computer for any period of time, your system tray — the little box next to your clock on your task bar — will get clogged up with a million and one icons you don’t need or care about. The tray is useful for stuff like “safely remove hardware”, but it’s stupid as a location for applications. That’s what the “quick launch” icons are for; that’s what your “start” menu is for; that’s what any number of other utilities are for. When an application pushes its icon into the system tray it’s almost always redundant — a case of corporate overreach.

So when renegade applications insist on putting their icons in the system tray anyway, in a sort of desktop manifest destiny policy, I get peeved, and I try to figure out how to banish them. Usually, though not always, it’s possible.

I hate to think of something as routine and plumbing-like as Quicktime as a renegade application, but in this way, at least, it is. I went through the “get this thing out of my tray” routine with Quicktime a long time ago; I found the “preferences/advanced” dialogue that let me express my wishes; I did so and thought that was the end of it.

Today, suddenly, it turned up again. I soon realized I’d recently allowed Apple’s auto-update to install a new version of Quicktime. OK, I’m willing to turn this sort of routine patching and maintenance over to the software companies — I’d rather they worry about it than have to worry about it myself.

But how rude is it to overwrite a user’s preferences and force the reappearance of an annoying icon that the user has already ordered the program to suppress? Why does Apple do this stuff? To me this is just one tiny but telling instance of the company’s perpetual dance between delivering useful innovations and behaving like a desktop Big Brother that pretends to know better than you do what you really want.

Me? I just want Apple to keep its fingers out of my system tray.
[tags]apple, quicktime, software, interface design[/tags]

Filed Under: Personal, Technology
