March 2, 2023, 10:02 a.m.

Those meddling kids! The Reverse Scooby-Doo theory of tech innovation comes with the excuses baked in

“The largest, most profitable, most powerful companies in the world ought to be judged based on how they are impacting the present, not based on their pitch decks for what the future might someday look like.”

There’s a standard trope that tech evangelists deploy when they talk about the latest fad. It goes something like this:

    1. Technology XYZ is arriving. It will be incredible for everyone. It is basically inevitable.

    2. The only thing that can stop it is regulators and/or incumbent industries. If they are so foolish as to stand in its way, then we won’t be rewarded with the glorious future that I am promising.

We can think of this rhetorical move as a Reverse Scooby-Doo. It’s as though Silicon Valley has assumed the role of a Scooby-Doo villain, but decided that in this case the villain is actually the hero. (“We would’ve gotten away with it, too, if it weren’t for those meddling regulators!”)

The critical point is that their faith in the promise of the technology is balanced against a revulsion towards existing institutions. (The future is bright! Unless they make it dim.) If the future doesn’t turn out as predicted, those meddlers are to blame. It builds a safety valve into their model of the future, rendering all predictions unfalsifiable.

This trope has been around for a long time. I teach it in my history of the digital future class, with examples from the ’90s, ’00s, and ’10s. It’s still with us in the present day. And lately it has gotten intense.

Take a recent tweet thread from Balaji Srinivasan. Srinivasan is a popular venture capitalist with a significant following among the tech class. He’s the author of The Network State, a book that you probably shouldn’t read. (The Network State isn’t a good book, but it is a provocative book, in much the same way that Elon Musk buying Twitter for $44 billion wasn’t a good investment, but it sure does make you think.) He writes a lot about the inevitability of crypto, AI, and everything else in his investment portfolio. (Balaji was a big fan of Clubhouse. Remember Clubhouse?)

Let’s break down what he’s doing in this tweet thread. He’s stringing together two empirical claims to establish the trajectory of an ideological narrative.

    Claim 1: AI means a brilliant doctor on your phone, for free.

    Claim 2: AI directly threatens the income streams of doctors, lawyers, journalists, etc. Their industries will resist attempts at AI-based disruption.

    The ideological narrative: These entrenched interests are going to try to short-circuit the awesome potential of AI. Democrats in government will go along with them. We ought to oppose them today, and blame them for any shortcomings tomorrow.

(That right there? That’s a Reverse Scooby-Doo, folks.)

The first claim is not even a little bit true. AI is not, at present, a “brilliant doctor on your phone, for free.” It is nowhere close to that. There are few stupider use cases for the current crop of generative AI tools than asking them to diagnose non-obvious, potentially critical medical symptoms. Recent attempts to deploy machine learning to aid the COVID response went disastrously awry. There is an established track record here. It’s terrible. AI is, optimistically, decades away from being suitable for such a task. It might never be an appropriate use case.

Balaji is simply projecting, insisting that in the future, AI companies will surely solve those problems. This is a type of magical thinking. And like all real magic, what they are actually attempting is an elaborate misdirection.

Consider: If AI is ever going to become your instant free doctor, the companies developing these tools are going to require a truly massive dataset. They’ll need limitless access to everyone’s medical records.

The implicit plan Srinivasan is pushing looks something like this:

    Step 1: Give up any semblance of medical privacy.
    Step 2: Trust startups not to do anything shady with your medical data.
    Step 3: TKTK, something about Moore’s Law and scientific breakthroughs. We’ll work all that out later.
    Step 4: Profit!

Fake-it-till-you-make-it hasn’t gone great for medical tech startups. The last big one to try was Theranos, and the executives of that company (Elizabeth Holmes and Sunny Balwani) are now serving 11 and 13 years in prison, respectively. So Balaji’s imagined future only has a chance if he can divert attention away from the pragmatic details.

Now there’s actually a version of his second empirical claim that I agree with. (Hell, I made a similar argument a couple months ago.) I expect well-credentialed industries will be much less impacted by developments in generative AI than industries that are mostly made up of freelancers. Lawyers will be fine; digital artists are going to face a world of hurt.

But this isn’t because “they’re the Democrat base.” It’s because well-credentialed industries are positioned to represent and protect their own interests.

Lawyers and doctors are the two obvious examples here. An AI might be able to correctly diagnose your symptoms, but it cannot order medical scans or prescription drugs. Insurers will not reimburse medical procedures on the basis of “ChatGPT said so.” An AI could also write a legal contract for you. Hell, you could probably track down boilerplate contract language with an old-fashioned Google search too. But that will work right up until the moment you need to enforce the contract. That’s when you run the risk of discovering you missed a critical loophole that a savvy lawyer specializing in the relevant field would have caught.

When billionaire tech entrepreneurs like Balaji insist that AI will replace lawyers, let’s keep in mind what they actually mean: AI will replace other people’s lawyers. (Just like Elon Musk doesn’t intend to live on Mars himself. He wants other people to colonize Mars for him.)

It brings me back to William Gibson’s famous dictum: “The future is already here—it’s just not evenly distributed.” I’ve written about this previously, but what has always stood out to me is that the future never becomes evenly distributed. Balaji and Marc Andreessen and Sam Altman aren’t living in or constructing a future that everyone else will eventually get to equally partake in. The uneven distribution is a persistent feature of the landscape, one that helps them to wield power and extract audacious rents.

Srinivasan isn’t so much making empirical claims here as he is telling a morally charged story: Pledge your allegiance to the ideology of Silicon Valley. Demonstrate faith in the Church of Moore’s Law. All will be provided, so long as the critics and the incumbent industries and the regulators stay out of the way. Faith in technological acceleration can never fail; it can only be failed.

And Balaji is hardly alone here. This type of storytelling has a strong pedigree in the archives of digital futures’ past. Tech ideologues have been weaving similar tales for decades.

In 1997, Wired magazine published a bizarre tech-futurist manifesto of sorts, “Push!” The magazine’s editorial team declared that the World Wide Web was about to end. It would be replaced, inevitably, by “push” media — companies like BackWeb and PointCast that pushed news alerts to your desktop computer and would one day reach you on every surface of your home. They envisioned “technology that, say, follows you into the next taxi you ride, gently prodding you to visit the local aquarium, all the while keeping you up-to-date on your favorite basketball team’s game in progress.”

The more closely you read “Push!”, the less sense the argument makes. At one point the authors claim that Wired’s old-fashioned print magazine is somehow both pull media and push media. Never once do they consider whether email might already be a well-established form of push media. The whole thing is kind of a mystery.

But what they lacked in clarity they made up for in certainty. The authors declare that the oncoming Push! future is inevitable, because “Increasingly fat data pipes and increasingly big disposable displays render more of the world habitable for media” and “Advertisers and content sellers are very willing to underwrite this.” The web is surely dead, in other words, because Wired’s editors have seen a demo, they have a sense of some tech trends, and they are confident advertisers will foot the bill.

But then, they include this caveat: “One large uncertainty remains…If governments should be so stupid as to regulate the new networked push media as they have the existing push media, the expansion of media habitat could falter.”

(To summarize: Push! was arriving. It would be incredible for everyone. It was basically inevitable. That is, unless regulators started meddling. In that case, our glorious technological future could be denied.)

At no point did they consider that the technologies they were breathlessly hyping actually sound godawful. Advertising that follows you around a city, that nudges you to visit the aquarium even when you get in a taxi? Big ad-supported disposable displays that you can never turn off or outrun? That sounds…like something that we’d probably want regulators to curtail.

Kevin Kelly offered a surprisingly direct articulation of this perspective in a 2019 Wired cover story, “Welcome to Mirrorworld.” The essay declared that augmented reality would soon arrive. It would be incredible for everyone. It was, basically, inevitable.

Let’s set aside whether AR has much of a future, and what that future will look like. My current answers are “maybe” and “it depends on a lot of factors that are still very unclear.” I plan to write more on the topic once there is more substance to write about. The critical passage appears late in the piece, where Kelly articulates his ideological position on technology and regulation:

    Some people get very upset with the idea that new technologies will create new harms and that we willingly surrender ourselves to these risks when we could adopt the precautionary principle: Don’t permit the new unless it is proven safe. But that principle is unworkable, because the old technologies we are in the process of replacing are even less safe. More than 1 million humans die on the roads each year, but we clamp down on robot drivers when they kill one person. We freak out over the unsavory influence of social media on our politics, while TV’s partisan influence on elections is far, far greater than Facebook’s. The mirrorworld will certainly be subject to this double standard of stricter norms.

As an empirical matter, Kelly’s “Mirrorworld” (a 1-to-1 digital twin of the entire planet and everything inhabiting it) is still a long way off. Like Srinivasan, what Kelly is doing in the piece is projecting — demonstrating faith that the accelerating pace of technological change means we are on the path he envisions.

What Kelly’s writing gives us is a richer taste of the ideological project these tech thinkers are collectively engaged in: Abandon the precautionary principle! Don’t apply the same old rules and regulations to startups and venture capitalists. Existing society has so many shortcomings. The future that technologists are creating will be better for everyone, if we just trust them and stay out of the way!

It’s a Reverse Scooby-Doo narrative. And, viewed in retrospect, it becomes easy to pick out the problems with this approach. Have faith in the inevitability of Push!? Of Mirrorworld? Of autonomous vehicles? Of crypto, or web3, or any of the other flights of fancy that the techno-rich have decided to include in their investment portfolio? Push! didn’t flop because of excessive regulation. The problem with autonomous vehicles is that they don’t work. Trust in crypto’s speculative bonanza turned out to be misplaced for exactly the reasons critics suggested.

My main hope from the years of “techlash” tech coverage is that we might collectively start to take the power of these tech companies seriously and stop treating them like a bunch of scrappy inventors toiling away at their visions of the future they might one day build. Silicon Valley in the ’90s was not the power center that it is today. The largest, most profitable, most powerful companies in the world ought to be judged based on how they are impacting the present, not based on their pitch decks for what the future might someday look like.

What I like about the study of digital futures’ past is the sense of perspective it provides. There’s something almost endearing in seeing the old claims that “the technological future is inevitable, so long as those meddling regulators don’t get in the way!” — applied to technologies that had so very many fundamental flaws. Those were simpler times, offering object lessons that we might learn from today.

It’s much less endearing coming from the present-day tech billionaire class. Balaji Srinivasan either doesn’t understand the existing limits of AI or doesn’t care about the existing limits of AI. He’s rehashing an old set of rhetorical tropes that cast Silicon Valley’s inventors, engineers, and investors as the motive force of history and treat all existing social, economic, and political institutions as interfering villains or obstacles to be overcome. And he’s doing this as part of a political project to stymie regulators and public institutions so the tech sector can get back into the habit of moving fast and breaking things. (It’s 2023. They have broken enough already.)

The thing to keep in mind when you hear Balaji and his peers declaring some version of “the technological future is bright and inevitable…so long as those meddling public institutions don’t get in the way” is that this is just a Reverse Scooby-Doo. That line of thinking originates with the villain, and for good reason. The people who say such things are ultimately up to no good.

David Karpf is an associate professor in the School of Media and Public Affairs at George Washington University. A version of this piece originally appeared in his newsletter The Future, Now and Then.
