AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference by Arvind Narayanan and Sayash Kapoor

Is artificial intelligence conquering the world? The term is, at least. Every screen suddenly has a logo either urging the user to try an A.I. service or signaling that an A.I. product has been delivered unbidden. Shares of Nvidia, which designs A.I. chips, have tripled in value, making the company worth more than tech behemoths Microsoft and Apple.

Before everyone submits to the artificial-intelligence revolution, though, Arvind Narayanan and Sayash Kapoor—a Princeton professor and graduate student, respectively, in computer science—hope the public can get a few things straight.

For starters, they explain in their new book, AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference, that the term “artificial intelligence” doesn’t really cover any one specific thing. Our current discussions, they write, are as if the only word people had for any and all forms of transportation was “vehicle”: “There are furious debates about whether or not vehicles are environmentally friendly, even though no one realizes that one side of the debate is talking about bikes and the other side is talking about trucks.” Which one of these is about to run everyone over, the truck or the bicycle?

Different parts of the A.I. revolution, Narayanan and Kapoor explain, are further along—often insidiously so—than others. What’s in the headlines right now is generative A.I., the suddenly prominent systems that churn out passages of writing or chunks of computer code or uncannily luminous kitsch illustrations from simple prompts. But there’s also predictive A.I., the older and more quietly widespread enterprise of using algorithms to make guesses about people’s educational prospects, hirability, health risks, or criminal tendencies. And there’s a whole body of other processes and applications that may or may not qualify, not a technology so much as a set of tendencies for computers to do things with less direct programming instruction, more autonomy, and greater flexibility.

The “snake oil” of the book’s title refers mainly to the predictive algorithms, which are opaque, often invisibly applied, largely unaccountable—and, the authors argue, assigned to tasks which may be inherently impossible. In trying to forecast social phenomena, they ask, What if “we can’t make accurate predictions because there aren’t enough people on Earth to learn all the patterns that exist?”

The results of trying to force the issue are both absurd and pernicious: a computer advising a hospital, for instance, that patients with asthma were at lower risk than other patients of getting deadly pneumonia, rather than recognizing that the asthmatic patients had fared better because they “were sent straight to the ICU as soon as they arrived.” Another medical prediction system used a patient’s past health-care spending as a stand-in for the state of their health, meaning that “people who were already receiving better healthcare would be more likely to be classified as high risk and continue to receive better care in the future.”

An engineer wiring an early IBM computer, photographed by Berenice Abbott in the late 1950s.

Nevertheless, these systems proliferate, and authorities fob off more and more responsibility onto them. “Using an algorithm to detect unemployment fraud,” Narayanan and Kapoor write, “the U.S. state of Michigan wrongly collected USD 21 million from residents between 2013 and 2015.”

But someone had to build these systems, and someone else had to agree to deploy them. The book tells a compact and lucid story of the rise of the machines: how, in the 1950s, the psychologist Frank Rosenblatt led the creation of the “perceptron,” a computer that could learn to make a simple binary distinction between one shape or letter and another by weighing the different patterns they made on a 400-pixel array, without being programmed to know any information about those patterns in advance; how after fits and starts and dead ends, people figured out how to stack perceptron-like processes to distinguish multiple, ever more complicated kinds of arrangements, from “simple concepts such as edges, textures, and patterns” up through “objects’ parts and, finally, specific objects”; how cramming inconceivably vast quantities of data into those machines and running an incomprehensible number of calculations produced systems that can carry out novel tasks beyond the contents of their training data.
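
The perceptron’s learning rule is simple enough to sketch in a few lines. What follows is a minimal, hypothetical Python illustration, not the book’s code and nothing like Rosenblatt’s actual hardware: a single layer of weights over a small pixel array is nudged toward the right answer each time it misclassifies an example, which is how such a machine comes to tell two kinds of patterns apart without being given any information about them in advance.

```python
# Minimal perceptron sketch (illustrative only; hypothetical data).
# Learns a binary distinction between two classes of patterns laid out
# on a small pixel array, with no built-in knowledge of the shapes.
import random

def train_perceptron(examples, n_pixels=400, epochs=10, lr=0.1):
    """examples: list of (pixels, label) pairs, where pixels is a list of
    0/1 values of length n_pixels and label is 0 or 1."""
    weights = [0.0] * n_pixels
    bias = 0.0
    for _ in range(epochs):
        random.shuffle(examples)
        for pixels, label in examples:
            # Weighted sum of the pattern the input makes on the array.
            activation = sum(w * x for w, x in zip(weights, pixels)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction
            # Nudge each weight toward the correct answer on a mistake.
            for i, x in enumerate(pixels):
                weights[i] += lr * error * x
            bias += lr * error
    return weights, bias

def predict(weights, bias, pixels):
    """Classify a new pattern with the learned weights."""
    return 1 if sum(w * x for w, x in zip(weights, pixels)) + bias > 0 else 0
```

Stacking many such layers, each feeding its output to the next, is the step that eventually produced the deep networks the book goes on to describe.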

All the while, Narayanan and Kapoor keep their eyes firmly on human activity and human agency. The machines are just machines—the authors define the much-hyped concept of “Artificial General Intelligence,” or A.G.I., with bracing non-mysticism as “AI that can perform most or all economically relevant tasks as effectively as any human.” The idea that a merciless, God-like A.G.I. represents an “imminent existential threat,” as Elon Musk, pioneering A.I. researcher Geoffrey Hinton, and countless others have warned, the authors write, “rests on a tower of fallacies,” and claims about the percentage risk of runaway A.I. are “guesses dressed up with the veneer of mathematical precision.”

On the one hand, they write, A.I. development has been much slower and more erratic than the current excitement suggests—what’s really happened is that “consumer-facing AI has finally, after many, many decades, crossed the threshold of usefulness.” On the other hand, recursively self-improving computing has been going on ever since programmers figured out ways around inputting every command as strings of ones and zeros. It’s not that technology threatens to transcend humanity; it’s that humans, armed with technology, have already transcended the limitations on our ancestors’ power to shape or ruin their environment.

They write: “AI has already been making us more powerful, and this will continue as AI capabilities improve. We are the ‘superintelligent’ beings that the bugbear of humanity-ending superintelligence evokes. There is no reason to think that AI acting alone—or in defiance of its creators—will in the future be more capable than people acting with the help of AI.”

A hypothetical machine remorseless enough to destroy the world for efficiency’s sake, they write, would be self-defeating, “an agent that is unfathomably powerful yet lacks an iota of common sense.” To function “in the real world,” they write, “requires common sense, good judgment, the ability to question goals and subgoals, and a refusal to interpret commands literally.”

The truly remorseless and dangerous system, in the authors’ view, is the one humans have been following all along. Open the panel on the rogue machines in the news, and what you’ll find inside is regular human arrogance, carelessness, and above all the runaway-profit motive. Image-recognition and image-generation systems spit out bigoted results because the people who built them scraped up as much training material from the Internet as they could, as fast as possible, and trained machines on it for a decade before they bothered to filter it. Facebook’s failures of content moderation, allowing its users abroad to promote genocide in Myanmar, for instance, happened not simply because translation software and automated filters were inadequate but because the company didn’t spend the money to do the job properly:

“Our guess is that if social media platforms took their international commitments seriously—if they tried to pay attention to local context all around the world to the same extent that they do in the United States and Europe—they would go out of business. In other words, they are able to offer a relatively polished product with a welcoming and reasonably well-enforced set of rules in Western countries only because they have an exploitative relationship with most non-Western ones.”

Similarly, they write, companies have “invested millions of dollars” to clean up the toxic output of their new generative chatbots because they can’t afford to scare off users, while “the loss of income faced by artists and the lost time faced by teachers as a result of generative AI doesn’t directly affect the companies’ bottom lines and has therefore received little to no attention from them.” The way A.I. is currently being deployed, they write, it “will shift power away from workers and centralize it in the hands of a few companies”; the use of bogus snake-oil systems is “rampant in underfunded institutions” hoping that “by removing humans from the process of decision-making, they can lower costs.”

This is what we get instead of a thrilling, decisive battle of resistance against the thinking machines—or, if you prefer, instead of allowing the machines to lead humanity to some ultra-rational paradise: the same open-ended struggle as ever against labor exploitation, austerity, colonialism, and the grinding corporate optimization of 21st-century life.

“In many jurisdictions,” Narayanan and Kapoor write, “the frameworks needed to regulate AI already exist.” A few pages later, they add, “Most of what’s needed is the enforcement of existing regulations rather than the creation of new regulations.” Whether this news is cause for hope or for despair may depend on the reader.

Tom Scocca is the former politics editor at Slate and the editor at Popula.