The Handover: How We Gave Control of Our Lives to Corporations, States, and AIs by David Runciman

Sam Bankman-Fried infamously preferred blogs to books. Why spend 100,000 words making an argument that could be made in 2,000—or, even better, in a skimmable X thread? With this inability to consider single issues at depth and length, it’s hardly surprising he was incapable of managing a complex cryptocurrency exchange.

And yet, when it comes to the “thesis book” nonfiction genre—where the author introduces a broad, novel, timely theory, and then explains all its historical context, current relevance, and future importance—I can’t help but agree.

You can identify a thesis book at a distance. The dramatic title will begin with “The”; the subtitle starts with “How”; and the writer is invariably a professor of something or other. In this case it’s The Handover: How We Gave Control of Our Lives to Corporations, States, and AIs, by David Runciman, professor of politics at Cambridge University (and fourth Viscount Runciman of Doxford).

Runciman’s thesis is that, over time, humanity has gradually ceded power to what he calls “artificial agents”: entities made by people that end up with motives and agency all their own, beyond what any one person can affect. In the past, these took the form of states and corporations; their latest incarnation is A.I. It’s a reasonably compelling thesis, with enough interesting points to make for a great article, just not a great book.

Runciman is no Luddite or anarchist. He grants that much of the power that has accrued to these artificial agents has been freely transferred out of necessity, and often to our benefit. Most of us would agree that a standing army, a judiciary, and a market economy are good things, on balance. But, Runciman argues, these “machines” take more than we give them. They can make us do things we wouldn’t otherwise do. Serving one’s country may require murder; working for a company involves putting the interests of the corporation above your fellow man. The medium of a new technology dictates the message.

Unfortunately, for someone so concerned about machine minds, Runciman has written a very formulaic book. He has a professor’s insistence on noting every book, thinker, and historical reference in the text rather than relegating some of them to the endnotes. His few merciful detours into more interesting territory—such as his discussion of how technology could shape what future states look like—are quickly abandoned as he swerves back to his thesis.

Despite his dry prose and extensive bibliography, Runciman never defines with enough rigor what “machines” and “artificial agents”—terms he uses interchangeably—actually are and aren’t. Is a political party a machine? A labor union? A Free Palestine rally? A bowling team? The reader is left to guess.

Similarly, he doesn’t provide much insight into how “machines” interact. Of the development of DARPA—the R&D hub of the United States Department of Defense, which paved the way for the Internet—Runciman writes, “The American state ended up underwriting the collaborative work that enabled machines to talk to machines. Why? Because it could see the value, being a machine itself.” In other words, machines like machines because they’re machines.

I haven’t said much about A.I. Nor, despite its centrality in the book’s subtitle and recent public discourse, does Runciman. He worries that we will hand too much power to A.I. systems, and give up too much control over our lives. It’s a valid concern, but the case isn’t convincingly argued. If Runciman understands how A.I. works, he doesn’t show it.

He drifts close to issues such as A.I. goal alignment (making sure A.I. systems do what humans intend without causing harm), intelligence explosions (runaway, self-reinforcing improvements in A.I. capability), confinement (keeping an A.I. system isolated so that it cannot act on the wider world), and the risk of mass unemployment, but he never seriously addresses them.

Whereas the sections on political theory and history (Runciman’s academic specialty) are filled with references, the slim discussion of A.I. cites no experts on the subject. Only pop historian Yuval Noah Harari is referenced by name. Readers who want to engage seriously with the topic should pick up Max Tegmark’s Life 3.0, Stuart Russell’s Human Compatible, or Nick Bostrom’s Superintelligence.

A.I. use is not without risk, and Runciman is correct that “it is essential that humans think as hard as possible about the relationship we want with thinking machines.” But The Handover doesn’t further that effort.

Ross Anderson is the life editor at The Spectator World.