It was Lord Byron’s daughter the mathematician Ada Lovelace who first saw the possibilities for artificial intelligence in the early 19th century. She understood that if her fellow mathematician Charles Babbage’s calculating machine, considered by many to be the first computer, “could be programmed to calculate something,” the novelist Jeanette Winterson writes in her new book, 12 Bytes, “it could be programmed to calculate anything.”
Lovelace doubted, however, whether a machine could ever replicate the “leap-capacity of human intelligence,” as Winterson puts it. Lovelace described her own work as a mathematician and researcher as “poetical science,” the kind of work only a human could perform.
Alan Turing, who drew on Lovelace’s notes in his own research, believed otherwise and in 1950 devised a test to measure a machine’s ability to perform as well as, or better than, a human. He thought machine intelligence would surpass human intelligence by the year 2000. Twenty-one years on, it still hasn’t happened, but, as Winterson writes, “we are getting closer.”
How close we are getting to true artificial intelligence, and what this might mean for us as a species, is the question at the heart of 12 Bytes, an essay collection which emerged from Winterson’s previous novel, Frankissstein (also about A.I., this time via Mary Shelley). The research that went into that novel, from the Gnostics to the Industrial Revolution to sexbots, informs the 12 inquiring, accessible essays in her latest work.
They delve into questions of connectivity and progress, creativity and religiosity (A.I. devotees can sound like born-again Christians), gender and misogyny (why are the people who write the programs and start the start-ups so often men?), how A.I. can help us get beyond the gender binary, love in the time of the sexbot, and the emerging ethics of the technology age.
The point where A.I. surpasses its creators to become A.G.I.—artificial general intelligence—will be where reality transcends Lovelace’s imagination.
If A.I. is programmed to look for patterns, A.G.I. will look for “relatedness, for connections.” It won’t be our “savior,” but it will “turn us towards solutions we will follow to end suffering.” A.G.I., claims Winterson, will have to be a bit like Buddhist philosophy that way. “The Middle Way avoids extremes. Humans have proved to be dangerous extremists. It may take a different life-form and another kind of intelligence to avoid the inevitable disasters of extremism.”
And it’s going to happen soon:
Not the next 250 years, but the next 25 years will take us into a world where intelligent machines and non-embodied A.I. are as much a part of everyday life as humans are. Many of the separate strings we are developing now—the Internet of Things, blockchain, genomics, 3D printing, V.R., smart homes, smart fabrics, smart implants, driverless cars, voice-activated A.I. assistance—will work together. Google calls it ambient computing: it’s all around you. It’s inside you. This future isn’t about tools or operating systems; the future is about co-operating systems.
Love it or hate it, it’s what’s next, and books like this one, which break down complex computing and philosophical ideas into punchy, often beautiful prose (“poetical science”), are necessary if we’re going to ensure that, unlike the Industrial Revolution, A.I. benefits “society as a whole—and not … the self-entitled few.” We need accountability, oversight, legislation. (And for Big Tech to pay their f***ing taxes.)
Winterson’s most impassioned message is that we must look to the past to learn from our mistakes.
Lauren Elkin is a Paris-based writer