What is it like to be wrong about something? “A lot of people are more scared than they have any reason to be,” the Harvard law professor and former White House adviser Cass R. Sunstein wrote in February 2020, about the oncoming coronavirus pandemic. “They have an exaggerated sense of their own personal risk.” Whoops!
Surely, that analysis was off target. But in what way was it off target? Sunstein has joined Daniel Kahneman and Olivier Sibony—the Nobel-winning economist and author of Thinking, Fast and Slow, and a business-school decision-making expert, respectively—to publish Noise: A Flaw in Human Judgment, a book about error-making.
Noise begins by imagining various groups shooting at a bull’s-eye. Some cluster their shots around the center of the target; some cluster, but in the wrong place; some don’t cluster at all; and some both scatter their shots and center them in the wrong place.
If you’ve taken an introductory statistics course, you should immediately recognize our old friends “accuracy” and “precision.” Yet for the purposes of this book, our trio of experts chooses not to speak their names. Instead, they focus on what to call their absences: if your shots are inaccurate, centered somewhere other than the bull’s-eye, they display “bias”; if they are imprecise, spread out relative to one another, what they display is “noise.”
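For readers who want the target metaphor in concrete terms, here is a minimal sketch of the idea (mine, not the authors’; the team labels and numbers are invented for illustration). Bias is how far the average shot lands from the bull’s-eye; noise is how much the shots scatter around their own average:

```python
import random
import statistics

random.seed(0)

def shoot(n, offset, spread):
    """Simulate n one-dimensional shots at a target centered at 0.
    offset shifts every shot the same way (bias);
    spread scatters them around their average (noise)."""
    return [offset + random.gauss(0, spread) for _ in range(n)]

def bias(shots):
    # Distance of the average shot from the bull's-eye (0).
    return statistics.mean(shots)

def noise(shots):
    # Spread of the shots relative to one another.
    return statistics.stdev(shots)

teams = {
    "accurate & precise": shoot(100, offset=0.0, spread=0.1),
    "biased but precise": shoot(100, offset=2.0, spread=0.1),
    "accurate but noisy": shoot(100, offset=0.0, spread=2.0),
    "biased and noisy":   shoot(100, offset=2.0, spread=2.0),
}

for name, shots in teams.items():
    print(f"{name}: bias={bias(shots):+.2f}, noise={noise(shots):.2f}")
```

The point of the four teams is exactly the book’s opening picture: bias and noise are independent axes, and a group can fail on either one, both, or neither.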
Noise, the book argues (as it must), is everywhere. This is meant to be startling, which is presumably why the authors avoid the standard language, so as not to simply be announcing that one of the two fundamental problems in measurement is everywhere. But the result is that their concept itself is scattered all around the outer rings of the target.
Blaming Bias
Noise happens because different people judge things differently. (In criminal justice, different judges are stricter or more lenient about the same cases, or idiosyncratic about different cases.) It happens because the same person judges one thing differently at different times. (A hungry or tired judge is more unforgiving.) It happens because the measurement driving the judgment is unreliable. (The forensic scientists testifying in court make random errors.)
All of these multifarious problems, taken together, can be an even bigger source of error than bias is. Yet according to the authors, people put much more effort into correcting for bias than they do into correcting for noise. “Why do we never invoke noise to explain bad judgments, whereas we routinely blame biases?” they ask.
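The claim that noise can outweigh bias rests on a standard statistical identity: mean squared error decomposes into squared bias plus the variance of the errors (the noise). A short sketch makes the arithmetic visible; the judgment numbers below are hypothetical, invented only to show a case where the noise term dwarfs the bias term:

```python
import statistics

def error_decomposition(judgments, truth):
    """Split mean squared error into bias^2 + noise^2.
    bias is the average error; noise is the spread of the errors."""
    errors = [j - truth for j in judgments]
    mse = statistics.mean(e * e for e in errors)
    bias_sq = statistics.mean(errors) ** 2
    noise_sq = statistics.pvariance(errors)  # population variance
    return mse, bias_sq, noise_sq

# Hypothetical: five judgments of a quantity whose true value is 7.
judgments = [12, 4, 11, 3, 10]
mse, bias_sq, noise_sq = error_decomposition(judgments, truth=7)
print(mse, bias_sq, noise_sq)  # 15.0 1.0 14.0
```

Here the judgments are barely biased on average (bias² = 1) but wildly inconsistent (noise² = 14), so nearly all of the total error comes from noise — the pattern the authors argue goes chronically unnoticed.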
Don’t we? Have none of them ever watched an umpire miss ball and strike calls? But the book reads as a sort of natural history of a slightly alien world, where the boundaries between the surprising and the commonplace keep being mislocated. Is forensic fingerprinting necessarily something “we tend to think of as an exact science”? Is it true, of workers’ arbitrary and annoying performance reviews, that “most people do not know just how noisy they are”?
How about: “Among doctors, the level of noise is far higher than we might have suspected”? This is one of the maxims or talking points or study-guide cues with which the authors leave the reader at the end of each chapter. Did you know that in many areas of medicine, diagnoses can be inconsistent? Well, yes; that’s why there are medical-mystery columns in magazines, and the standard practice known as “getting a second opinion.”
Reading it all, one starts to fear that the world’s supply of counter-intuitive discoveries may be running thin—or that the surprising-ideas industry has settled into its own tropes and clichés.
One solution to noise, the book tells us, is to subject your intuitive conclusions to careful analysis; that is, to think fast and slow. Another is to nudge people away from their biases—as suggested in Sunstein’s earlier book, Nudge.
We hear, too, about the wisdom of crowds, and, inevitably, about Moneyball: “Even today,” the authors write, “coaches, managers, and people who work with them often trust their gut and insist that statistical analysis cannot possibly replace good judgment.” This would be news to anyone who watched the Tampa Bay Rays pull their ace pitcher, Blake Snell, from the sixth inning of Game Six of last year’s World Series, even though he was holding a 1–0 lead and dominating the Los Angeles Dodgers—because Tampa Bay’s rule-based analytics system said a pitcher should not face each opposing hitter more than twice in a game. “The thought process was right,” the Rays’ manager said, after the Dodgers rallied to clinch the title.
Bringing old tools to bear on new situations is how you end up explaining how psychological biases are making everyone irrationally over-react to the threat of a coronavirus pandemic—or, as in Noise, arguing that computer algorithms will solve problems such as racism in the criminal-bail system, a task at which algorithms have already been found to be a failure. (It is “certainly a risk in principle,” the authors allow, that “if, for instance, the number of past arrests is used as a predictor, and if past arrests are affected by racial discrimination, then the resulting algorithm will discriminate as well.”)
In one particularly bleak meta-example, the authors praise Google for its highly structured hiring program, designed to answer the question “Will this person be successful at Google?” This model approach did not prevent the company from going through the highly public firing of its top experts on artificial-intelligence ethics—because, it seems, their area of expertise was not compatible with Google’s idea of success.
What if the problem isn’t where the shots land on the target, but where the target is in the first place? It is true, as the authors lay out at length, that the American criminal-justice process sentences people to prison in arbitrary and inconsistent ways. But if everyone’s prison terms were reliably ranked according to their crimes, so that no one had a hungry judge throw the book at them, the country would still have the highest incarceration rate in the world.
The authors are convinced, and wish to convince the reader, that our decision-making is just too noisy, and that clean, sensible algorithms can help lead us to a noise-free future.
“If one noise-reduction strategy is error-prone, we should not be content with high levels of noise,” they write. “We should instead try to devise a better noise-reduction strategy.” Two pages later, they write: “Exactly how to test for disparate impact, and how to decide what constitutes discrimination, bias, or fairness for an algorithm, are surprisingly complex topics, well beyond the scope of this book.” Maybe the truly counter-intuitive project would be trying to make a better world, rather than optimizing the world we have.
Tom Scocca is the politics editor of Slate and the editor of Hmm Weekly.