1.
In the 1990s researchers from the Max Planck Institute in Berlin conducted what seemed like routine man-in-the-street interviews: they asked pedestrians to tell them, off the tops of their heads, the names of German businesses. Led by the psychologist Gerd Gigerenzer, the researchers then constructed a stock portfolio made up of companies mentioned by 90 percent of the respondents. A few months later that portfolio had not only beaten the market soundly, it had performed better than ones constructed by money managers. Instead of assessing market fundamentals and dividend yields and economic trends, as the experts did, the psychologists took a shortcut—what in their language is known as a “heuristic”—that relied solely on name recognition. And because the researchers “put our money where our heuristic was” they ended up with both a wad of cash and a hypothesis to test further: that knowing less can be knowing more; that decisions derived hastily are more efficient and accurate than decisions based on exhaustive research when—or as long as—the decision-maker uses the appropriate shortcut to limit incoming information.1
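The recognition rule itself is almost trivially simple to state. Here is a minimal sketch of the idea in Python; the survey figures and function name are invented for illustration, and only the 90 percent recognition threshold comes from the study described above:

```python
def recognition_portfolio(recognition_counts, n_respondents, threshold=0.9):
    """Select companies recognized by at least `threshold` of respondents.

    recognition_counts: dict mapping a company name to the number of
    pedestrians who named it. The 90 percent cutoff follows the passage
    above; everything else is an illustrative sketch, not the
    researchers' actual procedure.
    """
    return [
        company
        for company, count in recognition_counts.items()
        if count / n_respondents >= threshold
    ]

# Hypothetical survey of 100 pedestrians: only widely recognized firms make the cut.
survey = {"Allianz": 97, "Siemens": 95, "Obscure AG": 12}
print(recognition_portfolio(survey, n_respondents=100))  # ['Allianz', 'Siemens']
```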
In 1999, not long after their stock market triumph, Gigerenzer and his associates published a book called Simple Heuristics That Make Us Smart. In it, they made the case not only that we all rely on mental shortcuts, whether we’re conscious of doing so or not, but also that we’d be better off if we employed them more deliberately. “In our program, we see heuristics as the way the human mind can take advantage of the structure of information … to arrive at reasonable decisions,” they wrote. And so, cognitive psychology ventured into self-help.
Malcolm Gladwell’s fevered new book, Blink: The Power of Thinking Without Thinking, blessedly uses the word “heuristic” rarely, but its subject and intent closely follow Gigerenzer’s. Where Gigerenzer and his group of social scientists present their rather arcane heuristic strategies as “an adaptive toolbox” for arriving at decisions “fast and frugally,” Blink is evangelical in a got-religion kind of way, with Gladwell offering it up as a “tool kit” for people aiming to reach the higher consciousness of rapid cognition through what he calls “thin-slicing”—looking at the smallest amount of information possible. In fact, both books wander through territory staked out by Herbert Simon fifty years ago when he wrote about “bounded rationality,”2 as well as by practitioners of a branch of psychology called “heuristics and biases,” and by evolutionary biologists and economists and neuroscientists and philosophers and those ancient taxonomists who classified cognition as either intuition or reason. It’s a long literature, and hey! who has time for it?
Saving time would appear to be the essence of rapid cognition. Why waste precious minutes performing a complete physical exam on a man suffering chest pains if there are only three symptoms that reliably rule out heart attack? Why suffer through dinner on a blind date when a five-minute conversation would be cheaper and revealing enough? Gladwell’s point—much like Gigerenzer’s before him—is that “decisions made very quickly can be every bit as good as decisions made cautiously and deliberately.”
While it is doubtful that someone who has swerved out of the path of an oncoming truck, or picked the right horse at Saratoga because there was something about its name—which is to say anyone—would disagree that quick, even instant, decisions can be good, even excellent ones, Gladwell’s ambition hints at a general bias in the way we think about thinking. If it is not precisely a bias toward overvaluing decisions we have to sweat over, then it is a bias toward undervaluing ones that are speedy, visceral, and seemingly unreflective. (Yes, LBJ asked the woman who would become Lady Bird to marry him the first time they met, but would you?) “What would happen,” Gladwell wonders, “if we took our instincts seriously?”
What Gladwell is getting at may be more an issue about human anatomy than about human thought. Not only do we have a brain stem and a limbic system, the repositories of instinct, we have a well-developed prefrontal cortex, the part of the brain that enables us, more than any other species, to plan and to ponder; the part, in other words, that makes us human.
But thinking, Gladwell tells us—or, more precisely, thinking too much—can trip us up. Consider the curators at the Getty Museum who were offered the opportunity to purchase a rare sixth-century Greek marble statue of a young man. After a year of sophisticated archaeological and geological analysis that included core sampling and electron spectrometry and X-ray diffraction, they handed over nearly ten million dollars, certain that the piece was authentic, a find. Meanwhile, two leading art historians, Federico Zeri and Evelyn Harrison, and Thomas Hoving, the former director of the Metropolitan Museum of Art, each came to a different conclusion after simply eyeballing the piece: even in the face of such compelling scientific data they felt it was a fake. And they were right:
In the first two seconds of looking—in a single glance—they were able to understand more about the essence of the statue than the team at the Getty was able to understand after fourteen months.
“Blink,” Gladwell says, “is a book about those first two seconds.”
Ask Thomas Hoving how he knew, instantly, that the kouros was a fake, though, and he won’t exactly be able to say.3 Nor can the tennis coach Vic Braden explain how he can “see” a double fault well before the tennis ball has hit the ground. Explanations of this degree of perspicacity inevitably falter.
“Something in the way the tennis players were holding themselves or the way they toss the ball, or the fluidity of their motion triggers something in his unconscious,” Gladwell writes of Braden.
He instinctively picks up the “giss” of a double fault. He thin-slices some part of the service motion and—blink!—he just knows. But here’s the catch: much to Braden’s frustration, he simply cannot figure out how he knows.
And nor, it seems, can Malcolm Gladwell: “…Snap judgments are, first of all, enormously quick: they rely on the thinnest slices of experience. But they are also unconscious…. Snap judgments and rapid cognition take place behind a locked door.” Gladwell’s tool kit, it seems, does not include a key. (Gigerenzer’s heuristics, in contrast, are his keys.)
Is it so remarkable that the former head of what is arguably the world’s leading art museum, a man who has been assessing works of art for the better part of half a century, should spot an art fraud? Is it really amazing that one of the most successful tennis coaches in the history of the sport should intuitively know which flawed body mechanics are likely to produce a double fault? It would be amazing if Gigerenzer’s German pedestrians were calling faults with something approaching Braden’s accuracy, or knowing—not guessing—that the kouros was a fake. But that’s absurd. Hoving’s “snap” judgment, much like Braden’s unerring calls, is only quick when time is measured on a discrete, twenty-four-hour clock. But time is cumulative. Hoving and Braden’s snap judgments issue from decades of experience. In this sense they are neither fast nor frugal.
Experience matters. It lays down tracks in the brain, cognitive templates against which new information is compared. Herbert Simon called this pattern recognition and observed that it was one of the most common and efficient ways that we make sense of the world. In his latest book, The Wisdom Paradox, the neuropsychologist Elkhonon Goldberg observes that exposure to similar, new things creates neural networks in the brain that attract each other and accumulate, networks that in some circumstances are expressed as expertise and in others as intuition (or both). The networks accrue with age—Goldberg ventures to call the result of this accumulation wisdom—and are, therefore, unavailable to young people. They enable the brain to recognize not only information that has been encountered before, but what may be encountered in the future, and to rapidly apprehend connections between what is and what was and what will be.
“Intuition is often understood as an antithesis to analytic decision-making, as something inherently nonanalytic or preanalytic,” Goldberg writes.
But in reality, intuition is the condensation of vast prior analytic experience; it is analysis compressed and crystallized…. It is the product of analytic processes being condensed to such a degree that its internal structure may elude even the person benefiting from it…. The intuitive decision-making of an expert bypasses orderly, logical steps precisely because it is a condensation of extensive use of such orderly logical steps in the past.
It is not only experience and the passing of time that open the province of intuition—of pattern recognition and condensed decision-making—to people as they age, it is also biology. It’s how we’ve evolved. The left hemisphere–right hemisphere duality of our brains that was once seen—through studies of adults with brain damage—primarily as a division between language functions (left brain) and visual-spatial reasoning (right brain) is now known to encompass something much broader: a distinction between processing what’s new and what’s not. “The right hemisphere is the novelty hemisphere, the daring hemisphere, the explorer of the unknown and the uncharted,” writes Goldberg. “The left hemisphere is the repository of compressed knowledge, of stable pattern-recognition devices that enable the organism to deal efficiently and effectively with familiar situations.”
Goldberg’s brain-imaging research has borne this out. The right hemisphere is activated when an individual is in the early stages of acquiring a new cognitive skill but as that task is mastered, the left brain takes over:
The right-to-left transfer could also be demonstrated for various real-life professional skills, which take years to acquire. Novices performing the tasks requiring such skills showed clear right-hemisphere activation. But skilled professionals showed distinct left-hemisphere activation while performing the same tasks.
It is the same across the age span: brain-imaging studies have shown that young people have more activation on the right side of the brain, and that it shifts to the left as we get older:
Contrary to previously well-entrenched beliefs, the right hemisphere is the dominant hemisphere at early stages of life. But as we move through the life span it gradually loses ground to the left hemisphere, as the latter accumulates an ever-increasing “library” of efficient pattern-recognition devices in the form of neural attractors.
Imagine two bird watchers, one experienced, one a beginner. The experienced one catches a glimpse of a large, yellowish bird flickering overhead and calls out “evening grosbeak.” Meanwhile the novice frantically flips through a field guide, shuttling between pages of yellow birds, birds with crowned heads, birds with large silhouettes, birds that undulate as they fly. The experienced bird watcher has synthesized all that data and internalized a signature pattern, while the novice must rely on an external device—the field guide—which can only provide information, not synthesis, and inefficiently at that. The experienced bird watcher responds quickly because she’s relying on the accumulated wisdom of “intuition.” The novice can only stand there—like Malcolm Gladwell in the presence of Vic Braden—amazed.
One problem with the field guide approach to decision-making is that it provides too much information, allows for too many options. Call it “unbounded rationality.” Call it “thick-slicing.” What enables thin-slicing to work, by contrast, is not simply that it deals with a smaller universe, but that it homes in on the bits that are uniquely relevant to the problem at hand. Gladwell tells the story of Brendan Reilly, an emergency room physician at Cook County Hospital in Chicago, an overcrowded and financially strapped institution whose emergency department was treating a quarter of a million patients a year, many of them complaining of chest pain. Typically, a doctor diagnoses heart attack by taking a patient’s history, administering a physical exam including an electrocardiogram (ECG), and making an educated guess. It’s time-consuming, and too often, in Reilly’s hospital at least, that guess has turned out to be wrong. Hoping to find a more accurate and less costly method of finding out who was not having a heart attack, so his staff could focus on those who might be, Reilly began to test a decision-making algorithm developed by the cardiologist Lee Goldman. Though the algorithm took only three factors into account in addition to an ECG—is the pain unstable, is there fluid in the lungs, is the systolic blood pressure below 100?—it was based on Goldman’s historical analysis of the course of hundreds of heart attacks:
[Reilly] took Goldman’s algorithm, presented it to the doctors in the Cook County ED and the doctors in the Department of Medicine, and announced that he was holding a bake-off. First, the staff would use their own judgment in evaluating chest pain, the way they always had. Then they would use Goldman’s algorithm, and the diagnosis and outcome of every patient treated under the two systems would be compared. For two years, data were collected and in the end, the result wasn’t even close. Goldman’s rule won hands down in two directions: it was a whopping 70 percent better than the old method at recognizing the patients who weren’t actually having a heart attack.
The Cook County Hospital experiment brings Gladwell to the conclusion that “you need to know very little to find the underlying signature of a complex phenomenon.” True as this may be, it is also deceptive. Goldman’s simple algorithm is based on a complicated methodology. And much like, say, Thomas Hoving’s expertise, it was decades in the making. That complexity and those years of refinement, both of which occur offstage, are what “snap” the judgment. Moreover, by screening out what does not matter—a patient’s diabetes, for instance, or his insomnia, or his Percocet habit—Goldman’s algorithm disallows subjective and idiosyncratic concerns from skewing the diagnosis. In the language of cognitive psychologists, the algorithm is a heuristic, the irrelevant concerns are biases, and biases muck things up.
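Stripped of its clinical detail, the rule Gladwell describes amounts to an ECG reading plus three yes-or-no questions. The sketch below is illustrative only, assuming a simplified split into a few dispositions; Goldman’s actual published algorithm is a decision tree with its own risk strata and thresholds:

```python
def chest_pain_triage(ecg_shows_ischemia, pain_is_unstable,
                      fluid_in_lungs, systolic_bp_below_100):
    """Rough sketch of the kind of rule described above: an ECG reading
    plus three yes/no risk factors. Goldman's real algorithm is a published
    decision tree with several risk categories; this simplified two-level
    split is an assumption made for illustration."""
    risk_factors = sum([pain_is_unstable, fluid_in_lungs, systolic_bp_below_100])
    if ecg_shows_ischemia or risk_factors >= 2:
        return "higher risk: admit to coronary care"
    if risk_factors == 1:
        return "intermediate risk: monitored bed"
    return "lower risk: observation"

# A patient with a clean ECG and none of the three risk factors is steered
# away from intensive care, freeing staff for patients who may need it.
print(chest_pain_triage(False, False, False, False))  # lower risk: observation
```

The point of the exercise is the one the passage makes: the rule works precisely because it refuses to consider anything beyond these few inputs.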
2.
Gladwell also spends a lot of time in Blink writing about the muck-ups, telling the sorry stories of ordinary people who were hamstrung by embedded prejudices that clouded their better judgment, military personnel who were wedded to inelastic chains of command that left no room for intuitive decision-making, market researchers who asked the wrong questions and therefore promoted the wrong products, and voters who chose style over substance. This last is what Gladwell calls the Warren Harding error, and it is, he says, the “dark side of rapid cognition.”
Harding, who until quite recently had the honor of being considered the worst president in American history, was launched on his political career by an Ohio political operative named Harry Daugherty who happened to be sitting next to Harding one day when they were both having their shoes shined. To Daugherty’s quick, intuitive eye, Warren Harding looked presidential. He had the right bearing, the right stature, the right forehead. Other people—the voters of Ohio—thought so too. They sent Warren Harding—a man who stood for nothing—to the Senate in 1914. (“Why that son of a bitch looks like a senator,” one of his supporters declared at a campaign banquet.)
In 1920, after Harding had served one term, Daugherty convinced him to seek the Republican nomination for president. (Apparently, as his hair grayed, he looked even more the part.) When delegates found themselves deadlocked over the top two candidates, Harding catapulted over them both:
In the early morning hours, as they gathered in the smoke-filled back rooms of the Blackstone Hotel in Chicago, the Republican Party bosses threw up their hands and asked, wasn’t there a candidate they could all agree on? And one name came immediately to mind: Harding! Didn’t he look just like a presidential candidate? So Senator Harding became candidate Harding, and later that fall, after a campaign conducted from his front porch in Marion, Ohio, candidate Harding became President Harding.
And that was before television.
The election of Warren Harding might have been, in Gladwell’s terms, an error, but it’s unclear that it was an aberration. Indeed, the “dark side” of Blink not only seems to permeate our political life, it seems to eclipse it. In politics, more than in almost anything else, people go on first impressions. As Democrats found in the last election cycle, when almost every piece of news coming out of Iraq and the budget office and the foreign exchange should have helped their cause, none of it mattered because more people “liked” Bush than “liked” Kerry. It was the power of thinking without thinking.
Writing in The New Yorker the summer before the 2004 election about how Americans typically go about picking their presidents, Louis Menand recounts the advice offered to campaign strategists in the pages of Campaigns & Elections: The Magazine for People in Politics:
In a competitive political climate informed citizens may vote for a candidate based on issues. However, uninformed or undecided voters will often choose the candidate whose name and packaging are most memorable. To make sure your candidate has that “top-of-mind” voter awareness, a powerful logo is the best place to start. You want to present your candidate in language that voters will understand. They understand colors. “Blue is a positive color for men, signaling authority and control,” another article advises. “But it’s a negative color for women, who perceive it as distant, cold and aloof. Red is a warm, sentimental color for women—and a sign of danger or anger to men. If you use the wrong colors to the wrong audience, you’re sending a mixed message.”
As reductive as this may seem, these kinds of messages—whether in the form of logos or slogans or colors or songs—are effective. They are effective because they are reductive. There is only so much information a person can or is willing to absorb. Political strategists exploit this, certainly, but most of us are complicit. How many voters who said they agreed with John Kerry’s health care program, for instance, could state with any specificity what his health care program entailed? As the political scientist Samuel Popkin suggests to Louis Menand, even “elite” voters—that is to say, informed voters—rely on shortcuts:
The very essence of being an ideologue lies in trusting the label—liberal or conservative, Republican or Democrat. Those are “bundling” terms: they pull together a dozen positions on individual issues under a single handy rubric. They do the work of assessment for you.
We live our lives on a need-to-know basis.
Of course, so do animals in the wild, who must instantly assess if the approaching footsteps belong to a potential predator, if the plant is edible, if that female is fertile. The environment supplies the cues, and at the moment of decision, everything else is just background noise. When, on February 4, 1999, four New York City policemen shot and killed a Guinean man named Amadou Diallo in his Bronx apartment building, they were, it would seem, reacting instinctively to cues that made them behave as if Diallo were a predator: his presence as a black man on his building’s stoop late at night; his flight when approached; the way his black leather wallet, held out to the officers in a darkened hallway, looked to them like a gun. Their reaction—forty-one shots fired in less than two minutes—would seem to be the very darkest side of “blink.”
Gladwell, however, doesn’t exactly see it this way. To him the killing of Amadou Diallo was a spectacular “mind-reading failure.” The officers, observing Diallo on the stoop,
sized him up and in that instant decided he looked suspicious. That was mistake number one. Then they backed the car up, and Diallo didn’t move. [Officer] Carroll later said that “amazed” him: How brazen was this man, who didn’t run at the sight of the police? Diallo wasn’t brazen. He was curious. That was mistake number two. Then Carroll and [officer] Murphy stepped toward Diallo on the stoop and watched him turn slightly to the side, and make a movement for his pocket. In that split second, they decided he was dangerous. But he was not. He was terrified. That was mistake number three.
What Gladwell seems to mean by “mind-reading,” then, is the ability to interpret another person’s state of mind from his body language—which in fact is what the police officers thought they were doing when they started firing their weapons. But their ability was clouded by instinct—fear. True mind-reading, Gladwell suggests, requires instinct to be suppressed, or retrained. He cites examples of security guards who are made to undergo “stress inoculation” by being exposed to ferocious dogs and shot at (with loud fake guns) over and over again until their racing hearts slow down and their fears slink away because the situations and the interventions required become routine. “Our unconscious thinking is, in one critical respect, no different from our conscious thinking,” Gladwell writes: “in both we are able to develop our rapid decision making with training and experience.” Elkhonon Goldberg would call this the getting of wisdom.
Early on in his book, Gladwell relates the story of a place he calls “the love lab,” a nondescript office near the University of Washington where John Gottman and his team of researchers videotape and then analyze the conversations of married couples. So far they’ve deconstructed about three thousand marital dialogues, each one a seemingly inconsequential tête-à-tête about the family dog or where to go on vacation or whether to buy a pickup truck or a minivan. Using a system Gottman developed that assigns a numerical value to each one of twenty different facial gestures every time one is expressed, Gottman’s team translates each discussion into a string of 1,800 numbers—900 per spouse. These number strings are put into an equation that also factors in the talkers’ temperature, heart rate, the degree of nervous fidgeting, and other physical phenomena. (They are hooked up to electrodes.) The result, Gladwell says, is “something remarkable. If [Gottman] analyzes an hour of a husband and wife talking, he can predict with 95 percent accuracy whether that couple will still be married fifteen years later.” While Gladwell (who is unmarried) acknowledges that Gottman’s work is hardly an example of rapid cognition, he argues that, nonetheless, it shows how
the truth of even impossibly complex interactions like marriage can be understood very rapidly and with limited information…. Can a marriage really be understood in one sitting? Yes it can….
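The arithmetic behind Gottman’s number strings is easy to reconstruct in outline: one code per spouse per unit of time, drawn from a twenty-category vocabulary, so that 900 codes per spouse make up the 1,800-number string. The toy sketch below assumes one code per second and generates codes at random; the real coding is done by trained observers, and the names and categories here are invented for illustration:

```python
import random

# Twenty numbered affect categories, standing in for the coding scheme the
# passage describes (the actual category labels are not given above).
CATEGORIES = list(range(1, 21))

def code_conversation(seconds):
    """Toy stand-in for the coding described above: one category code per
    spouse per second of conversation. 900 seconds per spouse yields the
    1,800-number string the passage mentions. Random codes are used here
    only to show the shape of the data, not to model real behavior."""
    husband = [random.choice(CATEGORIES) for _ in range(seconds)]
    wife = [random.choice(CATEGORIES) for _ in range(seconds)]
    return husband, wife

h, w = code_conversation(900)
print(len(h) + len(w))  # 1800 numbers for the whole conversation
```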
Someday, perhaps, there will be a prenuptial algorithm available for aspiring spouses to assess their future happiness, but in the meantime, as Gladwell reports, there is speed dating, where unclaimed singles scurry around a room sizing up potential mates in a couple of minutes. Gigerenzer’s work shows that most people need not spend a lot of time, or encounter a tremendous number of new prospects, to find a suitable partner. Bounded rationality is in effect. Hooking up, however, is not the same thing as staying hitched. There is a biological imperative to reproduce. It’s instinctive. We hardly have to think about it.
1. In another experiment, students at the University of Chicago and the University of Munich were asked whether San Diego or San Antonio had more inhabitants. While 62 percent of the Americans answered correctly, 100 percent of the Germans did. “All the German students had heard of San Diego but many of them did not recognize San Antonio. They were thus able to apply the recognition heuristic and make a correct inference.” The Americans, in contrast, were “not ignorant enough.”
2. Contrary to classical economic theory, which held that people make choices based on all available information, Simon argued that the universe of available information was too large and that people choose the first option that meets their needs.
3. According to Gladwell: “Hoving always makes a note of the first word that goes through his head when he sees something new, and he’ll never forget what that word was when he first saw the kouros. ‘It was “fresh”—“fresh,”’ Hoving recalls…. ‘I had dug in Sicily, where we found bits and pieces of these things. They just don’t come out looking like that.’” Later Gladwell writes of the scholars who assessed the kouros: “They simply took a look at that statue and some part of their brain did a series of instant calculations, and before any kind of conscious thought took place, they felt something, just like the sudden prickling of sweat on the palms of the gamblers. For Thomas Hoving, it was the completely inappropriate word ‘fresh’ that suddenly popped into his head. In the case of Angelos Delivorrias, it was a wave of ‘intuitive repulsion.’ For Georgios Dontas, it was the feeling that there was a glass between him and the work. Did they know why they knew? Not at all. But they knew.”