The Elephant in the Brain explores hidden patterns in how our brains really work.
The following are the key points I highlighted in this book. If you’d like, you can download all of them to chat about with your favorite language model.
Then he read Hierarchy in the Forest by anthropologist Christopher Boehm, a book that analyzes human societies with the same concepts used to analyze chimpanzee communities. After reading Boehm’s book, Kevin began to see his environment very differently.
Richard doesn’t complain about Karen by saying, “She gets in my way”; he accuses her of “not caring enough about the customer.” Taboo topics like social status aren’t discussed openly, but are instead swaddled in euphemisms like “experience” or “seniority.”
So if these activities aren’t altruistic, what’s the point? What’s in it for the individual babbler who competes to do more than his fair share of helping others? The answer, as Zahavi and his team have carefully documented, is that altruistic babblers develop a kind of “credit” among their groupmates—what Zahavi calls prestige status. This earns them at least two different perks, one of which is mating opportunities: Males with greater prestige get to mate more often with the females of the group. A prestigious alpha, for example, may take all the mating opportunities for himself. But if the beta has earned high prestige, the alpha will occasionally allow him to mate with some of the females. In this way, the alpha effectively “bribes” the beta to stick around. The other perk of high prestige is a reduced risk of getting kicked out of the group. If the beta, for example, has earned lots of prestige by being useful to the group, the alpha is less likely to evict him.
As with the babblers we met in the previous chapter, social status among humans actually comes in two flavors: dominance and prestige. Dominance is the kind of status we get from being able to intimidate others (think Joseph Stalin), and on the low-status side is governed by fear and other avoidance instincts. Prestige, however, is the kind of status we get from being an impressive human specimen (think Meryl Streep), and it’s governed by admiration and other approach instincts. Of course, these two forms of status aren’t mutually exclusive; Steve Jobs, for example, exhibited both dominance and prestige. But the two forms are analytically distinct strategies with different biological expressions. They are, as some researchers have put it, the “two ways to the top.”
We earn prestige not just by being rich, beautiful, and good at sports, but also by being funny, artistic, smart, well-spoken, charming, and kind. These are all relative qualities, however. Compared to most other animals, every human is a certifiable genius—but that fact does little to help us in competitions within our own species. Similarly, even the poorest members of today’s world are richer, by many material standards, than the kings and queens of yesteryear—and yet they remain at the bottom of the prestige ladder.
Machiavelli’s famous guidebook is The Prince, written for supreme rulers, while Castiglione wrote The Book of the Courtier for those of lesser nobility who sought favor at court. But although their subject matter is similar, in many ways, the two books are polar opposites.
When a high-status person chooses someone as a mate, friend, or teammate, it’s often seen as an endorsement of this associate, raising that person’s status. This (among other things) creates an incentive to win the affections of people with high status. But there are acceptable and unacceptable ways to do this. It’s perfectly acceptable just to “be yourself,” for example. If you’re naturally impressive or likable, then it seems right and proper for others to like and respect you as well. What’s not acceptable is sycophancy: brown-nosing, bootlicking, groveling, toadying, and sucking up. Nor is it acceptable to “buy” high-status associates via cash, flattery, or sexual favors. These tactics are frowned on or otherwise considered illegitimate, in part because they ruin the association signal for everyone else. We prefer celebrities to endorse products because they actually like those products, not because they just want cash. We think bosses should promote workers who do a good job, not workers who just sleep with the boss.
Of all the signals sent and received by our bodies, the ones we seem least aware of are those related to social status. And yet, we’re all downright obsessed with our status, taking great pains to earn it, gauge it, guard it, and flaunt it. This is a source of great dramatic irony in human life. Because of their privileged position, high-status individuals have less to worry about in social situations. They’re less likely to be attacked, for example, and if they are attacked, others are likely to come to their aid. This allows them to maintain more relaxed body language. They speak clearly, move smoothly, and are willing to adopt a more open posture. Lower-status individuals, however, must constantly monitor the environment for threats and be prepared to defer to higher-status individuals. As a result, they glance around, speak hesitantly, move warily, and maintain a more defensive posture. High-status individuals are also willing to call more attention to themselves. When you’re feeling meek, you generally want to be a wallflower. But when you’re feeling confident, you want the whole world to notice. In the animal kingdom, this “Look at me!” strategy is known as aposematism. It’s a quintessentially honest signal. Those who call attention to themselves are more likely to get attacked—unless they’re strong enough to defend themselves. If you’re the biggest male lion on the savanna, go ahead, roar your heart out. The same principle explains why poisonous animals, like coral reef snakes and poison dart frogs, wear bright warning colors. They may not look too tough, but they’re packing heat.
But status is more than just an individual attribute or attitude—it’s fundamentally an act of coordination. When two people differ in status, both have to modify their behavior. Typically the higher-status person will take up more space, hold eye contact for longer periods of time (more on this in just a moment), speak with fewer pauses, interrupt more frequently, and generally set the pace and tenor of interaction. The lower-status person, meanwhile, will typically defer to the higher-status person in each of these areas, granting him or her more leeway, both physically and socially. In order to walk together, for example, the lower-status person must accommodate to match the gait of the higher-status person. Most of the time, these unconscious status negotiations proceed smoothly. But when people disagree about their relative status, nonverbal coordination breaks down—a result we perceive as social awkwardness (and sometimes physical awkwardness as well). Most of us have had these uncomfortable experiences, as, for example, when sitting across from a rival colleague, not quite knowing how to position your limbs, whether it’s your turn to talk, or how and when to end the interaction.
In contexts governed by dominance, eye contact is considered an act of aggression. It’s therefore the prerogative of the dominant to stare at whomever he or she pleases, while submissives must refrain from staring directly at the dominant. When a dominant and a submissive make eye contact, the submissive must look away first. To continue staring would be a direct challenge. Now, submissives can’t avoid looking at dominants entirely. They need to monitor them to see what they’re up to (e.g., in order to move out of their space). So instead, submissives resort to “stealing” quick, furtive glances. You can think of personal information as the key resource that dominant individuals try to monopolize for themselves. They use their eyes to soak up personal info about the other members of the group, but try to prevent others from learning about them. In contexts governed by prestige, however, eye contact is considered a gift: to look at someone is to elevate that person. In prestige situations, lower-status individuals are ignored, while higher-status individuals bask in the limelight. In this case, attention (rather than information) is the key resource, which lower-status admirers freely grant to higher-status celebrities.
Many interactions, of course, involve both dominance and prestige, making status one of the trickier domains for humans to navigate. When Joan the CEO holds a meeting, for example, she’s often both the most dominant and the most prestigious person in the room, and her employees must rely on context to decide which kinds of eye contact are appropriate. Whenever Joan is talking, she’s implicitly asking for attention (prestige), and her employees oblige by looking directly at her. When she stops talking, however, her employees may revert to treating her as dominant, issuing the kind of furtive glances characteristic of submissives who hesitate to intrude on her privacy, and yet still wish to gauge her reactions to what’s happening in the meeting.
And although there are many different ways to look at prestige, we can treat it as synonymous with “one’s value as an ally.”
Self-Deception & Hidden Motives
“Every man alone is sincere. At the entrance of a second person, hypocrisy begins.”—Ralph Waldo Emerson
Sigmund Freud, of course, was a major champion of hidden motives. He posited a whole suite of them, along with various mechanisms for keeping them unconscious. But although the explanations in this book may seem Freudian at times, we follow mainstream cognitive psychology in rejecting most of Freud’s methods and many of his conclusions.
Instead, we start closer to evolutionary psychology, drawing from scholars like Robert Trivers and Robert Kurzban, along with Robert Wright—yes, they’re all Roberts—who have written clearly and extensively about self-deception from a Darwinian perspective.
Our brains are experts at flirting, negotiating social status, and playing politics, while “we”—the self-conscious parts of the brain—manage to keep our thoughts pure and chaste. “We” don’t always know what our brains are up to, but we often pretend to know, and therein lies the trouble.
Cognitive and social psychology. The study of cognitive biases and self-deception has matured considerably in recent years. We now realize that our brains aren’t just hapless and quirky—they’re devious. They intentionally hide information from us, helping us fabricate plausible prosocial motives to act as cover stories for our less savory agendas. As Trivers puts it: “At every single stage [of processing information]—from its biased arrival, to its biased encoding, to organizing it around false logic, to misremembering and then misrepresenting it to others—the mind continually acts to distort information flow in favor of the usual goal of appearing better than one really is.” Emily Pronin calls it the introspection illusion, the fact that we don’t know our own minds nearly as well as we pretend to. For the price of a little self-deception, we get to have our cake and eat it too: act in our own best interests without having to reveal ourselves as the self-interested schemers we often are.
Primatology. Humans are primates, specifically apes. Human nature is therefore a modified form of ape nature. And when we study primate groups, we notice a lot of Machiavellian behavior—sexual displays, dominance and submission, fitness displays (showing off), and political maneuvering. But when asked to describe our own behavior—why we bought that new car, say, or why we broke off a relationship—we mostly portray our motives as cooperative and prosocial. We don’t admit to nearly as much showing off and political jockeying as we’d expect from a competitive social animal. Something just doesn’t add up.
Box 2: Our Thesis in Plain English

1. People are judging us all the time. They want to know whether we’ll make good friends, allies, lovers, or leaders. And one of the important things they’re judging is our motives. Why do we behave the way we do? Do we have others’ best interests at heart, or are we entirely selfish?
2. Because others are judging us, we’re eager to look good. So we emphasize our pretty motives and downplay our ugly ones. It’s not lying, exactly, but neither is it perfectly honest.
3. This applies not just to our words, but also to our thoughts, which might seem odd. Why can’t we be honest with ourselves? The answer is that our thoughts aren’t as private as we imagine. In many ways, conscious thought is a rehearsal of what we’re ready to say to others. As Trivers puts it, “We deceive ourselves the better to deceive others.”
4. In some areas of life, especially polarized ones like politics, we’re quick to point out when others’ motives are more selfish than they claim. But in other areas, like medicine, we prefer to believe that almost all of us have pretty motives. In such cases, we can all be quite wrong, together, about what drives our behavior.
Human beings are self-deceived because self-deception is useful. It allows us to reap the benefits of selfish behavior while posing as unselfish in front of others; it helps us look better than we really are. Confronting our delusions must therefore (at least in part) undermine their very reason for existing. There’s a very real sense in which we might be better off not knowing what we’re up to.
Just as camouflage is useful when facing an adversary with eyes, self-deception can be useful when facing an adversary with mind-reading powers. But the mind-reading powers of nonhuman primates are weak compared to our own, and so they have less need to obfuscate the contents of their minds.
Consider how awkward it is to answer certain questions by appealing to selfish motives. Why did you break up with your girlfriend? “I’m hoping to find someone better.” Why do you want to be a doctor? “It’s a prestigious job with great pay.” Why do you draw cartoons for the school paper? “I want people to like me.” There’s truth in all these answers, but we systematically avoid giving them, preferring instead to accentuate our higher, purer motives.
Our ancestors did a lot of cheating. How do we know? One source of evidence is the fact that our brains have special-purpose adaptations for detecting cheaters.3 When abstract logic puzzles are framed as cheating scenarios, for example, we’re a lot better at solving them. This is one of the more robust findings in evolutionary psychology, popularized by the wife-and-husband team Leda Cosmides and John Tooby.
When a hotel invites its guests to “consider the environment” before leaving their used towels out to be washed, its primary concern isn’t the environment but its bottom line. But to impose on guests merely to save money violates norms of hospitality—hence the pretext.
But here’s the puzzle: we don’t just deceive others; we also deceive ourselves. Our minds habitually distort or ignore critical information in ways that seem, on the face of it, counterproductive. Our mental processes act in bad faith, perverting or degrading our picture of the world. In common speech, we might say that someone is engaged in “wishful thinking” or is “burying her head in the sand”—or, to use a more colorful phrase, that she’s “drinking her own Kool-Aid.”
On the one hand, our sense organs have evolved to give us a marvelously detailed and accurate view of the outside world . . . exactly as we would expect if truth about the outside world helps us to navigate it more effectively. But once this information arrives in our brains, it is often distorted and biased to our conscious minds. We deny the truth to ourselves. We project onto others traits that are in fact true of ourselves—and then attack them! We repress painful memories, create completely false ones, rationalize immoral behavior, act repeatedly to boost positive self-opinion, and show a suite of ego-defense mechanisms.
Another domain is personal health. You might suppose, given how important health is to our happiness (not to mention our longevity), it would be a domain to which we’d bring our cognitive A-game. Unfortunately, study after study shows that we often distort or ignore critical information about our own health in order to seem healthier than we really are. One study, for example, gave patients a cholesterol test, then followed up to see what they remembered months later. Patients with the worst test results—who were judged the most at-risk of cholesterol-related health problems—were most likely to misremember their test results, and they remembered their results as better (i.e., healthier) than they actually were.7 Smokers, but not nonsmokers, choose not to hear about the dangerous effects of smoking.8 People systematically underestimate their risk of contracting HIV (human immunodeficiency virus), and avoid taking HIV tests.10 We also deceive ourselves about our driving skills, social skills, leadership skills, and athletic ability.
Poetic, maybe, but this Old School perspective ignores an important objection: Why would Nature, by way of evolution, design our brains this way? Information is the lifeblood of the human brain; ignoring or distorting it isn’t something to be undertaken lightly. If the goal is to preserve self-esteem, a more efficient way to go about it is simply to make the brain’s self-esteem mechanism stronger, more robust to threatening information. Similarly, if the goal is to reduce anxiety, the straightforward solution is to design the brain to feel less anxiety for a given amount of stress. In contrast, using self-deception to preserve self-esteem or reduce anxiety is a sloppy hack and ultimately self-defeating. It would be like trying to warm yourself during winter by aiming a blow-dryer at the thermostat. The temperature reading will rise, but it won’t reflect a properly heated house, and it won’t stop you from shivering.
In recent years, psychologists—especially those who focus on evolutionary reasoning—have developed a more satisfying explanation for why we deceive ourselves. Where the Old School saw self-deception as primarily inward-facing, defensive, and (like the general editing the map) largely self-defeating, the New School sees it as primarily outward-facing, manipulative, and ultimately self-serving. Two recent New School books have been Trivers’ The Folly of Fools (2011) and Robert Kurzban’s Why Everyone (Else) Is a Hypocrite (2010). But the roots of the New School go back to Thomas Schelling, a Nobel Prize–winning economist17 best known for his work on the game theory of cooperation and conflict.
A classic example is the game of chicken, typically played by two teenagers in their cars. The players race toward each other on a collision course, and the player who swerves first loses the game. Traditionally it’s a game of bravado. But if you really want to win, here’s what Schelling advises. When you’re lined up facing your opponent, revving your engine, remove the steering wheel from your car and wave it at your opponent. This way, he’ll know that you’re locked in, dead set, hell-bent—irrevocably committed to driving straight through, no matter what. And at this point, unless he wants to die, your opponent will have to swerve first, and you’ll be the winner. The reason this is counterintuitive is that it’s not typically a good idea to limit our own options. But Schelling documented how the perverse incentives of mixed-motive games lead to option-limiting and other actions that seem irrational, but are actually strategic. These include:

— Closing or degrading a channel of communication. You might purposely turn off your phone, for example, if you’re expecting someone to call asking for a favor. Or you might have a hard conversation over email rather than in person.
— Opening oneself up to future punishment. “Among the legal privileges of corporations,” writes Schelling, “two that are mentioned in textbooks are the right to sue and the ‘right’ to be sued. Who wants to be sued! But the right to be sued is the power to make a promise: to borrow money, to enter a contract, to do business with someone who might be damaged. If suit does arise, the ‘right’ seems a liability in retrospect; beforehand it was a prerequisite to doing business.”
— Ignoring information, also known as strategic ignorance. If you’re kidnapped, for example, you might prefer not to see your kidnapper’s face or learn his name. Why? Because if he knows you can identify him later (to the police), he’ll be less likely to let you go. In some cases, knowledge can be a serious liability.
— Purposely believing something that’s false. If you’re a general who firmly believes your army can win, even though the odds are against it, you might nevertheless intimidate your opponent into backing down.

In other words, mixed-motive games contain the kind of incentives that reward self-deception.
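To make Schelling’s commitment logic concrete, here is a minimal sketch in Python. The payoff numbers and the best_response helper are my own illustrative assumptions, not anything from the book; the point is simply that once one driver visibly loses the ability to swerve, the other driver’s payoff-maximizing reply flips to swerving.

```python
# Illustrative payoffs for the game of chicken (assumed numbers, not from the book).
# Each entry maps (my_move, their_move) -> (my_payoff, their_payoff).
PAYOFFS = {
    ("swerve",   "swerve"):   (0, 0),      # both back down
    ("swerve",   "straight"): (-1, 1),     # I lose face, they win
    ("straight", "swerve"):   (1, -1),     # I win, they lose face
    ("straight", "straight"): (-10, -10),  # head-on crash: worst outcome for both
}

def best_response(opponent_move, options=("swerve", "straight")):
    """Return the payoff-maximizing reply to an opponent's move."""
    return max(options, key=lambda move: PAYOFFS[(move, opponent_move)][0])

# Ordinary chicken: against a driver who may go straight, swerving is the safe reply.
print(best_response("straight"))   # -> 'swerve'

# Schelling's trick: by waving the detached steering wheel, I make it common
# knowledge that my only option is "straight". My opponent now faces a committed
# driver, and the best reply is to swerve, so I win.
my_only_option = "straight"        # visible self-sabotage
print(best_response(my_only_option))  # opponent's best reply -> 'swerve'
```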
By this line of reasoning, it’s never useful to have secret gaps in your knowledge, or to adopt false beliefs that you keep entirely to yourself. The entire value of strategic ignorance and related phenomena lies in the way others act when they believe that you’re ignorant. As Kurzban says, “Ignorance is at its most useful when it is most public.” It needs to be advertised and made conspicuous.
Sabotaging yourself works only when you’re playing against an opponent with a theory of mind. Typically these opponents will be other humans, but the category could theoretically extend to some of the smarter animals, as well as hypothetical future robots or aliens.
As Mark Twain may have said, “If you tell the truth, you don’t have to remember anything.”
The point is, our minds aren’t as private as we like to imagine. Other people have partial visibility into what we’re thinking. Faced with the translucency of our own minds, then, self-deception is often the most robust way to mislead others. It’s not technically a lie (because it’s not conscious or deliberate), but it has a similar effect. “We hide reality from our conscious minds,” says Trivers, “the better to hide it from onlookers.”
Modeling the world accurately isn’t the be-all and end-all of the human brain. Brains evolved to help our bodies, and ultimately our genes, get along and get ahead in the world—a world that includes not just rocks and squirrels and hurricanes, but also other human beings. And if we spend a significant fraction of our lives interacting with others (which we do), trying to convince them of certain things (which we do), why shouldn’t our brains adopt socially useful beliefs as first-class citizens, alongside world-modeling beliefs? Wear a mask long enough and it becomes your face. Play a role long enough and it becomes who you are. Spend enough time pretending something is true and you might as well believe it.
The Cheerleader “I know this is true,” the Cheerleader says. “Come on, believe it with me!” This kind of self-deception is a form of propaganda. As Kurzban writes, “Sometimes it is beneficial to be . . . wrong in such a way that, if everyone else believed the incorrect thing one believes, one would be strategically better off.” The goal of cheerleading, then, is to change other people’s beliefs. And the more fervently we believe something, the easier it is to convince others that it’s true. The politician who’s confident she’s going to win no matter what will have an easier time rallying supporters than one who projects a more honest assessment of her chances. The startup founder who’s brimming with confidence, though it may be entirely unearned, will often attract more investors and recruit more employees than someone with an accurate assessment of his own abilities. When we deceive ourselves about personal health, whether by avoiding information entirely or by distorting information we’ve already received, it feels like we’re trying to protect ourselves from distressing information. But the reason our egos need to be shielded—the reason we evolved to feel pain when our egos are threatened—is to help us maintain a positive social impression. We don’t personally benefit from misunderstanding our current state of health, but we benefit when others mistakenly believe we’re healthy. And the first step to convincing others is often to convince ourselves. As Bill Atkinson, a colleague of Steve Jobs, once said of Jobs’s self-deception, “It allowed him to con people into believing his vision, because he has personally embraced and internalized it.”
The Cheater “I have no idea what you’re talking about,” the Cheater says in response to an accusation. “My motives were pure.” As we discussed in Chapter 3, many norms hinge on the actor’s intentions. Being nice, for example, is generally applauded—but being nice with the intention to curry favor is the sin of flattery. Similarly, being friendly is generally considered to be a good thing, but being friendly with romantic intentions is flirting, which is often inappropriate. Other minor sins that hinge on intent include bragging, showing off, sucking up, lying, and playing politics, as well as selfish behavior in general. When we deceive ourselves about our own motives, however, it becomes much harder for others to prosecute these minor transgressions. We’ll see much more of this in the next chapter.
Again, in all of these cases, self-deception works because other people are attempting to read our minds and react based on what they find (or what they think they find). In deceiving ourselves, then, we’re often acting to deceive and manipulate others. We might be hoping to intimidate them (like the Madman), earn their trust (like the Loyalist), change their beliefs (like the Cheerleader), or throw them off our trail (like the Cheater). Of course, these aren’t mutually exclusive. Any particular act of self-deception might serve multiple purposes at once. When the mother of an alleged murderer is convinced that her son is innocent, she’s playing Loyalist to her son and Cheerleader to the jury. The prizefighter who is grossly overconfident about his odds of winning is playing both Cheerleader (to his fans, teammates, and other supporters) and Madman (to his opponent).
The benefit of self-deception is that it can, in some scenarios, help us mislead others. But what about its costs? As we’ve mentioned, the main cost is that it leads to suboptimal decision-making. Like the general who erases the mountain range on the map, then leads the army to a dead end, self-deceivers similarly run the risk of acting on false or missing information.
We assume that there is one person in each body, but in some ways we are each more like a committee whose members have been thrown together working at cross purposes.
This is illustrated rather dramatically by the rare but well-documented condition known as blindsight, which typically follows from some kind of brain damage, like a stroke to the visual cortex. Just like people who are conventionally blind, blindsighted patients swear they can’t see. But when presented with flashcards and forced to guess what’s on the card, they do better than chance. Clearly some parts of their brains are registering visual information, even if the parts responsible for conscious awareness are kept in the dark.
What this means for self-deception is that it’s possible for our brains to maintain a relatively accurate set of beliefs in systems tasked with evaluating potential actions, while keeping those accurate beliefs hidden from the systems (like consciousness) involved in managing social impressions. In other words, we can act on information that isn’t available to our verbal, conscious egos. And conversely, we can believe something with our conscious egos without necessarily making that information available to the systems charged with coordinating our behavior. No matter how fervently a person believes in Heaven, for example, she’s still going to be afraid of death. This is because the deepest, oldest parts of her brain—those charged with self-preservation—haven’t the slightest idea about the afterlife. Nor should they. Self-preservation systems have no business dealing with abstract concepts. They should run on autopilot and be extremely difficult to override (as the difficulty of committing suicide attests). This sort of division of mental labor is simply good mind design. As psychologists Douglas Kenrick and Vladas Griskevicius put it, “Although we’re aware of some of the surface motives for our actions, the deep-seated evolutionary motives often remain inaccessible, buried behind the scenes in the subconscious workings of our brains’ ancient mechanisms.”
When we spend more time and attention dwelling on positive, self-flattering information, and less time and attention dwelling on shameful information, that’s self-discretion. Think about that time you wrote an amazing article for the school paper, or gave that killer wedding speech. Did you feel a flush of pride? That’s your brain telling you, “This information is good for us! Let’s keep it prominent, front and center.”
In summary, our minds are built to sabotage information in order to come out ahead in social games. When big parts of our minds are unaware of how we try to violate social norms, it’s more difficult for others to detect and prosecute those violations. This also makes it harder for us to calculate optimal behaviors, but overall, the trade-off is worth it.
“A man always has two reasons for doing anything: a good reason and the real reason.”—J. P. Morgan
From the point of view of the left hemisphere, the only legitimate answer would have been, “I don’t know.” But that’s not the answer it gave. Instead, the left hemisphere said it had chosen the shovel because shovels are used for “cleaning out the chicken coop.” In other words, the left hemisphere, lacking a real reason to give, made up a reason on the spot. It pretended that it had acted on its own—that it had chosen the shovel because of the chicken picture. And it delivered this answer casually and matter-of-factly, fully expecting to be believed, because it had no idea it was making up a story.
Rationalization, sometimes known to neuroscientists as confabulation, is the production of fabricated stories without any conscious intention to deceive. They’re not lies, exactly, but neither are they the honest truth.
Box 5: “Motives” and “Reasons” When we use the term “motives,” we’re referring to the underlying causes of our behavior, whether we’re conscious of them or not. “Reasons” are the verbal explanations we give to account for our behavior. Reasons can be true, false, or somewhere in between (e.g., cherry-picked).
But the conclusion from the past 40 years of social psychology is that the self acts less like an autocrat and more like a press secretary. In many ways, its job—our job—isn’t to make decisions, but simply to defend them. “You are not the king of your brain,” says Steven Kaas. “You are the creepy guy standing next to the king going, ‘A most judicious choice, sire.’” In other words, even we don’t have particularly privileged access to the information and decision-making that goes on inside our minds. We think we’re pretty good at introspection, but that’s largely an illusion. In a way we’re almost like outsiders within our own minds. Perhaps no one understands this conclusion better than Timothy Wilson, a social psychologist who’s made a long career studying the perils of introspection. Starting with an influential paper published in 1977 and culminating in his book Strangers to Ourselves, published in 2002, Wilson has meticulously documented how shockingly little we understand about our own minds.
If Peter introspected carefully enough, he could probably bring himself to notice these motives lurking in the back of his mind—but why bother calling attention to them? The less his Press Secretary knows about these motives, the easier it is to deny them with conviction. And meanwhile, the rest of his brain is managing to coordinate his self-interest just fine.
“It still seems remarkable to me how often people bypass what are more important subjects to work on less important ones.”—Robert Trivers
Consider why people buy environmentally friendly “green” products. Electric cars typically cost more than gas-powered ones. Disposable forks made from potatoes cost more than those made from plastic, and often bend and break more easily. Conventional wisdom holds that consumers buy green goods—rather than non-green substitutes that are cheaper, more functional, or more luxurious—in order to “help the environment.” But of course we should be skeptical that such purely altruistic motives are the whole story. In 2010, a team of psychologists led by Vladas Griskevicius undertook some experiments to tease out some of these ulterior motives.6 The researchers gave subjects a choice between two equivalently priced goods, one of them luxurious but non-green, the other green but less luxurious. For example, they gave subjects a choice between two car models, both $30,000 versions of the Honda Accord. The non-green model was a top-of-the-line car with a sporty V-six engine replete with leather seats, GPS navigation system, and all the luxury trimmings. The green model had none of the nice extras, but featured a more eco-friendly hybrid engine. Subjects were also given a choice between two household cleaners (high-powered vs. biodegradable) and two dishwashers (high-end vs. water-saving). Subjects in the control group, who were simply asked which product they’d rather buy, expressed a distinct preference for the luxurious (non-green) product. But subjects in the experimental group were asked for their choice only after being primed with a status-seeking motive. As a result, experimental subjects expressed significantly more interest in the green version of each product.
Davison dubbed this the “third-person effect,” and it goes a long way toward explaining how lifestyle advertising might influence consumers. When Corona runs its “Find Your Beach” ad campaign, it’s not necessarily targeting you directly—because you, naturally, are too savvy to be manipulated by this kind of ad. But it might be targeting you indirectly, by way of your peers. If you think the ad will change other people’s perceptions of Corona, then it might make sense for you to buy it, even if you know that a beer is just a beer, not a lifestyle. If you’re invited to a casual backyard barbecue, for example, you’d probably prefer to show up with a beer whose brand image will be appealing to the other guests. In this context, it makes more sense to bring a beer that says, “Let’s chill out,” rather than a beer that says, “Let’s get drunk and wild!” Unless we’re paying careful attention, the third-person effect can be hard to notice. In part, this is because we typically assume that ads are targeting us directly, as individual buyers; indirect influence can be harder to see. But it’s also a mild case of the elephant in the brain, something we’d rather not acknowledge. All else being equal, we prefer to think that we’re buying a product because it’s something we want for ourselves, not because we’re trying to manage our image or manipulate the impressions of our friends. We want to be cool, but we’d rather be seen as naturally, effortlessly cool, rather than someone who’s trying too hard.
In 1989, to explain some of these inefficiencies, the economist James Andreoni proposed a different model for why we donate to charity. Instead of acting strictly to improve the well-being of others, Andreoni theorized, we do charity in part because of a selfish psychological motive: it makes us happy. Part of the reason we give to homeless people on the street, for example, is because the act of donating makes us feel good, regardless of the results.
To figure this out, we’re going to examine five factors that influence our charitable behavior:

1. Visibility. We give more when we’re being watched.
2. Peer pressure. Our giving responds strongly to social influences.
3. Proximity. We prefer to help people locally rather than globally.
4. Relatability. We give more when the people we help are identifiable (via faces and/or stories) and give less in response to numbers and facts.
5. Mating motive. We’re more generous when primed with a mating motive.

This list is far from comprehensive, but taken together, these factors help explain why we donate so inefficiently, and also why we feel that warm glow when we donate.
Consider what happens when a teacher cancels a class session because of weather, illness, or travel. Students who are there to learn should be upset; they’re not getting what they paid for! But in fact, students usually celebrate when classes are canceled. Similarly, many students eagerly take “easy A” classes, often in subjects where they have little interest or career plans. In both cases, students sacrifice useful learning opportunities for an easier path to a degree. In fact, if we gave students a straight choice between getting an education without a degree, or a degree without an education, most would pick the degree—which seems odd if they’re going to school mainly to learn.
The biggest lesson from Part I is that we ignore the elephant because doing so is strategic. Self-deception allows us to act selfishly without having to appear quite so selfish in front of others. If we admit to harboring hidden motives, then, we risk looking bad, thereby losing trust in the eyes of others. And even when we simply acknowledge the elephant to ourselves, in private, we burden our brains with self-consciousness and the knowledge of our own hypocrisy. These are real downsides, not to be shrugged off.
Another benefit to confronting our hidden motives is that, if we choose, we can take steps to mitigate or counteract them. For example, if we notice that our charitable giving is motivated by the desire to look good and that this leads us to donate to less-helpful (but more-visible) causes, we can deliberately decide to subvert our now-not-so-hidden agenda.
Signaling & Competition
Education isn’t just about learning; it’s largely about getting graded, ranked, and credentialed, stamped for the approval of employers.
Knowledge suppression is useful only when two conditions are met: (1) when others have partial visibility into your mind; and (2) when they’re judging you, and meting out rewards or punishments, based on what they “see” in your mind.
Often a species’ most important competitor is itself.
And we had to earn these things, in part, by outwitting and outshining our rivals. This is what’s known in the literature as the social brain hypothesis, or sometimes the Machiavellian intelligence hypothesis. It’s the idea that our ancestors got smart primarily in order to compete against each other in a variety of social and political scenarios.
Now if, as we’ve been arguing, people are biased toward emphasizing cooperation and downplaying competition, then it will serve us well to temporarily reverse this bias. In what follows, let’s emphasize and accentuate the more competitive aspects of our species’ history. In particular, we’re going to look at three of the most important “games” played by our ancestors: sex, social status, and politics.
Every woman who wants to (monogamously) mate with a high-quality man has to compete with all the other women, while every man who wants to mate with a woman has to be chosen by her, ahead of all his rivals. As in other competitions, like the competition for sunlight among the redwoods, mate competition in a sexually reproducing species leads to an evolutionary arms race. This is illustrated most iconically by the peacock’s brilliant tail, which serves as an advertisement of its owner’s physical and genetic fitness. Similarly, among humans, the competitive aspect of courtship implies that both men and women will be keen to advertise themselves on the mating market. We want potential mates to know that we have good genes and that we’ll make good parents. The logic of this isn’t particularly hard to understand, but the implications can be surprising. As Geoffrey Miller argues in The Mating Mind, “Our minds evolved not just as survival machines, but as courtship machines,” and many of our most distinctive behaviors serve reproductive rather than survival ends. There are good reasons to believe, for example, that our capacities for visual art, music, storytelling, and humor function in large part as elaborate mating displays, not unlike the peacock’s tail.
In this context, the advice in Matthew 7:1—”Judge not, lest you be judged”—is difficult to follow. It goes against the grain of every evolved instinct we have, which is to judge others readily, while at the same time advertising ourselves so that we may be judged by others. To understand the competitive side of human nature, we would do well to turn Matthew 7:1 on its head: “Judge freely, and accept that you too will be judged.”
We rely heavily on honest signals in the competitive arenas we’ve been discussing—that is, whenever we try to evaluate others as potential mates, friends, and allies. Loyal friends can distinguish themselves from fair-weather friends by visiting you in the hospital, for example. Healthy mates can distinguish themselves from unhealthy ones by going to the gym or running a marathon. Initiates who get gang tattoos thereby commit themselves to the gang in a way that no verbal pledge could hope to accomplish. Of course, we also use these honest signals whenever we wish to advertise our own value as a friend, mate, or teammate.
One thing that makes signaling hard to analyze, in practice, is the phenomenon of countersignaling. For example, consider how someone can be either an enemy, a casual friend, or a close friend. Casual friends want to distinguish themselves from enemies, and they might use signals of warmth and friendliness—things like smiles, hugs, and remembering small details about each other. Meanwhile, close friends want to distinguish themselves from casual friends, and one of the ways they can do it is by being unfriendly, at least on the surface. When a close friend forgets his wallet and can’t pay for lunch, you might call him an idiot. This works only when you’re so confident of your friendship that you can (playfully) insult him, without worrying that it will jeopardize your friendship. This isn’t something a casual friend can get away with as easily, and it may even serve to bring close friends closer together.
When signals are used in competitive games, like sex, status, and politics, an arms race often results. In order to outdo the other competitors, each participant tries to send the strongest possible signal. This can result in some truly spectacular achievements: Bach’s concertos, Gauguin’s paintings, Shakespeare’s sonnets and plays, Rockefeller’s philanthropic foundation, and Einstein’s theories of relativity. And sometimes, like the redwoods, humans too compete to reach for the sky, whether by climbing Mount Everest, building pyramids and skyscrapers, or launching rockets to the moon.
The problem with competitive struggles, however, is that they’re enormously wasteful. The redwoods are so much taller than they need to be. If only they could coordinate not to all grow so tall—if they could institute a “height cap” at 100 feet (30 meters), say—the whole species would be better off. All the energy that they currently waste racing upward, they could instead invest in other pursuits, like making more pinecones in order to spread further, perhaps into new territory. Competition, in this case, holds the entire species back. Unfortunately, the redwoods aren’t capable of coordinating to enforce a height cap, and natural selection can’t help them either. There’s no equilibrium where all trees curtail their growth “for the good of the species.” If a population of redwoods were somehow restraining themselves, it would take only a few mutations for one of the trees to break ranks and grab all the sunlight for itself. This rogue tree would then soak in more energy from the sun, and thereby outcompete its rivals and leave more descendants, ensuring that the next generation of redwoods would be even more rivalrous and competitive—until eventually they were all back to being as tall as they are today. But our species is different. Unlike other natural processes, we can look ahead. And we’ve developed ways to avoid wasteful competition, by coordinating our actions using norms and norm enforcement—a topic we turn to in the next chapter.
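To see why “restraint for the good of the species” can’t hold, here is a toy simulation. It is entirely my own illustration rather than anything from the book, and it assumes only that a tree’s reproductive success depends on its height relative to its neighbors, and that occasional mutants grow slightly taller than their parents.

```python
import math
import random

def fitness(height, neighbors_mean):
    # Only *relative* height matters: overtopping your neighbors wins sunlight,
    # but everyone being tall together helps no one.
    return math.exp(0.1 * (height - neighbors_mean))

def next_generation(population, mutation_step=5.0, mutation_rate=0.05):
    mean_height = sum(population) / len(population)
    weights = [fitness(h, mean_height) for h in population]
    parents = random.choices(population, weights=weights, k=len(population))
    # A few offspring mutate to grow a bit taller than their parent.
    return [h + (mutation_step if random.random() < mutation_rate else 0.0)
            for h in parents]

population = [100.0] * 200   # every tree starts at the hypothetical 100-foot cap
for _ in range(300):
    population = next_generation(population)

print(round(sum(population) / len(population), 1))
# Mean height drifts far above 100 feet, yet no tree ends up with more sunlight
# than it started with: the "cap" is not an evolutionarily stable state.
```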
There’s a tension in all of this. In simple applications of decision theory, it’s better to have more options and more knowledge. Yet Schelling has argued that, in a variety of scenarios, limiting or sabotaging yourself is the winning move. What gives? Resolving this tension turns out to be straightforward. Classical decision theory has it right: there’s no value in sabotaging yourself per se. The value lies in convincing other players that you’ve sabotaged yourself. In the game of chicken, you don’t win because you’re unable to steer, but because your opponent believes you’re unable to steer. Similarly, as a kidnapping victim, you don’t suffer because you’ve seen your kidnapper’s face; you suffer when the kidnapper thinks you’ve seen his face. If you could somehow see his face without giving him any idea that you’d done so, you’d probably be better off.
The Madman “I’m doing this no matter what,” says the Madman, “so stay outta my way!” When we commit ourselves to a particular course of action, it often changes the incentives for other players. This is how removing the steering wheel helps us win the game of chicken, but it’s also why businesspeople, gang leaders, athletes, and other competitors try to psych out their opponents.
Signals need to be expensive so they’re hard to fake. More precisely, they need to be differentially expensive—more difficult to fake than to produce by honest means.19
Here’s another way to look at it. Every remark made by a speaker contains two messages for the listener: text and subtext. The text says, “Here’s a new piece of information,” while the subtext says, “By the way, I’m the kind of person who knows such things.” Sometimes the text is more important than the subtext, as when a friend gives you a valuable stock tip. But frequently, it’s the other way around. When you’re interviewing someone for a job, for example, you aren’t trying to learn new domain knowledge from the job applicant, but you might discuss a topic in order to gauge the applicant as a potential coworker. You want to know whether the applicant is sharp or dull, plugged-in or out of the loop. You want to know the size and utility of the applicant’s backpack.
We want leaders who are sharp and can prove it to us. “In most or all societies,” writes Robbins Burling, “those who rise to positions of leadership tend to be recognized as having high linguistic skills.”
And yet, as many observers have pointed out, even some of the poorest among us live better than kings and queens of yore. So why do we continue working so hard? One of the big answers, as most people realize, is that we’re stuck in a rat race. Or to put it in the terms we’ve been using throughout the book, we’re locked in a game of competitive signaling. No matter how fast the economy grows, there remains a limited supply of sex and social status—and earning and spending money is still a good way to compete for it.
Savvy marketers at Toyota, maker of the popular Prius brand of hybrid cars, no doubt had this in mind when they designed the Prius’s distinctive body. For the U.S. market, they chose to produce a hatchback instead of a sedan, even though sedans are vastly more popular. Why change two things at once, both the engine and the body? A likely reason is that a distinctive body makes the car more conspicuous.10 Whether out on the road or parked in a driveway, a Prius is unmistakable. If the Prius looked just like a Camry, fewer people would notice it.
Other desirable traits that consumers are keen to signal include the following:

— Loyalty to particular subcultures. A Boston Bruins cap says, “I support my local hockey team, and by extension, the entire community of other fans and supporters.” An AC/DC T-shirt says, “I’m aligned with fans of hard rock (and the countercultural values it stands for).” These products function as badges of social membership.
— Being cool, trendy, or otherwise “in the know.” Sporting the latest fashions or owning the hottest new tech gadgets shows that you’re plugged into the zeitgeist—that you know what’s going to be popular before everyone else does.
— Intelligence. A Rubik’s Cube isn’t just a cheap plastic toy; it’s often an advertisement that its owner knows how to solve it, a skill that requires an analytical mind, not to mention a lot of practice.

These, again, are just a few of the many traits our purchases can signal. Others include athleticism, ambition, health-consciousness, conformity (or authenticity), youth (or maturity), sexual openness (or modesty), and even political attitudes.
A trip to the Galápagos isn’t something we can tote around like a handbag, but by telling frequent stories about the trip, bringing home souvenirs, or posting photos to Facebook, we can achieve much of the same effect. (Of course, we get plenty of personal pleasure from travel, but some of the value comes from being able to share the experience with friends and family.) Buying experiences also allows us to demonstrate qualities that we can’t signal as easily with material goods, such as having a sense of adventure or being open to new experiences. A 22-year-old woman who spends six months backpacking across Asia sends a powerful message about her curiosity, open-mindedness, and even courage. Similar (if weaker) signals can be bought for less time and money simply by eating strange foods, watching foreign films, and reading widely.
Now, as consumers, we’re aware of many of these signals. We know how to judge people by their purchases, and we’re mostly aware of the impressions our own purchases make on others. But we’re significantly less aware of the extent to which our purchasing decisions are driven by these signaling motives.
In Spent, Geoffrey Miller distinguishes between products we buy for personal use, like scissors, brooms, and pillows, and products we buy for showing off, like jewelry and branded apparel.
Today there’s a stigma to wearing uniforms, in part because it suppresses our individuality. But the very concept of “individuality” is just signaling by another name. The main reason we like wearing unique clothes is to differentiate and distinguish ourselves from our peers. In this way, even the most basic message sent by our clothing choices—“I’m my own person, in charge of my own outfit”—would have no place or value in an Obliviated society.
If lifestyle ads work entirely by Pavlovian training, then it would never make sense to advertise to an audience that’s unable or unlikely to buy the product. Brands would try to target their ads as narrowly as possible to their purchasing demographic. Why pay to reach a million viewers if only 10,000 of them can afford your product? But if lifestyle ads work by the third-person effect, then there will be some products for which it makes good business sense to target a wider audience, one that includes both buyers and non-buyers. One reason to target non-buyers is to create envy. As Miller argues, this is the case for many luxury products. “Most BMW ads,” he says, “are not really aimed so much at potential BMW buyers as they are at potential BMW coveters.” When BMW advertises during popular TV shows or in mass-circulation magazines, only a small fraction of the audience can actually afford a BMW. But the goal is to reinforce for non-buyers the idea that BMW is a luxury brand. To accomplish all this, BMW needs to advertise in media whose audience includes both rich and poor alike, so that the rich can see that the poor are being trained to appreciate BMW as a status symbol.
Art is an animal behavior, after all, and we need something like the fitness-display theory to explain how art pays for itself in terms of enhanced survival and reproduction, especially in the primitive (“folk art”) context of our foraging ancestors.
Hopefully by now we’ve demonstrated that art is valued for more than its intrinsic beauty and expressive content. It’s also fundamentally a statement about the artist, that is, a fitness display.
The fitness-display theory explains why. Art originally evolved to help us advertise our survival surplus and, from the consumer’s perspective, to gauge the survival surplus of others. By distilling time and effort into something non-functional, an artist effectively says, “I’m so confident in my survival that I can afford to waste time and energy.”
Think about how rarely we’re impressed by truly unimpressive people. When it happens, we feel as though we’ve been taken in by a charlatan. It can even be embarrassing to demonstrate poor aesthetic judgment. We don’t want others to know that we’re inept at telling good art from bad, skilled artists from amateurs. This suggests that we evaluate each other not only for our first-order skills, but for our skills at evaluating the skills of others. Human social life is many layered indeed.
Often charities bracket donations into tiers and advertise only which tier a given donor falls into (rather than an exact dollar amount). For example, someone who gives between $500 and $999 might be called a “friend” or “silver sponsor,” while someone who gives between $1,000 and $1,999 might be called a “patron” or “gold sponsor.” If you donate $900, then, you’ll earn the same label as someone who donates only $500. Not surprisingly, the vast majority of donations to such campaigns fall exactly at the lower end of each tier. Put another way: very few people give more than they’ll be recognized for.
In what follows (much of which is cribbed from Bryan Caplan’s excellent new book The Case Against Education), we’ll show how “learning” doesn’t account for the full value of education, and we’ll present a variety of alternative explanations for why students go to school and why employers value educated workers.
In 2001, the Nobel Prize was awarded to economist Michael Spence for a mathematical model of one explanation for these puzzles: signaling. The basic idea is that students go to school not so much to learn useful job skills as to show off their work potential to future employers. In other words, the value of education isn’t just about learning; it’s also about credentialing. Of course, this idea is much older than Spence; he’s just famous for expressing the idea in math. In the signaling model, each student has a hidden quality—future work productivity—that prospective employers are eager to know. But this quality isn’t something that can be observed easily over a short period, for example, by giving job applicants a simple test. So instead, employers use school performance as a proxy.
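As a concrete, purely illustrative sketch of Spence’s idea, here is a toy version of the model in Python. The productivities, schooling costs, and thresholds are numbers I’ve chosen for the example, not figures from Spence or from the book; the point is that a credential can truthfully sort workers by hidden type even if the schooling itself teaches nothing.

```python
# Toy version of Spence's signaling model (all numbers are illustrative assumptions).
PRODUCTIVITY = {"low": 1.0, "high": 2.0}    # a worker's true value to an employer
COST_PER_YEAR = {"low": 0.8, "high": 0.4}   # schooling is less costly for able students

def wage(years_of_school, threshold):
    """Employers can't observe productivity directly, so they pay the high wage
    only to applicants who clear the credential threshold."""
    return PRODUCTIVITY["high"] if years_of_school >= threshold else PRODUCTIVITY["low"]

def payoff(worker_type, years_of_school, threshold):
    return wage(years_of_school, threshold) - COST_PER_YEAR[worker_type] * years_of_school

def is_separating(threshold):
    """A 'separating equilibrium': high types find the credential worth its cost,
    low types don't, so the degree reliably signals hidden productivity."""
    high_signals = payoff("high", threshold, threshold) >= payoff("high", 0, threshold)
    low_abstains = payoff("low", 0, threshold) >= payoff("low", threshold, threshold)
    return high_signals and low_abstains

for threshold in (1.0, 1.5, 2.0, 3.0):
    print(threshold, is_separating(threshold))
# With these numbers, any threshold between 1/0.8 = 1.25 and 1/0.4 = 2.5 "years"
# separates the types, even though the schooling adds nothing to productivity.
```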
Imagine interviewing a 22-year-old college grad for a position at your firm. Glancing down at her resume, you notice she got an A in the biology class she took during her sophomore year. What does this tell you about the young woman in front of you? Well, it doesn’t necessarily mean she understands biology; she might have retained that knowledge, but statistically speaking, she’s probably forgotten a lot of it. More precisely, it tells you that she’s the kind of person who’s capable of getting an A in a biology class. This is more than just a tautology. It implies that she has the ability to master a large body of new concepts, quickly and thoroughly enough to meet the standards of an expert in the field—or at least well enough to beat most of her peers at the same task. (Even if the class wasn’t graded on a strict curve, most professors calibrate their courses so that only a minority of students earn A’s.) In addition to what the A tells you about her facility with concepts, it also tells you that she’s the kind of person who can consistently stay on top of her workload. Every paper, project, and homework assignment has a deadline, and she met most if not all of them. Every test fell on a specific date, and she studied and crammed enough to perform well on those tests—all while managing a much larger workload from other classes, of course. If she got good grades in those courses too—wow! And if she did lots of extracurricular activities (or a small number at a very high level), her good grades shine even brighter. All of this testifies quite strongly to her ability to get things done at your firm, and none of it depends on whether she actually remembers anything from biology or any of her other classes.
No one claims that signaling explains the entire value of education. Some learning and improvement certainly does take place in the classroom, and some of it is critical to employers. This is especially true for technical and professional fields like engineering, medicine, and law. But even in those fields, signaling is important, and for many other fields, signaling may completely eclipse the learning function. Caplan, for example, estimates that signaling is responsible for up to 80 percent of the total value of education.
For example, the fact that school is boring, arduous, and full of busywork might hinder students’ ability to learn. But to the extent that school is primarily about credentialing, its goal is to separate the wheat (good future worker bees) from the chaff (slackers, daydreamers, etc.). And if school were easy or fun, it wouldn’t serve this function very well. If there were a way to fast-forward all the learning (and retention) that actually takes place in school—for example, by giving students a magic pill that taught them everything in an instant—we would still need to subject them to boring lectures and nitpicky tests in order to credential them. Signaling also explains a lot of things we don’t see (that we might expect to see if school were primarily about learning). For example, if the value of a college degree were largely a function of what you learned during your college career, we might expect colleges to experiment with giving students a comprehensive “exit exam” covering material in all the courses they took. Sure, it would be difficult, and there’s no way to test the material in the same depth as final exams given at the end of each semester. But if employers actually cared about knowledge, they’d want to know how much students actually retain. Instead, employers seem content with information about students’ generic ability to learn things (and complete assignments on time).
Meanwhile, when you’re an individual student within a nation, getting more school can substantially increase your future earnings—not because of what you’ve learned, but because the extra school helps distinguish you as a better worker. And, crucially, it distinguishes you from other students. Thus, to the extent that education is driven by signaling rather than learning, it’s more of a competition than a cooperative activity for our mutual benefit. Sure, we’d like school to be a place where we can all get better together, but the signaling model shows us that it’s more of a competitive tournament where only so many students can “win.” “Higher education,” says Peter Thiel, a tech billionaire famously critical of college, “sorts us all into a hierarchy. Kids at the top enjoy prestige because they’ve defeated everybody else in a competition to reach the schools that proudly exclude the most people. All the hard work at Harvard is done by the admissions officers who anoint an already-proven hypercompetitive elite. If that weren’t true—if superior instruction could explain the value of college—then why not franchise the Ivy League? Why not let more students benefit? It will never happen because the top U.S. colleges draw their mystique from zero-sum competition.”
Signaling certainly goes a long way toward explaining why we value education and why schools are structured the way they are. But if schools today mainly function as a credentialing apparatus, it seems like there should be cheaper, less wasteful ways to accomplish the same thing. For example, an enterprising young man could drop out of school and work an entry-level job for a few years, kind of like an apprenticeship. If he’s smart and diligent, he could conceivably get promoted to the same level he would have been hired at if he’d taken the time to finish his degree—and meanwhile, he’d be making a salary instead of studying and doing homework for free. So why don’t we see more young people doing this? A partial (but unsatisfying) answer is that going to school is simply the norm, and therefore anyone who deviates from it shows their unwillingness to conform to societal expectations. It’s all well and good for Bill Gates or Steve Jobs to drop out of college, but most of us aren’t that talented. And what employer wants to risk hiring someone who was too antsy to complete a degree? A desire to break the mold may be attractive in a CEO, but not necessarily for someone working at a bank or paper company. By this logic, school isn’t necessarily the best way to show off one’s work potential, but it’s the equilibrium our culture happened to converge on, so we’re mostly stuck with it. But if school is really such a waste, we might expect to see people eagerly innovating to come up with alternatives. Certainly there are some efforts in this direction, like online courses and Thiel’s sponsorship for talented students to forego college. But by and large, most of us accept that school is a reasonable use of our time and money, in part because school serves a wide variety of useful functions, even beyond learning skills and signaling work potential.
To the extent that medicine functions as a caring signal, it’s going to be sensitive to context. If everyone around you spends a lot on medical care, you’ll need to spend a lot too, or risk looking like someone who doesn’t care enough.
Social Norms & Enforcement
When a toddler stumbles and scrapes his knee, his mom bends down to give it a kiss. No actual healing takes place, and yet both parties appreciate the ritual. The toddler finds comfort in knowing his mom is there to help him, especially if something more serious were to happen. And the mother, for her part, is eager to show that she’s worthy of her son’s trust. This small, simple example shows how we might be programmed both to seek and give healthcare even when it isn’t medically useful.
This dilemma, and the strong physiological reaction that accompanies it, is part of a behavioral toolkit that’s universal among humans, something we’ve inherited from our forager ancestors. Our behaviors and reactions may not always make sense in a modern context, but they evolved because our ancestors confronted situations like this all the time, and what was useful for them is still (mostly) useful for us, especially when we’re facing people we know rather than strangers on the street. As we saw in the previous chapter, redwood trees are trapped in unfettered competition with each other. Under natural selection, there’s no way for them to curtail their growth “for the good of the species.” But humans are different. Unlike the rest of nature, we can sometimes see ahead and coordinate to avoid unnecessary competition. This is one of our species’ superpowers—that we’re occasionally able to turn wasteful competition into productive cooperation. Instead of always bull-rushing to the front of a line, for example, we can wait patiently and orderly. But as the occasional line-cutter reminds us, there’s always a temptation to cheat, and maintaining order isn’t always easy.
In Debt, the anthropologist David Graeber tells the story of Tei Reinga, a Maori villager and “notorious glutton” who used to wander up and down the New Zealand coast, badgering the local fishermen by asking for the best portions of their catch. Since it’s impolite in Maori culture (as in many cultures) to refuse a direct request for food, the fishermen would oblige—but with ever-increasing reluctance. And so as Reinga continued to ask for food, their resentment grew until “one day, people decided enough was enough and killed him.” This story is extreme, to say the least, but it illustrates how norm-following and norm-enforcement can be a very high-stakes game. Reinga flouted an important norm (against freeloading) and eventually paid dearly for it. But just as tellingly, the fishermen who put him to death felt so duty-bound by a different norm (the norm of food-sharing) that they followed it even to the point of building up murderous resentment. “Couldn’t you just have said no to Reinga’s requests?!” we want to shout at the villagers. But similarly we should ask ourselves, “Can’t we just let it go when someone cuts in line?” These instincts run deep.
Among laypeople, gossip gets a pretty bad rap. But anthropologists see it differently. Gossip—talking about people behind their backs, often focusing on their flaws or misdeeds—is a feature of every society ever studied.
One of the first scientists to study this formally was Robert Axelrod, a political scientist and game theorist who constructed a simple but illustrative model of norm-related behavior. What Axelrod found is that, in most situations (involving a variety of different costs and benefits, including the costs of helping to punish), people have no incentive to punish cheaters. However—and this was Axelrod’s great contribution—the model can be made to work in favor of the good guys with one simple addition: a norm of punishing anyone who doesn’t punish others. Axelrod called this the “meta-norm.”
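The sketch below is not Axelrod’s actual simulation, just a back-of-the-envelope comparison in its spirit: it computes the expected cost of doing one’s punishment duty versus shirking it, with and without a meta-norm. All parameter values are assumptions chosen for illustration.

```python
# Back-of-the-envelope comparison inspired by Axelrod's norms/meta-norms model.
# All numbers (enforcement cost, punishment size, probabilities) are assumptions.

ENFORCEMENT_COST = 2.0    # cost paid each time you punish a cheater
META_PUNISHMENT = 9.0     # penalty for being caught failing to punish
P_OBSERVE_CHEATER = 0.5   # chance you witness a given cheating episode
P_CAUGHT_SHIRKING = 0.5   # chance others notice you failed to punish
CHEATING_EPISODES = 4     # cheating episodes per round

def expected_enforcement_cost(punishes: bool, meta_norm: bool) -> float:
    """Expected per-round cost of enforcement duty for one agent."""
    cost = 0.0
    for _ in range(CHEATING_EPISODES):
        if punishes:
            cost += P_OBSERVE_CHEATER * ENFORCEMENT_COST
        elif meta_norm:
            cost += P_OBSERVE_CHEATER * P_CAUGHT_SHIRKING * META_PUNISHMENT
    return cost

# Without a meta-norm, shirking enforcement is free, so vengefulness erodes.
print(expected_enforcement_cost(punishes=True,  meta_norm=False))  # 4.0
print(expected_enforcement_cost(punishes=False, meta_norm=False))  # 0.0
# With a meta-norm, shirkers expect to pay more than diligent punishers do.
print(expected_enforcement_cost(punishes=True,  meta_norm=True))   # 4.0
print(expected_enforcement_cost(punishes=False, meta_norm=True))   # 9.0
```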
As we’ve mentioned, humans have developed a wide variety of norms to constrain individual behavior. Many of these, like the norms against murder, rape, assault, and theft, are so obvious, and so strongly enforced, that they simply aren’t relevant for this book. The norms we care about here are the subtle ones, violations of which are so hard to detect that we often don’t notice even when we do it ourselves. Typically, these are crimes of intent. If you just happen to be friendly with someone else’s spouse, no big deal. But if you’re friendly with romantic or sexual intentions, that’s inappropriate. By targeting intentions rather than actions, norms can more precisely regulate the behavior patterns that cause problems within communities. (It would be ham-fisted and unduly cumbersome to ban friendliness, for example.) But regulating intentions also opens the door to various kinds of cheating, which we’ll explore in Chapter 4. Part of our thesis is that these weaker norms, the ones that regulate our intentions, are harder to notice, especially when we violate them ourselves, because we’ve developed that blind spot—the elephant in the brain. For this reason, it pays to dwell on a few of them, to remind ourselves that there’s a lot of social pressure to conform to these norms, but that we would benefit from violating these norms freely, if only we could get away with it.
We’d be wary of Daniel Kahneman, for example, if he went around introducing himself as a Nobel Prize–winner; we’d wonder why he felt the need to put himself above everyone else. For this reason, we actively celebrate people for being humble, and enjoy seeing arrogant people brought down a peg or two. But note that there remains a strong incentive to brag and show off. We need people to notice our good qualities, skills, and achievements; how else will they know to choose us as friends, mates, and teammates? We want people to notice our charitable contributions, our political connectedness, and our prowess in art, sport, and school. If it weren’t verboten, we’d post to Facebook every time we donated to charity, got a raise at work, or made friends with an important person. But because bragging is frowned upon, we have to be a little more discreet—a topic we’ll explore in the next chapter.
Everybody cheats. Let’s just get that out up front; there’s no use denying it. Yes, some people cheat less than others, and we ought to admire them for it. But no one makes it through life without cutting a few corners. There are simply too many rules and norms, and to follow them all would be inhuman.
Scalping—the unauthorized reselling of tickets, typically at the entrance to concerts and sporting events—is illegal in roughly half of the states in the United States.13 That’s why you’ll often hear scalpers hawking their goods with the counterintuitive (yet perfectly legal) request to buy tickets. Like wrapping alcohol in a paper bag, this practice doesn’t fool the people who are charged with stopping it; the police and venue security personnel know exactly what’s going on. And yet scalpers find it overwhelmingly in their interests to keep up the charade. This is another illustration of how even modest acts of discretion can thwart attempts at enforcing norms and laws.
Real-life norms have many gray areas and iffy boundary cases, in part because it’s impossible to define them in terms everyone can agree on. Wittgenstein famously argued that it’s impossible to define, in unambiguous terms, what constitutes a “game,” and the same argument applies to all complex cultural concepts, including norms.
More generally, any act of following or copying another person’s behavior—from mimicry on the dance floor to the call-and-response routines common in religious ceremonies—demonstrates a leader’s ability to inspire others to follow. In modern workplaces, for example, it’s almost always the boss who initiates the end of a meeting, perhaps by being the first to stand up from the table. It would be a faux pas for a subordinate to get up and leave before the boss signaled that everyone was free to go.
But listeners rarely try to shortchange speakers in this way. Instead, we’re typically happy to give speakers an appropriate amount of credit for their insightful remarks—credit we pay back not in terms of other information, but rather in terms of respect. And we’re incentivized to give them exactly as much respect as they deserve because we’re evaluating them as potential allies rather than as trading partners.
On the surface, this ad seems to be appealing directly to you as an individual. It’s making a kind of rational argument: “If you drink sugary beverages, you’re liable to get fat.” But consider also the effect this ad is likely to have on social creatures who judge each other based on what they consume. The campaign ran for three months and was seen by millions of New Yorkers. If you saw the ad, chances are good most of your peers saw it too. In light of this, how likely will you be to bring soda to a friend’s birthday party? How self-conscious will you be slurping a Big Gulp at the office all-hands meeting? Those globs of fat have stuck in everyone’s mind. Maybe better to reach for water or diet soda instead. Peer pressure is a powerful force, and advertisers know how to harness it to their advantage.
People often talk as if intelligence were the key element underlying both school and work performance. But ordinary IQ can’t be the whole story; if it were, employers could simply screen applicants with a cheap, fast IQ test instead of rewarding years of schooling. More to the point, however, raw intelligence can only take you so far. If you’re smart but lazy, for example, your intelligence won’t be worth very much to your employer. As Caplan argues, the best employees have a whole bundle of attributes—including intelligence, of course, but also conscientiousness, attention to detail, a strong work ethic, and a willingness to conform to expectations. These qualities are just as useful in blue-collar settings like warehouses and factories as they are in white-collar settings like design studios and cubicle farms. But whereas someone’s IQ can be measured with a simple 30-minute test, most of these other qualities can only be demonstrated by consistent performance over long periods of time.
In light of this, consider how an industrial-era school system prepares us for the modern workplace. Children are expected to sit still for hours upon hours; to control their impulses; to focus on boring, repetitive tasks; to move from place to place when a bell rings; and even to ask permission before going to the bathroom (think about that for a second). Teachers systematically reward children for being docile and punish them for “acting out,” that is, for acting as their own masters. In fact, teachers reward discipline independent of its influence on learning, and in ways that tamp down on student creativity. Children are also trained to accept being measured, graded, and ranked, often in front of others. This enterprise, which typically lasts well over a decade, serves as a systematic exercise in human domestication. Schools that are full of regimentation and ranking can acclimate students to the regimentation and ranking common in modern workplaces.
Now, some of this may seem heavy-handed and forebodingly authoritarian, but domestication also has a softer side that’s easier to celebrate: civilization. Making students less violent. Cultivating politeness and good manners. Fostering cooperation. In France, for example, school was seen as a way to civilize “savage” peasants and turn them into well-behaved citizens.
The point here is that whenever we fail to uphold the (perceived) highest standards for medical treatment, we risk becoming the subject of unwanted gossip and even open condemnation. Our seemingly “personal” medical decisions are, in fact, quite public and even political.
For an individual human living alone in the woods, it never makes sense to take a resource and just throw it away or burn it up. But add a few other humans to the scene, and suddenly it can be perfectly rational—because, as we’ve seen many times, sacrifice is socially attractive. Who makes a better ally: someone who’s only looking out for number one or someone who shows loyalty, a willingness to sacrifice for others’ benefit? Clearly it’s the latter. And the greater the sacrifice, the more trust it engenders.
To the secular mentality, many of these norms—like the one against contraception—make little sense, especially on moral grounds. Why shouldn’t an individual woman be allowed to use birth control? But in a tight-knit community, each woman’s “individual” choices have social externalities. If you’re using birth control, you’re also more likely to delay marriage, get an advanced degree, and pursue a dynamic, financially rewarding career. This makes it harder on your more traditional, family-oriented neighbors. Your lifestyle interferes with theirs (and vice versa), and avoiding such tensions is largely why we self-segregate into communities in the first place.
Communication & Body Language
Microsociology. When we study how people interact with each other on the small scale—in real time and face to face—we quickly learn to appreciate the depth and complexity of our social behaviors and how little we’re consciously aware of what’s going on. These behaviors include laughter, blushing, tears, eye contact, and body language. In fact, we have so little introspective access to these behaviors, or voluntary control over them, that it’s fair to say “we” aren’t really in charge. Our brains choreograph these interactions on our behalf, and with surprising skill. While “we” anguish over what to say next, our brains manage to laugh at just the right moments, flash the right facial expressions, hold or break eye contact as appropriate, negotiate territory and social status with our posture, and interpret and react to all these behaviors in our interaction partners.
For a piece of information to be “common knowledge” within a group of people, it’s not enough simply for everyone to know it. Everyone must also know that everyone else knows it, and know that they know that they know it, and so on. It could as easily be called “open” or “conspicuous knowledge.”
In his book Rational Ritual, the political scientist Michael Chwe illustrates common knowledge using email. If you invite your friends to a party using the “To” and “Cc” fields, the party will be common knowledge—because every recipient can see every other recipient. But if you invite your guests using the “Bcc” field, even though each recipient individually will know about the party, it won’t be common knowledge. We might refer to information distributed this way, in the “Bcc” style, as closeted rather than common.
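A small sketch of the difference: with “To” or “Cc”, every recipient can see the full guest list; with “Bcc”, each recipient sees only the openly listed names. The helper function and guest names below are hypothetical, purely to make the visibility difference concrete.

```python
# Which other guests can each recipient see on an invitation? With To/Cc,
# every recipient sees the full guest list, so the party becomes common
# knowledge. With Bcc, each recipient knows about the party but cannot
# see the other Bcc'd guests.

def visible_guest_lists(to, cc, bcc):
    openly_listed = set(to) | set(cc)
    visibility = {}
    for guest in openly_listed:
        visibility[guest] = set(openly_listed)            # sees everyone listed openly
    for guest in bcc:
        visibility[guest] = set(openly_listed) | {guest}  # sees the open list plus themselves
    return visibility

open_invite = visible_guest_lists(to=["ana", "ben", "cam"], cc=[], bcc=[])
closeted_invite = visible_guest_lists(to=["host"], cc=[], bcc=["ana", "ben", "cam"])

print(open_invite["ana"])      # ana sees ben and cam as well: common knowledge
print(closeted_invite["ana"])  # ana sees only the host and herself: closeted knowledge
```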
Here’s another way to think about it. We typically treat discretion or secret-keeping as an activity that has only one important dimension: how widely a piece of information is known. But actually there are two dimensions to keeping a secret: how widely it’s known and how openly or commonly it’s known.
Imagine two guards patrolling the castle together who happen to have overheard the nobles. Both guards might individually suspect a plot, but they might also be secretly happy about it. (Maybe the king has mistreated them.) Neither could openly admit to endorsing treason, but because the nobles were whispering, each guard can pretend not to have heard. If, instead, the nobles had been speaking loudly and openly, the plot would become common knowledge between the guards, and they would feel compelled to arrest the conspirators. As a rule of thumb, whenever communication is discreet—subtle, cryptic, or ambiguous—it’s a fair bet that the speaker is trying to get away with something by preventing the message from becoming common knowledge. Examples include:

Body language. A nod, a glance, a knowing smile, a quick roll of the eyes, or a friendly touch on the arm. In general, body language is discreet in a way that words aren’t, because it’s harder to interpret and quote to third parties. “The meaning of a wink,” says Michael Chwe in Rational Ritual, “depends on it not being common knowledge.”17 We’ll take a closer look at body language in Chapter 7.

Cryptic communication. Using words or phrases whose meaning is obscure, but which are more easily understood by one’s target audience than by hostile eavesdroppers. This is one reason we develop and use so much slang for bad, questionable, or illegal behavior. Terms like “hooking up” (sex), “420” (marijuana), and “gaming” (gambling) all proliferate partly in order to stay half a step ahead of the authorities (be they parents, police, or judgmental peers).

Subtlety and subtext. Indirection, hints, and innuendo. Such tactics allow us to convey meaning while retaining enough semantic elbow room to deny the message later, if need be. Examples include veiled threats (“It would be a shame if something happened to that pretty face of yours”) and broaching bad behavior such as prostitution (“You looking to have a good time?”) or drugs (“Do you like to party?”).

Symbolism. In her novel Ethan Frome, Edith Wharton cleverly symbolizes the sexual relationships between her main characters using two uncanny dinner items: pickles and donuts. More seriously, symbols can be used to rally resistance against a corrupt regime. If a resistance movement becomes associated with a particular color, people can wear that color to support the resistance without making themselves as vulnerable to attack by the ruling regime.
These techniques can be useful even when there are only two people involved. Consider a man propositioning a woman for sex after a couple dates. If he asks openly—”Would you like to have sex tonight?”—it puts both of their “faces” on the line; everything becomes less deniable. The solution is a little euphemism: “Want to come up and see my etchings?” Both parties have a pretty clear idea of what’s being suggested, but crucially their knowledge doesn’t rise to the status of common knowledge. He doesn’t know that she knows that he was offering sex—at least not with certainty.
In terms of method, the experiments were fairly conventional: an image was flashed, some questions were asked, that sort of thing. What distinguished these experiments were their subjects. These were patients who had previously, for medical reasons, undergone a corpus callosotomy—a surgical severing of the nerves that connect the left and right hemispheres of the brain. Hence the nickname for these subjects: split-brain patients.
When we say “body language,” we’re referring not just to arm movements and torso positioning, but more generally to all forms of “nonverbal communication.” In fact, we’re using these terms synonymously. The concept includes facial expressions, eye behaviors, touch, use of space, and everything we do with our voices besides uttering words: tone, timbre, volume, and speaking style.
And owing to these consequences, body language is inherently more honest than verbal language. It’s easy to talk the talk, but harder to walk the walk.
So in matters of life, death, and finding mates, we’re often wise to shut up and let our bodies do the talking.
We offer one final example of nonverbal political behavior. Imagine yourself out to dinner with a close friend. At some point, the conversation may turn to gossip—discussing and judging the behavior of those who aren’t present. But before your friend makes a negative remark about someone, he’s liable to glance over his shoulder, lean in, and lower his voice. These are nonverbal cues that what he’s about to say requires discretion. He’s letting you know that he trusts you with information that could, if word got out, come back to bite him.
Social status influences how we make eye contact, not just while we listen, but also when we speak. In fact, one of the best predictors of dominance is the ratio of “eye contact while speaking” to “eye contact while listening.” Psychologists call this the visual dominance ratio. Imagine yourself out to lunch with a coworker. When it’s your turn to talk, you spend some fraction of the time looking into your coworker’s eyes (and the rest of the time looking away). Similarly, when it’s your turn to listen, you spend some fraction of the time making eye contact. If you make eye contact for the same fraction of time while speaking and listening, your visual dominance ratio will be 1.0, indicative of high dominance. If you make less eye contact while speaking, however, your ratio will be less than 1.0 (typically hovering around 0.6), indicative of low dominance.
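As a quick sketch, the ratio is simply the eye-contact fraction while speaking divided by the eye-contact fraction while listening; the numbers below are made up for illustration.

```python
def visual_dominance_ratio(eye_contact_while_speaking: float,
                           eye_contact_while_listening: float) -> float:
    """Fraction of time making eye contact while speaking, divided by the
    fraction of time making eye contact while listening (both 0 to 1)."""
    return eye_contact_while_speaking / eye_contact_while_listening

# Equal eye contact in both roles: ratio of 1.0, read as high dominance.
print(visual_dominance_ratio(0.6, 0.6))    # 1.0
# Much less eye contact while speaking: ratio near 0.6, read as low dominance.
print(visual_dominance_ratio(0.4, 0.65))   # ~0.62
```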
Consider how we use our bodies to “say” a lot of things we’d get in trouble for saying out loud. It would be appallingly crass to announce, “I’m the most important person in the room”—but we can convey the same message, discreetly, simply by splaying out on a couch or staring at people while talking to them. Similarly, “I’m attracted to you,” is too direct to state out loud to someone you just met—but a smile, a lingering glance, or a friendly touch on the wrist can accomplish the same thing, with just enough plausible deniability to avoid ruffling feathers.
This is the magic of nonverbal communication. It allows us to pursue illicit agendas, even ones that require coordinating with other people, while minimizing the risk of being attacked, accused, gossiped about, and censured for norm violations. This is one of the reasons we’re strategically unaware of our own body language, and it helps explain why we’re reluctant to teach it to our children.
As Oscar Wilde said, “If you want to tell people the truth, make them laugh; otherwise they’ll kill you.”
To understand any behavior, it’s essential to understand its cost–benefit structure. And since conversation is a two-way street, we actually need to investigate the costs and benefits of two behaviors: speaking and listening. In what follows, we’re going to lean heavily on the insights of the psychologist Geoffrey Miller, whom we met in the introduction, as well as the computer and cognitive scientist Jean-Louis Dessalles. Their two books (The Mating Mind and Why We Talk, respectively) provide thoughtful perspectives on conversation as a transaction between speakers and listeners—a transaction constrained, crucially, by the laws of economics and game theory.
In a naive accounting, speaking seems to cost almost nothing—just the calories we expend flexing our vocal cords and firing our neurons as we turn thoughts into sentences. But this is just the tip of the iceberg. A full accounting will necessarily include two other, much larger costs:

1. The opportunity cost of monopolizing information. As Dessalles says, “If one makes a point of communicating every new thing to others, one loses the benefit of having been the first to know it.” If you tell people about a new berry patch, they’ll raid the berries that could have been yours. If you show them how to make a new tool, soon everyone will have a copy and yours won’t be special anymore.

2. The costs of acquiring the information in the first place. In order to have interesting things to say during a conversation, we need to spend a lot of time and energy foraging for information before the conversation. And sometimes this entails significant risk.
In other words, listeners generally prefer speakers who can impress them wherever a conversation happens to lead, rather than speakers who steer conversations to specific topics where they already know what to say.
Consumption & Economic Behavior
Veblen famously coined the term “conspicuous consumption” to explain the demand for luxury goods. When consumers are asked why they bought an expensive watch or high-end handbag, they often cite material factors like comfort, aesthetics, and functionality. But Veblen argued that, in fact, the demand for luxury goods is driven largely by a social motive: flaunting one’s wealth. More recently, the psychologist Geoffrey Miller has made similar arguments from an evolutionary perspective, and we draw heavily from his work as well.
Economic puzzles. When we study specific social institutions—medicine, education, politics, charity, religion, news, and so forth—we notice that they frequently fall short of their stated goals. In many cases, this is due to simple execution failures. But in other cases, the institutions behave as though they were designed to achieve other, unacknowledged goals. Take school, for instance. We say that the function of school is to teach valuable skills and knowledge. Yet students don’t remember most of what they’re taught, and most of what they do remember isn’t very useful. Furthermore, our best research says that schools are structured in ways that actively interfere with the learning process, such as early wake-up times and frequent testing. (These and many other puzzles will be discussed in Chapter 13.) Again, something doesn’t add up.
If we return to the backpack analogy, we can see why relevance is so important. If you’re interested primarily in trading, you might ask, “What do you have in your backpack that could be useful to me?” And if your partner produces a tool that you’ve never seen, you’ll be grateful to have it (and you’ll try to return the favor). But anyone can produce a curiosity or two. The real test is whether your ally can consistently produce tools that are both new to you and relevant to the situations you face. “I’m building a birdhouse,” you mention. “Oh, great,” he responds, “here’s a saw for cutting wood,” much to your delight. “But how will I fix the wood together?” you ask. “Don’t worry, I also have wood glue.” Awesome! “But now I need something to hold birdseed,” you say hopefully. Your ally thinks for a minute, rummaging through his backpack, and finally produces the perfect plastic feeding trough. Now you’re seriously impressed. He seems to have all the tools you need, right when you need them. His backpack, you infer, must be chock-full of useful stuff. And while you could—and will—continue to engage him in useful acts of trading, you’re far more eager to team up with him, to get continued access to that truly impressive backpack of his.29 We want allies who have an entire Walmart in their backpacks, not just a handful of trinkets.
Berelson learned that readers have other less noble-sounding uses for their newspapers: They use them as a source of pragmatic information—on movies, stocks, or the weather; they use them to keep up with the lives of people they have come to “know” through their papers—from the characters in the news stories to the authors of the columns; they use them for diversion—as a “time-filler”; and they use them to prepare themselves to hold their own in conversations.
When you think about people two or three rungs above you on the social ladder, especially the nouveau riche, it’s easy to question the utility of their ostentatious purchases. Does anyone really need a 10,000-square-foot house, a $30,000 Patek Philippe watch, or a $500,000 Porsche Carrera GT? Of course not, but the same logic applies to much of your own “luxurious” lifestyle—it’s just harder for you to see.
Later, when we’re shopping for a product, the positive associations come flooding back to us, and we’ll be more favorably disposed to buying the product. These ads are brainwashing us (the explanation goes), and they’re doing it to us as individuals.
The hypothesis we’ve been considering is that lifestyle or image-based advertising influences us by way of the third-person effect, rather than (or in addition to) Pavlovian training. Now, what evidence is there that this is actually what’s happening?
One study, for example, found that consumers appreciate the same artwork less when they’re told it was made by multiple artists instead of a single artist—because they’re assessing the work by how much effort went into it, rather than simply by the final result.
Today, of course, lobster is far less plentiful and much more expensive, and now it’s considered a delicacy, “only a step or two down from caviar.” A similar aesthetic shift occurred with skin color in Europe. When most people worked outdoors, suntanned skin was disdained as the mark of a low-status laborer. Light skin, in contrast, was prized as a mark of wealth; only the rich could afford to protect their skin by remaining indoors or else carrying parasols. Later, when jobs migrated to factories and offices, lighter skin became common and vulgar, and only the wealthy could afford to lie around soaking in the sun. Now, lobster and suntans may not be “art,” exactly, but we nevertheless experience them aesthetically, and they illustrate how profoundly our tastes can change in response to changes in extrinsic factors. Here, things that were once cheap and easy became precious and difficult, and therefore more valued. Typically, however, the extrinsic factors change in ways that make things easier rather than more difficult.
Then, starting in the mid-18th century, the Industrial Revolution ushered in a new suite of manufacturing techniques. Objects that had previously been made only by hand—a process intensive in both labor and skill—could now be made with the help of machines. This gave artists and artisans unprecedented control over the manufacturing process. Walter Benjamin, a German cultural critic writing in the 1920s and 1930s, called this the Age of Mechanical Reproduction, and it led to an upheaval in aesthetic sensibilities.37 No longer was intrinsic perfection prized for its own sake. A vase, for example, could now be made smoother and more symmetric than ever before—but that very perfection became the mark of cheap, mass-produced goods. In response, those consumers who could afford handmade goods learned to prefer them, not only in spite of, but because of their imperfections.
Painters could no longer hope to impress viewers by depicting scenes as accurately as possible, as they had strived to do for millennia. “In response,” writes Miller, “painters invented new genres based on new, non-representational aesthetics: impressionism, cubism, expressionism, surrealism, abstraction. Signs of handmade authenticity became more important than representational skill. The brush-stroke became an end in itself.”39 These technological and aesthetic trends continue well into the present day. Every year, new technology forces artists and consumers to choose between the difficult “old-fashioned” techniques and the easier, but more precise, new techniques.
His argument began with a simple premise: If you notice a boy drowning in a shallow pond right in front of you, you have a moral obligation to try to rescue him. To do otherwise—to stand by and let him drown—would be unconscionable. So far, this isn’t particularly controversial. But Singer went on to argue that you have the exact same moral obligation to rescue children in developing countries who are dying of starvation, even though they’re thousands of miles away. The fact that they aren’t dying right in your backyard isn’t justification enough to ignore their plight. Singer’s conclusion tends to make people uncomfortable, especially since most of us don’t help starving children in far-off places with the same urgency we would help a boy drowning in the local pond. (Your two coauthors certainly don’t.) The argument implies that every time we take a vacation, buy an expensive car, or remodel the house, it’s morally equivalent to letting people die right in front of us. According to one calculation, for the cost of sending a kid through college in America, you could instead save the lives of more than 50 children (who happen to live in sub-Saharan Africa). Yes, many of us do try to help people in extreme need, but we also spend a lot on personal indulgences.
The main recipients of American charity are religious groups and educational institutions. Yes, some of what we give to religious groups ends up helping those who desperately need it, but much of it goes toward worship services, Sunday school, and other ends that aren’t particularly charitable. Giving to educational institutions is arguably even less impactful (as we’ll argue in Chapter 13 when we take a closer look at schools). Overall, no more than 13 percent of private American charity goes to helping those who seem to need it most: the global poor.
When we analyze donation as an economic activity, it soon becomes clear how little we seem to care about the impact of our donations. Whatever we’re doing, we aren’t trying to maximize ROD (return on donation). One study, for example, asked participants how much they would agree to pay for nets that prevent migratory bird deaths. Some participants were told that the nets would save 2,000 birds annually, others were told 20,000 birds, and a final group was told 200,000 birds. But despite the 10- and 100-fold differences in projected impact, people in all three groups were willing to contribute the same amount.13 This effect, known as scope neglect or scope insensitivity, has been demonstrated for many other problems, including cleaning polluted lakes, protecting wilderness areas, decreasing road injuries, and even preventing deaths. People are willing to help, but the amount they’re willing to help doesn’t scale in proportion to how much impact their contributions will make.
Patients feel better when given what they think is a medical pill, even when it is just a placebo that does nothing. And patients feel even better if they think the pill is more expensive.
One study, for example, tracked 3,600 adults over seven and a half years. Investigators reported that people who reside in rural areas lived an average of 6 years longer than city dwellers, nonsmokers lived 3 years longer than smokers, and those who exercised a lot lived 15 years longer than those who exercised only a little.
Religion & Community Cohesion
“We are social creatures to the inmost centre of our being.”—Karl Popper
Religion isn’t just about private belief in God or the afterlife, but about conspicuous public professions of belief that help bind groups together. In each of these areas, our hidden agendas explain a surprising amount of our behavior—often a majority.
People also cheat less in full (vs. dim) light, or when the concept of God, the all-seeing watcher, is activated in their minds.
From the perspective of an animal struggling to survive and reproduce, the Hajj seems like an enormous waste of resources. A pilgrim traveling from San Francisco, for example, will have to take a week off work, buy an expensive plane ticket to Saudi Arabia, and uproot from her breezy, temperate city to camp out in the sweltering desert—and all for what, exactly? Religion. There’s perhaps no better illustration of the elephant in the brain.
Around the world, worshippers routinely undermine their narrow self-interest by fasting, sacrificing healthy animals, abstaining from certain sexual practices, and undergoing ritual mutilations like piercing, scarification, self-flagellation, and circumcision. Christian Scientists swear off blood transfusions. Mormon men spend two of their prime years stationed off in remote provinces doing missionary work. Many people earmark 10 percent of their income for the church. Even the most mundane form of religious devotion—weekly attendance at church—is like a miniature Hajj: people from a wide geographic area converge at a single location to kneel, bow, pray, sing, chant, and dance in the name of their faith.
It’s tempting to try to collapse these two puzzles into one, by assuming that the strange supernatural beliefs cause the strange behaviors. This seems straightforward enough: We believe in God, therefore we go to church. We’re scared of Hell, therefore we pray. All that would be left to explain, then, is where the beliefs come from. Let’s call this the belief-first model of religious behavior (see Figure 6, “Belief-First Model of Religion”). Although this turns out not to be the view held by most anthropologists and sociologists, it’s nevertheless a popular perspective, in part because it’s so intuitive. After all, our beliefs cause our behaviors in many areas of life—like when believing “I’m out of milk” causes us to visit the market. In fact, the belief-first model is something that both believers and nonbelievers often agree on, especially in the West. Debates between prominent theists and atheists, for example, typically focus on the evidence for God or the lack thereof. Implicit in these debates is the assumption that beliefs are the central cause of religious participation. And yet, as we’ve seen throughout the book, beliefs aren’t always in the driver’s seat. Instead, they’re often better modeled as symptoms of the underlying incentives, which are frequently social rather than psychological. This is the religious elephant in the brain: We don’t worship simply because we believe. Instead, we worship (and believe) because it helps us as social creatures.
When Muslims face Mecca to pray, we call it “religion,” but when American schoolchildren face the flag and chant the Pledge of Allegiance, that’s just “patriotism.” And when they sing, make T-shirts, and put on parades for homecoming, that’s “school spirit.” Similarly, it’s hard to observe what’s happening in North Korea without comparing it to a religion; Kim Jong-un may not have supernatural powers, but he’s nevertheless worshipped like a god. Other focal points for quasi-religious devotion include brands (like Apple), political ideologies, fraternities and sororities, music subcultures (Deadheads, Juggalos), fitness movements (CrossFit), and of course, sports teams—soccer, notoriously, being a “religion” in parts of Europe and most of Latin America. The fact that these behavioral patterns are so consistent, and thrive even in the absence of supernatural beliefs, strongly suggests that the beliefs are a secondary factor.
Compared to their secular counterparts, religious people tend to smoke less, donate and volunteer more, have more social connections, get and stay married more, and have more kids. They also live longer, earn more money, experience less depression, and report greater happiness and fulfillment in their lives. These are only correlations, yes, which exist to some extent because healthier, better-adjusted people choose to join religions. Still, it’s hard to square the data with the notion that religions are, by and large, harmful to their members. If religions are delusions, then, they seem to be especially useful ones.
Time and energy are perhaps the easiest resources to waste, and we offer them in abundance. Examples include weekly church attendance, sitting shiva, and the Tibetan sand mandalas we saw earlier. This helps explain why people don’t browse the web during church. Yes, you probably have “better things to do” than listen to a sermon, which is precisely why you get loyalty points for listening patiently. In other words, the boredom of sermons may be a feature rather than a bug.
Note, however, that a community’s supply of social rewards is limited, so we’re often competing to show more loyalty than others—to engage in a “holier than thou” arms race. And this leads, predictably, to the kind of extreme displays and exaggerated features we find across the biological world. If the Hajj seems extravagant, remember the peacock’s tail or the towering redwoods. But note, crucially, that sacrifice isn’t a zero-sum game; there are big benefits that accrue to the entire community. All these sacrifices work to maintain high levels of commitment and trust among community members, which ultimately reduces the need to monitor everyone’s behavior. The net result is the ability to sustain cooperative groups at larger scales and over longer periods of time. Today, we facilitate trust between strangers using contracts, credit scores, and letters of reference. But before these institutions had been invented, weekly worship and other costly sacrifices were a vital social technology.
The other important set of religious norms governs sex and family life. As Jason Weeden and colleagues have pointed out, religions can be understood, in part, as community-enforced mating strategies. Human mating patterns vary a lot around the world and depend on many factors, like resource availability, sex ratios, inheritance rules, and the economics of childrearing. One particularly interesting pair of strategies represents a divide in many Western countries (the United States in particular). On one side is the mating strategy pursued by members of the traditional, religious right, which involves early marriage, strict monogamy, and larger families. On the other side is the strategy pursued by members of the liberal, secular left, which involves delayed marriage, relaxed monogamy, and smaller families. Of these two mating strategies, the traditional one functions best in a tight-knit community, since it benefits from strong communal norms. As such, religious communities tend to frown on anything that interferes with monogamy and high fertility, including contraception, abortion, and divorce, along with pre- and extramarital sex.
Imagine a preacher addressing a congregation about the virtue of compassion. What’s the value of attending such a sermon? It’s not just that you’re getting personal advice, as an individual, about how to behave (perhaps to raise your chance of getting into Heaven). If that were the main point of a sermon, you could just as well listen from home, for example, on a podcast. The real benefit, instead, comes from listening together with the entire congregation. Not only are you learning that compassion is a good Christian virtue, but everyone else is learning it too—and you know that they’re learning it, and they know that you’re learning it, and so forth. (And if anyone happens to miss this particular sermon, don’t worry: the message will be repeated again and again in future sermons.) In other words, sermons generate common knowledge of the community’s norms. And everyone who attends the sermon is tacitly agreeing to be held to those standards in their future behavior. If an individual congregant later fails to show compassion, ignorance won’t be an excuse, and everyone else will hold that person accountable. This mutual accountability is what keeps religious communities so cohesive and cooperative. For better or worse, this dynamic works even for controversial norms. If a preacher rails against contraception or homosexuality, for example, you might personally disagree with the message. But unless enough people “boo” the message or speak out against it, the norm will lodge itself in the common consciousness. Thus, by attending a sermon, you’re learning not just what “God” or the preacher thinks, but also what the rest of your congregation is willing to accept.
But why do communities care what we believe? Why do our peers reward or punish us? Consider the belief in an all-powerful moralizing deity—an authoritarian god, perhaps cast as a stern father, who promises to reward us for good behavior and punish us for bad behavior. An analysis of this kind of belief should proceed in three steps. (1) People who believe they risk punishment for disobeying God are more likely to behave well, relative to nonbelievers. (2) It’s therefore in everyone’s interests to convince others that they believe in God and in the dangers of disobedience. (3) Finally, as we saw in Chapter 5, one of the best ways to convince others of one’s belief is to actually believe it. This is how it ends up being in our best interests to believe in a god that we may not have good evidence for.
In this way, many orthodox beliefs are like the hat and hairstyle requirements we mentioned earlier. They can be entirely arbitrary, as long as they’re consistent and distinctive. It doesn’t really matter what a sect believes about transubstantiation, for example, or the nature of the Trinity. In particular, it doesn’t affect how people behave. But as long as everyone within a sect believes the same thing, it works as an effective badge. And if the belief happens to be a little weird, a little stigmatizing in the eyes of nonbelievers, then it also functions as a sacrifice.
In the same way, the craziness of religious beliefs can function as a barometer for how strong the community is—how tightly it’s able to circle around its sacred center, how strongly it rewards members for showing loyalty by suppressing good taste and common sense. The particular strangeness of Mormon beliefs, for example, testifies to the exceptional strength of the Mormon moral community. To maintain such stigmatizing beliefs in the modern era, in the face of science, the news media, and the Internet, is quite the feat of solidarity. And while many people (perhaps even many of our readers) would enjoy being part of such a community, how many are willing to “pay their dues” by adopting a worldview that conflicts with so many of their other beliefs, and which nonbelievers are apt to ridicule? These high costs are exactly the point. Joining a religious community isn’t like signing up for a website; you can’t just hop in on a lark. You have to get socialized into it, coaxed in through social ties and slowly acculturated to the belief system. And when this process plays out naturally, it won’t even feel like a painful sacrifice because you’ll be getting more out of it than you give up.
Politics & Group Dynamics
The primatologist Robin Dunbar has spent much of his career studying social grooming, and his conclusion has since become the consensus among primatologists. Social grooming, he says, isn’t just about hygiene—it’s also about politics. By grooming each other, primates help forge alliances that help them in other situations. An act of grooming conveys a number of related messages. The groomer says, “I’m willing to use my spare time to help you,” while the groomee says, “I’m comfortable enough to let you approach me from behind (or touch my face).” Meanwhile, both parties strengthen their alliance merely by spending pleasant time in close proximity. Two rivals, however, would find it hard to let their guards down to enjoy such a relaxed activity.
“The worst problems for people,” says primatologist Dario Maestripieri, “almost always come from other people.”
Aristotle famously called humans “the political animal,” but it turns out, we aren’t the only species who merit that title. In 1982, primatologist Frans de Waal published his influential book Chimpanzee Politics, which made a splash by ascribing political motives to nonhuman animals.
Scientists have documented coalition politics in a variety of species. Primates, clearly, are a political bunch, as are whales and dolphins, wolves and lions, elephants and meerkats. But we know of no species more political than our own. Just as human brains dwarf those of other species, both in size and in complexity, so too do our coalitions. These take many forms and go by many names. In government, coalitions appear as interest groups and political parties; in business, they are teams, companies, guilds, and trade associations. In high school, coalitions are called cliques or friends. On the street and in prison, they’re called gangs. Sometimes they’re simply called factions. They can be as small as two people voting a third off the island or as large as a globe-spanning religion. They have membership criteria (however formal or informal), the ability to recruit new members, and the ability to kick out current members.
Despite the fact that it’s possible to cooperate, politically, in ways that “enlarge the pie” for everyone, this is the exception rather than the rule—especially for our distant ancestors. In most contexts, for one coalition to succeed, others must fail. Importantly, however, members within a coalition can earn themselves a larger slice of pie by cooperating—a fact that makes politics such an intoxicating game.
Similarly, when your boss steals credit for your ideas at work, you can be certain of it—but good luck convincing your boss’s boss. In general, it’s much easier for firsthand witnesses to detect a crime than to convince others who are far removed.
And as we’ve seen, a pretext doesn’t need to fool everyone—it simply needs to be plausible enough to make people worry that other people might believe it.
The Loyalist
“Sure, I’ll go along with your beliefs,” says the Loyalist, thereby demonstrating commitment and hoping to earn trust in return. In many ways, belief is a political act. This is why we’re typically keen to believe a friend’s version of a story—about a breakup, say, or a dispute at work—even when we know there’s another side of the story that may be equally compelling. It’s also why blind faith is an important virtue for religious groups, and to a lesser extent social, professional, and political groups. When a group’s fundamental tenets are at stake, those who demonstrate the most steadfast commitment—who continue to chant the loudest or clench their eyes the tightest in the face of conflicting evidence—earn the most trust from their fellow group members. The employee who drinks the company Kool-Aid, however epistemically noxious, will tend to win favor from colleagues, especially in management, and move faster up the chain. In fact, we often measure loyalty in our relationships by the degree to which a belief is irrational or unwarranted by the evidence. For example, we don’t consider it “loyal” for an employee to stay at a company when it’s paying her twice the salary she could make elsewhere; that’s just calculated self-interest. Likewise, it’s not “loyal” for a man to stay with his girlfriend if he has no other prospects. These attachments take on the color of loyalty only when someone remains committed despite a strong temptation to defect. Similarly, it doesn’t demonstrate loyalty to believe the truth, which we have every incentive to believe anyway. It only demonstrates loyalty to believe something that we wouldn’t have reason to believe unless we were loyal.
“The man who reads nothing at all is better educated than the man who reads nothing but newspapers.”—Thomas Jefferson (attributed)
The corresponding downside, of course, is that we’re less likely to help victims who aren’t identifiable. As Joseph Stalin is reported to have said, “A single death is a tragedy; a million deaths is a statistic.”
This suggests that public K–12 schools were originally designed as part of nation-building projects, with an eye toward indoctrinating citizens and cultivating patriotic fervor. In this regard, they serve as a potent form of propaganda.
It seems that the governments that most need to indoctrinate their citizens do in fact pay for more school.
Parents of children in public school are not more supportive of government aid to schools than other citizens; young men subject to the draft are not more opposed to military escalation than men too old to be drafted; and people who lack health insurance are not more likely to support government-issued health insurance than people covered by insurance.
The 1-in-60-million figure we saw earlier applies to the average U.S. voter. Individual voters, however, aren’t necessarily average, and their odds of deciding a presidential election depend on which state they live in. During the 2008 race, for example, voters in “battleground” or “swing” states, like Colorado and New Hampshire, had relatively high odds of deciding the election, at 1 in 10 million. But in states like Oklahoma and New York, where one party is all but guaranteed to win, the odds were closer to 1 in 10 billion. That’s an astonishing 1,000-fold difference.
Now, while an earnest Do-Right might freely admit ignorance about some political issues, real voters rarely do. When people are asked the same policy question a few months apart, they frequently give different answers—not because they’ve changed their minds, but because they’re making up answers on the spot, without remembering what they said last time. It is even easy to trick voters into explaining why they favor a policy, when in fact they recently said they opposed that policy.
As long as our politicians talk a good game, we don’t seem to care whether they’re skilled at crafting bills and shepherding them through the system. Across the board, we seem to prefer high-minded rhetoric over humble pragmatism.
The next time we feel manipulated by an advertisement, sermon, or political campaign, we should remember the third-person effect: messages are often targeted at us by way of our peers.