Why Every Racist Mentions Their Black Friend

When something is thoroughly covered by both the New Republic and Urban Dictionary, it has clearly reached a point of sufficient social saturation. So there’s no need to go into great detail about the trope of the accused racist who cites minority friends as proof that they don’t have a single racist bone in their body.

But what makes this defense so popular? Why is there such an urge to bring up something as nondescript as having a friend?

A new study by Daniel Effron of the London Business School provides an answer. Effron found that threats to moral identity increase the degree to which people believe past actions have proven their morality. In other words, the threat of appearing racist leads people to overestimate how much their past non-racist actions—like making friends with somebody of another race—are indicative of their non-racist attitudes.

In one set of experiments, participants had the opportunity to make a non-racist choice—for example, reading about a theft and correctly identifying a White rather than a Black suspect as the thief. Participants who made the non-racist choice then had to anticipate either a threatening situation (having to defend a statement that compared Blacks unfavorably to Whites) or a non-threatening one (defending a statement unrelated to race). Participants then rated how much their initial selection of the White suspect was diagnostic of their non-racist attitudes.

Compared to participants who did not have to face a threatening situation, participants who felt threatened believed their decision to finger the White suspect was significantly more indicative of non-racist attitudes. Threatened participants still believed in the increased importance of their decision even when told that 98% of participants had also chosen the White suspect as the thief.

Might the threatened participants be justified in their beliefs? Do others actually see a previous non-racist decision as meaningful?

Probably not. In follow-up experiments, outside observers did not believe that selecting the White suspect was a sign of non-racist attitudes. Furthermore, Effron found that overestimating your non-racist “credentials” (e.g. believing you’re not racist because you have a Black friend) is more likely than underestimating them to be seen as a sign of prejudice.

Taken together, the results illuminate the psychological mechanisms behind one of the most popular rationalizations of racism. Somebody feels their image of being racially tolerant is under threat, so they overestimate how much a previous behavior—having a beer with a Black guy, for example—is a sign of their tolerance. But highlighting this behavior has the opposite of the intended effect because people see the overestimation of the behavior’s importance as a sign of prejudice.

The conclusion is nothing that society hasn’t already figured out. If you’re accused of any kind of inappropriate -ism, don’t defend yourself by citing a particular action or relationship. It’s understandable that doing so seems like the best solution, but it’s probably better to keep your mouth shut. Or at least be prepared to cite 50+ data points rather than the vague existence of “some” friends.
Effron, D. (2014). Making Mountains of Morality From Molehills of Virtue: Threat Causes People to Overestimate Their Moral Credentials. Personality and Social Psychology Bulletin. DOI: 10.1177/0146167214533131

Sorry Talking Heads, You Know Nothing About What Matters in the NFL Playoffs

For years, sports commentators who spew evidence-free clichés about the keys to athletic victory have monopolized our airwaves. But recently a technique some of them view as akin to witchcraft, but that’s more commonly known as “statistical analysis,” has begun to bring an end to their reign of terror.

The latest volley in this ongoing battle comes from a new study by Joshua Pitts of Kennesaw State University. Pitts analyzed all 445 NFL playoff games from 1966 through 2012. Among the factors he examined were the number of previous playoff games and playoff wins of quarterbacks and head coaches, the playoff experience of all players relative to their opponents, statistical measures of offensive and defensive quality in terms of both regular season passing and rushing, whether a team was playing at home, and the degree to which a team entered a playoff game on a winning streak. The outcomes Pitts examined included whether a team won their playoff game, and their points scored, points allowed, and margin of victory or defeat.

The most noteworthy of Pitts’ findings is that there’s little evidence playoff experience matters.

Perhaps a surprising result is that neither the previous playoff experience of a quarterback/head coach nor the number of previous playoff wins for a quarterback/head coach has a significant impact on any of the measured outcomes in this study after holding current team quality constant.


All the various measures of previous playoff experience included in this study, including measures of quarterback, head coach, and team playoff experience, are rarely statistically significant determinants of the outcomes measured in this study. Furthermore, even when these measures do prove statistically significant, the magnitude of the impact on the dependent variables tends to be extremely small.

So much for that talking point. At least there’s always the “Playoff success starts with running the football and stopping the run” cliché. Oh wait…

The relative productivity of a team’s passing offense and defense in the regular season is consistently a statistically significant determinant of all of the outcomes measured in this study. However, the relative productivity of a team’s rushing defense is not a statistically significant determinant of any of the measured outcomes. Furthermore, the relative productivity of a team’s rushing offense is only statistically significant at the 10% level in a single specification presented in column 8 of Table 4. This provides overwhelming evidence, as suggested by Arkes (2011), that the key to victory in the NFL postseason is controlling the passing game.

Pitts also found almost no evidence that being on a winning streak (“They’re peaking at the right time!”) had an impact on the outcome of playoff games.

But the news for institutional clichés wasn’t all bad. It turns out defense does make you marginally more likely to win a championship.

For every 1.4% increase in relative offensive productivity, a team is about 1% more likely to win a postseason game. Similarly, for every 1.1% increase in relative defensive productivity, a team is about 1% more likely to win in the postseason. In general, the marginal effect of defensive productivity is a little larger than the marginal effect of offensive productivity which provides slight support in favor of proponents of ‘‘defense wins championships.’’

Another noteworthy finding concerned home field advantage:

The home team is about 18–29% more likely to win, depending on the specification of the model and holding other factors constant. The results shown in Table 4 predict a 6–10 point advantage for the home team in a matchup between two identical opponents, essentially implying that home-field advantage is worth at least a touchdown in the postseason.

Bookies generally give a team about three points for home field advantage, so this implies that in the playoffs people may be underestimating that edge.
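To put the gap between the bookies’ three points and the study’s 6–10 points in perspective, here’s a rough back-of-envelope sketch. It assumes, as a common rule of thumb (not a figure from Pitts’ paper), that NFL margins of victory are roughly normal around the point spread with a standard deviation of about 14 points:

```python
# Convert a point spread into an implied win probability, assuming the
# final margin is roughly Normal(spread, 13.86) -- a common rule of thumb
# for NFL games, not something taken from Pitts' paper.
from math import erf, sqrt

def win_prob(spread, sd=13.86):
    """P(margin > 0) when margin ~ Normal(spread, sd)."""
    return 0.5 * (1 + erf(spread / (sd * sqrt(2))))

for spread in (3, 6, 10):
    print(f"{spread}-point favorite wins ~{win_prob(spread):.0%} of the time")
```

Under these assumptions a 3-point favorite wins only about 59% of the time, while 6- and 10-point favorites win roughly 67% and 76% of the time, which lines up loosely with the study’s finding that the home team is 18–29% more likely to win.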

The fact that the way pro football is played can change in just a few years means these results should be taken with a grain of salt. Perhaps something important has emerged in the last five or ten years but has been masked by the prior 30 years of data. Alternatively, it’s possible that some of the things that do appear to make a difference have already ceased to matter or will cease to matter in the next few years. But overall the study provides some compelling evidence that the clichés you hear about winning in the postseason are often disconnected from reality.

Finally, because no NFL playoff post is complete without a discussion of Tim Tebow, Pitts also used his models to predict the greatest playoff upsets of all time. And wouldn’t you know it, the Broncos’ Tebow-led overtime win against the Steelers in 2011 ranked in the top 10 both in Pitts’ model based on expected win percentage and in his model based on expected points margin. (The Vikings’ 1987 Divisional round win over the 49ers and the Patriots’ win over the Rams in the 2001 Super Bowl were ranked first in those models, respectively.) In other words, to the surprise of nobody, Tebow’s playoff victory was a statistical anomaly (i.e. a fluke).
Pitts, J. (2014). Determinants of Success in the National Football League’s Postseason: How Important Is Previous Playoff Experience? Journal of Sports Economics. DOI: 10.1177/1527002514525409

There’s a Placebo Effect For Sleep

The placebo effect is known far and wide. Give somebody a sugar pill, tell them it’s aspirin, and they’ll feel better. What’s less well-known is that there’s evidence of the placebo effect in domains that go beyond the commonly known medical scenarios.

One study (pdf) found that hotel maids who were told their work was good exercise later scored higher than a control group on a range of health indicators. Another study found that when participants were told athletes had excellent vision, they demonstrated better vision when doing a more-athletic activity relative to a less-athletic activity. Many studies have also shown that placebo caffeine can have an impact. In one experiment caffeine placebos improved cognitive performance among participants who were in the midst of 28 hours of sleep deprivation.

Given that caffeine placebos can mitigate the effects of sleep deprivation, Christina Draganich and Kristi Erdal of Colorado College decided to take the logical next step and investigate whether the effects of sleep deprivation could be influenced by perceptions about sleep quality. In other words, could making people think their sleep quality was better or worse influence the cognitive effects of sleep?

In an initial experiment participants were given a brief lesson on the relationship between sleep quality and cognitive functioning, and told that the normal proportion of REM sleep is between 20% and 25%. Participants were then hooked up to a machine and told it would measure their pulse, heart rate, and brain frequency, after which a program would use the data to calculate the amount of REM sleep they had gotten the night before. (Very few participants reported having suspicions about the machine.) Some participants were told they got 16.2% REM sleep (below average sleep quality) and some were told they got 28.7% REM sleep (above average sleep quality).

After being told what the machine said, participants self-reported their own perception of their sleep quality. Finally, participants were administered the “Paced Auditory Serial Addition Test” (PASAT), a cognitive exercise that requires adding many numbers together.

Draganich and Erdal found that participants who were told they had below average sleep quality performed significantly worse on the PASAT. At the same time, self-reported sleep quality was unrelated to PASAT performance. A follow-up experiment that included additional controls and three other cognitive tests largely confirmed the initial findings. In addition, the performance of participants on a verbal fluency test called the COWAT showed that not only does telling people they had below average sleep quality lead to inferior performance, telling them they had above average sleep can lead to superior performance.

Given the global importance of getting a good night’s rest, the idea of placebo sleep seems potentially far-reaching. For example, you always hear that you should get a lot of sleep before a big test or interview, but that piece of grandmotherly advice becomes even more important if the knowledge that you got too little sleep can harm your performance in a way that goes beyond the direct negative impact of not getting enough sleep.

The sleep placebo also suggests that finding a way to improve your sleep may be more important than you think. If you’re able to convince yourself that your bedtime routine is working — whether it’s reading, exercising, or eating honey — you might see the cognitive benefits of improved sleep even on nights when you don’t actually sleep better.
Draganich, C., & Erdal, K. (2014). Placebo Sleep Affects Cognitive Functioning. Journal of Experimental Psychology: Learning, Memory, and Cognition. PMID: 24417326

A Theory About Why the Powerful Don’t Care For the Powerless

Humans are skilled at perceiving the world in a way that makes life more enjoyable. One thing that helps with this goal is the tendency to view the world as a fair and orderly place, a bias often termed the “Just-world fallacy.”

There are benefits to believing injustice is rare. It makes you feel nice and warm on the inside. Research also suggests it increases your focus on larger long-term rewards rather than smaller short-term rewards. After all, if the world is a chaotic place where nobody gets what they deserve, there’s less reason to work hard or stick to long-term plans.

But it can be hard to believe in a just world because injustice is everywhere. There are repressive governments and natural disasters. Bad things surely happen to good people. And thinking about these innocent victims comes into direct conflict with the desire to believe the world is just. In lab experiments participants often resolve this conflict by derogating the victim, perhaps by coming up with some explanation for why the victim deserved their fate. In the real world you can see traces of this in people who believe food stamp recipients are wholly responsible for their own plight. If people are to blame for their misfortune, the world remains just.

A group of researchers led by Mitchell Callan of the University of Essex reasoned that if derogating victims increases the belief in a just world, and the belief in a just world helps people focus on long-term rewards, then it stands to reason that derogating victims could help people focus on long-term rewards.

To test their hypothesis Callan and his team conducted an initial experiment in which participants read about somebody who was mugged. Some participants learned that the mugger was apprehended (the world is just!) while others learned that the mugger was never caught (the world is unfair!) Afterward participants answered questions about how much they liked the victim and the degree to which they felt the victim was careless or responsible.

To measure commitment to long-term rewards, the researchers gave participants a “delay-discounting” task in which they revealed their preferences for accepting different amounts of money at different future times. Participants who were willing to wait longer for larger sums (i.e. those who didn’t “discount” a delayed payment) were measured as expressing a stronger commitment to the long-term.
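To make the measure concrete, here’s a minimal sketch of how delay discounting is commonly quantified, using the standard hyperbolic model (a common choice in this literature, though not necessarily the exact scoring Callan’s team used):

```python
# Hyperbolic delay discounting: a reward of `amount` available after
# `delay_days` is subjectively valued at amount / (1 + k * delay_days).
# A larger k means steeper discounting -- i.e. a weaker commitment
# to long-term rewards. (Illustrative, not the authors' exact task.)

def subjective_value(amount, delay_days, k):
    return amount / (1 + k * delay_days)

# Someone indifferent between $50 now and $100 in 30 days implies
# 50 = 100 / (1 + k*30), which solves to k = (100/50 - 1) / 30.
k = (100 / 50 - 1) / 30
print(f"implied k = {k:.4f}")                                   # ~0.0333
print(f"value of $100 in 90 days: ${subjective_value(100, 90, k):.2f}")  # $25.00
```

In a delay-discounting task, a series of such choices pins down each participant’s indifference points, and the fitted k summarizes how sharply they discount the future.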

The results were as expected. Among participants who had their perception of a just world threatened (because the mugger was not apprehended), those who rated the victim favorably, and thus perceived a more unjust world, were more likely to prefer quick payments. On the other hand, those who derogated the victim by rating him as unlikable and irresponsible were more likely to say they would hold out for larger long-term rewards. It appeared as though derogating the victim increased commitment to the long-term by helping to restore faith in a just world.

A follow-up experiment followed a similar procedure, but prior to the experiment researchers measured each participant’s “baseline” tendency to delay-discount. In addition, the level of injustice was manipulated by telling some participants the victim was a drug dealer. The results were the same as in the initial experiment. When participants saw victims as more deserving of their fate, they were less likely to weaken their commitment to future rewards.

Here’s what’s so troubling about the study: The ability to put off small short-term rewards for larger long-term rewards is important if you want to attain a position of power. Very few people who are unable to turn down cocaine at a frat party the night before a chemistry exam are going to end up as a Congressman. Becoming powerful generally requires hard work, persistence, and a focus on large rewards that will arrive in a distant future.

Callan’s experiment shows that people can develop the ability to focus on long-term rewards by derogating victims. The implication is that our politicians and CEOs are more likely to come from a population that, on average, is less empathetic toward victims. Being able to view victims as undeserving of assistance may have helped them get where they are.

On a slightly less somber note, the study is a good reminder that it’s impossible to attain psychological perfection. As the internet fills with research-based tips on things like cognitive biases and habit formation, it can feel like there’s a solution for everything. But many times two laudable goals are incompatible. If you want to feel a sufficient amount of empathy for victims, the lost faith in a just world could make it harder to stick to future plans. If you want to stick to future plans, it may be helpful to convince yourself that certain victims weren’t so innocent. So don’t be discouraged if it feels like it’s impossible to reach a psychological state with no drawbacks. Because it is.

(cross-posted from The Inertia Trap)

Callan, M.J., Harvey, A.J., & Sutton, R.M. (2013). Rejecting Victims of Misfortune Reduces Delay Discounting. Journal of Experimental Social Psychology. DOI: 10.1016/j.jesp.2013.11.002

Why You Shouldn’t Put Fruit In Your Dessert

Few situations are more agonizing for a health-conscious eater than deciding what to do about dessert. Should you have any at all? How much? What should you have? The debate between the little biblical foes on your shoulders tends to be a multi-round affair.

Fortunately, the proprietors of trendy new dessert establishments have been working to make things easier. For example, not only do many frozen yogurt shops allow you to measure your consumption by serving yourself, the available toppings often include healthy options like nuts and fruit. But therein lies another problem. Though the buffet of fresh fruit that stands between you and the cashier may appear to be a kind-hearted nudge toward a healthy diet, a new study suggests it might as well be a diabolical ploy aimed at increasing your consumption of junk food.

The researchers, Ying Jiang and Jing Lei, reasoned that because people generally want to eat tasty (i.e. unhealthy) things, they’ll try to find a justification for doing so, and one strategy is using the presence of a healthy topping to convince yourself that the unhealthy “base” isn’t all that bad. Jiang and Lei hypothesized that adding a healthy topping would lead people to believe that an unhealthy food contained fewer calories. Conversely, a topping ought to have no effect if the base food was healthy, because there would be no need to justify eating it, or if the topping was unhealthy, because then it couldn’t be used for justification.

In two initial experiments participants were told about a healthy or an unhealthy “base” food (salad or non-fat froyo vs. chocolate cake or ice cream) that had either a healthy topping (fruit), an unhealthy topping (chocolate sauce, whipped cream, or ranch dressing), or no topping. As predicted, when an unhealthy food was topped with fruit participants estimated it had fewer calories than when it had no topping. It would seem the addition of a healthy topping made people think an unhealthy food was less unhealthy. And because people are likely to consume more of something when they think it’s healthier, the finding reflects rather poorly on fruit-topped desserts.

To further test the role of the desire to justify unhealthy eating, the researchers conducted a similar follow-up experiment, but this time one group of participants was asked to imagine they were celebrating an important accomplishment. The idea was that these participants would have an external justification for indulging themselves and thus wouldn’t feel a need to justify eating something unhealthy by fudging calorie estimations. Sure enough, the participants who were not given an external justification estimated that unhealthy foods with healthy toppings had significantly fewer calories than participants who had the external justification of celebrating an accomplishment.

But what about the question of whether these estimates influence actual behavior? In a final experiment the researchers brought participants into the lab under the ruse that they were participating in an experiment about TV advertisements. As participants watched a series of TV ads there was a plate of small chocolate pastries in front of them. One group was given standard pastries with no toppings, while a second group was given pastries with a slice of strawberry on top. When the 45 minutes of TV-watching had concluded, the group whose pastries had strawberries on top had consumed a significantly higher volume of food. Of course it’s possible the strawberry pastries were more appealing for non-health reasons, but the experiment does provide some evidence that healthy toppings have a real effect on behavior.

The study dovetails nicely with recent research on “moral licensing,” or the psychological tricks we pull to give ourselves permission to do things a part of us knows we shouldn’t be doing. For example, one recent study found that imagining an unhealthy behavior you successfully avoided can make you more likely to do something unhealthy in the future. Furthermore, when faced with temptation, people will exaggerate the “badness” of the behavior they avoided in order to make giving in to the temptation more palatable.

Studies have also shown that the actions of others can be used to justify bad behavior. For example, there is evidence that people are more likely to act in a prejudiced manner when they observe somebody in their group (e.g. same ethnicity, nationality, profession, etc.) act in a non-prejudiced manner. Your group members’ good behavior essentially earns you the right to be bad. Conversely, research also suggests that you’re more likely to engage in bad behavior when you observe somebody with whom you’re “psychologically close” also engage in bad behavior. Such an influence may seem rather pedestrian until you realize that you can feel psychologically close to somebody simply because they share your name or your birthday.

Seen in this light, misconceptions about dessert toppings are just another element in the growing bag of tricks we use to justify doing something that has obvious drawbacks. In the same way a TV burglar throws a steak to distract the guard dog, we unearth some loosely crafted justification in order to silence the thoughts that tell us we should know better.

Of course the specific lesson of the study, if you haven’t learned it already, is that it’s probably best to avoid trying to put lipstick on a pig when it comes to unhealthy foods. Just as McDonald’s chicken smothered in ranch dressing doesn’t become healthy when you put it on a bed of iceberg lettuce, a half pound of frozen yogurt won’t be healthy no matter how many blueberries you put on top.
Jiang, Y., & Lei, J. (2013). The Effect of Food Toppings on Calorie Estimation and Consumption. Journal of Consumer Psychology. DOI: 10.1016/j.jcps.2013.06.003

The Hazards of Debating Race and Inequality

Imagine there is a certain advantaged group of people that supports a policy that harms a disadvantaged group, and you believe there are hints of racial or ethnic bias underlying their position. Even if the advantaged group doesn’t literally believe that the disadvantaged group is less deserving, it’s impossible to view their insensitivity to the plight of those at the bottom of the system without considering race.

Now imagine you’re a prominent activist or politician gearing up to take on the advantaged group’s inequitable policy. What’s the optimal approach? Should you characterize their views in the most despicable way possible? Or would it be better to tone down the sensitive rhetoric and attempt to signal that you aim to be reasonable?

If there’s any kind of public element to the debate, most people will want to avoid being too conciliatory. After all, you need the public to know the scope of the injustice you’re fighting. There’s also research that suggests it can be harmful to avoid calling a spade a spade. A recent study led by Heather Rasinski found that after passing up an opportunity to confront somebody who exhibited prejudice, participants viewed the offending person more favorably and believed that confronting them was less important. Rasinski and her team concluded that in order for participants to reconcile the difference between their beliefs, which held that prejudice was always unacceptable, and their actions, which suggested that prejudice might not be so unacceptable, they revised their beliefs to reflect their actions. Thus, it’s important not to hold back when confronted with injustice because your lack of intensity can influence the strength of your beliefs.

On the other hand, it may be even more important to avoid drifting too close to the other extreme. Simple common sense tells you that a negotiation will go poorly if you demonize the other side, but accusations of racial or ethnic bias will be particularly damaging because of the strong threat they pose to a person’s self-concept and their belief that they’re ultimately a good person. When faced with such a threat people will respond by finding ways to mitigate it. The question then, is how exactly people mitigate the threat.

A new study led by Tamar Saguy suggests a discouraging answer. Saguy and her team conducted three experiments that examined attitudes about inequality in the context of the relationships between Americans and Hispanics, Israeli Jews and Arabs, and Italians and African immigrants. They found that the more the advantaged group felt wronged by accusations of racial bias, the more they viewed the inequality they benefited from as legitimate, and the less willingness they expressed to take action to reduce inequality. In other words, when people felt unfairly accused of racial bias, they responded by legitimizing the system that put them in an advantaged position, and that led to a reduced desire to help the disadvantaged group.

So, for example, if somebody says your group of rich white men doesn’t care about Hispanic immigrants, that poses a threat to the moral standing of your group. But since your brain is generally focused on making you think your group is awesome, it will attempt to preserve that perception of awesomeness by strengthening the belief that your group’s wealth is the outcome of a system devoid of unfairness. In a sense, you’re maintaining your level of “goodness” by countering the increased possibility that you’re prejudiced with the increased possibility that your advantaged status is legitimate. Unfortunately, a stronger belief in the legitimacy of the system will lead to a weaker desire to help the people who are on the bottom of it.

All of this puts our hypothetical activist in a tricky position, but might there be another tool researchers have found for effectively dealing with the issue of race-based inequality? Some studies have shown that people in an advantaged group are more likely to act to reduce inequality when the gap is framed as resulting from their own advantage rather than another group’s disadvantage. One study, which was based on the reasoning that people don’t like being seen as advantaged, found that when inequality was framed as resulting from White advantage, Whites were more likely to support policies that reduced economic opportunities for their own race (but not more likely to support policies that increased opportunities for minorities). Another study found that when income inequality was framed as the highest earners making more money rather than everybody else making less money, conservatives were more likely to support raising taxes on the wealthy. The researchers attributed their findings to the idea that framing inequality in terms of advantage makes people more aware of how external factors (e.g. place of birth, luck) contributed to their success.

Yet these studies don’t directly address the problem of race-related accusations strengthening support for an inequitable system. At the margin, framing inequality in terms of advantage may help mitigate the negative effects of these accusations, but such frames don’t truly provide a better way to talk about sensitive issues involving race. Ultimately, the best advice for debating race and inequality may sound like something a parent would tell a 4th grader. Make sure you let the person know you’re unhappy with their behavior, but try not to hurt their feelings.

(cross-posted from The Inertia Trap)
Saguy, T., Chernyak-Hai, L., Andrighetto, L., & Bryson, J. (2013). When the powerful feels wronged: The legitimization effects of advantaged group members’ sense of being accused for harboring racial or ethnic biases. European Journal of Social Psychology. DOI: 10.1002/ejsp.1948

Rasinski, H., Geers, A., & Czopp, A. (2013). “I Guess What He Said Wasn’t That Bad”: Dissonance in Nonconfronting Targets of Prejudice. Personality and Social Psychology Bulletin. DOI: 10.1177/0146167213484769

How Your Social Status Influences the Way You’re Judged

One of the palpable weaknesses in the American justice system is the tendency for it to produce different outcomes for people from different social classes. Part of this is a result of discrepancies in the quality of legal representation people can afford, but part of it is also due to inconsistencies in the way morally questionable activities are judged.

Research on judgments of misdeeds often focuses on “moral licensing,” or the idea that certain circumstances can make us more amenable to bad behavior. One study that’s received a lot of attention found that people who bought organic food rather than a control food were less likely to volunteer to help a needy stranger. The implication is that doing one good deed made people feel it was less wrong to pass up an opportunity to do another good deed. Other studies have found that this kind of licensing can be influenced by the actions of people in your group or people with whom you share a random characteristic such as a birthday. One study even found that people grant themselves a license to do bad things by exaggerating the evil of things they previously decided not to do.

However, thus far most research on moral licensing deals with people making judgments about their own actions. We still don’t know much about when or why people are willing to grant third parties a license to do something questionable. Given that you have less knowledge about other people, it seems likely that different factors would be involved.

The University of Wisconsin’s Evan Polman sought to fill this gap by specifically examining whether the social status of a third party influenced the degree to which they were granted a moral license. In two studies Polman and his team told participants about somebody who engaged in questionable activity, but they manipulated status through the person’s name (Billy-Bob (low) vs. Winston Rivington (high) vs. James (control)) or job (janitor vs. CEO). The researchers found that both high- and low-status actors were judged significantly less harshly than actors in the control condition. It would seem that both high- and low-status people tend to be given more leeway to do bad things than people in the middle.

Of course it seems unlikely that high- and low-status people would be granted a moral license for the same reason. That led Polman to investigate a more interesting question: how exactly does status influence moral licensing? Polman hypothesized that high-status people were given “credentials” while low-status people were given “credits.”

While credentials and credits similarly elicit less negative responses to misbehavior, they differ with respect to whether they alter observers’ perception of the negativity of the behavior itself (a form of perceptual change) or merely affect observers’ perception of the extent to which engaging in negative behavior is understandable or justified (a form of attitudinal change). Credentials bias perceptions of norm-violating behavior, leading people to perceive dubious behavior as less dubious; almost as if the behavior was not even a transgression (Effron & Monin, 2010)…

Credits, however, do not change observers’ perception of the behavior but offer counterbalancing capital so that a wrongdoer can transgress as long as their transgressions (so-called moral debits; Miller & Effron, 2010) do not exceed their credits (Nisan, 1991; Zhong, Liljenquist, & Cain, 2009). Thus, credits influence the extent to which observers sanction misbehavior by altering their attitudes toward the wrongdoing, such as viewing the wrongdoing as justified and tolerable.

In other words, when a high-status person does something questionable (e.g. stealing), you judge it less harshly because you believe the action is objectively less wrong (e.g. it was actually a clever manipulation of tax laws.) But when a low-status person does something questionable, you judge it less harshly because you understand why the person did it and you’re sympathetic to their situation (e.g. they needed money for food.)

Polman reasoned that when an action was ambiguous, high-status people would be judged less harshly because it would be easier to reinterpret their action as something justifiable. He decided to test this hypothesis by directly manipulating whether the morally questionable action was ambiguous or not. Participants read about a janitor or a CEO who hired only white candidates, but some participants were told the person admitted being reluctant to hire African-Americans (unambiguous condition), and some participants were told the hiring was legitimately based on merit (ambiguous condition). Sure enough, when the action was ambiguous, participants rated the action less harshly when it was done by the CEO rather than the janitor. On the other hand, when the action was unambiguous, and thus impossible to view as permissible, participants judged it more harshly when it was done by the CEO and less harshly when it was done by the janitor.

Polman also measured the dispositional sympathy of participants. As predicted, participants who were more sympathetic tended to judge those who were low-status less harshly. However, dispositional sympathy had no effect on the judgments of high-status people. To Polman, this was evidence that low-status people were in fact being judged less harshly because their struggles were earning “sympathy credits” rather than “credentials.”

The findings suggest that high- and low-status people are judged differently, but not in a way that is universally advantageous to either of them. When the indecency of an action is ambiguous, high-status people will be judged less harshly because their actions are more likely to be interpreted in a positive light. However, when an action is unambiguous, low-status people will be judged less harshly because they are more likely to elicit sympathy.

As Polman points out, the research has implications for how you ought to defend yourself. If you’re clearly guilty, and thus your transgression is unambiguous, the optimal strategy may be to lower your status in the hope of receiving sympathy, for example by begging the victim for forgiveness. Alternatively, if there’s ambiguity in your actions, it’s worthwhile to try to raise your status in order to make it more likely your actions will be viewed in a positive light. And if you’re already high status, the best defense may be to obfuscate the circumstances of your crime: when your behavior is ambiguous, a judge is more likely to “credential” it by deciding that your actions weren’t all that bad.

Is there evidence of any of this in the real world? You can certainly string together enough high-status crimes for a solid bout of confirmation bias. For example, when high-status people do something unambiguous, such as commit murder, there doesn’t seem to be any special leniency. On the other hand, when their actions are more ambiguous, such as those involving regulatory improprieties in the banking sector, high-status people do seem to often go unpunished. Of course this doesn’t really prove anything — some actual data is needed. Either way, Polman’s research is a good reminder that social status matters, but not always in the way you might think.
Polman, E., Pettit, N., & Wiesenfeld, B. (2013). Effects of wrongdoer status on moral licensing. Journal of Experimental Social Psychology, 49(4), 614–623. DOI: 10.1016/j.jesp.2013.03.012