How Knowledge Can Make You Stupid

(cross-posted from The Inertia Trap)

The human ability to infer what other people are thinking is a big reason we’re able to understand and cooperate with others. Along with the ability to take pictures of our food, it’s what separates us from lesser primates.

But we’re not born with this ability. Experiments involving what’s called the “change-of-location” or “false-belief” task show that it tends to develop between the ages of three and five. In these experiments children observe or are told about a person who hides an object and then leaves the room. While this protagonist is gone, a second person comes in and moves the object. Children are then asked where the protagonist will look for the object when they return. Younger children are unable to separate their own knowledge of the new location from the knowledge of the protagonist, and thus they tend to say the person will look in the object’s new location. Older children are able to understand that the protagonist can hold a false belief, and thus they tend to correctly say the protagonist will look in the object’s original location.

For years, that’s all there was to it. Once kids reached elementary school, we assumed they could keep their own knowledge separate from what they perceived others to know. But a new study led by Jessica Sommerville of the University of Washington throws some variation into the change-of-location task, and the results suggest that we may not grow out of this stage as much as we think.

While the standard false-belief task involves putting the object in one of two distinct locations (e.g. a table, a cupboard, or a closet), Sommerville and her team created a continuous range of locations by conducting the experiment in a sandbox. This allowed the researchers to detect subtler degrees of influence: rather than requiring participants to guess an absurdly wrong location in order to reveal bias, participants needed only to be off by a matter of centimeters.

Whereas the classic change-of-location task is designed to assess whether participants appreciate that a protagonist can hold a false belief, our Sandbox task focuses on a different but related issue. The goal of our task is to test the degree to which participants’ knowledge of the object in its new location biases their representation of where the protagonist thinks the object is located. Thus, our task is designed to focus on the amount of bias (measured in centimeters) that the participants’ own privileged knowledge exerts on their representation of another person’s belief about a location in space.
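To make the measure concrete, here’s a minimal sketch of one way such a bias score could be computed. The scoring scheme, function name, and numbers are my own illustration, not taken from the paper:

```python
def belief_bias_cm(response_cm, original_cm, new_cm):
    """Signed bias score for one trial of a sandbox-style task.

    Measures how far (in cm) a participant's guess about where the
    protagonist will search is displaced from the original hiding
    spot, counted as positive when the displacement is toward the
    object's new location. (Hypothetical scoring, for illustration.)
    """
    direction = 1 if new_cm >= original_cm else -1
    return (response_cm - original_cm) * direction

# Object hidden at 40 cm, secretly moved to 90 cm; participant says
# the protagonist will search at 52 cm.
belief_bias_cm(response_cm=52, original_cm=40, new_cm=90)  # -> 12
```

A participant free of bias would score near zero; the study’s finding is that even adults average a positive score in the false-belief condition relative to the memory control.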

Sommerville’s experiments were similar to the standard change-of-location experiments. In the “false-belief” condition, the experimenter narrated and acted out the story, first placing the object in the initial location in the sandbox, and then moving it to the second location when the protagonist in the story would have been absent. Participants were then asked to predict where the protagonist would look for the object. Participants also engaged in a control condition in which they were simply asked to recall where the object was initially placed.

The researchers found that not only did the 3- and 5-year-olds show bias on the false-belief condition relative to the control, adults did too. That is, when adults were asked where the protagonist would look for the object, they chose a spot that, compared to their memory of the object’s initial location, was significantly closer to the object’s new location. It appears that adults’ own knowledge of where the object was hidden influenced where they thought the person would look for it.

Within the lab, this may not seem like a big deal, but in other contexts this kind of bias can be problematic. For example, imagine that instead of understanding how somebody can falsely believe an object is four feet from the end of the sandbox when it’s really two feet from the end, you understand how your neighbor can believe your toaster is worth $20 when you know it’s worth $40. If the neighbor wants to buy your toaster, this understanding leads to the conclusion that $30 is a fair compromise.

But Sommerville’s research suggests that things may not work that smoothly. Even if all objective evidence points to your neighbor thinking that the toaster is worth $20, your own knowledge that it’s *really* worth $40 can bias your estimate of his belief. So instead of $30 being the fair compromise, you might think he actually believes the toaster is worth $22 or $23, and therefore $31 or $32 is the fair compromise. At the margin, this makes reaching an agreement more difficult.
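The arithmetic above can be sketched in a few lines. The bias parameter here is purely hypothetical, just a way of expressing the idea that your own valuation drags your estimate of the other person’s belief toward it:

```python
def fair_compromise(my_value, their_true_belief, bias=0.0):
    """Midpoint 'fair' price when my estimate of the other side's
    belief is pulled toward my own valuation by a fraction `bias`
    (0 = no bias, 1 = I assume they believe what I believe).
    The bias parameter is a hypothetical illustration, not a
    quantity from Sommerville's study."""
    perceived_belief = their_true_belief + bias * (my_value - their_true_belief)
    return (my_value + perceived_belief) / 2

# Unbiased: I correctly estimate the neighbor's belief at $20.
fair_compromise(40, 20, bias=0.0)  # -> 30.0
# Biased: my $40 knowledge pulls my estimate of his belief to $22.
fair_compromise(40, 20, bias=0.1)  # -> 31.0
```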

Now imagine that instead of toaster prices, one person is a senator who knows the optimal tax rate on income over $250,000 is 39.3%, and the other person is a senator known to believe the optimal rate is 36.4%. Suddenly the prospect of the first senator’s belief influencing his perception of what his colleague believes is a pretty big deal.

Of course this is all a rather lengthy extrapolation from a single study, and it’s unclear how biased estimates about an object buried in a sandbox will translate to more natural settings. Furthermore, when it comes to high-stakes negotiations, there are so many factors involved that it’s debatable whether these biases would even have a marginal impact.

Nevertheless, Sommerville’s research is important because it reveals that we may never quite grow out of the phase where we’re unable to keep our own beliefs from influencing how we perceive the beliefs of others. Being unable to assess somebody else’s beliefs with 100% accuracy is a problem, and if it’s your own knowledge that gets in the way, that makes it even more important to ensure the beliefs you hold are the correct ones.
Sommerville, J., Bernstein, D., & Meltzoff, A. (2013). Measuring Beliefs in Centimeters: Private Knowledge Biases Preschoolers’ and Adults’ Representation of Others’ Beliefs. Child Development. DOI: 10.1111/cdev.12110


Why You Should Always Confront Prejudice

What goes through your mind when somebody makes a racist or sexist remark? Perhaps you feel a strong desire to expose their morally bankrupt worldview through an artful recitation of contemporary philosophy and social science research. Perhaps the potential awkwardness of scolding an acquaintance leads you to avoid confrontation. Whatever you’ve done in the past, a new study suggests you should do everything you can to avoid the latter outcome in the future. Not only does failing to confront prejudice help preserve a perpetrator’s intolerance, a series of three experiments found that looking the other way can literally make you a worse person.

The study, which was led by the University of Toledo’s Heather Rasinski, is based on the theory of cognitive dissonance. Essentially, when a person senses a discrepancy between their beliefs (e.g. caring about the poor) and their behavior (ignoring a homeless person), the result is a feeling of psychological discomfort that the person will be motivated to eliminate (telling yourself the homeless man would have spent the money on drugs). Rasinski and her team hypothesized that when somebody who values confronting prejudice fails to do so, they will attempt to eliminate the gap between their beliefs and actions. Specifically, the researchers proposed that failing to confront a sexist would raise a person’s opinion of the perpetrator and weaken their commitment to confront prejudice.

In each experiment female participants first rated their beliefs about the importance of confronting prejudice and then engaged in a “Deserted Island” task with a confederate. The task involved selecting from an existing set of people those who would be most helpful on a deserted island. The confederate, whose introductory remarks contained traces of sexism, chose all males until his final selection, when he justified his choice of a female with a sexist remark (“She’s pretty hot. I think we need more women on the island to keep the men satisfied.”) After the remark, one group of participants had an opportunity to confront the confederate during a subsequent 10-second silence, while the other group was denied such an opportunity because a buzzer signaling the end of the experiment went off immediately after the remark. The researchers were interested in how the missed opportunity affected participants.

In the first experiment, participants who were committed to confronting prejudice rated the confederate as less biased and more likable when they passed up the opportunity to confront him compared to when they had no opportunity. It would seem that in order to justify failing to confront a sexist, participants convinced themselves the sexist wasn’t actually that sexist. A follow-up experiment found that a self-affirmation exercise, in which participants recalled their positive characteristics, could mitigate this effect.

In the final experiment, among participants who initially placed high importance on confronting prejudice but failed to act on their opportunity to do so, there was a significant decline in the belief that confronting prejudice is important. Once again, in order to justify their failure to act, participants appeared to weaken their beliefs in the vileness of sexism.

The results provide strong evidence that looking the other way in the face of prejudice is not only harmful for those you fail to criticize, it’s harmful to your own enlightened worldview. Failing to confront prejudice sends a powerful signal to yourself about yourself, and that signal helps form a belief system that’s more tolerant of prejudice.

The study is also a nice demonstration of the metacognitive processes that make habits so easy to form and so hard to break. Whenever you decide to do something, the decision is analyzed and used to inform future decisions. For example, if you don’t confront a racist, it must mean that you don’t care that much about racism. Similarly, if you decide to eat pizza instead of going to the gym, it must mean that, relative to your previous beliefs, pizza is marginally more awesome and working out is marginally less awesome. The result is that next time you have to make a similar decision, your prior analysis will make you more likely to choose pizza. And if you do manage to choose the gym, you’ll have the discomfort of trying to figure out why you didn’t choose the gym last time. Were you wrong last time? Are you wrong this time? What else are you wrong about? From a psychological standpoint, it’s more comfortable to remain consistent.
Rasinski, H., Geers, A., & Czopp, A. (2013). “I Guess What He Said Wasn’t That Bad”: Dissonance in Nonconfronting Targets of Prejudice. Personality and Social Psychology Bulletin. DOI: 10.1177/0146167213484769

Insurance Is a Criminal’s Best Friend

Most residents of developed Western nations assume their justice systems are relatively infallible. Going through life without constantly worrying about whether people are capable of upholding a certain standard of objectivity and fairness is easier than the alternative.

But with human decisions come human biases, even in situations that demand objectivity. For example, crimes involving more victims can sometimes receive lesser punishments, an outcome stemming from the “identifiable victim effect.” With more victims, each one becomes less identifiable, and this elicits less sympathy for the victims and a corresponding punishment that’s less severe.

A new study (pdf) by a group of Tilburg University psychologists lays out another bias that can creep into evaluations of wrongdoing. In a series of six experiments the researchers found evidence for the “insured victim effect” — the tendency for perpetrators to be judged differently if the losses they cause are covered by insurance. In theory, a victim’s insurance status should be insignificant. If two people steal a car under identical circumstances, an objective justice system should punish them the same way regardless of whether the victim is reimbursed by an insurance company. Yet that’s not what the researchers found.

The initial set of experiments provided basic evidence for the insured victim effect by demonstrating that people recommend harsher punishments for the theft of uninsured items. A follow-up experiment showed that the effect can occur even when the victim doesn’t suffer any harm. When participants were told a worker fell off a broken ladder because of a negligent manager, but that he suffered no harm, participants still recommended a harsher punishment for the manager when the worker was uninsured compared to when he had insurance.

Another follow-up experiment pushed the boundaries of the insured victim effect even further. This time the two crimes were not the same. One group of participants read about somebody who stole an expensive and insured camera, while another group of participants read about somebody who stole a cheap and uninsured camera. Sure enough, participants recommended milder punishments for the person who stole the expensive and insured camera. Even when the crime was objectively worse, the presence of insurance caused people to be more lenient.

It’s worth noting that the insured-victim effect can be mitigated. Most of the experiments in the study included a condition in which participants evaluated an uninsured-victim scenario and an insured-victim scenario back-to-back. In these conditions people tended to rate the crimes the same regardless of the victim’s insurance status. The researchers concluded that when a comparison point is available, people manage to focus strictly on the nature of the crime. However, when there is no comparison, people tend to be swayed by legally irrelevant details concerning the crime’s consequences for the victim.

The problem, as the researchers point out, is that justice systems tend to not provide comparison points:

Legal systems are often rooted in the premise that punishments should be proportional to the harm caused. However, the harmfulness of an unethical act is evaluated differently when crimes are judged jointly or separately…

It is important to realize that, in real life situations, judges or jury members are usually in a separate rather than in a joint condition. Legal policy makers should be aware that people in separate evaluation are more easily swayed by legally irrelevant details (such as the insurance status of victims) when sentencing perpetrators. This conclusion is in particular pertinent for jury members who, unlike judges, have no experience and do not reason in a comparative fashion.

The easy solution is to prevent a victim’s insurance status from being mentioned in a courtroom, although that seems unlikely given the way lawyers tend to focus on every little detail. Perhaps it’s best to simply acknowledge another chink in the armor of objective justice, and be more vigilant when it comes to scrutinizing what judges and juries believe to be undeniably fair.

Update: 4/16

My esteemed father, a lawyer and infrequent SCOTUSblog contributor, passes along this link, which suggests that discussing insurance information is almost never permissible in court.
van de Calseyde, P.P., Keren, G., & Zeelenberg, M. (2013). The insured victim effect: When and why compensating harm decreases punishment recommendations. Judgment and Decision Making.

Are Imaginary Social Norms Increasing School Violence?

(cross-posted from The Inertia Trap)

Part of the price we pay for living in a civilized society is that our daily decisions are subject to the influence of social norms. These beliefs about social acceptability not only keep middle-aged men from dressing like Justin Bieber, they can influence behaviors that affect a person’s health, academic performance, or likelihood of voting.

Where things get tricky is that the term “social norm” can refer to two different norms. The first norm is what you would get if you averaged the individual attitudes of every person in the group. For now let’s call this the “real” norm. The second norm is what people perceive the real norm to be. Let’s call this the “perceived” norm. While the real norm is based on people’s actual beliefs, the perceived norm is based on their beliefs about everyone else’s beliefs.
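The distinction is easy to express numerically. Here’s a toy sketch with made-up ratings (the numbers and the five-person group are hypothetical, purely to illustrate the two norms):

```python
from statistics import mean

# Hypothetical 1-5 ratings from a five-student group.
attitudes = [2, 3, 2, 4, 1]  # each student's own approval of violence
perceived = [4, 4, 3, 5, 4]  # each student's guess at the group average

real_norm = mean(attitudes)       # -> 2.4
perceived_norm = mean(perceived)  # -> 4.0
overestimate = perceived_norm - real_norm  # gap between the two norms
```

In this toy group, every individual is less approving of violence than anyone believes the group to be, which is exactly the kind of mismatch the study found.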

Outside of heavily polled issues, the real norm generally remains unknown. The result is that the real norm and the perceived norm don’t always align, and in cases where the real norm would have exerted a more positive influence on society these inaccurate perceptions can have negative consequences. For example, college students tend to believe other students drink and condone drinking more than they actually do. These beliefs ultimately lead to more drinking. Researchers have found men make a similar error when it comes to norms about sexual consent. That is, they tend to underestimate the importance of consent in the minds of other men.

A new study suggests that violence in schools is yet another problem where inaccurate perceptions of social norms are having negative consequences. The research examined two cohorts of Chicago 6th graders (over 1,600 kids) whose schools took part in the CDC Multisite Violence Prevention Project. Students were surveyed on their attitudes about violence and their beliefs about the attitudes of others. Results showed that students consistently overestimated the degree to which others condoned aggression, while at the same time underestimating the degree to which others supported nonviolent problem-solving strategies. Students overestimated the social acceptability of violence regardless of gender, ethnicity, or aggression level, and the discrepancy remained through 8th grade. It’s difficult to know the exact effects of these inaccurate beliefs, but it seems obvious that at the margin they lead to an uptick in violence.

Though the tangible costs of this violence make the findings somewhat depressing, the silver lining is that the study uncovers some potentially lower hanging fruit in the effort to decrease school violence. In general, there are two norm-based ways to lower violence. One strategy is to change real norms by altering students’ personal beliefs about violence. The problem with this strategy is that changing a student’s moral beliefs can be incredibly difficult. A 13-year-old who learned from his older brother that it’s acceptable to punch a guy if he flirts with your girlfriend is probably not going to change his mind because a teacher says otherwise.

The other norm-based way to decrease violence is to change perceived norms. Often this is difficult because the beliefs you want to impart don’t reflect reality. For instance, try convincing a group of 5th graders that their classmates don’t like soda and junk food. However, when you’re trying to convince kids to believe something real — in this case, that other kids are less accepting of violence — it ought to be easier.

As an extreme example, imagine one group of students who each believe violence is terrible and that everybody else believes it’s great, and a second group of students who each believe violence is great and that everybody else also thinks it’s great. If you then try to convince both groups that the existing norm is one of nonviolence, you’re probably going to have more success with the first group. Obviously the reality of the situation is more nuanced, but in general it ought to be easier to convince people of a norm when that norm more closely reflects how they actually feel.

Compared to changing core beliefs or persuading students to believe a lie, convincing students to believe in a real norm seems like a good option. None of this is to say that teaching kids what others believe will be easy. In fact, norm-awareness interventions that aim to curb alcohol consumption and energy use have had mixed results. Still, all things considered, the study points to a relatively promising way forward in the fight to curb school violence.
Henry, D., Dymnicki, A., Schoeny, M., Meyer, A., & Martin, N. (2013). Middle school students overestimate normative support for aggression and underestimate normative support for nonviolent problem-solving strategies. Journal of Applied Social Psychology, 43(2), 433-445. DOI: 10.1111/j.1559-1816.2013.01027.x

New Blog, Same Semi-Witty Pontifications

In what surely qualifies as big news around here, I’m excited to announce that I have a new blog over at Psychology Today. The blog — which is tentatively titled “The Inertia Trap” — will focus on the psychology of change, or to be more precise, the psychology of why change is difficult. More specifically, the goal is for the blog to discuss how psychological mechanisms can prevent macro-level social, cultural, and political change from occurring. One way to think of it is like a self-help here’s-why-we’re-screwed blog, but for institutions, communities, and social orders instead of for individuals. And have no fear — posts should continue here at about the same pace. The only difference is that when I write something about change it will probably pop up over at the Psych Today blog.

The blog is here, and you can subscribe to the RSS feed by clicking here. The inaugural post looks at how new research on intergroup biases paints a bleak picture for the prospect of political compromise:

If you’re the type of person who spends the weekend scanning cable news channels or curling up with an esoteric political science journal you can probably name 27 different reasons politicians on opposing sides of the political spectrum find little common ground. Though most political chatter tends to focus on salient motivations like ideological commitments, reelection incentives, or plain ol’ believing the other party is hell-bent on destroying the country, there are also psychological aspects of intergroup dynamics that can make it hard for political parties to cooperate.

To briefly summarize all of human history, groups are good. They provide protection and increase favorable opportunities. As a result, we generally seek to buttress our own groups while treating other groups with wariness. One way this manifests itself is in the “intergroup sensitivity effect” (ISE) – the tendency to be more dismissive of criticism when it comes from out-group members.

So right from the start, political systems involving multiple parties have a steep hill to climb. Compromise requires persuading another party that some piece of what they want is stupid (or at least less brilliant than the other things they want) and the ISE makes it harder for them to be open to that criticism. When a Liberal explains to a Conservative why their energy plan is bad, there’s not a great desire to take the explanation at face value.

If that were the extent of the trouble caused by the ISE, things might not be so bad. After all, even if being in the out-group essentially handicaps your argument, you can still make up for it by building a really strong argument.

Unfortunately, the above scenario may paint too rosy a picture.

Yeah, I’m going to make you click through. Consider it my nudge to get you to bookmark it, subscribe to it, Tweet it, and email it.

How Well-Endowed Are Your Online Dating Prospects?

The emergence of non-traditional markets is one of the more rapid developments in the American economy.  For example, thanks to Airbnb there is now a semi-legitimate market for “a three bedroom house in the Southern part of Nashville for a single Tuesday night in November.” Sites like Craigslist and Etsy are also driving the creation of markets that were previously non-existent.

Given the growth of these new markets, it’s important to ask whether they induce different behavior than more traditional markets. One answer is that non-traditional markets have larger endowment effects (i.e. the tendency to value something more when there is a sense of ownership). Research suggests there are increased endowment effects — as measured by the ratio between the “willingness to accept” price (WTA) and the “willingness to pay” price (WTP) — in cap-and-trade markets for pollution permits, and according to a new study, in markets for the contact information of potential dates.

The endowment effect appears to be much stronger in markets for environmental goods that are not usually monetized than in traditional markets. This study explored the effect in another non-traditional market: the dating market. In Experiment 1, participants were asked either for a buying or selling price for the contact information of each of 10 dates. The WTA/WTP ratios within this market were higher than in traditional markets and, unexpectedly, much higher for women than for men, with an average ratio of 9.37 and 2.70, respectively. Experiment 2 replicated this result and found in a within-subject design the usual WTA/WTP ratio for coffee mugs. The paper concludes with a discussion of differences between traditional and non-traditional markets, with a special emphasis on the dating market.
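As a back-of-the-envelope illustration, a WTA/WTP ratio can be computed like so. The averaging method and the prices are my own assumptions for the sketch; the paper’s exact procedure may differ (e.g. per-subject medians):

```python
def wta_wtp_ratio(wta_prices, wtp_prices):
    """Endowment-effect index: ratio of mean willingness-to-accept
    (selling) prices to mean willingness-to-pay (buying) prices.
    A ratio near 1 means no endowment effect; the larger the ratio,
    the stronger the effect."""
    mean_wta = sum(wta_prices) / len(wta_prices)
    mean_wtp = sum(wtp_prices) / len(wtp_prices)
    return mean_wta / mean_wtp

# Hypothetical selling vs. buying prices for a date's contact info.
wta_wtp_ratio([9, 10, 8], [3, 4, 2])  # -> 3.0
```

By this kind of measure, the women in Experiment 1 (ratio 9.37) valued "their" dates' contact information when selling at over nine times what buyers would pay, compared to under three times for men.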

This seems like the kind of cute but insignificant study that tends to attract a lot of media attention, but it does have the potential to help shed light on pricing patterns in non-traditional markets. I also wonder whether it’s relevant to the rapidly emerging markets involving personal information. Could large WTA/WTP ratios emerge when a company sells information to advertisers, or is there no real sense of ownership to “endow” the information with value?
Nataf, C., & Wallsten, T. (2013). Love the One You’re With: The Endowment Effect in the Dating Market. Journal of Economic Psychology. DOI: 10.1016/j.joep.2013.01.009

Gay Marriage Legalization Will Make You Gay

Well, not exactly. But if social conservatives were reading about social science rather than attempting to de-fund it, that’s the erroneous conclusion they might draw from a new study by a group of UCLA researchers.

The study is based on the idea that a person’s sexual orientation is composed of two elements — actual sexual experiences and beliefs about those experiences. Although the facts of a sexual experience tend to be fairly straightforward, beliefs about the experience are open to personal interpretation. For example, that kiss at last weekend’s party may have been driven by a legitimate deep-rooted desire, but it also may have been induced by nine vodka shots.

The UCLA researchers, who were led by Mariana Preciado, hypothesized that these beliefs can be influenced by social factors. Specifically, because people are driven to see themselves in a positive light, their perceptions of their sexuality will be shaped by self-serving desires such as the need to avoid social stigma.

In a series of three experiments the researchers examined how cues of either stigma or support for homosexual relationships influenced the self-perceived sexuality of heterosexual participants. In the initial experiment participants read an article that described stigma or acceptance with regard to homosexuality, then completed a three-item survey that asked them to rate their sexual behaviors, fantasies, and attractions on a scale of 1 (“exclusively heterosexual”) to 13 (“exclusively homosexual”). Participants who read the article about acceptance of homosexuality rated their sexuality to be significantly closer to the homosexual end of the scale than participants who read the article about homosexual stigma.

In the second experiment, the two conditions involved exposure to statistics that either claimed a large percentage of gay students dropped out of college because of abuse (stigma condition), or that a large percentage of gay students felt accepted on campus (support condition). Instead of a survey, self-perceived sexual orientation was measured by having participants rate the attractiveness of same-sex individuals in a series of photos. The results confirmed the findings from the first experiment — participants exposed to cues of social support for homosexuality scored significantly higher on the same-sex attraction measure. The third and final experiment used subliminal primes — exposure to 16 ms of a happy or angry face — and a 101-point visual analog scale rather than the 1-to-13 scale used in the first experiment, but the results once again confirmed that exposure to supportive cues led people to report more same-sex sexuality than exposure to negative cues.

These findings are important because they appear to provide scientific evidence for a mechanism through which extremely strong signals of social support for homosexuality — things like the legalization of gay marriage and the ending of DADT — can make life easier. For example, imagine somebody who in a supportive environment self-reports a 90 out of 100 on a same-sex sexuality scale, but in a hostile environment reports an 85 out of 100. Whereas self-perceptions can adapt to the different environments, physiological responses to sexual stimuli are unlikely to obey social mores. In other words, if a person’s body responds as if they’re a 90, it’s better for them to always perceive themselves to be a 90. Thus it stands to reason that a person will be most comfortable in an environment where they maximize their level of self-perceived same-sex sexuality. This ought to be true even for a straight person who goes from a 1 out of 100 in a hostile environment to a 2 out of 100 in a supportive environment. Coming out is always healthy, even if it’s only to yourself, and it doesn’t matter if it involves changing .2%, 2%, or 20% of your perceived sexuality.

The study should also help serve as a rebuttal to those who think legalized gay marriage is unnecessary or that the movement against it does no harm. If preventing signals of social support leads to self-perceptions of less same-sex sexuality, there is the potential for incredible stress when that perception fails to align with physiological signals. On the other hand, enhancing cues of social support for homosexuality allows people to perceive themselves to be as gay as their physical reactions say they are, and that seems a lot healthier than the alternative.
Preciado, M.A., Johnson, K.L., & Peplau, A.L. (2013). The Impact of Cues of Stigma and Support on Self-Perceived Sexual Orientation among Heterosexually Identified Men and Women. Journal of Experimental Social Psychology. DOI: 10.1016/j.jesp.2013.01.006