Which Teachers Do Principals Fire?

One of the more outlandish ideas tacitly tossed about in the education reform debate is the notion that making it easier to fire teachers will somehow lead to excellent teachers being “accidentally” fired as a result of misguided metrics. Given the care that goes into crafting education policy, it’s an absurd claim, but if you need more evidence there’s a new study that looks at firings in the aftermath of a 2004 collective bargaining agreement that gave Chicago principals carte blanche to fire teachers.

With the cooperation of the CPS, I matched information on all teachers who were eligible for dismissal with records indicating which teachers were dismissed. With these data, I estimate the relative weight that school administrators place on a variety of teacher characteristics. I find evidence that principals do consider teacher absences and value-added measures, along with several demographic characteristics, in determining which teachers to dismiss.

With all the opportunity in the world to screw up, principals decided to fire the teachers who…had the poorest attendance records and lowest effectiveness ratings. The principals also tended to fire teachers who had less experience and who had previously been fired.

One could raise doubts about the accuracy of the effectiveness ratings in question, but my point is that even with few guidelines, principals seem to have done a pretty good job of identifying which teachers to fire. Evaluating the effectiveness of a teacher is clearly more complex than evaluating the effectiveness of a paper towel, but the implication that a teacher is some kind of nebulous vortex that transcends human judgment is absurd. Anybody who claims that a carefully constructed qualitative and quantitative evaluation in the hands of capable administrators will do more harm than good shouldn’t be taken seriously.
Jacob, B. (2011). Do Principals Fire the Worst Teachers? Educational Evaluation and Policy Analysis, 33(4), 403-434. DOI: 10.3102/0162373711414704


Teaching Kids to Struggle

The Achilles heel of education research is that findings tend to have a hard time moving from the journal page to the classroom. Nowhere is this more apparent than with the work on theories of intelligence, one of the simplest, highest-impact, and most thoroughly researched concepts in educational psychology. Studies show that when students believe intelligence is malleable rather than fixed, they put forth more effort, persist longer in the face of difficulty, and achieve more in both the short term and the long term. As your neighborhood drug dealer would say, “it’s good shit.”

Unfortunately, little progress has been made on actually ensuring students develop proper views about intelligence. Most of the research involves interventions that literally teach students the brain is a muscle that can get stronger, but it turns out that schools are loath to take a few hours a month to teach kids a seemingly arbitrary lesson about their brains. The challenge has been to smoothly integrate these lessons into existing curricula.

Given that students already spend too much time studying facts about famous people, the simplest solution would be to make sure some of those facts are about mistakes, struggles, or gradually becoming better at whatever it is that made the people famous. Right on cue, a new study shows that even when integrated into curricula, lessons about intelligence are still effective.

Two hundred and seventy-one high school students were randomly assigned to 1 of 3 conditions: (a) the struggle-oriented background information (n = 90) condition, which presented students with stories about 3 scientists’ struggles in creating the content knowledge that the students were learning through online physics instructional units; (b) the achievement-oriented background information (n = 88) condition, in which students learned about these 3 scientists’ lifetime achievements; and (c) a no background information (n = 93) condition, a control group in which students mainly learned information about the physics contents they were studying. Our measures assessed perceptions of scientists, interest in physics lessons, recall of science concepts, and physics problem solving. We found that the achievement-oriented background information had negative effects on students’ perceptions of scientists, producing no effects on students’ interest in physics lessons, recall of science concepts, or their solving of both textbook-based and complex problems. In contrast, the struggle-oriented background information helped students create perceptions of scientists as hardworking individuals who struggled to make scientific progress. In addition, it also increased students’ interest in science, increased their delayed recall of the key science concepts, and improved their abilities to solve complex problems.

The key is that these types of selective history lessons shouldn’t be too much of a disruption. Before teaching the laws of motion it’s easy for teachers to take two minutes to show that Newton struggled. Or to show how Pythagoras failed. Or how FDR was unsure of his decisions. There is no reason these lessons can’t be built into every subject.

On a related note, I think it would be interesting to investigate whether the “struggle” narratives that are so widespread in art and writing contribute to the fact that, at least anecdotally, people seem to persist in writing and making art more than, say, attempting to become a physicist. (Alternatively, that may be the case because the starving artists of the world have no other skills.)
Hong, H., & Lin-Siegler, X. (2011). How learning about scientists’ struggles influences students’ interest and learning in physics. Journal of Educational Psychology. DOI: 10.1037/a0026224

The Omission Bias and Penn State

There has been a fair amount written on the social psychology of the Penn State situation (e.g. David Brooks), and I think it’s interesting that most of the discussion has centered on the bystander effect while glossing over the simple fact that people are biased against action. Sure, it makes us feel better about humanity to say there’s some kind of intense self-deception that keeps us from acting (and certainly there is to some extent), but in reality placing the blame on self-deception may be too generous. The fact is, people generally don’t have moral issues with inaction.

There are a number of studies that illustrate this “omission bias,” including the trolley problem study I mentioned the other day (in which many subjects prefer to let five people die rather than take a semi-active role in the death of one or two people), and an eye-opening 1990 study by Ilana Ritov and Jonathan Baron on decisions to vaccinate a child.

The present study concerns the role of two biases in hypothetical decisions about vaccinations. One bias is the tendency to favor omissions over commissions, especially when either one might cause harm. We show that some people think that it is worse to vaccinate a child when the vaccination can cause harm than not to vaccinate, even though vaccination reduces the risk of harm overall. The other bias is the tendency to withhold action when missing information about probabilities is salient – such as whether the child is in a risk group susceptible to harm from the vaccine – even though the missing information cannot be obtained.

Obviously this doesn’t explain the decisions of Joe Paterno and the Penn State administrators to not investigate Jerry Sandusky more aggressively. However, when you combine the type of passivity endorsed by omission bias + incomplete information with the king-like status of Paterno at Penn State, you begin to see how students could feel so outraged that he would be fired for “doing nothing.” (I also think a large portion of the protest can be attributed to the fact that college students like congregating in large raucous crowds under unique circumstances regardless of the inspiration for the gathering, but that’s a psychological discussion for another post.)
Ritov, I., & Baron, J. (1990). Reluctance to vaccinate: Omission bias and ambiguity. Journal of Behavioral Decision Making, 3(4), 263-277. DOI: 10.1002/bdm.3960030404

Shallow, C., Iliev, R., & Medin, D. (2011). Trolley problems in context. Judgment and Decision Making, 6(7), 593-601.

The Similarity Effect and Republican Obstructionism

The GOP’s “just say ‘no'” response to the Obama agenda has been surprisingly successful. They got away with nearly crashing the economy and they managed to turn public opinion against health care reform at a time when the economy and the President’s approval were far from rock bottom. Although a number of economic, political, and strategic factors likely helped shape public opinion, this blog is about psychology, and from a psychological standpoint all signs point to the influence of the “similarity effect.”

The similarity effect essentially says that when evaluating two options, the presence of a third option that’s similar to one of the first two will bias choices away from the similar options. For example, if you can’t decide whether to order pizza or Thai food, and then I offer the option to order Malaysian, chances are you’ll decide to order pizza. A new study shows this effect can even influence decisions in life or death moral quandaries (and thus it’s probably strong enough to influence political preferences).

The study examined people’s approval of actions in a modified trolley problem. In the standard trolley problem subjects are told there is a runaway train that will kill five people. They are then asked if they would approve of flipping a switch to redirect the trolley onto a track with one person, or approve of stopping the train by pushing a fat person in its path. Even though both actions are effectively the same (killing one person to save five people), subjects generally approve of flipping the switch but disapprove of pushing the person off the bridge.

In the modified trolley problem two groups of subjects were given three choices (action, action, and nothing) instead of just two (action and nothing). Both groups were told they could push the person into the train’s path (one dies, five are saved) or do nothing (five die). However, one group was told they could also flip a switch to redirect the train onto a track with two people, while the other group was told they could flip a switch to redirect the train onto a track with four people. The difference was that for the first group the third option was similar to pushing the person off the bridge (letting two people die instead of one), while for the second group the third option was similar to doing nothing (letting four people die instead of five).

The researchers found that when flipping the switch was similar to doing nothing, pushing the person off the bridge had the highest approval rating and doing nothing was the least approved option. However, when flipping the switch was similar to pushing the person off the bridge, doing nothing had the highest approval rating. The mere presence of the “switch” option had drastic effects on people’s approval of the pushing and inaction options.

The application to public policy is relatively simple. When it’s just “Obamacare” and “No Obamacare,” Democrats may have a sufficient amount of support. However, when the discussion starts to include Obamacare with a public option, Obamacare without a public option, Obamacare with Medicare for all, and Obamacare with stricter abortion requirements, the similarity of all those options starts to make “No Obamacare” more appealing. The same thing may have happened with financial regulation and the debt ceiling negotiations. Always saying “no” may seem uncreative and unproductive, but thanks to the similarity effect it works.
Shallow, C., Iliev, R., & Medin, D. (2011). Trolley problems in context. Judgment and Decision Making, 6(7), 593-601.

Why Girls Don’t Want STEM Careers

Given the funding NSF has put into STEM issues, I’m not ruling out that they would be willing to commit a few felonies in order to get some answers. Fortunately, it appears they may get some answers the old-fashioned way. A new study by a group of researchers from the University of Miami finds more evidence for the theory that female disinterest in STEM careers stems from a stronger desire to pursue communal goals.

In their initial study the Miami researchers found that women are more likely than men to endorse communal goals, and that STEM careers are perceived as more likely to impede communal goals. However, there was no causal evidence that these beliefs affected career choice. In the follow-up study they sought to confirm the causal connection with experimental evidence.

The researchers first activated communal goals in some of the subjects and then asked all subjects about their interest in STEM careers. The results showed that the activation of communal goals decreased interest in STEM careers in both sexes, but had no effect on interest in female-stereotypic careers or non-STEM male-stereotypic careers. In a follow-up experiment subjects were given descriptions of various careers that related to collaborative or independent work (e.g. “Mentor new members of my statistics group in doing data analysis.”) Women, but not men, favored the careers they were told involved more collaborative work. The results from the two studies provide robust support for the idea that women shy away from STEM careers because they perceive them to impede the facilitation of communal goals.

In the past, the simple explanation for the lack of women in STEM fields was that it was a PR problem. Because the fields were full of men, women didn’t feel like they would be comfortable there. The good news about these findings is that even though they provide a more scientifically rigorous social-cognitive explanation, the prescription is essentially the same. It’s a PR problem in need of a PR solution. Now that science has done its job it’s time for NSF to hire a fancy advertising agency to convince girls that scientists and engineers don’t sit in a lab by themselves for 10 hours a day.
Diekman, A., Clark, E., Johnston, A., Brown, E., & Steinberg, M. (2011). Malleability in communal goals and beliefs influences attraction to STEM careers: Evidence for a goal congruity perspective. Journal of Personality and Social Psychology, 101(5), 902-918. DOI: 10.1037/a0025199

Can Computers Reduce Bias?

Artificial intelligence has proven to be the perfect antidote for certain failings of human intelligence. Exhibit A: alarm clocks that move around the room in order to prevent weak-willed individuals from repeatedly snoozing. A less trivial weakness of the human mind is its susceptibility to numerous cognitive biases, and once again artificial intelligence may have an answer. A new experiment by a group of German researchers finds that a recommendation system that surfaces information inconsistent with a person’s prior beliefs can alleviate confirmation bias.

In Study 1, preference-inconsistent recommendations led to a reduction of confirmation bias and to a more moderate view of the controversial topic of neuro-enhancement. In Study 2, we found that preference-inconsistent recommendations stimulated balanced recall and divergent thinking.  Together these studies showed that preference-inconsistent recommendations are an effective approach for reducing confirmation bias and stimulating divergent thinking.

Subjects in the experiment were initially asked a series of questions to gauge their thoughts on the topic of neuro-enhancement. Subjects were then presented with a list of eight arguments about neuro-enhancement, four of which were in favor of the practice and four of which were against it.  A third of the subjects were then recommended an argument consistent with their beliefs about neuro-enhancement (preference-consistent), a third were recommended an argument inconsistent with their beliefs about neuro-enhancement (preference-inconsistent), and a third received no recommendation (control condition).  When subjects were later given the opportunity to view more arguments or explain their views on neuro-enhancement, subjects in the preference-inconsistent condition showed less confirmation bias, more balanced recall of the arguments, and a greater ability to generate novel arguments.
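The assignment logic behind the three conditions is simple enough to sketch. Here is a minimal, hypothetical illustration (the names, stance scoring, and data structures are my assumptions, not the study’s published implementation): each argument carries a pro/con polarity, and the subject’s prior stance, scored from the intake questions, determines which side counts as “consistent.”

```python
def recommend(stance, arguments, condition):
    """Pick one argument to recommend, mirroring the three study conditions.

    stance:     float in [-1, 1]; negative = against neuro-enhancement,
                positive = in favor (hypothetically scored from intake questions)
    arguments:  list of (text, polarity) pairs, polarity +1 (pro) or -1 (con)
    condition:  "consistent", "inconsistent", or "control"
    """
    if condition == "control":
        return None  # control subjects received no recommendation
    side = 1 if stance >= 0 else -1
    # preference-inconsistent recommendations come from the opposite side
    target = side if condition == "consistent" else -side
    for text, polarity in arguments:
        if polarity == target:
            return text
    return None
```

A “reverse-Google” along the lines imagined below would just be this selection rule applied at scale: rank results by distance from, rather than similarity to, the user’s inferred preferences.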

These kinds of AI systems will surely come in handy when I’m in charge of a totalitarian democratic society that forces people to watch an unbiased, non-partisan two hour policy-education video before voting or expressing an opinion. For now, the challenge is finding a way to convince people to use preference-inconsistent recommendation systems. I don’t think it would be difficult for search engineers to create some kind of “reverse-Google” that returns preference-inconsistent information, but educators, policy makers, and people fed up with their ignorant friends would have to get creative in finding incentives that would actually get people to use it. The experimenters astutely point out that online recommendation systems are generally designed to give you the most preference-consistent recommendation possible. If you like pizza, Yelp will tell you about pizza places. It won’t say “Have you thought about trying Japanese?”
Schwind, C., Buder, J., Cress, U., & Hesse, F. (2011). Preference-inconsistent recommendations: An effective approach for reducing confirmation bias and stimulating divergent thinking? Computers & Education. DOI: 10.1016/j.compedu.2011.10.003

When Do Children Start to Hate Inequality?

Research has shown that children as young as three are willing to share resources with others, but thus far little is known about how children actually feel about those decisions. For example, are they happy about giving away some of their resources, or do they just do it because they think that’s what society wants them to do?

A recent study aimed to answer this question by examining the emotional reactions of kindergarteners, 2nd graders, and 4th graders to their decisions in the dictator game. The results show that while a majority of kids choose to give away half their resources by the time they reach 2nd grade, it is not until 4th grade that they feel satisfaction from their decision to curb inequality.

We present two studies (using the dictator and ultimatum games) suggesting that young children (5-6 years old) are aware of the norms of fairness but choose to act selfishly and prefer not to share. Slightly older children aged 7-8 adopt these norms in their actual behavior but do not feel happier when they share half of their endowments than when they share less than half. Finally, true inequity aversion only appears at the ages of 9-10, when children not only give more, but they correspondingly also feel better when their endowments are equally divided.

The study demonstrates that although kids will fight inequality because they think it’s the right thing to do, it is not until they’re older that they actually get satisfaction from doing the right thing. In other words, even when kids know what they should do, it takes a few years for that action to also become what they truly want to do.

The study also means that 2nd grade teachers worrying about unequal pencil distribution leading to an “occupy the classroom” situation can probably breathe a sigh of relief. Fourth grade teachers may not be so lucky.
Kogut, T. (2011). Knowing what I should, doing what I want: From selfishness to inequity aversion in young children’s sharing behavior. Journal of Economic Psychology. DOI: 10.1016/j.joep.2011.10.003