Does Access to Birth Control Reduce Poverty?

In American politics the proliferation of birth control is important because of how it affects the eternal resting place of our immortal souls. But believe it or not, there are also non-metaphysical policy consequences to increasing access to birth control. A new study by a pair of economists — Stephanie Browne of J.P. Morgan and Sara LaLumia of Williams College — suggests that access to birth control led to a significant reduction in female poverty rates.

This paper examines the relationship between legal access to the birth control pill and female poverty. We rely on exogenous cross-state variation in the year in which oral contraception became legally available to young, single women. Using census data from 1960 to 1990, we find that having legal access to the birth control pill by age 20 significantly reduces the probability that a woman is subsequently in poverty. We estimate that early legal access to oral contraception reduces female poverty by 0.5 percentage points, even when controlling for completed education, employment status, and household composition.

A second analysis with less robust controls found that access to the pill reduced poverty rates by one full percentage point. Given that the mean poverty rate for women over the relevant time period was 10%-15%, the findings suggest that access to the pill led to a 3 to 10 percent reduction in the female poverty rate. According to Browne and LaLumia, the low end of their estimated impact is equivalent to about a 1 percentage point decrease in a state’s unemployment rate.
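The back-of-the-envelope conversion from absolute percentage points to relative reduction works out as follows (a quick sketch; the 0.5 and 1.0 point estimates and the 10%-15% baseline are the figures reported above):

```python
# Express an absolute drop (in percentage points) as a percent
# of the baseline poverty rate.
def relative_reduction(drop_pp, baseline_pct):
    return 100 * drop_pp / baseline_pct

# Controlled estimate: 0.5 pp drop against the high (15%) baseline.
low_end = relative_reduction(0.5, 15)    # ~3.3 percent
# Less-controlled estimate: 1.0 pp drop against the low (10%) baseline.
high_end = relative_reduction(1.0, 10)   # 10 percent
print(low_end, high_end)
```

Hence the "3 to 10 percent" range: the same absolute drop looks bigger or smaller depending on which end of the baseline you divide by.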

But wait, there’s more! The results also supported previous findings that suggest access to birth control leads to a statistically significant reduction in the chances a woman will get divorced.

So there you have it. Poverty reduction and strong marriages. The pill is everything a social conservative could ever want.
Browne, S., & LaLumia, S. (2014). The Effects of Contraception on Female Poverty. Journal of Policy Analysis and Management. DOI: 10.1002/pam.21761

Graduate Students Should Be Able to Specialize In Replication

Now that the need for more replication has forced its way onto the scientific agenda we should begin thinking about how to build systems to support its growth and institutionalization. New publications and conferences are all good steps, but we should go beyond relying on a loosely organized group of scientists who dedicate time to something that has collective utility but little individual utility.

I think one solution is to create well-defined graduate programs that focus on replication. The programs would vary depending on the fields they are designed to police. For example, the psychology replication program would train students in methods for replicating psychology studies and identifying methodological weaknesses that might make a study a good candidate for replication. The programs would effectively be mechanisms for credentialing official science cops.

The programs would accomplish a number of things relative to current initiatives:

1. Create the sense of an unbiased authority. The current system relies on researchers policing the “competition” or policing themselves. Both situations incentivize skewed results. With “Replicators” there would be fewer conflicts of interest — even if they conducted some of their own original research — because official credentials would establish them as unquestioned authorities and put their work under the microscope. Replicators would also make it harder for people to deem replication attempts illegitimate and launch a war in the court of public opinion (see: Bargh, John).

2. Support the innovation of replication best practices. If Replicators do choose to do “original” research a good portion of it could focus on improving the methods for judging the validity of a study. (For example, Gregory Francis has a new paper on using effect sizes to test for publication bias.) A lot of this research would overlap with statistics research, but there are also qualitative advances that can be made.

3. Create a self-sustaining system. The Ponzi-scheme structure of academia is designed so that fields can grow exponentially. Making replication a part of the academic system will create a future supply of Replicators.
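To give a flavor of the kind of method Francis proposes in point 2, here is a minimal sketch of an "excess success" check. The study list, effect sizes, and sample sizes below are hypothetical, and the real test involves more careful power estimation; the core idea is just that if every one of a set of modestly powered experiments "succeeds," the joint probability of that record is suspiciously low:

```python
import numpy as np
from statsmodels.stats.power import TTestIndPower

# Hypothetical set of published experiments, all reporting success.
# Each entry: (observed effect size d, per-group sample size n).
studies = [(0.45, 30), (0.50, 25), (0.40, 35), (0.55, 28), (0.48, 32)]

analysis = TTestIndPower()
# Post-hoc power of each experiment, treating the observed effect
# size as the true effect, with a two-sided alpha of .05.
powers = [analysis.power(effect_size=d, nobs1=n, alpha=0.05)
          for d, n in studies]

# Probability that ALL experiments would succeed if run honestly
# and reported in full.
p_all_succeed = np.prod(powers)

# Francis-style criterion: a very low joint probability (< .10)
# suggests the published record is too successful to be complete.
print(f"joint success probability = {p_all_succeed:.3f}")
if p_all_succeed < 0.10:
    print("pattern of results suggests possible publication bias")
```

With five experiments each powered around 40-50%, the joint probability lands well under the .10 threshold, which is exactly the pattern that would flag a literature as a candidate for replication.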

The one downside is that by concentrating official responsibility in the hands of a few you exempt everyone else from doing better science. Unfortunately, until changes to the tenure process alter publishing incentives I’m skeptical that a loosely organized system of individual responsibility would be better. Of course there’s no reason we can’t have journals, conferences, individual efforts, and academic programs. As every presidential candidate says, we need an “all of the above” approach.
Francis, G. (2012). Publication bias and the failure of replication in experimental psychology. Psychonomic Bulletin & Review. DOI: 10.3758/s13423-012-0322-y

False-Equivalence Leads to Inaccurate Views On the Connection Between Vaccines and Autism

The discussion about false-equivalence in the media tends to focus on abstract philosophical questions about the media’s role and responsibilities. This media-centric view isn’t unwarranted — after all, journalists are the ones who will have to solve the problem — but it does ignore an important part of the equation: How does false-equivalence specifically influence public opinion?

According to a new study by Cornell’s Graham Dixon and George Mason’s Christopher Clarke, the answer is significantly, and in a bad way. Specifically, they found that articles presenting a false-balance between opposing views on the link between vaccines and autism make people more unsure about the absence of such a link:

To investigate how balanced presentations of the autism-vaccine controversy influence judgments of vaccine risk, we randomly assigned 327 participants to news articles that presented balanced claims both for and against an autism-vaccine link, antilink claims only, prolink claims only, or unrelated information. Readers in the balanced condition were less certain that vaccines did not cause autism and more likely to believe experts were divided on the issue. The relationship between exposure to balanced coverage and certainty was mediated by the belief that medical experts are divided about a potential autism-vaccine link.

At the very least, the study helps refute the great myth of false-equivalence — that people form opinions by cleverly navigating opposing arguments rather than using the quantity of coverage as a heuristic for figuring out what’s legitimate. As the authors point out, the results should be replicated using a wider range of articles in which various degrees of false-equivalence have been quantified, but the study is a good start toward building a body of experimental evidence that demonstrates the downside of false-equivalence.

It’s also worth noting that participants who read balanced articles were not any more likely to doubt the autism-vaccine link than people who read only a pro-link article. (In fact, people who read the balanced story were less certain than pro-link readers that there was no link, although the difference was not significant.) In other words, those who read pro-X and anti-X were just as likely (or slightly more likely) to believe X than those who read only pro-X. This strikes right at the fear of those who worry about false equivalence because it hints at the legitimizing effect of balance. When somebody reads a story that only presents one side, they may still understand that the other side exists and is the accepted view. But when the two sides are presented together, it legitimizes the unsubstantiated view and hammers home the idea that the views are equals. In terms of the autism-vaccine link, somebody who reads a story that only argues in favor of the link might still have the prior knowledge necessary to believe it’s a one-sided story about bogus science. But when they see the pro- and anti-link views presented side by side it can cloud their perception and make them uncertain that there is a difference in the respectability of the two views.

Dixon, G.N., & Clarke, C.E. (2012). Heightening Uncertainty Around Certain Science: Media Coverage, False Balance, and the Autism-Vaccine Controversy. Science Communication. DOI: 10.1177/1075547012458290

Jonah Lehrer, the Writing Industry, and the Reversal of the Fundamental Attribution Error

Whenever a successful writer gets busted for “cheating,” the narrative always involves the collective wondering of why they would take such a risk. We saw this with the downfall of Jayson Blair and Johann Hari, and most recently with Jonah Lehrer. For example, Erik Kain called Lehrer’s actions “strange and baffling.” Curtis Brainard at CJR straight-up asks what’s on everybody’s mind:

Following the revelations of self-plagiarism, outright fabrication, and lying to cover his tracks, we were bewildered. How could such a seemingly talented journalist, and only 31 years old, have thrown it all away?

What’s interesting is that this question takes a noble view of the offender. The implication is always that the person got to the top on their merits, and then drastically changed their behavior due to situational pressures. People rarely consider that the offender might have risen to the top because they’re predisposed to bending rules or inhabiting the gray areas in an advantageous way. I don’t bring this up specifically with regard to Lehrer — for all I know his recent transgressions were the first impure things he’s ever done — nor am I implying that the people who’ve been caught red-handed only reached their apex because of cheating. However, writing tends to be a winner-take-all market full of talented people, and I have no doubt that for every writer who rockets to the top there are a handful of writers nearly as talented left to toil away in the corners of the internet. It certainly seems plausible that bending the rules here or there could make a big difference. Why assume that everything the offenders accomplished up until their downfall was based purely on virtuous actions?

Furthermore, research on the fundamental attribution error (FAE) predicts that people would not attribute the mistakes of somebody like Lehrer to situational pressures. The FAE describes the tendency to believe that a person’s behavior and mental state correspond to a degree that is logically unwarranted by the situation. For example, when people read an essay advocating a pro-Castro position, whether or not the writer was required to advocate for that position makes little difference when people are asked to rate how pro-Castro they think the writer is. Situational factors tend to be ignored, and that means when somebody cheats, we tend to assume that they have always been, and forever will be, a cheater.

Why then do writers tend to give Lehrer the benefit of the doubt by focusing the pressures of his situation? One explanation is that people are more likely to overcome the FAE when their judgments of a person’s motivations can influence their own outcomes. For example, psychologist Roos Vonk conducted a study in which participants engaged in a prisoner’s dilemma scenario with one other opponent. However, before the game they read an essay that advocated for more selfish or more cooperative behavior. Some participants were told the essay was written by their future opponent while some were not. Finally, some were told the writer had no choice in the essay topic, while some were told the writer freely chose to advocate their position.

As expected, for participants who were reading the essay of a random person, whether or not the person chose their topic had little effect on how the participant rated the writer’s selfish or cooperative attitudes. However, when participants were reading the essay of their opponent, and thus their judgments could affect their own outcomes in the prisoner’s dilemma game, participants judged attitudes expressed in the no-choice essays to be weaker than when the writer had a choice. In other words, they were less susceptible to the fundamental attribution error and correctly accounted for situational factors.

While most writers aren’t literally engaged in a game that makes their outcome “dependent” on the personality characteristics of writers who break the rules, I think it’s fair to say that because of the nature of the industry most writers do have a personal interest in understanding why writers fabricate, how they should be judged, and what the consequences should be. Thus writers may be more likely to overcome the FAE and take situational factors into account when judging the misdeeds of other writers.

This explanation also dovetails nicely with more-superficial reasons for giving cheating writers the benefit of the doubt. Neither aspiring writers nor those who are already successful want to believe that making it as a successful writer requires unsavory practices. The former group doesn’t want to feel pressure to break their moral code, while the latter group wants to see themselves and their colleagues in the most favorable light.

Finally, I think it’s worth mentioning Seth Mnookin’s recent post highlighting previously unknown errors made by Lehrer. Mnookin concludes by essentially saying that Lehrer is a cheater, and has always been a cheater.

This is not the work of someone who lost his way; it’s the work of someone who didn’t have a compass to begin with.

My point in this post is that it’s odd we don’t think more negatively about rule-breaking writers when there is a lack of evidence about their past actions, so I don’t think Mnookin’s revelations are all that relevant here. However, perhaps they provide one piece of evidence for the idea that when it comes to writers commenting on other writers who fabricate, there is a reversal in the direction of the FAE. Not only do they overcome the bias toward attributing actions to internal factors, they are actually biased toward over-attributing actions to situational factors.
Vonk, R. (1999). Effects of Outcome Dependency on Correspondence Bias. Personality and Social Psychology Bulletin. DOI: 10.1177/0146167299025003009

Jones, E.E., & Harris, V.A. (1967). The attribution of attitudes. Journal of Experimental Social Psychology. DOI: 10.1016/0022-1031(67)90034-0

Scientists Need to Be Bigger Assholes

Dave Weigel’s piece on Lorraine Minnite reads like a fairy tale. Minnite is a political scientist who was called to the Pennsylvania voter-ID-law hearing to testify about the myth of voter fraud. When dubious scientific evidence was brought up by Deputy Attorney General Patrick Crawley, there was no hedging or niceties from Minnite. Just an all out attack on what she perceived to be bad science.

Crawley started to ask his witness about Spakovsky’s work, but she didn’t bite.

“We could go through each one, if you want, and I could talk about what I object to. But in general, he has made claims about voter fraud that, upon investigation, are not correct. In fact, I have written a rebuttal to a claim he’s made a lot about voter fraud in Brooklyn in 1982,” published on a “highly-read election listserv.”

“I didn’t want to go through his testimony,” said Crawley. But would Minnite admit he was an expert with different views?

“No,” she said. “He’s not an academic. He doesn’t do the kind of research that he should do before making these sorts of claims. He’s not an academic. He’s a lawyer. He’s employed by the Heritage Foundation… but he has no standing as an academic. He’s never produced academic research.”

Another merciless rebuttal:

“I can tell by the look on your face that you’re familiar with Mr. Von Spakovsky and his work?”

She was. She and Crawley briefly disagreed on whether Von Spakovsky ever served on the FEC — “I couldn’t remember which thing he was nominated to that he didn’t actually receive.” She dismissed Von Spakovsky’s work on election boards, because he was “never an administrator.”

And one more:

“Your formal education, if I read your CV correctly, does not include specific training in election administration, does it?” he asked.

“I don’t know what you mean by training,” said Minnite.

“Did you get any degree or take courses that were specifically geared toward election administration?”

“Actually, there are no degrees in election administration.”

It reads like an Aaron Sorkin script.

Here’s the broader point. One of the problems for people who do legitimate science is that they’re incapable of really unleashing their fervor on people who do illegitimate science. One reason for this is that doing so would be unscientific. To belittle your opponent and say their work is 100% super-definitely-incorrect is to ignore the tiny chance that new evidence will emerge.

The result is that when scientists level criticism it’s usually built around an official-sounding reference to “evidence at this time.” It lacks the snark and bitterness of a Romney campaign response to an Obama speech, and it certainly isn’t powerful enough to oppose the concerted effort made by those who aim to discredit good science. But if scientists were more like Minnite, and repeatedly ripped bad science to shreds like a derisive 12-year-old giving in to every destructive Machiavellian impulse, maybe things would be different. Or maybe not.

Read Weigel’s longer Slate feature on the hearing here.

The Looming Knowledge Bubble

If you owned a McMansion in 2007 or dot-com stocks in 1999, the “bursting” of the bubble essentially meant that money you thought you had ceased to exist. With researchers placing a new emphasis on replication efforts, and academia beginning to undergo a serious transformation in general, I wonder whether we’ll soon find out that we’ve been living through a knowledge bubble. That is, the bubble will “burst,” and a lot of knowledge about the world we thought we had will cease to exist.

Academia has always been an institution that regulated itself, and within it there was never much incentive to go beyond the call of duty in ensuring published findings truly reflected the state of the world. What was there to gain by placing an emphasis on replication? A failure to replicate would cast doubt on everybody’s work, and a successful replication would merely take valuable time away from bolstering reputations through more original research.

There are a few reasons why things are now starting to change. Scientists themselves are finally increasing efforts to ensure that published results accurately reflect real knowledge about the actual state of the world. The most well-known of these initiatives are the Reproducibility Project, which attempts to reproduce important findings, and PsychFileDrawer, which aims to catalog negative results so any apparent finding can be viewed in the context of previous attempts to uncover it. Replication studies are also becoming increasingly easy to perform. For example, The Economist has a nice story about how Mechanical Turk is facilitating the replication process, particularly when it comes to seeing if WEIRD (Western, educated, industrialized, rich, and democratic) results hold up across cultures.

Last but not least, the push for open access and the growing role of non-published work (e.g. blogs, speeches, etc.) are poised to disrupt the fragile publication-based tenure system that’s long ruled academia. For years researchers have been afraid to disrupt the system because older academics built their reputations on it, and younger academics planned careers around it. But it won’t last forever. What will happen when journals start disappearing because their publishers refuse to make them accessible? A rapidly changing journal landscape will make the value of being published in a particular journal much less standardized. As the system breaks down, there is bound to be less emphasis on the quantity and location of what’s published, and more emphasis on its quality.

All of these factors will contribute to a slowdown in new published research, as well as a “loss” of existing research. Eventually the bubble of knowledge will deflate. Because it will be driven by the labor-intensive act of conducting replication research, it’s not really a bubble — it’s more like an air mattress with a small hole that you need to repeatedly jump on in order to expel air.

Whatever ill-fitting metaphor you choose, the bottom line is that a bunch of things we “know” about the world could disappear, and we’ll be left with less knowledge. While some have speculated about the dire consequences for science, and the field of psychology in particular, in the long run big structural changes will produce a healthier and more stable institution of scientific research. We’ll just have to get used to the idea that for the first time in recent history, we’ll know less about many things than we once did.