The Year In Sports Research

Think that being an academic is incompatible with being a die-hard sports nut? Think again. The greatest minds of our time are still hard at work figuring out exactly what’s going on with athletes, teams, and fans. Here’s the best of what they uncovered in 2012:

Tax rates matter. A pair of new studies examined how local income tax rates influence a team’s ability to sign free agents. Cornell’s Nolan Kopkin analyzed the NBA free agent market from 2001 to 2008 and found that an increase in the marginal income tax rates paid by players on a given team leads to a decrease in the average skill of the free agents the team signs. In other words, higher taxes mean worse free agents. Tulane’s James Alm led a similar study of MLB free agents and found that teams in states with higher income tax rates pay higher free agent salaries, which gives teams in low-tax states the advantage of being able to pay less for the same talent. Don’t be surprised if you see Grover Norquist start taking credit for World Series titles.
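The mechanism is simple arithmetic: a team in a high-tax state must offer a larger pre-tax salary to match the take-home pay of an offer from a no-tax state. Here’s a minimal sketch in Python; the flat-rate treatment and the specific tax rates are simplifying assumptions of mine, not figures from either study.

```python
def pretax_needed(take_home, state_rate, federal_rate=0.35):
    """Pre-tax salary required to deliver a given take-home amount,
    assuming a single flat federal rate plus a flat state rate."""
    return take_home / (1 - federal_rate - state_rate)

target = 6_500_000  # take-home pay the player wants, in dollars

# Illustrative state rates (assumptions, not the studies' data):
for state, rate in [("Florida (no state income tax)", 0.0),
                    ("California (~10.3% top rate)", 0.103)]:
    print(f"{state}: ${pretax_needed(target, rate):,.0f} pre-tax")

# Florida (no state income tax): $10,000,000 pre-tax
# California (~10.3% top rate): $11,882,998 pre-tax
```

That roughly $1.9 million gap is the premium a high-tax-state team has to pay just to break even, which is exactly the pattern Alm’s group found in MLB salaries.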

It’s good to be a generalist. New research by Long Wang and J. Keith Murnighan suggests there is more interest in players with a general skill set even when specialized skills are what’s needed. Fans prefer players with a wide variety of skills, and general managers share the same bias. Wang and Murnighan analyzed the contract outcomes of free agent guards whom they categorized as either two-point specialists or three-point specialists. Oddly, they found that the salaries of three-point specialists were based on their more general, but less apt, two-point scoring ability rather than their three-point shooting. No word on whether the sample was weighted toward decisions made by the Wizards, Bobcats, and Kings.

The Wonderlic Test matters for draftees, but only for certain positions. Few tests have received as much attention as the 12-minute “intelligence” evaluation given to potential NFL draftees. While the test’s importance as a predictor of future success has been downplayed in recent years, two Cal State-Fullerton economists found that scores still affect a player’s draft position if he’s a quarterback, tight end, or offensive lineman. The findings mean kickers can continue their tradition of staying up to get drunk the night before the test.

The benefits of national sports fandom are progressive (in Germany). We can’t be sure what a study of Americans would show, but a survey of over 5,000 German residents found that when German athletes were victorious, the biggest increases in happiness and pride came among women, individuals with little formal education, and people with low incomes. So it turns out all those Ivy League rowers do care about the poor.

NCAA basketball officials want to appear fair. Fans like to comfort themselves by saying that calls will even out in the end, but when it comes to NCAA basketball that might actually be true. In 2009 Kyle Anderson and David Pierce found evidence that college basketball officials tend to even up foul counts, and a new study by Cecilia Noecker and Paul Roback both confirms and extends those results. Their study examined Big Ten, Big East, and ACC games from the 2004-05 and 2009-10 seasons. They found that each one-foul increase in the differential between the visiting and home teams’ foul counts increased the chances that the next foul would be called on the home team. In addition, they found that offensive fouls, which tend to be more subjective calls, were better predicted by the foul differential than by the rate at which other fouls had been called during the game (with no bias, the latter would be the better predictor). Evening up foul counts may seem unfair, but remember that these are student athletes, and thus we need to protect them from the cruel reality of an objective world.
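For readers curious about the shape of the analysis, here is a toy version in Python. It is my own simulation, not Noecker and Roback’s data or model, and the coefficient that generates the bias is an arbitrary assumption; the point is just that a logistic regression on the running foul differential can detect an “evening-up” tendency.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
X, y = [], []

for _ in range(500):                  # 500 simulated games
    home_fouls = visitor_fouls = 0
    for _ in range(40):               # roughly 40 fouls per game
        diff = visitor_fouls - home_fouls
        # Assumed evening-up bias: the further ahead the visitors are in
        # fouls, the likelier the next call goes against the home team.
        p_next_on_home = 1 / (1 + np.exp(-0.3 * diff))
        on_home = rng.random() < p_next_on_home
        X.append([diff])
        y.append(int(on_home))
        home_fouls += int(on_home)
        visitor_fouls += int(not on_home)

model = LogisticRegression().fit(np.array(X), y)
print(f"coefficient on foul differential: {model.coef_[0, 0]:.2f}")
# The regression recovers a positive coefficient; on real play-by-play
# data, that positive coefficient is the signature of evening-up.
```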

Teams might actually get caught “looking ahead.” Northwestern’s Jennifer Brown and Cal’s Dylan B. Minor examined the relationship between the strength of future opponents and player performance in tennis tournaments. They found that strong players were more likely to win when their expected opponent in the next stage of the tournament was weaker. Because opponent strength is all relative, you could spin the result around and say that players are less likely to win when their expected future opponent is stronger. In other words, it’s better for a looming opponent to be weak than strong. Does that mean postgame analysis from talking heads about “looking ahead” deserves a shred of legitimacy? Meh.

Better to host a soccer tournament than the Olympics. People are growing more skeptical about the economic effects of hosting a major sporting event, and this study in the International Journal of Sport Policy and Politics will only add fuel to the fire. The study looked at the impact of event hosting on “foreign direct investment” (FDI). The researchers found middling effects overall, although the more detailed version of their analysis suggests that countries hosting the World Cup or the UEFA European Championship saw more FDI than those hosting the Olympics. A multi-level regression also found that hosting non-Olympic tournaments allows you to create less-idiotic mascots.
———————————————————————————————————————————————————–
Kopkin, N. (2011). Tax Avoidance: How Income Tax Rates Affect the Labor Migration Decisions of NBA Free Agents. Journal of Sports Economics, 13(6), 571-602. DOI: 10.1177/1527002511412194

Wang, L., & Murnighan, J. K. (2013). The generalist bias. Organizational Behavior and Human Decision Processes, 120(1), 47-61. DOI: 10.1016/j.obhdp.2012.09.001

Hallmann, K., Breuer, C., & Kühnreich, B. (2012). Happiness, pride and elite sporting success: What population segments gain most from national athletic achievements? Sport Management Review. DOI: 10.1016/j.smr.2012.07.001

The Inability of Poor Students to Navigate College is Not the Problem, College is the Problem

This weekend’s NYT story on the plight of three girls from low-income families who excelled in high school but struggled once they got to college has quickly been labeled a “must read.” Though the empathy for the girls in the loud response to the story is real, it is also shallow. It reeks of an “aw shucks, that’s a shame, things should be different, we should do more to help” attitude, but nobody dares to truly question the broader environment that allowed the story’s events to take place. Nobody questions a system that every decision maker in America came through, but which only works for 20%-40% of the country. Nobody questions a system that’s supposed to be the key American vehicle for social mobility, but which often has a sticker price of $150,000. If I told you a poor African country had a system that allowed impoverished villagers to attain a middle-class life in the city, but that it cost $20,000 to take part, you would immediately call it perverse. Yet that’s essentially what we have in America.

There are a number of reasons people rarely point this out. First, because our university system has been around for so long in its current form, nobody thinks there’s another way. Listen to any politician talk about education and it’s always about getting more kids to graduate from college. From a practical standpoint, that’s the best thing a young person can do today, but that doesn’t mean it’s a good system. We’ve irrationally stamped the bachelor’s degree as the unquestioned end goal without asking whether awarding a B.A. to young people who score well on a standardized test and complete four years of college is an efficient way to provide social mobility. Why don’t we seriously consider whether a system that was built to efficiently stratify society 50 years ago is still the right system today?

Psychologically, variations of the status quo bias also make it difficult to imagine another system. Because powerful people all went to college, they consciously believe that if it worked for them it can work for anybody. There’s also anxiety that any change to the system will devalue a person’s current credentials. Unconsciously, it’s hard to question the current system too. People want to believe they live in a just society, and admitting that our system of social mobility was designed by elites, for elites, and disproportionately benefits elites causes a lot of psychological discomfort. It’s easier to assume the current system is fine and all we need are marginal improvements that help it better serve poor people.

At some point in the future I’ll want to go into more detail about specific ideas for building a more equitable higher education system, but if we’re serious about creating more opportunity for poor students there are four key long-term radical changes we should be thinking about:

1. Unbundle the bachelor’s degree. People understand that bundles of cable channels screw consumers by forcing them to pay for channels they don’t want in order to get the channels they do want. Bachelor’s degrees function the same way. For example, at many selective universities students are required to take three semesters of a foreign language in order to graduate. If you’re a poor kid who wants to be an engineer or a psychologist, this is an unnecessary burden. Knowing a foreign language appeals to our elitist notion of a liberal arts education, but it’s unfair to impose those ideals on the 70% of society that can’t afford to purchase them. In the long run we need a system that awards “mini-degrees” based on smaller bundles of classes. For example, instead of a B.A. from Harvard, a student could be a “Harvard Level 7 Mechanical Engineer,” a “Harvard Level 2 Historian,” and a “Harvard Level 5 Physicist.” Unbundling the components that make up a bachelor’s degree would allow students to pay for exactly as much education as they want. It would also allow professionals who want more education to get the necessary credentials from two or three classes rather than an entire post-graduate program.

2. End College Admissions & 3. Make Classes Free. Currently, geographical and financial roadblocks restrict the number of people who have the opportunity to get the word “Harvard” on their resume. Yet many of these people, whether they live in a poor Indian village or an American community without social supports, have the ability to excel in Harvard classes and earn the mini-degrees mentioned above (if not a full B.A.). Imagine that any of these people could go on the internet, take a series of free Harvard classes, and then take a series of tests that would grant them a valid credential for their work in those classes. Schools could maintain their reputations by making the tests so difficult that passing rates match their admission rates. However schools decide to do it, we should have a system where if you’re smart enough to know something, you’re able to prove it. It’s that simple.

4. Charge Money For Acquiring the Credential, Not For the Learning. Giving free education to anybody who wants it is probably not a sustainable financial model, so to make money universities can charge for the opportunity to take their assessments. Under this system, instead of paying up front without knowing whether you’ll reap the benefits of passing the class, you can take the class for free and pay only if you believe you learned something and want to prove it. Schools will have to navigate tradeoffs involving revenue, test difficulty, and academic reputation, but the opportunity to sell credentials to the entire world has the potential to create a monetary windfall. Of course, there’s no reason our current higher education system can’t exist alongside this new version. If the American upper class believes small in-person classes and the social development provided by life on a college campus are worth $30,000 a year, they can still pay for that experience. But that shouldn’t be the only way to jump into a higher income bracket.

The bottom line is that we should have a system where anybody can take any class at any university and, if they pass an assessment, advertise their knowledge to employers in a manner that’s easily understood. Any system that prevents that from happening puts the financial well-being of universities and the American upper class ahead of the financial well-being of the American poor. The steps outlined above amount to a radical transformation that could only take place over many years, but that should be our end goal, not making marginal improvements to how poor students navigate our current higher education system.

Is the “Feminization” of Professional Titles a Good Thing?

One item on the to-do list of progressive societies is creating a more gender-neutral language. There are generally two ways of doing this. The first, “neutralization,” involves using one gender-neutral word to refer to both men and women — for example, referring to both male and female professors as “professor” instead of following the Spanish method of using profesora and profesor. The second method, “feminization,” involves creating feminine forms of nouns in order to make the presence of a woman more salient — for example, using the word chairwoman instead of chairman or chair.

Most recent changes to the English language tend to involve feminization rather than neutralization, and although these changes are all positive developments in the abstract, questions remain about what specific effect these new words have on the people they describe. For example, could these new words hurt a woman’s reputation, either because they’re unfamiliar or because they emphasize the undermining of gender norms?

That’s the question a group of Polish researchers recently set out to answer. In two initial experiments they created a fake job title and presented participants with descriptions of people who held that job. One group read about a woman described by the feminine version of the job title, a second group read about a woman described by the masculine version, and a third group read about a man described by the masculine version. The job titles were in Polish — diarolog (masculine) and diarolożka (feminine) — but if they had been in English you could imagine they were something seemingly legitimate like “senior reductorman” and “senior reductorwoman.” The important thing is that participants were sold on the authenticity of the job titles.

Once participants read about the “diarologs” they were asked to evaluate their ability to do the job. The researchers found that females with the feminine job title received lower evaluations than both the men and the women who had masculine job titles. Feminization appeared to be harming women, although the researchers still weren’t sure of the cause. One explanation was that women were in fact being punished for taking on male roles, but an alternative explanation was simply that unfamiliarity with the feminine version of the job title led to lower ratings.

To solve this issue the researchers conducted a third experiment that asked about candidates for a beautician job — a role that’s lower status and stereotypically female — and a nanotechnology job — a role that’s high status and stereotypically male. The researchers found that people who had the feminine beautician job title received lower ratings than those with the masculine beautician job title, the same outcome that occurred with the nanotechnology job and in earlier experiments. Because the feminine job title led to lower ratings even in stereotypically female jobs, the researchers concluded that it’s the oddity of feminine job titles that causes the lower ratings, not their ability to upset gender norms.

This finding is important because it means there’s a tradeoff between short-term costs and long-term benefits when it comes to adding gender-neutrality to a language.

Feminine forms may sound strange, and negative connotations may be prominent. But the more feminine job titles are created, the more frequently and systematically they are used in reference to women, the more normal they will sound and the more neutral the feminine suffixes should become in the long run (see the mere exposure effect in Zajonc, 2001). They may then unfold their positive potential with few traces of side effects.

In the short run, somebody who’s described as a chairwoman or spokeswoman may have her status diminished, but as those words cease to raise even the most sexist person’s eyebrow, those drawbacks will disappear. Moreover, in the long run the existence of feminine job titles ought to help entrench the idea of women in powerful positions and lead to a marginally higher level of equality. So if you have a job with -woman attached to the end, make sure that 30 years from now the young girls are thanking you for all you’ve done for them.
—————————————————————————————————————————————————————-
Formanowicz, M., Bedynska, S., Cisłak, A., Braun, F., & Sczesny, S. (2012). Side effects of gender-fair language: How feminine job titles influence the evaluation of female applicants. European Journal of Social Psychology. DOI: 10.1002/ejsp.1924

If You Hate Standardized Testing, Blame the Higher Education System

As the pushback against standardized testing continues to expand, the quest for less testing remains squarely focused on K-12 decision makers. The feeling seems to be that all we need is the fortitude and boldness to buck the scoundrels who are ramming test-based accountability down our throats. At first glance this makes perfect sense. After all, elementary and middle schools are the places where standardized testing seemingly engulfs the entire school calendar.

But the focus on K-12 education reveals a lack of awareness of why standardized testing is so robust. People complain that we’re creating a generation of students whose only skill is taking tests, but being able to do well on a test is arguably the most important skill for a young person to have. To get a good job you need to graduate from a good college, and to get into a good college you need to do well on the SAT or the ACT. So the rampant standardized testing in elementary schools isn’t an arbitrary method of accountability. Schools measure success in test scores because that’s how society measures success. Our elementary schools aren’t driving the testing movement; they’re responding to a world that demands it.

Imagine you offered every parent a choice when their child was eight years old. The child could either be guaranteed to score in the 99th percentile on the SAT, or graduate from a good high school that claims to focus on important social, emotional, and problem-solving skills, with no guarantee about the child’s SAT performance. Many parents would take the first offer. And it’s not clear that’s the wrong decision when it comes to maximizing their child’s future. Similarly, you can’t blame a bureaucrat or an elected official for thinking that the easiest way to help an at-risk child is to make sure they can do well on the SAT.

When viewed as a key element of our country’s system of social mobility, standardized testing becomes more difficult to do away with. Parents ultimately want a way to be sure their kids are on track to get a good job, and as long as good jobs require good college degrees, and good college degrees require good performance on a standardized test, K-12 standardized tests are going to seem like an acceptable solution. None of this changes the fact that too much testing is bad, but having no accountability, or a system that’s purely subjective, will never pass muster. That means there has to be an accountability alternative to testing, and thus far, conceptualizing one has proven difficult.

Fortunately, there is another option. We can make standardized testing unimportant in K-12 education by making it unimportant for higher education. But that means making significant changes to the system we’ve constructed for allowing social mobility. We need radical reconstructions of the way universities do admissions, credit allocation, and credentialing. Of course we need to start small, and that’s why it’s good news that colleges are beginning to drop SAT requirements.

According to the National Center for Fair and Open Testing, this year nearly 850 schools are opting to accept freshmen who haven’t taken the SAT or the ACT. Schools like DePaul University and Smith College don’t require the tests at all, while institutions like Bryn Mawr and NYU waive them in favor of SAT subject tests or AP or IB exam results. According to FairTest, over “40 liberal arts colleges ranked among the top 100 do not require all or many applicants to submit ACT/SAT scores before admissions decisions are made.” Other schools waive the tests for students applying to specific programs or whose grades put them over a particular GPA threshold.

Imagine that top cooking schools based half of their applicant evaluation on the applicant’s ability to cook a spectacular piece of grilled salmon. A K-12 cooking education system would probably spend a disproportionate amount of time on grilling salmon relative to other types of seafood dishes. Yet in this scenario people wouldn’t attack the K-12 system, they would focus on the crazy admissions practice of putting an absurd amount of emphasis on grilled salmon.

At the moment our K-12 education system functions in a similar manner. We overlook the role of testing in the college admissions and professional credentialing process because it’s relatively practical and nobody knows a different way of life, but its effect on the K-12 education system is enormous. If the anti-testing crowd wants to make a real difference in discouraging K-12 test-taking, they should focus on moving away from a higher education system that rewards test-taking skills.

Why Do People Declare Things to Be Immoral?

A thought-provoking paper by Michael Bang Petersen proposes that moralization is used as a defense mechanism when somebody has no allies:

Over the course of human evolutionary history, individuals have required protection from other individuals who sought to exploit them. Moralization – broadcasting relevant behaviors as immoral – is proposed as a strategy whereby individuals attempt to engage third parties in the protection against exploitation. Whereas previous accounts of strategic morality have focused on the effect of individual differences in mating strategies, we here argue for the importance of another factor: differences in the availability of alternative sources of protection. Given the potential costs of moralization, it is predicted that it is primarily used among individuals lacking protection in the form of social allies. Consistent with this, a large cross-national set of surveys is used to reveal how individuals without friends moralize more. In contrast, however, support from other social sources such as family or religious individuals increases moralization.

It’s an interesting idea, and a fairly intuitive one. When you don’t have any protection, moral arguments are appealing because the subjectivity of morality makes them irrefutable. It’s also easy to find current situations where the paper might apply. For example, perhaps the moral appeals of what’s left of the opposition to gay marriage are driven in part by feelings of dwindling support.

The Struggle For a Rational Sandy Hook Response

Cedar Riener has a great post on how the nature of human cognition leaves us ill-equipped to constructively respond to the Sandy Hook murders:

If I were to pick a psychological topic for people in this debate to understand more fully, it would be the concept that in calculating the likelihood of events (future or past), or how things are caused, we take our thoughts, our memories, and our imagination as data. We might recognize that our views are subjective and we may try to account for our own values and experience, but what we do not account for is that we are not merely subjective, but we are all biased. We are biased because our imaginations are biased. It is simply easier to think of some things than others.

Depending on how it is applied, this tendency is sometimes called the availability heuristic, sometimes the simulation heuristic. When judging what causes something else (was it the guns or the deranged mind?), we engage in counterfactual thinking (what could have stopped this?) and we judge things that are more mutable (things we can imagine changing) as more important to causing an event than those we can’t imagine changing.

This feels like logic, but it is not. A thought experiment is not an experiment.

In other words, it’s easy to imagine the killer being subdued by a teacher or unable to purchase a gun. But even if different circumstances had made those things more likely to occur, it doesn’t mean they were likely to occur. Riener continues:

In a case as horrible as this, how could we not nudge our memories and our imaginations to make it not happen? Isn’t it merely human to imagine this evil man-child, this villain, this terrorist, blown away at the door by a vigilant police officer or quick thinking super hero-teacher? Isn’t it equally human to imagine this monster, angry and frustrated, only being able to access a small handgun and a small clip, then walking into this school and *only* killing half the class?

These are human responses, and when I confront tragedies large and small I do the same thing. But when we are designing laws and policies, I think we can do better than what some columnist thought about on a cab ride home. We have to force ourselves outside of our own imagination, both by expanding our imagination, but also by consulting the science of how people actually behave and evidence of how people have actually behaved in the past.

This is an issue when it comes to solving all problems, whether they involve the tax code, climate change, or murder. Simply put, it’s easier to imagine solutions that are easy to imagine. And when a problem burrows deep inside your emotional core the way Sandy Hook does, it’s even more tempting to invest yourself in whatever solution seems feasible. Unfortunately, the solutions that are easy to imagine aren’t necessarily the best or most promising. That’s what makes it so important to “force ourselves outside of our own imagination,” even if it means confronting a world where there’s temporarily no solution to a terrifying problem.

Is Being Not-Greedy More Important Than Being Generous?

Studies show that by the time a child is six years old he will have learned an average of 17 aphorisms about the need to be generous. OK, I made up that tidbit, but it’s clear that the remnants of the country’s Puritan foundation have stitched a reverence for charity into the American fabric. There’s even an unassailable tax break for charitable contributions. Clearly this is a good thing, but how important is it? Does a person’s generosity influence interactions they’re not involved in? Do people “pay it forward” by responding to generosity with generosity toward a third party? And what about people who aren’t generous? Do people respond to greed with more greed?

A new study (pdf) led by UNC’s Kurt Gray attempted to find an answer. The study was composed of a series of experiments that used a variation of the “dictator game,” in which one player receives some amount of a particular resource (e.g., money) and then decides how much to keep and how much to give to a second player. Gray’s experiments used either money or labor as the resource. For example, in one experiment participants were given two fun tasks and two boring tasks, and then had to choose which two tasks to pass on to another person. Similar experiments were conducted using money.

The twist in Gray’s experiments is that before allocating their resources, participants were told what another player had chosen to give them. Participants in the equality condition were given one fun task and one boring task (or $3 out of a possible $6); participants in the greed condition were given two boring tasks (or $0); participants in the generosity condition were given two fun tasks (or $6).

The experiments revealed three interesting findings. First, exposure to greed had a stronger influence on future allocations than exposure to generosity; participants were far more likely to pay forward greed than generosity. Second, there was no significant difference between the generosity shown by people who had been treated with generosity and people who had been treated with equality. Regardless of how much a participant had benefited from somebody else’s generosity, an equal split seemed sufficient to pay it forward. Finally, Gray and his team found evidence that the propensity to respond to greed with more greed is driven by negative affect. If the initial greed doesn’t lead to an inferior emotional state, the person might not respond with more greed.
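To make the asymmetry concrete, here is a toy chain simulation in Python. It is my own illustration, not Gray’s design, and the propagation probability is invented to mirror the paper’s qualitative result: greed tends to get passed down the chain, while generosity quickly decays to an equal split.

```python
import random

random.seed(1)
POT = 6  # dollars available to split, as in the $6 version of the study

def pay_forward(received):
    """What the current player gives the next one, given what they got.
    The 0.7 probability is an invented assumption, not a paper estimate."""
    if received < POT / 2:
        # Treated greedily: negative affect makes greed likely to propagate.
        return received if random.random() < 0.7 else POT // 2
    # Treated equally or generously: people revert to an equal split either
    # way (the paper found no significant difference between the two).
    return POT // 2

def run_chain(first_gift, length=10):
    gifts = [first_gift]
    for _ in range(length - 1):
        gifts.append(pay_forward(gifts[-1]))
    return gifts

print("chain seeded with greed:     ", run_chain(0))
print("chain seeded with generosity:", run_chain(6))
# Typical run: greed gets passed along for a link or two before the chain
# settles at the equal split; generosity collapses to $3 immediately.
```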

What does this mean for society? The researchers speculate that it might be wise to switch our focus from generosity to equality:

From the perspective of the person who is paying-it-forward, the asymmetry between greed and generosity may stem from a misconception of the threshold required for an act to truly reflect paying it forward. The person who awakes to gratefully find his long driveway cleared of snow may feel that he has “paid forward” the generous act by brushing off a bit of snow from a nearby car, but this discount rate is sufficiently high that the perpetuation of good will likely ends there. On the other hand, the person who awakes to find his driveway completely blocked from an errant snowplow may pile all that extra snow onto another car, thereby creating a significantly longer chain of ill will. This asymmetry suggests that to create chains of positive behavior, people should focus less on performing random acts of generosity, and more on treating others equally—while refraining from random acts of greed.

I think taking the focus off generosity would be a mistake. Even if everything the study suggests is 100% true, we should still stress generosity in order to establish strong generosity norms. In other words, we need to shift the generosity “Overton window.” If the norm is equality, many people will choose to be less generous than the norm and act greedy. But if the norm is generosity, many of those who choose to be less generous than norms dictate will still treat others equally. Ideally, stressing equality would reduce greed because it’s an easier lift, but in reality it’s hard to envision that happening because social norms would move in the wrong direction. Thus the big takeaway from the study is not the importance of equality, but the destructiveness of greed as it incites a chain of paid-forward, tit-for-tat greed.

So to answer the question at the top of this post: yes, technically the marginal increase in “goodness” when you stop being greedy (and start treating others equally) is larger than the increase when you start acting generously. Treating people as equals is extremely important. But precisely because it’s so important, we need to pretend that being generous is all that matters — that’s the best way to ensure as few people as possible end up being greedy. Thus for all practical purposes it would be wise to keep considering being generous more important than being not-greedy.
—————————————————————————————————————————————————————
Gray, K., Ward, A., & Norton, M. (2012). Paying It Forward: Generalized Reciprocity and the Limits of Generosity. Journal of Experimental Psychology: General. DOI: 10.1037/a0031047
