Being a 13-Year-Old Is Hard

A few weeks ago Slate published an article titled, “Why Does My Kid Freak Out?” The piece explains how the wild tantrums toddlers throw are actually completely rational reactions given what they’re going through.

2-year-olds are also going through a hellish personal crisis: They have just learned how to walk and use tools, so they really want to explore the world; at the same time, they are terrified of what that world contains and constantly fearful that their parents, whom they love and trust to a terrifying degree, will suddenly abandon them. Oh, and those same parents? They’re suddenly barking “no” all the time, seemingly just for fun. What the hell?

It’s no coincidence that kids start having tantrums around the time that parents start enforcing rules. When you say no, sweetie, you can’t have that butcher knife, your 20-month-old has no idea that you are depriving her of this awesomely shiny contraption for her own safety. “Since it’s the parent, whom they rely on for everything, who is taking it away, it’s perceived as a withdrawal of love, essentially,” says Alicia Lieberman, a professor of Infant Mental Health at the University of California-San Francisco.

On some level, I think that middle school students go through a similar crisis. When kids enter adolescence there is a sudden and rapid rise in the importance of the social hierarchy, yet positions in that hierarchy continue to be based on physical appearance, material possessions, and a host of other things that are outside the control of most teenagers. Even worse, the personal characteristics parents have been stressing — qualities like kindness and intelligence that are prized in the adult world — often seem to not matter at all.

This is the context for the research Collin Hitt cites while sensibly suggesting that Chicago close its middle schools before shuttering elementary schools.

Middle schools became prominent in the 1960s and 70s. There was little academic justification for creating them; most of the middle school pedagogy found today was developed after middle schools were built. There wasn’t much anxiety about comingling adolescents and younger children; that was a post hoc justification. Districts simply built middle schools to house sixth through eighth graders from elementary schools that, after the Baby Boom, were actually overfull. Recent research shows that this was a huge unforced error. Districts should have just built more K-8 schools. For students, the transition from elementary to middle schools has negative, long-term impacts.

A pioneering study of middle schools was published in 2010 by Columbia University researchers Jonah Rockoff and Benjamin Lockwood. They compared New York City students at middle schools and K-8 elementary schools. They found that middle schools had a large negative impact on students' test scores. Almost all of the learning losses were suffered by disadvantaged students with lower incoming test scores.

Harvard researchers Marty West and Guido Schwerdt have used the same methods to examine Florida middle schools. They found practically identical, negative effects in urban areas like Miami…

Changing schools can be an anxiety-inducing process, particularly in places where students have a range of choices that necessitate applications and real decisions (e.g. New York City). Given that middle schoolers have enough on their mind, replacing middle schools with K-8 schools seems like a good decision. This ought to be particularly true in Chicago, where elementary schools are now closing while middle schools remain open.

The district should stop feeding students into half-empty middle schools. Instead, it should allow kids to stay at their current elementary schools by simply adding an older grade to the school. As elementary schools add one grade per year, they'd eventually become K-8 schools — they certainly have the space to do so. Middle schools would shrink in size and staffing levels, since they'd have no more incoming classes. Eventually, the middle schools would have no students left, since all of their present students would graduate to high school. In a couple of short years, most underused middle schools could be closed.
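If it helps to see the arithmetic, here's a minimal sketch of how that phase-in would play out. The cohort size is made up; this is an illustration of the mechanics, not a model of any actual Chicago school.

```python
# Toy phase-in model for converting an elementary school to K-8 while winding
# down the middle school it used to feed. The cohort size is a made-up number,
# purely for illustration.

COHORT = 60  # hypothetical students per grade


def phase_in():
    elem_top_grade = 5          # the elementary school currently serves K-5
    middle_grades = [6, 7, 8]   # grades still housed at the middle school
    year = 0
    while middle_grades:
        print(f"Year {year}: elementary = K-{elem_top_grade}, "
              f"middle school = grades {middle_grades} "
              f"({COHORT * len(middle_grades)} students)")
        # Next year the elementary keeps its rising class by adding a grade...
        if elem_top_grade < 8:
            elem_top_grade += 1
        # ...while every middle school class moves up a year, 8th graders leave
        # for high school, and no new 6th grade class arrives.
        middle_grades = [g + 1 for g in middle_grades if g + 1 <= 8]
        year += 1
    print(f"Year {year}: elementary = K-{elem_top_grade}, "
          f"middle school is empty and can be closed")


phase_in()
```

The point of the toy model is simply that the wind-down is fast: once the elementary schools start keeping their rising classes, the middle school empties in about three years.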

Under this scheme, no student would be forced to leave her current school. The district could close a large number of underused buildings. And student performance would improve.

When it comes to middle schools, research and common sense point to the same conclusion. Middle schools have a negative impact on students, an impact that grows stronger when they’re kept open at the expense of elementary or K-8 schools.

Are Savings More Important Than Income For Poor College Students?

Noah Smith has a good piece in the Atlantic about ways low-income families can save more money. Because the article focuses on solutions, Smith gives only passing mention to the benefits of increased savings, but if you have any doubts, a new paper by Vernon Loke illustrates the crucial role that savings can play in a low-income student's pursuit of higher education.

While most research on poverty and higher education tends to focus on family income, Loke focused on the effects of wealth, or asset accumulation. Using data from the government's NLSY79 longitudinal survey of youth born in 1986, Loke examined families belonging to four classes of asset-accumulation trajectories. Families in the "High and Stable" class began with assets significantly higher than zero but experienced non-significant growth. Families in the "High and Accumulating" class began with significant assets and saw a significant amount of growth over time. The "Low and Stable" class began with assets not significantly different from zero and saw stable but non-significant growth. Finally, the "Low and Accumulating" class began with assets not significantly different from zero but saw stable and significant growth over time.
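For the code-minded, here's a rough sketch of that four-way taxonomy. The booleans stand in for Loke's statistical significance tests, so treat it as a reading aid rather than a description of the paper's actual methods.

```python
# A schematic of the four asset-trajectory classes described above. The
# booleans stand in for whether initial assets and asset growth differ
# significantly from zero; this illustrates the taxonomy, not the paper's
# estimation procedure.

def asset_class(initial_assets_significant: bool, growth_significant: bool) -> str:
    if initial_assets_significant and growth_significant:
        return "High and Accumulating"
    if initial_assets_significant:
        return "High and Stable"
    if growth_significant:
        return "Low and Accumulating"
    return "Low and Stable"


# Example: a family that starts near zero but saves steadily over childhood.
print(asset_class(initial_assets_significant=False, growth_significant=True))
# -> "Low and Accumulating"
```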

Loke found that when it comes to college attendance and college graduation, a family's rate of asset accumulation appeared to be more important than their initial level of wealth.

The data indicate that having higher asset holdings during childhood by itself is insufficient for the direct effects of assets to be evidenced with regard to college graduation. Rather, it is in the experience of both having higher asset holdings, and experiencing a significant accumulation of assets over time, that the direct effects of assets are seen. In addition, youths from households that initially had asset holdings that were not significantly higher than zero around the time of their birth, but which experienced significant asset accumulation over the course of their childhoods, had statistically similar outcomes compared to youths from higher net worth households.

Loke also found that for low-income families the effects of asset accumulation were mediated by parental expectations, although it’s hard to know the direction of causality. It could be that parents who expect their kids to go to college are more likely to try and save money, but it’s also possible that parents who save money begin to believe it’s increasingly possible for their children to attend college.

In general, the results mesh well with psychological research on framing and anchoring effects. Even when a family begins with higher wealth, if they are anchored to that baseline and see no asset accumulation, the additional cost of a college education can seem enormous. Meanwhile, for a family that starts out with zero wealth, significant asset accumulation can make the cost of college seem less daunting because they are much better off relative to where they began.

To get back to Smith's article, Loke's study suggests that encouraging saving is more than a way to help low-income families gain access to the earning potential of assets such as stocks. Saving more can also significantly increase the chances that their children attend and graduate from college, and that can increase the family's earning potential.
——————————————————————————————————————————————————————-
Loke, V. (2013). Parental asset accumulation trajectories and children's college outcomes. Economics of Education Review, 33, 124-133. DOI: 10.1016/j.econedurev.2012.12.002

How Should You Craft A Political Message?

It depends on your audience:

Three studies examined the production of political messages and their persuasive impact on recipients as a function of speaker–audience similarity. The first two studies found support for the hypothesis that political leaders (Study 1) and party activists (Study 2) formulate more abstract messages when the audience is politically similar to them than when the audience is dissimilar or heterogeneous. The third study examined the persuasive impact of message abstractness versus concreteness. We predicted and found that abstract messages are more effective in convincing an audience whose political positions are similar to the speaker’s and concrete messages are more effective in convincing an audience whose political positions differ from the speaker’s or are heterogeneous. Implications of these findings for the relation between language and social cognition are discussed.

The Closet Is Real, and It’s Bad

Not that we need science to convince people that concealing key aspects of your identity can be unhealthy, but some important new research led by Harvard’s Alexandra Sedlovskaya helps clarify the psychological consequences of constantly concealing part of who you are.

In the study's initial set of experiments, participants who concealed stigmatized identities (e.g. gay men) were faster than participants without stigmatized identities at categorizing attributes as part of either their "self-at-work" or "self-at-home." The faster times suggest that concealing an identity makes the distinction between a person's public and private selves more accessible. In two follow-up experiments the researchers found that this cognitive distinction between the selves not only led to psychological distress, it did a better job of explaining participants' distress than the broader act of concealment.

The present studies are the first to use social psychological theory and methods to test popular claims that the experience of concealing a stigmatized social identity leads to a divided self. Studies 1a and 2 established that public–private schematization occurred among people who have stigmatized concealable social identities relative to people who do not…Using two different measures of distress—perceived social stress (Study 4) and depressive symptoms (Study 5)—among samples of employed gay men, we showed in Studies 4 and 5 that public–private schematization accounted for the association between concealment and heightened distress. Cumulatively, these studies support the hypothesis that for people with stigmatized social identities, routine concealment of these identities in public contexts is associated with a greater influence of public and private social contexts on the architecture of the self-concept, with costs for psychological well-being. 

It's important to note that this doesn't merely apply to sexual orientation. Every day people around the world are concealing their immigration status, religious beliefs, and political views, and these potentially necessary concealments can take a real toll on their psychological health.

Another issue to consider is whether the development of new personas via Facebook, Twitter, or other online communities will lead to an overall increase in the stress that results from divided selves. Keeping your active membership in an online gaming community hidden from your classmates isn't quite the same as concealing a stereotypical stigmatized identity, but it certainly seems like at the margin it could lead to a detrimental strengthening of the distinction between public and private selves (or between a public self and a second public self).
—————————————————————————————————————————————————
Sedlovskaya, A., Purdie-Vaughns, V., Eibach, R., LaFrance, M., Romero-Canyas, R., & Camp, N. (2013). Internalizing the Closet: Concealment Heightens the Cognitive Distinction Between Public and Private Selves. Journal of Personality and Social Psychology. DOI: 10.1037/a0031179

Are Imaginary Social Norms Increasing School Violence?

(cross-posted from The Inertia Trap)

Part of the price we pay for living in a civilized society is that our daily decisions are subject to the influence of social norms. These beliefs about social acceptability not only keep middle-aged men from dressing like Justin Bieber, they can influence behaviors that affect a person's health, academic performance, or likelihood of voting.

Where things get tricky is that the term “social norm” can refer to two different norms. The first norm is what you would get if you averaged the individual attitudes of every person in the group. For now let’s call this the “real” norm. The second norm is what people perceive the real norm to be. Let’s call this the “perceived” norm. While the real norm is based on people’s actual beliefs, the perceived norm is based on their beliefs about everyone else’s beliefs.
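A toy calculation makes the distinction concrete (all of the numbers here are invented): the real norm averages what group members actually believe, while the perceived norm averages what each member thinks everyone else believes.

```python
# Toy illustration of the real norm vs. the perceived norm, using invented
# attitude scores on a 1-5 scale (higher = more approving of some behavior).

# What each member of the group actually believes.
actual_attitudes = [2, 1, 3, 2, 2]

# What each member *thinks* the rest of the group believes.
beliefs_about_others = [4, 3, 4, 5, 4]

real_norm = sum(actual_attitudes) / len(actual_attitudes)                # 2.0
perceived_norm = sum(beliefs_about_others) / len(beliefs_about_others)   # 4.0

print(f"real norm: {real_norm}, perceived norm: {perceived_norm}")
# Everyone privately disapproves, yet everyone assumes everyone else approves:
# the kind of gap the drinking and school-violence findings below describe.
```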

Outside of issues that are subject to a lot of public polling, the real norm generally remains unknown. The result is that the real norm and the perceived norm don't always align, and in cases where the real norm would have exerted a more positive influence on society, these inaccurate perceptions can have negative consequences. For example, college students tend to believe other students drink and condone drinking more than they actually do. These beliefs ultimately lead to more drinking. Researchers have found men make a similar error when it comes to norms about sexual consent. That is, they tend to underestimate the importance of consent in the minds of other men.

A new study suggests that violence in schools is yet another problem where inaccurate perceptions of social norms are having negative consequences. The research examined two cohorts of Chicago 6th graders (over 1,600 kids) whose schools took part in the CDC Multisite Violence Prevention Project. Students were surveyed on their attitudes about violence and their beliefs about the attitudes of others. Results showed that students consistently overestimated the degree to which others condoned aggression, while at the same time underestimating the degree to which others supported nonviolent problem-solving strategies. Students overestimated the social acceptability of violence regardless of gender, ethnicity, or aggression level, and the discrepancy remained through 8th grade. It's difficult to know the exact effects of these inaccurate beliefs, but it seems obvious that at the margin they lead to an uptick in violence.

Though the tangible costs of this violence make the findings somewhat depressing, the silver lining is that the study uncovers some potentially lower hanging fruit in the effort to decrease school violence. In general, there are two norm-based ways to lower violence. One strategy is to change real norms by altering students’ personal beliefs about violence. The problem with this strategy is that changing a student’s moral beliefs can be incredibly difficult. A 13-year-old who learned from his older brother that it’s acceptable to punch a guy if he flirts with your girlfriend is probably not going to change his mind because a teacher says otherwise.

The other norm-based way to decrease violence is to change perceived norms. Often this is difficult because the beliefs you want to impart don't reflect reality; imagine, for instance, attempting to convince a group of 5th graders that their classmates don't like soda and junk food. However, when you're trying to convince kids to believe something real — in this case, that other kids are less accepting of violence — it ought to be easier.

As an extreme example, imagine one group of students who each believe violence is terrible and that everybody else believes it's great, and a second group of students who each believe violence is great and that everybody else also thinks it's great. If you then try and convince both groups that the existing norm is one of nonviolence, you're probably going to have more success with the first group. Obviously the reality of the situation is more nuanced, but in general it ought to be easier to convince people of a norm when that norm more closely reflects how they actually feel.

Compared to changing core beliefs or persuading students to believe a lie, convincing students to believe in a real norm seems like a good option. None of this is to say that teaching kids what others believe will be easy. In fact, norm-awareness interventions that aim to curb alcohol consumption and energy use have had mixed results. Still, all things considered, the study points to a relatively promising way forward in the fight to curb school violence.
—————————————————————————————————————————————–
Henry, D., Dymnicki, A., Schoeny, M., Meyer, A., Martin, N., et al. (2013). Middle school students overestimate normative support for aggression and underestimate normative support for nonviolent problem-solving strategies. Journal of Applied Social Psychology, 43(2), 433-445. DOI: 10.1111/j.1559-1816.2013.01027.x

If Only We Could Harness the Ingenuity Used to Justify Bad Behavior

One of the mind's niftier tricks is finding loopholes in the rules it has created to keep us from engaging in bad behavior. The most interesting of these loopholes may be moral- or self-licensing — the process by which doing something good makes it acceptable for you to do something bad. Recently psychologists have extended the power of licensing by showing that it doesn't just have to be you who does something good; you can also earn the right to be bad when others in your loosely defined group do something good.

A team of researchers led by Northwestern's Daniel Effron sought to further extend the power of licensing by proposing that even if you haven't done something good, you can license yourself to misbehave by simply thinking about how you avoided doing something bad. Furthermore, they proposed that in cases where the behaviors people avoided aren't all that bad, people will exaggerate the terribleness of whatever they abstained from doing, and that exaggeration could ultimately lead to a more powerful license to be bad.

We posit that reflecting on counterfactual sins (i.e., less-virtuous alternatives to one’s past behavior) licenses people to act less virtuously. By imagining the sinful road not taken, individuals can reassure themselves of their virtue without having done anything actively virtuous — and can thus license future indulgence.

Unfortunately for individuals wishing to indulge, it is sometimes difficult to imagine how one’s behavior plausibly could have been worse. The dieter may wish to use uneaten cookies to justify eating cake, but perhaps no cookies were previously available. In such situations, we propose, the motivation to indulge can lead people to distort their evaluations of their foregone behaviors. The dieter may convince herself that it would have been unhealthy to eat some low-fat crackers that she previously declined. We propose that when people are tempted to indulge, they will exaggerate the sinfulness of foregone actions, thereby creating the illusion that they previously refrained from bad behavior.

Two experiments provided evidence for their hypotheses. In the first experiment, participants who imagined less healthy alternatives to their recent behavior engaged in fewer weight loss behaviors over the following week and reported weaker intentions to improve their behavior. In the second experiment, participants who were tempted with a cookie rated a snack they had previously declined to eat as more unhealthy than participants who had not been tempted. Together the two experiments suggest that yes, people do “exaggerate the sinfulness of foregone actions,” and that thinking about having avoided those sins can lead to less positive actions in the future.

A common reaction to these types of social psychology studies is to think, “yeah, well, the process they’re concerned with will probably never be pertinent to my life.” But when it comes to research on licensing the opposite tends to be true. Everyday decisions about what to eat, whether or not to exercise, or what friends to spend time with are all influenced by evaluations of previous behavior. More often than not, these evaluations are subject to questionable interpretations or a focus on events that might not seem relevant to an objective observer. So if you ever catch yourself deliberating over a decision by considering everything you did over the past month, it might be best to just stop right there and decide what to do strictly based on the immediate merits of the actions you’re considering.
——————————————————————————————————————————————–
Effron, D., Monin, B., & Miller, D. (2013). The unhealthy road not taken: Licensing indulgence by exaggerating counterfactual sins. Journal of Experimental Social Psychology, 49(3), 573-578. DOI: 10.1016/j.jesp.2012.08.012

More College Students, Less College

Earlier in the year the University of Miami's medical school quietly announced an intriguing redesign of its physician training program. There aren't a ton of available details, but the gist seems to be that the school will replace some classroom lectures with online content while at the same time increasing the presence of "small-group, case-based learning sessions." Increasing the use of both online content and small-group activities in situations where they make efficient use of resources is a common-sense step, but what's momentous about the changes is that they could allow people to earn their medical degrees in less time.

“Eventually, our School will move away from a traditional calendar-based approach to medical education to create a true competency-based program,” said Gardner. “That would allow many students to complete the first two years of our program more rapidly, shortening the time necessary to obtain a degree, while other students would be able to master the core competencies at a slower pace.”

If medical schools manage to shorten the time it takes to complete a degree, over a given time period they'll be able to enroll more students. The result would be a greater supply of doctors, lower healthcare costs, and less student loan debt. The benefits of spending less time in college raise the question of why we're not doing more to establish and promote 3- or 3-and-a-half-year programs and competency-based degrees at the undergraduate level.

I would actually go even further than that. For the most part, these 3-year programs merely cram the standard 4-year, 120-128 credit degree into a shorter time frame. Competency-based degrees require a less arbitrary scope of learning, but in order to begin replacing standard degrees in any significant way they would most likely have to be based on what the average student learns in four years. Given this strict adherence to the 120-128 credit standard, I think it's worth asking whether that's the optimal amount of education for a student to have.

This is a crucial question because there's a lot of evidence that we're keeping young people in the higher education system for an inefficiently long time. Couldn't three years prepare certain people in certain situations for a computer programming, accounting, or biology career? In the end we'd churn out more of the scientists and engineers society craves, and though the increase in the supply of professionals could lead to small wage decreases in certain high-paying jobs, the lower cost of the services these people (e.g. accountants) provide would act as a real wage increase for everybody else. Shortening degree times is not without precedent. A bunch of law school bigwigs recently admitted that for many law students a third year is unnecessary.

The standard objection is that lengthy B.A. requirements create the well-rounded adults who form the bedrock of society. But is it really that important for students who are ready to join the labor force to spend a semester's worth of time and tuition fulfilling P.E. and foreign language requirements? The goal of an all-encompassing liberal arts education is well-intentioned and admirable. But shouldn't we be asking whether a second-semester senior occasionally attending his Ceramics class is an efficient way of achieving that goal? Couldn't we get the same level of "horizon broadening" through less costly commitments involving extracurricular activities, volunteer work, event attendance, or studying abroad?

More importantly, college is expensive! And that’s a huge burden on low-income students. In fact, it’s such a big burden that the prospect of sustaining oneself for four years is enough to turn people off the idea of going to college. A norm that’s more accepting of fewer requirements would not only improve the odds that a low-income student earns their degree, it would improve the odds they attempt to earn a degree in the first place. Some may object by saying that going to college for only three years could distinguish low-income students in a negative way, but given that the best and brightest would likely jump at the chance to spend less time in college and more time making money, I don’t think that’s a major concern.

Why is the current standard four years and 120-ish credits? That's a good question. Why not ten years? Or five years? Or three years? And even if four years was the perfect length at the time when it became the norm, given how different the world is today, might it not be possible that a 4-year norm is no longer ideal?

What's clearer is why the 4-year norm has remained. When nearly every single person in a position of power earned a 120-ish credit bachelor's degree, there's not going to be a lot of thought given to different ways of doing things. There's also the financial model of universities, which for the most part depends on years of education, not the number of students who graduate. Beyond the fact that more graduates mean more potential alumni donors, it doesn't really matter to a school whether three students each graduate in four years or four students each graduate in three years. The revenue is the same. But it matters a lot to those students, and on a grand scale it matters to the American economy.

Now that I've provoked you all into yelling "that's crazy talk," I'm willing to admit that it's entirely possible the current system is ideal. Perhaps 120-ish credits over four years spits perfectly rounded graduates out into the real world right when the marginal gains from additional classes would begin a steep decline. My point is that we don't know, and we need to be asking the question. The degree to which we've simply assumed the efficacy of a major component of our society is breathtaking. At some point I'll get around to finishing a longer, more thoughtful piece about the need to question the entrenched characteristics of the B.A., but I can't emphasize enough how important it is not to view the 120-ish credit B.A. as an eternal truth. It's hard to comprehend the range and scale of the loss inflicted on society by forcing students to spend too much time in college, and we should be doing all we can to ensure it doesn't happen.