The Downside of College Rankings
September 20, 2011
The new U.S. News & World Report college rankings were released last week, and Andrew Hacker and Claudia Dreyfus have a nice rundown of why they should be taken with a grain of salt. But the real problem with the rankings is not that they are inaccurate or based on accounting gimmicks; it's that they alter behaviors at the core of a school's mission.
In 2007, sociologists Wendy Espeland and Michael Sauder examined how law schools react to the U.S. News law school rankings (the rankings matter far more for law schools than for undergraduate universities). They found that, in contrast to the relatively harmless accounting tricks outlined by Hacker and Dreyfus, law schools actually do things that lower the quality of the school or run against the school's stated goal of providing an equitable education.
One dean of admissions said: “We are now torn between decisions that will bring a better class and a class that does better according to USN. . . . There was this student with a 162 LSAT score [a high score for this school] and a low GPA. I thought this guy was probably immature, a slacker . . . but I couldn’t pass on that score, so I admitted him.”
Another gaming strategy employed by some law schools involves pressuring faculty to take spring leaves. Following ABA guidelines, USN counts full-time faculty teaching in the fall term for its student-faculty ratio; a faculty member on leave in the fall harms a school’s ratio.
Law schools are also not above employing the tricks of undergraduate institutions.
At several schools, mathematically sophisticated faculty have gone so far as to “reverse engineer” the rankings formulas to learn how their school might improve its scores. One dean said: “We’ve done a lot of careful studying of the USN methodology to figure out what counts the most and what is perhaps the most manipulable, because those are not necessarily the same things.”
Respondents described a wide range of gaming strategies deployed by law schools. An early, blatant example entailed schools reporting different numbers to USN than to their accrediting body, the ABA.
Espeland and Sauder also highlight the problem of commensuration — their term for what happens when inherently qualitative characteristics are reduced to quantitative metrics. Not only does this render a vast amount of useful information irrelevant, it imposes the same form on everything that’s left. In the end, that doesn’t really help anybody make a better decision.
At this point, college rankings are a necessary evil because they’re all a lot of people have to go on. That’s OK. The problem is that we tend to see the ranking system as a finished product when we should be working to increase its accuracy and improve the incentives it creates.
Espeland, W. N., & Sauder, M. (2007). Rankings and reactivity: How public measures recreate social worlds. American Journal of Sociology, 113(1), 1–40.