What’s the Ideal Student Reaction to a New Exam?
May 13, 2013
Imagine you take a test consisting of a reading passage and two multiple choice questions. After a few seconds, you’re 99.9% sure about the correct answer to the first question. Two of the answers are absurd, a third doesn’t quite seem right, and the fourth clearly aligns with what the question is about. But when it comes to the second question, you’re less sure. Only one answer is absurd, and while you’re confident a second choice is wrong, both of the remaining choices seem to answer the question. After a few minutes of thought you decide one of them is superior, but you’re only about 75% sure and you’re left feeling slightly discouraged.
Here’s my question: Which test question is more likely to elicit a “higher order” thinking skill?
The context of all this is the widespread negative reaction to New York City’s new Common Core-aligned tests. Here’s the lede in the New York Times:
Students at the Hostos-Lincoln Academy in the Bronx blamed the English exams for making them anxious and sick. Teachers at Public School 152 in Manhattan said they had never seen so many blank stares. Parents at the Earth School in the East Village were so displeased that they organized a boycott.
As New York this week became one of the first states to unveil a set of exams grounded in new curricular standards, education leaders are finding that rallying the public behind tougher tests may be more difficult than they expected.
Complaints were plentiful: the tests were too long; students were demoralized to the point of tears; teachers were not adequately prepared. Some parents, long skeptical of the emphasis on standardized testing, forbade their children from participating.
Confusion, discouragement, and a stack of incomplete exams can surely be signs of terrible test questions. But you would expect those things with almost any new test, and therefore they don’t rule out the possibility that the questions are actually new and improved. For example, even if the new tests were objectively better, and even if substantial efforts were made to prepare students, you would still expect some increase in student discomfort simply because the new tests aren’t what students and teachers are used to. You would also expect an increase in student discomfort because the new NYC tests are designed to do a better job identifying hard-to-measure thinking skills, and these types of questions ought to involve correct answers that are less obvious. The tests don’t even have to be so different to drastically alter the student experience. Imagine that on 20% of the questions students have 20% more doubt about their answers. Over the course of the whole test, that ought to leave students distraught about their performance and short on time.
Up to a point, that’s not the worst thing in the world. Imagine President Obama in the war room trying to decide what to do about Libya. He listens to all his advisors, reads all the intelligence briefings, thinks through the potential consequences, and eventually chooses a plan of action. Is he 100% sure he chose the right path? Probably not. Does he feel great about his decision? I doubt it. But that’s the nature of solving difficult problems. Doubt and discomfort creep in when you push your cognitive skills to the limit. Obviously taking the 4th grade ELA exam isn’t quite the same, but isn’t this ultimately the type of critical thinking we’re aiming to prepare students to do? If we want to build a generation of students who aren’t merely “bubble fillers” and who actually learn from their tests, I’m afraid that what people saw in New York is what the initial steps may sometimes look like. It’s worth remembering that old tests are old tests for a reason: they didn’t do a good job evaluating the skills that were deemed important.
Standardized tests are clearly a complex issue, and we absolutely need to do a better job preparing students and teachers for all the trials and tribulations that new tests will bring. But we should also be wary of making visible student reactions the driving force in evaluating a new exam. None of this is to say there weren’t terrible questions on the New York City exams (I haven’t seen the tests), and there’s good reason to believe the tests were too long. As with any exam, there are surely a slew of experts ready to point out exactly where the test design fell short. But I think it’s a mistake to make judgments about the efficacy of a new test strictly based on the number of blank stares it elicits. If we’re serious about attempting to measure real critical thinking skills, the tests that successfully do it are going to initially make students uncomfortable.
If you’re adamant that standardized testing is terrible, then the reactions of various teachers and students probably gave you all the information you needed. But for those of us who believe in the potential of an accountability system that makes use of student test scores, it’s important to remember that we’re still early in the process of assessment development. Perhaps testing won’t prove to be the answer, but we’ve barely scratched the surface of what research and technology can do for evaluating and identifying individual skills. With “next-generation” tests beginning to arrive, exams are going to repeatedly go through major changes in a relatively short amount of time, and it’s important to remain patient and not overreact to student reactions.