Our Testing Problems Will Disappear
May 11, 2012
The constant testing of students is terrible. And the constant testing of students is completely necessary. It’s a catch-22 that has plagued education for some time. We need to know where kids stand, and testing them is the best way to find out. Fortunately, as Bill Tucker points out in an excellent story in the Washington Monthly, we can do nothing and the problem will go away, because soon enough computers will be able to teach and evaluate at the same time.
Schools across the nation still essentially close to conduct inventory—only we don’t call it that. We call it “testing.” Every year at a given time, regular instruction stops. Teachers enter something called “test prep” mode; it lasts for weeks leading up to the big assessment. Just as grocery-store workers might try to fudge inventory numbers to conceal shortfalls in cash, schools sometimes try to fudge their testing results, and cheating scandals erupt. Then, in a twist, regular classroom instruction resumes only halfheartedly once the big test is over, because there are no stakes attached to what everyone’s learning. Learning stops, evaluation begins: that’s how it works. But in the not-so-distant future, testing may be as much a thing of the past for educators as the counting of cans is for grocers.
While Refractions looks like a relatively simple game, the real complexity is behind the scenes. The game records hundreds of data points, capturing information each time a player adjusts, redirects, or splits a laser. This data allows Popovic and his colleagues to analyze and visualize students’ paths through the puzzles—seeing, for example, whether a student made a beeline for the answer, meandered, or tried a novel approach. Since the data shows not just whether the student solved the puzzle, but also how, it can be used to detect misconceptions or skill gaps. Good math teachers do this all the time when they require students to “show their work”—that is, to write down not just the answer to a math problem on a test, but also the calculations they used to derive the answer. The difference is that Popovic’s game essentially “shows the work” of hundreds of thousands of players, recording data automatically in a way that allows teachers and scientists to draw robust inferences about where students tend to go astray. This would be virtually impossible with paper tests. And it’s this massive scale that promises not only new insights on student learning but also new tools to help teachers respond.
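To make the idea concrete, here is a minimal sketch of the kind of telemetry and path classification Tucker describes. All names here (`PuzzleSession`, `classify_path`, the action labels) are hypothetical illustrations, not Popovic’s actual system: the point is simply that logging every move lets software label *how* a student solved a puzzle, not just whether they did.

```python
from dataclasses import dataclass, field


@dataclass
class PuzzleSession:
    """Hypothetical record of every move a player makes on one puzzle."""
    puzzle_id: str
    events: list = field(default_factory=list)  # e.g. "adjust", "redirect", "split"
    solved: bool = False

    def log(self, action: str) -> None:
        self.events.append(action)

    def mark_solved(self) -> None:
        self.solved = True


def classify_path(session: PuzzleSession, optimal_moves: int) -> str:
    """Crudely label a solve path: beeline, exploratory, meandering, or stuck."""
    if not session.solved:
        return "stuck"  # candidate signal for a misconception or skill gap
    if len(session.events) <= optimal_moves:
        return "beeline"  # went straight for the answer
    if len(session.events) <= 2 * optimal_moves:
        return "exploratory"
    return "meandering"


# A player solves a 3-move puzzle in exactly 3 moves:
session = PuzzleSession("laser-3")
for move in ["redirect", "split", "redirect"]:
    session.log(move)
session.mark_solved()
print(classify_path(session, optimal_moves=3))  # beeline
```

Aggregated over hundreds of thousands of sessions, even crude labels like these become the “shown work” at scale that the paragraph above describes.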
Popovic’s game is one of dozens of experiments and research projects being conducted in universities and company labs around the country by scientists and educators all thinking in roughly the same vein. Their aim is to transform assessment from dull misery to an enjoyable process of mastery. They call it “stealth assessment.”
I think the larger lesson is that within education reform circles there is a lot of capital going into obsolete or dying areas. A whole slew of anti-testing and testing-improvement organizations are out there, and soon they will be completely unnecessary.
Another good example of this is teacher training and evaluation. Twenty years from now, thanks to advances like those described by Tucker, teaching will be a completely different job. Teachers will circulate among a group of kids learning from a computer and step in to help out when the computer encounters one of the small number of learning difficulties it cannot identify. In terms of classroom management and one-on-one instruction, the job will barely resemble what it is today. Yet we engage in heated debates about teacher evaluation and pour millions of dollars into training teachers for a role that will effectively no longer exist in 10-20 years. There is a severe lack of thought about the medium- or long-term future of education, and it’s an overlooked reason why schools are so slow to change.