It’s Possible to Be Reasonable About Accountability

Kevin Carey lays out his thinking on teacher/school accountability. The middle ground he stakes out is so reasonable it makes you wonder why we can’t all just get along.

I think the federal government should insist on the creation of useful education information. States that take federal education dollars should be required to have standards and administer annual tests in core subjects, all the way through 12th grade. They should also be required to develop high-quality longitudinal data systems that connect K-12 school information to labor market data and post-secondary information systems. They should be obligated to use those systems to calculate and publish sophisticated growth estimates, among other things, including measures of how people who attend K-12 schools ultimately fare in the labor market and higher education. All of this information should be open and available…

But I don’t believe that Congress or anyone else can design a wholly rules-based accountability system that slices and processes and evaluates that much information into mechanistic judgements of success that are sufficiently accurate and credible to plausibly serve as the foundation for educational improvement. There are just inescapable tensions between the amount of information needed to render a reasonably accurate judgement of school success, the inherent complexity of any rules-based system for processing such information, and the transparency and credibility necessary for people to believe in and constructively react to the judgement of the system–and, as such, for the system to work…

That means we need to rely on human judgement to interpret accountability information and decide when and how to act upon it. There is no doubt that some state and local officials will fail in this responsibility, due to incompetence or bad faith. But here’s the thing–those same officials are perfectly capable of subverting a rules-based system to the same ends. That’s the story of NCLB implementation over the last 10 years. We need better officials, not an official-proof system…

So from a policy standpoint you fight like hell for real information and transparency and you try to feed it into policy and market environments that can benefit from it. You use the information to inform scholarship, influence public opinion, identify best practices, guide parental choice, and hold elected officials to account. But you don’t pretend that rules will suffice when only people will do.
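As an aside, even the simplest version of those “growth estimates” involves real modeling choices. Here’s a minimal sketch of one common approach, the residual-gain model, where a student’s growth is how far their current score lands above or below what their prior score predicts (the function name and data are hypothetical, not anything Carey references):

```python
import numpy as np

def residual_gain(prior, current):
    """Residual-gain growth estimate: regress current scores on prior
    scores, then treat each student's residual as their 'growth'.
    A positive residual means the student grew more than predicted."""
    prior = np.asarray(prior, dtype=float)
    current = np.asarray(current, dtype=float)
    X = np.column_stack([np.ones_like(prior), prior])  # intercept + prior score
    beta, *_ = np.linalg.lstsq(X, current, rcond=None)
    return current - X @ beta

# Hypothetical scale scores for three students across two years
print(residual_gain([480, 510, 530], [500, 505, 560]))
```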

The whole thing is worth a read.

Students Like Reading About Things They Like

In light of research proposing intrinsic reading motivation to be a core condition of reading performance, we particularly expected reading enjoyment and reading for interest to contribute to reading performance and its growth. These hypotheses were, by and large, corroborated.

That’s from a new paper in Learning and Instruction. Few educators would call the finding surprising, but few truly take it into account when designing or implementing literacy programs.

I’m in favor of students reading the important pieces of our literary canon (e.g. To Kill a Mockingbird). But what if they had 20% less canon and 20% more of whatever reading they chose? Wouldn’t that lead to literacy gains?

A Future of Clickers

The World Bank’s Development Impact blog has a good analysis of the recent Science paper on the potential impact of classroom clickers. First, the study:

The sections were taught the same for 11 weeks. Then in the 12th week, one of the sections was taught as normal by an experienced faculty member with high student evaluations, while the other was taught by two of Wieman’s grad students (the other two co-authors of this paper), using the latest in interactive instruction. This included pre-class reading assignments, pre-class reading quizzes, in-class clicker questions (using a device like the audience uses to vote with in e.g. Who wants to be a millionaire?), student-student discussion, small-group active learning tasks, and targeted in-class instructor feedback…

The students looked similar on test scores and other characteristics before this 12th week. Then the authors find (i) attendance was up 20% in the interactive class; (ii) students were more engaged in class (as measured by third-party observers monitoring from the back rows of the lecture theatre), and (iii) on the 12 question test, students in the interactive class got an average of 74 percent of the questions right, while those taught using traditional method scored only 41 percent – a 2.5 standard deviation effect.
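For scale, backing the pooled standard deviation out of those quoted numbers shows just how large a 2.5 standard deviation effect is:

```latex
d = \frac{M_{\text{interactive}} - M_{\text{traditional}}}{SD_{\text{pooled}}}
\;\;\Rightarrow\;\;
SD_{\text{pooled}} \approx \frac{74 - 41}{2.5} \approx 13 \text{ percentage points}
```

In other words, the average gap between the two sections was two and a half times the typical student-level spread in scores.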

And now the critique:

One week is a really short time to look at effects…The graduate students teaching the class certainly had extra incentive to do extra well in this week of teaching…The test was low stakes, counting for at most 3% of final grade…Despite them calling this an “experiment”, which it is in the scientific sense of any intervention being an experiment to try something out, there is no randomization going on anywhere here, and the difference-in-difference is only done implicitly.

The biggest problem for me is that the impact of this line of work ultimately depends on the clickers themselves. The intervention contains a whole slew of good instructional practices (student-student discussion, small-group active learning tasks, targeted in-class instructor feedback), but we already know that these things are good. The reason we don’t do them is that the practices they are supposed to replace (e.g. lecturing) are institutionalized. More research is not the solution to that problem.

Clickers require less radical change and they have the potential to be easy, cheap, efficient, and quickly scalable. If clickers are what’s important, why implement such a multi-faceted intervention? Why not design a more robust intervention (more than one week of treatment) based solely on clickers?

Linkademia

–The city of Charlottesville is giving all its middle and high school students tablet computers.

–Bellwether has a new paper about how to efficiently and effectively steer capital towards education innovation.

–The government is quietly pushing the envelope with free online open-source learning.

Zombie Motivation

Educational video games evoke a wide range of opinions among educational researchers, but one thing most agree on is that the more the “material to be learned” is integrated into the game, the better. A new study in the Journal of the Learning Sciences helps illustrate this point.

Researchers observed students play three different versions of Zombie Division, a video game designed to teach mathematics.  In the “intrinsic” version, mathematics was a crucial component of the gameplay — a dividend displayed on the zombie’s chest told the player about the zombie’s vulnerability to various attacks. In the “extrinsic” version the only math content was in the form of a between-level quiz. In the control version the math content was completely removed.
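To make that “intrinsic” mechanic concrete: in Zombie Division, attacks correspond to divisors, so doing the math and playing the game are the same act. A rough reconstruction of the core rule (my sketch, not the game’s actual code):

```python
def attack_succeeds(zombie_number: int, attack_divisor: int) -> bool:
    """An attack destroys a zombie only if the number on its chest
    divides evenly by the divisor tied to that attack."""
    return zombie_number % attack_divisor == 0

# A "divide-by-2" attack fells a 14-zombie but bounces off a 15-zombie
assert attack_succeeds(14, 2)
assert not attack_succeeds(15, 2)
```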

The results showed that children learned more from the intrinsic version of the game under fixed time limits and spent 7 times longer playing it in free-time situations.

I think researchers tend to over-emphasize the often murky distinction between intrinsic and extrinsic motivation, and this paper does a nice job of avoiding that pitfall by focusing instead on the intrinsic integration between a game and its learning content. The more that killing zombies and calculating quotients are part of the same activity or goal system, the more interest and engagement there is.

Can Prediction Markets Be a Learning Tool?

Ah, prediction markets. Nothing is better for convincing yourself you know exactly what will happen in every upcoming election. According to a new paper in Computers & Education, prediction markets may also be useful in classrooms. The basic idea is that prediction markets improve cognitive skills and increase student engagement by requiring new knowledge to be applied to the decision-making process.

To test their hypotheses the authors used a prediction market called the “Insurance Loss Market” that was specifically designed for an undergraduate risk management class. Students were required to collect and analyze data in order to make weekly bets on property losses in various states. So, how did it work?

Our exploratory research has demonstrated that learners’ decision making in a specific problem domain has improved over the module. We have identified trading behaviours that demonstrate learners will integrate new information into their cognitive framework and alter their decisions based upon this new information. This is a clear indication of active engagement by learners. Finally, we have demonstrated that learners are actively searching for relevant information that is not supplied to them in lectures and are integrating this information into their decision making processes.
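The excerpt doesn’t spell out the market mechanics, but classroom markets like this typically run on an automated market maker, such as Hanson’s logarithmic market scoring rule (LMSR), which keeps a price (effectively the crowd’s probability estimate) posted at all times so students can always trade. A minimal sketch, with all names my own:

```python
import math

class LMSRMarket:
    """Minimal logarithmic market scoring rule (LMSR) market maker.
    The liquidity parameter b controls how much prices move per trade."""

    def __init__(self, n_outcomes: int, b: float = 100.0):
        self.b = b
        self.q = [0.0] * n_outcomes  # shares outstanding for each outcome

    def _cost(self, q):
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in q))

    def price(self, i: int) -> float:
        # Instantaneous price = market's implied probability of outcome i
        denom = sum(math.exp(qi / self.b) for qi in self.q)
        return math.exp(self.q[i] / self.b) / denom

    def buy(self, i: int, shares: float) -> float:
        # Charge the trader the change in the market maker's cost function
        new_q = list(self.q)
        new_q[i] += shares
        charge = self._cost(new_q) - self._cost(self.q)
        self.q = new_q
        return charge

# Two outcomes, e.g. quarterly property losses above vs. below some threshold
market = LMSRMarket(2)
print(market.price(0))    # 0.5 before anyone trades
market.buy(0, 50)         # a student bets that losses come in high
print(market.price(0))    # implied probability rises toward 0.62
```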

There’s more good news.  The class also counts as credit toward the school’s new “legitimized gambling” major.

Science Explains the Arab Uprising

The effects of power on behavioral approach, the propensity to negotiate, and preference for risk were all moderated by legitimacy. We repeatedly found that legitimate power led to more approach than legitimate powerlessness, a result that is consistent with the power-approach model of Keltner et al. (2003) and associated findings (Anderson & Berdahl, 2002; Galinsky et al., 2003). But when power was conceived or expressed under the shadow of illegitimacy, the powerful no longer showed more approach than the powerless.

That’s from a 2008 paper in Psychological Science that explains how perceptions of illegitimacy can make the powerless more likely to engage in action.  The moral? Appear legit. And try to make sure the guy running the country next to you also appears legit.