People Think Secret Information Is Better Information

The recent disclosures about the extent of the NSA’s domestic spying program add to a long history of incidents in which the American public has gained access to information that was once secret. And that’s great. People should have information about what their government is doing. But it’s worth considering whether people are able to make accurate judgments about leaked information. For example, do people perceive the quality of information to be different if the information is secret rather than public?

According to a new study led by the University of Colorado’s Mark Travers, when it comes to foreign policy the answer is yes:

Three experiments demonstrate that in the context of U.S. foreign policy decision making, people infer informational quality from secrecy. In Experiment 1, people weighed secret information more heavily than public information when making recommendations about foreign political candidates. In Experiment 2, people judged information presented in documents ostensibly produced by the Department of State and the National Security Council as being of relatively higher quality when those documents were secret rather than public. Finally, in Experiment 3, people judged a National Security Council document as being of higher quality when presented as a secret document rather than a public document and evaluated others’ decisions more favorably when those decisions were based on secret information.

To be clear, none of the judgments made by participants are inherently “bad” judgments. Placing greater value on secret information may be a useful heuristic in most situations. In fact, the researchers proffer three very legitimate reasons why people tend to be smitten with secret information: 1) Secret information is often more important in strategic situations (e.g. a seller’s preferences in a negotiation), 2) people tend to view their personal secrets as being of greater importance, and thus they may believe the same about other secrets, and 3) governments generally behave as though secret information is more important.

But there are some situations where a heuristic based on secrecy can be a problem. For example, it’s possible for the government to take advantage of leaks.

Our studies imply that, among average U.S. citizens, secret information is used as a cue to infer informational quality. This suggests that when government leaders claim, for example, that secret information indicates that enemy nations are building weapons of mass destruction—and that military intervention is therefore warranted—citizens may be more likely to endorse their government’s position even though there is no opportunity for public vetting of that information.

*Cough* *Iraq* *Cough* *Cough*

Though the study doesn’t shed any light on whether experts exhibit a tendency to over-value secret information, it’s also possible that they could be led astray. For example, if an intelligence officer has a set of policy preferences based on public information, but then encounters a piece of secret information, he may place too much weight on the secret information and alter his preferred policy too much. Obviously foreign policy experts have extensive training and a strong grasp of complex situations, but when you’re dealing with issues important enough to involve secret information any marginal shift away from the optimal policy has the potential to be extremely destructive.

Finally, assuming the findings extend beyond the realm of direct foreign policy, the study emphasizes how important it is for the media to not screw up when given access to secret information. For example, initial reports based on Edward Snowden’s leaks contained claims about “direct access” that the government and the companies involved continue to deny. However, because the “secret” information in the news articles garners more weight than the public denials, it’s likely that many people will be slow to correct their perceptions of the spying program. The media should always take care not to report inaccurate information, but when the information is purported to be secret, sloppy reporting will be even more harmful.
————————————————————————————————————————
Travers, M., Van Boven, L., & Judd, C. (2013). The Secrecy Heuristic: Inferring Quality from Secrecy in Foreign Policy Contexts. Political Psychology. DOI: 10.1111/pops.12042

What Types of Feedback Should Students Receive?

Throughout the school day there are hundreds of small reactions, judgments, and decisions tossed around in a student’s head. The question is when, where, and how students can be given additional information that nudges these thoughts in directions that lead to better learning outcomes. At the moment most teachers have their own systems for providing feedback and guiding students, but a pair of new studies on computerized learning suggests that there are certain best practices when it comes to giving students information about their learning.

The first study examined the effects of positive feedback on learning, and the findings suggest that correct judgments should be reinforced at all possible times:

We hypothesize that positive feedback works by reducing student uncertainty about tentative but correct problem solving steps. Positive feedback should communicate three pieces of explanatory information: (a) those features of the situation that made the action the correct one, both in general terms and with reference to the specifics of the problem state; (b) the description of the action at a conceptual level and (c) the important aspect of the change in the problem state brought about by the action. We describe how a positive feedback capability was implemented in a mature, constraint-based tutoring system, SQL-Tutor, which teaches by helping students learn from their errors. Empirical evaluation shows that students who were interacting with the augmented version of SQL-Tutor learned at twice the speed as the students who interacted with the standard, error feedback only, version.

The second study examined task selection, and it found that advising students on what to do has the potential to be detrimental:

A positive effect on learning is expected when learners select tasks that help them fulfil their individual learning needs. However, the selection of suitable tasks is a difficult process for learners with little domain knowledge and suboptimal task-selection skills. A common solution for helping learners deal with on-demand education and develop domain-specific skills is to give them advice on task selection. In a randomized experiment, learners (N = 30) worked on learning tasks in the domain of system dynamics and received either advice or no advice on the selection of new learning tasks. Surprisingly, the no-advice group outperformed the advice group on a post-test measuring domain-specific skills. It is concluded that giving advice on task selection prevents learners from thinking about how the process of task selection works. The advice seems to supplant rather than support their considerations why they should perform the advised task, which results in negative effects on learning.

Taken together, the studies have a number of important implications for the classroom. First, teachers should make every effort to give positive feedback, particularly when students seem unsure about something. Second, it’s important for teachers to elicit student explanations for why one task logically follows from a previous task.

Of course the broader takeaway is that a room full of computerized tutors will ultimately be able to do things a single teacher cannot. Cognitive tutors can provide positive feedback every time a student does something correctly, as well as neutral or negative feedback when they don’t. In addition, although the above study suggests cognitive tutors should not give straightforward advice on task selection, the ability to let students choose from among multiple tasks has the potential to guide them in the right direction while allowing them to figure out why it’s the right direction. A teacher’s inability to be in 25 places at once makes it impossible for them to accomplish these things efficiently.

————————————————————————————————————————
Mitrovic, A., Ohlsson, S., & Barrow, D.K. (2012). The effect of positive feedback in a constraint-based intelligent tutoring system. Computers & Education. DOI: 10.1016/j.compedu.2012.07.002

Taminiau, E.M.C., Kester, L., Corbalan, G., Alessi, S.M., Moxnes, E., Gijselaers, W.H., Kirschner, P.A., & Van Merrienboer, J.J.G. (2012). Why advice on task selection may hamper learning in on-demand education. Computers in Human Behavior. DOI: 10.1016/j.chb.2012.07.028

Technology Is Rapidly Lowering the Cost of Testing

People may view this as something for the good news/bad news file, but technology has quietly made it significantly easier to grade tests electronically. For example, a new paper in the Journal of Science Education and Technology highlights a system called “Eyegrade”:

While most current solutions are based on expensive scanners, Eyegrade offers a truly low-cost solution requiring only a regular off-the-shelf webcam. Additionally, Eyegrade performs both mark recognition as well as optical character recognition of handwritten student identification numbers, which avoids the use of bubbles in the answer sheet. When compared with similar webcam-based systems, the user interface in Eyegrade has been designed to provide a more efficient and error-free data collection procedure. The tool has been validated with a set of experiments that show the ease of use (both setup and operation), the reduction in grading time, and an increase in the reliability of the results when compared with conventional, more expensive systems.
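To make the mark-recognition idea concrete, here is a toy sketch — not Eyegrade’s actual computer-vision pipeline, which works on webcam images of real answer sheets — of how a filled bubble can be detected once the sheet has been normalized to a grid: the cell with the most ink in each question row is taken as the marked answer, with a darkness threshold to catch blank rows. All names and the grid layout are illustrative assumptions.

```python
import numpy as np

def grade_sheet(image, rows, cols, threshold=0.5):
    """Return the marked option index for each question, or None if blank.

    image: 2D grayscale array in [0, 1], where 0 = dark ink, 1 = white paper.
    Each row of bubbles is one question; each column is one answer option.
    (Toy model: assumes the sheet is already cropped and aligned to a grid.)
    """
    h, w = image.shape
    cell_h, cell_w = h // rows, w // cols
    answers = []
    for r in range(rows):
        # Average "darkness" (fraction of ink) inside each cell of this row.
        darkness = []
        for c in range(cols):
            cell = image[r * cell_h:(r + 1) * cell_h, c * cell_w:(c + 1) * cell_w]
            darkness.append(1.0 - cell.mean())
        best = int(np.argmax(darkness))
        # Only accept the darkest cell if it is dark enough to be a real mark.
        answers.append(best if darkness[best] > threshold else None)
    return answers

# Synthetic 3-question, 4-option sheet with options 1, 3, and 0 marked.
sheet = np.ones((30, 40))                        # white paper
for question, answer in [(0, 1), (1, 3), (2, 0)]:
    sheet[question * 10:(question + 1) * 10,
          answer * 10:(answer + 1) * 10] = 0.0   # filled bubble
print(grade_sheet(sheet, rows=3, cols=4))        # [1, 3, 0]
```

The real system’s added value is everything this sketch skips: locating the sheet in a webcam frame, correcting perspective, and OCR-ing handwritten ID numbers.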

It’s easy to worry about how these new systems could lead to more testing, but their ability to grade paper-and-pencil assessments could also make teachers’ lives a lot easier. Of course in the long run the debate about assessment will be moot because eventually everything will be assessed instantly and electronically. When every answer on every assignment is digitally stored and analyzed by complex algorithms you don’t need a state exam to determine proficiency levels.

————————————————————————————————————————
Fisteus, J.A., Pardo, A., & Garcia, N.F. (2012). Grading Multiple Choice Exams with Low-Cost and Portable Computer-Vision Techniques. Journal of Science Education and Technology. DOI: 10.1007/s10956-012-9414-8

Can Video Games Teach Kids “Grit”?

KIPP’s character report card and Paul Tough’s new book have laudably placed an emphasis on how emotional skills and character traits (e.g. persistence, curiosity, optimism) influence a child’s academic trajectory. Yet the question remains: will our education system make a real effort to emphasize these new ideas, or will they join things like Carol Dweck’s work on mindsets in the heap of scientifically valid interventions that are given their 15 minutes and then ignored?

The answer to this question will likely come down to whether we can find workable ways of enhancing crucial character traits and skills. Fortunately, one method may be right under our noses — video games:

An online performance-based measure of persistence was developed using anagrams and riddles. Persistence was measured by recording the time spent on unsolved anagrams and riddles. Time spent on unsolved problems was correlated to a self-report measure of persistence. Additionally, frequent video game players spent longer times on unsolved problems relative to infrequent video game players. Results are explained in terms of the value of performance-based measures of persistence over self-report measures and how video game use can lead to more persistence across a variety of tasks.

The findings are relatively intuitive in that video games — and RPGs in particular — reinforce the relationship between persistence and rewards. It’s also worth noting that over 80% of the sample was female, so it remains an open question whether the effect holds equally for males.

The broader point is that there are a host of unique things that computer-based cognitive tutors can do. While it’s clearly unrealistic for students to spend hours of class time playing Skyrim, at the margin a computer-based exercise can give kids more control, more flexibility, more areas for exploration, and more opportunity to persist and see that persistence pay off. Cognitive tutors won’t be best for every student, but the potential to teach persistence is another reason it’s important to support the development of an infrastructure that gives any student the opportunity to get computer-based instruction.
————————————————————————————————————————
Ventura, M., Shute, V., & Zhao, W. (2013). The relationship between video game use and a performance-based measure of persistence. Computers & Education. DOI: 10.1016/j.compedu.2012.07.003

Technology That Can Be Used For Education Is Not the Same As Educational Technology

Matt Richtel has a nice story in the New York Times about the vast quantity of time kids, and low-income kids in particular, are wasting on electronic devices. The article also hints at why it’s so unproductive to conflate technology that could serve educational purposes with actual transformative educational technology.

“Despite the educational potential of computers, the reality is that their use for education or meaningful content creation is minuscule compared to their use for pure entertainment,” said Vicky Rideout, author of the decade-long Kaiser study. “Instead of closing the achievement gap, they’re widening the time-wasting gap.”

It’s a mistake to expect something like an iPad to magically make somebody a good student. While an iPad is a tool that can be used for educational purposes, it was not created for education. Even if it allows you to connect to the internet or use a specific educational program, it won’t have a transformative effect unless it restructures the way something is learned or induces a significant increase in motivation and learning time.

There is no reason to expect a laptop, Xbox, or iPad to do that. When a new iPad comes out that’s twice as good, each “unit” of additional technological advancement will most likely improve leisure activities faster than it improves learning activities. The outcome is somewhat counterintuitive — even though learning with an iPad will be more entertaining and engaging than before, the opportunity cost is higher because, compared to an iPad video game, learning is now relatively less engaging. That’s why the “time-wasting” Rideout talks about continues to grow.

You might be thinking, “So what? Who cares if iPads and other gadgets aren’t a panacea for struggling students?” The problem is that haphazardly throwing around the phrase “educational technology” distracts and pulls resources away from the technologies that actually are panacea-ish. School districts across the country are attempting to signal their quality by sinking money into highly visible, low-impact tools like iPads, and the result is a lack of focus on improving and implementing big-idea technologies like those being used by Rocketship schools or School of One. It’s counterproductive that watching a science video on an iPad gets the same label as doing a math exercise on a computer program that feeds you specialized problems after quickly figuring out that you have difficulty multiplying polynomials when the last term contains a negative coefficient. The former makes an evening more enjoyable. The latter can change an education system.

Ask Not What You Can Do For Educational Technology, But What Educational Technology Can Do For You

It’s irritating that people talk about educational technology in terms of iPads in the classroom when the real impact will come from pinpoint differentiation, instant student assessment, and a third thing that nobody talks about — improved simulations in specialty learning. For example, medical students who use virtual patients — an “interactive computer simulation of real-life clinical scenarios” — perform better than those who use more traditional methods.

A meta-analysis was performed to assess the Effect Size (ES) from randomized studies comparing the effect of educational interventions in which Virtual patients (VPs) were used either as an alternative method or additive to usual curriculum versus interventions based on more traditional methods…Under a random-effect model, meta-analysis showed a clear positive pooled overall effect for VPs compared to other educational methods. A positive effect has been documented both when VPs have been used as an additive resource and when they have been compared as an alternative to a more traditional method. When grouped for type of outcome, the pooled ES for studies addressing communication skills and ethical reasoning was lower than for clinical reasoning outcome.

Gains in how we teach people to remove a gallbladder or fly an airplane won’t close the 4th grade achievement gap, but that doesn’t mean they’re not substantial. The fact that the virtual patients had different effects on different kinds of learning also hammers home an important point. Is technology great for everything? No. But there tends to be a “technology vs. tradition” framing that says it has to be all or nothing. As virtual patients illustrate, technology may not be best for teaching everything (e.g. medical ethics), but it’s useful for teaching a lot of things (e.g. diagnosing diseases).

————————————————————————————————————————
Consorti, F., Mancuso, R., Nocioni, M., & Piccolo, A. (2012). Efficacy of virtual patients in medical education: A meta-analysis of randomized studies. Computers & Education, 59(3), 1001-1008. DOI: 10.1016/j.compedu.2012.04.017

Another Reason Bad Schools Stay Bad

As if there weren’t enough self-perpetuating social and economic phenomena that make it difficult for poor neighborhoods to change, Stanford’s Michelle Reininger highlights a new one: Teachers are significantly more likely than other professionals to work near where they grew up.

Teachers’ preference for working close to where they grew up is a distinct characteristic of teachers, and the author further explores the implications of those preferences for schools facing chronic shortages of teachers. The author finds that the local nature of the labor force and the differential rates of graduation and production of teachers from traditionally hard-to-staff schools are reinforcing existing deficits of local teacher labor supply.

The story would go something like this: “Good” neighborhoods produce a higher proportion of “good” teachers (through better high schools and higher college graduation rates), but the preference to stay close to home keeps those teachers from proliferating out into “bad” neighborhoods. Meanwhile, “bad” neighborhoods are stuck with a smaller teacher supply that’s composed of teachers with poorer credentials.

This strikes me as the kind of finding people will use to confirm the need for whatever solution their ideology and preconceived notions prescribe. Some might say it means we need more teacher training, higher salaries, and an environment less critical of teachers. Others might say it demonstrates the need to focus more on attainment measures like college readiness and college graduation.

For me, the study is another clear sign of the need to build an infrastructure for integrating computer-based cognitive tutors into the classroom. Putting aside the massive efficiency gains from having every student get instruction tailored to their specific strengths, weaknesses, and motivational tendencies, the ubiquity of cognitive tutors will also smooth over many of the inequities in teacher quality. We may not be there yet, but in the next 5 to 10 years computers will be able to instantaneously do at least 90% of what a teacher can when it comes to identifying, explaining, and correcting student mistakes and misconceptions involving math. Whereas only one school can employ the best teacher, every school can use the best cognitive tutor software.
————————————————————————————————————————
Reininger, M. (2011). Hometown Disadvantage? It Depends on Where You’re From: Teachers’ Location Preferences and the Implications for Staffing Schools. Educational Evaluation and Policy Analysis, 34(2), 127-145. DOI: 10.3102/0162373711420864