Can Computers Reduce Bias?

Artificial intelligence has proven to be the perfect antidote for certain failings of human intelligence. Exhibit A: alarm clocks that move around the room to prevent weak-willed individuals from repeatedly hitting snooze. A less trivial weakness of the human mind is its susceptibility to numerous cognitive biases, and once again artificial intelligence may have an answer. A new experiment by a group of German researchers finds that a computer recommendation system that suggests information inconsistent with a person's prior beliefs can alleviate confirmation bias.

As the authors summarize: "In Study 1, preference-inconsistent recommendations led to a reduction of confirmation bias and to a more moderate view of the controversial topic of neuro-enhancement. In Study 2, we found that preference-inconsistent recommendations stimulated balanced recall and divergent thinking. Together these studies showed that preference-inconsistent recommendations are an effective approach for reducing confirmation bias and stimulating divergent thinking."

Subjects in the experiment were initially asked a series of questions to gauge their thoughts on the topic of neuro-enhancement. Subjects were then presented with a list of eight arguments about neuro-enhancement, four of which were in favor of the practice and four of which were against it. A third of the subjects were then recommended an argument consistent with their beliefs about neuro-enhancement (preference-consistent), a third were recommended an argument inconsistent with their beliefs about neuro-enhancement (preference-inconsistent), and a third received no recommendation (control condition). When subjects were later given the opportunity to view more arguments or explain their views on neuro-enhancement, subjects in the preference-inconsistent condition showed less confirmation bias, more balanced recall of the arguments, and a greater ability to generate novel arguments.

These kinds of AI systems will surely come in handy when I'm in charge of a totalitarian democratic society that forces people to watch an unbiased, non-partisan two-hour policy-education video before voting or expressing an opinion. For now, the challenge is finding a way to convince people to use preference-inconsistent recommendation systems. I don't think it would be difficult for search engineers to create some kind of "reverse-Google" that returns preference-inconsistent information, but educators, policy makers, and people fed up with their ignorant friends would have to get creative in finding incentives that would actually get people to use it. The experimenters astutely point out that online recommendation systems are generally designed to give you the most preference-consistent recommendation possible. If you like pizza, Yelp will tell you about pizza places. It won't say "Have you thought about trying Japanese?"
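The core logic of such a "reverse" recommender is refreshingly simple. Here is a minimal sketch (my own illustration, not the researchers' implementation): arguments are tagged "pro" or "con," a user's stance is a number where positive means they favor neuro-enhancement, and the system deliberately surfaces the opposing side.

```python
# Toy preference-INCONSISTENT recommender (illustrative sketch only).
# Each argument is tagged with the side it supports; the recommender
# returns arguments from the side OPPOSING the user's stated stance.

def preference_inconsistent_recommend(arguments, stance):
    """Return arguments whose side opposes the user's stance.

    stance > 0 means the user favors the practice, so they are
    recommended "con" arguments, and vice versa.
    """
    opposing_side = "con" if stance > 0 else "pro"
    return [a for a in arguments if a["side"] == opposing_side]

arguments = [
    {"text": "Enhancement could boost productivity.", "side": "pro"},
    {"text": "Long-term side effects are unknown.", "side": "con"},
    {"text": "It may widen social inequality.", "side": "con"},
    {"text": "Adults should be free to self-experiment.", "side": "pro"},
]

# A user who favors neuro-enhancement (stance = 0.8) sees only
# the counter-arguments:
for arg in preference_inconsistent_recommend(arguments, stance=0.8):
    print(arg["text"])
```

An ordinary recommender is the same code with the filter flipped, which is exactly the point: the hard part isn't the engineering, it's getting anyone to opt in.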
Schwind, C., Buder, J., Cress, U., & Hesse, F. (2011). Preference-inconsistent recommendations: An effective approach for reducing confirmation bias and stimulating divergent thinking? Computers & Education. doi:10.1016/j.compedu.2011.10.003
