What Increases Trust in Driverless Cars?
January 24, 2014
Driverless cars face a mountain of technological, legal, and regulatory barriers, but it seems likely that some type of autonomous vehicle will eventually reach the cusp of widespread use. At that point, assuming the vehicle hasn’t been made obsolete by the invention of the hoverboard, it will have to earn the trust and confidence of the people who might use it. A new study (pdf) led by Northwestern’s Adam Waytz suggests one way to tackle this problem is by giving the vehicle more human features.
The study, which Waytz conducted with UConn’s Joy Heafner and Chicago’s Nicholas Epley, required participants to drive different types of vehicles in a driving simulator. The researchers reasoned that the more human-like the car, the more people would view it as thoughtful and competent, and the more they would trust it to make decisions.
At the start of the experiment the researchers assigned participants to three groups. In the normal condition participants simply drove the car themselves. In the agentic condition the car could control its own steering and speed, a feature that could be activated with the press of a button. In the anthropomorphic condition the car had the same autonomous functions as in the agentic condition, but it was also given a name (Iris!) and a female voice. The voice described the car’s autonomous features and when to use them, and it followed the same script that the experimenter used when explaining the car’s features in the agentic condition.
Participants completed two six-minute drives around a pair of practice courses. After the first drive they reported how much they trusted the car and felt it was safe, how much they liked the car, and how much they perceived it to have human qualities.
During the second drive the simulator presented a situation in which another car jutted out in front of the participant. An accident was nearly unavoidable, though it was clearly not the fault of the participant or their car. After completing the second course participants reported how much they thought that they or their car were responsible for the accident.
The results suggest that giving autonomous vehicles human qualities is an effective and easy way to make people more comfortable with them. Participants in the anthropomorphic condition reported overall trust ratings that were significantly higher than those of participants in the agentic condition, and participants in the agentic condition reported trust ratings significantly higher than those of participants in the normal condition. Participants in the anthropomorphic and agentic conditions also liked their vehicles more than participants in the normal condition.
Perhaps most importantly, participants blamed their car for the accident significantly less in the anthropomorphic condition than in the agentic condition. The name and voice made the anthropomorphic car seem more competent, and that perception led to less blame. As the researchers write, “The perceived thoughtfulness of the fully anthropomorphic vehicle mitigated the responsibility that comes from independent agency. This shows a clear relationship between anthropomorphism and perceptions of responsibility.”
The lower level of blame is important because autonomous vehicles not only have a massive hurdle to clear in getting on the road, they also need to survive the backlash that will inevitably come when they start getting into accidents. These findings suggest that the human qualities that make people more comfortable using autonomous vehicles also make them less likely to blame those vehicles when outside forces (i.e. human error) create an accident. If future research confirms the benefits of anthropomorphic features, it’s not hard to imagine a world 30 years from now where you don’t choose a cab based on price or the smoothness of an app, but on the celebrity personality that controls the driving and interacts with passengers (that is, unless every cab is Scarlett Johansson).
More broadly, the study is a good reminder that when we imagine machines taking on new autonomous responsibilities we generally picture them in their non-autonomous form. If you had imagined an autonomous vacuum cleaner 25 years ago you might have been weirded out by the idea of a three-foot-tall, dust-bagged vacuum zooming around the room. Of course that’s not what we got. We got the cute, unintrusive, and animal-like Roomba, and it doesn’t seem all that weird. That’s not a perfect analogy, but the point is that when we finally do have widespread adoption of autonomous vehicles the passenger experience may be different than it is today in ways we can’t even imagine. So maybe Google should round up Spike Jonze and the four Pixar guys working on Cars 7, lock them in a room, and not let them come out until they create the most alluring humanoid car imaginable.
Waytz, A., Heafner, J., & Epley, N. (2014). The Mind in the Machine: Anthropomorphism Increases Trust in an Autonomous Vehicle. Journal of Experimental Social Psychology. DOI: 10.1016/j.jesp.2014.01.005