Wednesday, July 1, 2015

Would you trust a robot?

Original post:  Jan 27, 2015

[Image: robot sommelier]
Would you trust a robot to pick out a wine for you?
If you did and it picked out a bottle that you didn't like, would you give it a second chance?

As technology is inserted into new areas of our lives, we will have to hand more and more decision-making power over to fancy algorithms. Are we ready for the loss of control?

A recent study showed that it isn't quite as simple as placing blind faith in our computer overlords:

In a paper called “Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err,” forthcoming in the Journal of Experimental Psychology: General, the University of Pennsylvania researchers Berkeley J. Dietvorst, Joseph P. Simmons, and Cade Massey asked subjects to consider the challenge of making a difficult forecast: predicting either MBA students’ academic success or states’ airline traffic. They could choose to tie what they earned to either the prediction accuracy of a human (usually themselves) or that of a statistical model. Before making their decision, they first saw the model’s performance on several trial runs, or saw the human’s performance, or both, or neither.
When they had not seen the statistical model perform on the trial runs, the majority of subjects bet on the model being more accurate in the money-earning round—they chose to tie their earnings to its performance rather than the human’s. But if they had seen it perform, the majority bet on the human. That’s despite the fact that the model outperformed the humans in every comparison, by margins ranging from 13 to 97 percent. Even when people saw the performance of both the model and the human in the trial runs, and saw that the model did better, they still tended to tie their winnings in the earnings round to the human over the model. They were more accepting of the human’s mistakes.
These findings surprised the researchers. They had expected people to shy from algorithms at first and then change their minds after seeing their superior performance, Dietvorst says. Instead, he says, they “found completely the opposite.”
One of the study authors speculates that people are less likely to trust computers when the decision touches on their health or their ego. We may feel that human judgment can provide superior advice by weighing variables no computer can possibly comprehend. That belief may not actually be supported by the data:

There can be a real cost to this aversion. A 2000 meta-analysis summarized 136 studies comparing predictions made by experts with those made by equations, in areas including medical prognosis, academic performance, business success, and criminal behavior. Mechanical predictions beat clinical predictions about half the time, while humans outperformed equations in only 6 percent of cases. Those are judgments with significant implications for our lives, and it’s a genuine loss to ignore a system that can give us much more accurate answers.

A possible solution might be to incorporate some user input, returning a measure of control to the person relying on the algorithm:

The researchers at Penn are now exploring why people abandon algorithms so quickly. There may be a hint in the fact that subjects judged the statistical model as worse than humans at learning from mistakes and getting better with practice. Perhaps people could learn to trust algorithms more if they were told that computers can learn. Balfoort says that once you inform customers that WineStein gets better with feedback, “you get a satisfied nod.”
In current work, Dietvorst is finding that giving people the chance to alter an algorithm’s forecast even a trivial amount increases their adoption of it over a human counterpart, from less than 50 percent to more than 70 percent. People “don’t want to completely surrender control,” he says. Ironically, the user’s adjustment usually makes the computer forecast slightly worse, but at least it gets them to use it.
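To make that idea concrete, here is a minimal sketch (purely illustrative, not from the study) of how a forecasting tool might let a user nudge a model's prediction within a small, bounded range, so the user feels in control while the final number stays close to the algorithm's output. The function name, parameters, and numbers are hypothetical.

def adjustable_forecast(model_prediction: float,
                        user_adjustment: float,
                        max_adjustment: float = 5.0) -> float:
    """Return the model's forecast after applying a capped user tweak.

    The cap keeps the final number close to the algorithm's output,
    so accuracy suffers only slightly while the user retains a sense
    of control (illustrative sketch, not the researchers' code).
    """
    # Clamp the requested adjustment to +/- max_adjustment.
    bounded = max(-max_adjustment, min(max_adjustment, user_adjustment))
    return model_prediction + bounded


if __name__ == "__main__":
    # The model forecasts a student's percentile rank at 72; the user
    # wants to bump it up by 10, but only 5 points of adjustment are allowed.
    print(adjustable_forecast(72.0, 10.0))  # -> 77.0

The design choice is the interesting part: the adjustment window is deliberately small, which is roughly what Dietvorst's finding above suggests is enough to win people over.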
What do you think?
