Study: Humans are ‘selfish’ around AIs and don’t feel guilty about it

Humans don’t treat intelligent machines with respect or kindness and don’t feel bad about it, according to a new study that sheds light on humanity’s reluctance to reciprocate the help that artificial intelligence (AI) offers us.

The findings, published in June, are set to become increasingly relevant in a world where AI-operated cars, household machines and software are becoming more and more common.

While humans expect AI to be helpful, people are much less ready to reciprocate with such machines, and will instead exploit their benevolence for their own benefit, according to a new study from scientists at the Ludwig-Maximilians-Universitaet (LMU) in Munich and the University of London.

The researchers gave the example of a driver who has the choice of yielding to another car on the road. If the driver sees that no human is driving the other car, he or she will often not let it pass, the research suggests.

“If we realise that the robot in front of us will be cooperative no matter what, we will use it to our selfish interest,” says Professor Ophelia Deroy, a philosopher and senior author on the study.

“They are fine with letting the machine down,” says Dr. Bahador Bahrami, a social neuroscientist at the LMU. “That is the big difference. People even do not report much guilt when they do.”

The team investigated whether humans behave just as cooperatively when dealing with AI systems as they do with their fellow humans.

The results were sobering for the researchers: “People expected artificial agents to be as cooperative as fellow humans. However, they did not return their benevolence as much and exploited the AI more than humans,” explains Jurgis Karpus, also from LMU.

This reluctance to cooperate with machines is a challenge for future interaction between humans and AI, the researchers say. – dpa