
Can man and machine work together?

Monday, 18 September 2023

Jurgis Karpus, assistant professor in the Cognition, Values, Behavior and the Crowd Cognition research groups at Ludwig-Maximilians-Universität in Munich, gave a presentation at Amsterdam Drone Week (ADW) in March 2023. Karpus argued, among other things, that it might be better if humans and machines did not work together in dedicated airspace. We asked him what led to this striking conclusion.

In 2021, Jurgis Karpus and fellow scientists published the paper ‘Algorithm exploitation: Humans are keen to exploit benevolent AI’ in iScience. In the article, Karpus and his co-authors explore how humans behave when they interact with cooperative artificial intelligence (AI) systems and whether they are more likely to exploit such systems than fellow humans.

What motivated you to study how humans interact with cooperative AI systems?
“I had been studying human decision-making for quite a long time, in particular why and when people cooperate amongst themselves. A few years ago a colleague asked me over lunch: ‘Do you think people will also cooperate with robots, such as self-driving cars?’ We had a quick chat about it, and that prompted us to start a new research program in our group to study human interaction with machines.”

How did you design your experiments?
“We thought we could take the established behavioral science methods that we use to study cooperation among humans and apply them to human interaction with machines. So we took well-established experimental methods from behavioral game theory and started applying them to study people's willingness to cooperate with machines.”
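
To make the method concrete: experiments of this kind often rely on one-shot social dilemmas such as the Prisoner's Dilemma, in which mutual cooperation pays more than mutual defection, but defecting against a cooperator pays best of all. The Python sketch below illustrates that incentive structure; the payoff values and the expected_payoff helper are illustrative assumptions, not the actual design of Karpus's experiments.

```python
# Illustrative one-shot Prisoner's Dilemma payoff structure, as commonly
# used in behavioral game theory. The payoff values are hypothetical and
# not taken from the iScience study.

# PAYOFFS[(my_move, their_move)] = my payoff
PAYOFFS = {
    ("cooperate", "cooperate"): 3,  # mutual cooperation
    ("cooperate", "defect"): 0,     # exploited: the "sucker's payoff"
    ("defect", "cooperate"): 5,     # temptation: exploiting a cooperator
    ("defect", "defect"): 1,        # mutual defection
}

def expected_payoff(my_move: str, p_partner_cooperates: float) -> float:
    """Expected payoff of my_move, given the chance the partner cooperates."""
    return (p_partner_cooperates * PAYOFFS[(my_move, "cooperate")]
            + (1 - p_partner_cooperates) * PAYOFFS[(my_move, "defect")])

# If a player believes a benevolent machine will almost surely cooperate,
# defecting maximizes that player's own payoff; this temptation is the
# essence of what the researchers call "algorithm exploitation".
p_machine_cooperates = 0.95
print(expected_payoff("defect", p_machine_cooperates))     # 4.8
print(expected_payoff("cooperate", p_machine_cooperates))  # 2.85
```

With these hypothetical payoffs, a player who is confident the partner will cooperate earns more by defecting, which is exactly the temptation the interview goes on to describe.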

Was it hard to find those interactions?
“No, not really. It's not so difficult to study human interaction with intelligent machines, or machines powered by AI, using game theory. The tricky bit is to see whether findings from rather abstract and carefully constructed game-theoretic lab studies transfer to real-life, day-to-day interactions. That's why we're now working with computer scientists and human-computer interaction specialists to implement game-theoretic scenarios in much more realistic environments. We now use virtual reality environments to do that.”

What surprised you most about the outcome of your research?
“We thought that, since everybody is talking about the need to increase people's trust in machines, people would not trust machines as much as they trust other humans to cooperate with them. But we were wrong. We found that participants in our large-scale studies expected machines to be as cooperative as people. But we also found that, in return, people didn't cooperate with machines as much as they cooperated with fellow humans. What this means is that people expect machines to cooperate with them, but they are also quite keen to take advantage of cooperative machines, much more than they are keen to take advantage of cooperative humans. That was our major finding. We now call this phenomenon algorithm exploitation.”

So, you suggest that humans are more likely to exploit benevolent AI systems than other humans, even when the AI systems are cooperative and helpful. Could that have negative consequences for the success and safety of future AI applications that rely on human-AI cooperation, such as self-driving cars, co-working robots or drones?
“Yes, indeed. And what we see in the virtual reality experiments we are conducting now is that people slow down for a human-driven car in busy traffic more often than they slow down for an empty, automated (self-driving) car. So it appears that our initial findings from earlier experiments in rather abstract settings also play out in a more realistic type of interaction. One reason why this happens might be that we often cooperate with others not because we are born altruists, but because we realize that cooperation is a sort of compromise, a kind of social contract. We are aware that we can be nasty to one another and not cooperate if we so choose. But since we know that machines are programmed not to be nasty and to always be nice to us by design, we think it's safe to take advantage of them.”

So we humans are the bad guys and not the machines?
“Yes. It’s like that.”

How can we make humans more cooperative?
“One approach might be to talk more publicly about the reasons why we are creating these artificial agents and introducing them into our lives. At the end of the day, we are creating them for something good: more efficiency, more safety, things that benefit us all. If you emphasize that, and if people realize it, they might be kinder to machines, perhaps even as kind as they are to other humans. On the other hand, free-riding on others' goodwill has been a societal problem for ages and is very difficult to overcome, so I am skeptical. Another solution could be to make machines less predictable and nastier themselves. But that, of course, would raise a myriad of ethical concerns and objections.”

Or maybe not cooperate with machines at all?
“That's what I suggested at Amsterdam Drone Week last March. Maybe we should avoid interacting with machines in situations where conflicts of interest can arise between what we personally want and what machines are there to do, situations where it will be tempting to take advantage of cooperative machines for selfish gains. But I am not sure we can stop the developments in this field. One thing we can do is promote better education and regulation to foster trust and reciprocity between humans and AI.”

Are all humans the same? Or can we, in the Western world, learn from other cultures?
“In our initial studies we recruited participants in the US. Other researchers later replicated our findings in the UK. To find out whether algorithm exploitation is a global phenomenon, we recently teamed up with experimental psychologists in Tokyo and reran our study in Japan. What we found was very interesting. In all respects, people in Japan acted similarly to people in the UK and the US, except for one thing: people in Japan didn't exploit artificial agents. They cooperated with machines as much as they did with humans.”

Ok. And did you find out why?
“Yes. We asked people how they felt when they exploited a fellow human or a machine. And here we found interesting differences across countries. In Western societies people generally feel bad when they exploit other people. But they don’t feel bad at all when they exploit machines. That makes sense if you think that machines don’t have feelings. When you exploit a machine you don’t genuinely “hurt” it. And yet, in Japan people feel as bad about exploiting machines as about exploiting another human. So one reason why people in Japan cooperate with machines more is that they emotionally treat machines similarly to how they treat humans.”

What can we learn from people in Japan?
“That's the next question. The idea that people in Japan have a different attitude to machines than people in the West has been around for a while. Psychologists have been studying cross-cultural differences in people's general attitudes toward machines using surveys. For example, people in Japan have been found to be less worried that machines might eventually take over their jobs. Some say this might be because people in Japan have had more exposure to robots and thus have a more realistic view of their actual capabilities. Other researchers point to differences in religious belief systems, for example the historically prevalent idea in Japan that non-living things can possess souls.”

That could be a solution for the near future.
“Yes. As a behavioral scientist, I would like to study next why people in Japan feel guilty about exploiting machines. That might shed some light on what you're asking. Another question we might ask is who is right: people in Japan or people in the West? Should you feel guilty about exploiting machines? That is something that we will have to study more.”
