Do you trust the robots in your life?

Peter Bruza believes we need to understand how human beings will form relationships of trust with autonomous systems.

Robots are becoming a normal part of our everyday lives, but do humans fully trust them yet, and do robots trust us?

That is the question a team of researchers at the Queensland University of Technology (QUT) will tackle in a two-year project, developing and testing quantum-theory-based models that better explain and predict human decisions about trust.

The team will be led by Professor Peter Bruza from QUT’s School of Information Systems, with the project recently receiving US$241,000 in funding from the Tokyo-based Asian Office of Aerospace Research and Development.

Humans can be unpredictable creatures, and by machine standards, we don’t always make rational decisions based on the laws of probability.
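A classic illustration of this is the “conjunction fallacy” identified by Tversky and Kahneman (an example from the wider decision-making literature, not from Bruza’s project): classical probability requires that two events together can never be more likely than either event alone, yet people routinely judge a conjunction as more likely. A minimal sketch in Python, with purely illustrative numbers:

```python
# Conjunction fallacy: classical probability requires
# P(A and B) <= P(A), yet human judgements often violate it.
# The numbers below are illustrative only, not experimental data.

p_teller = 0.10               # judged P("Linda is a bank teller")
p_teller_and_feminist = 0.25  # judged P("bank teller AND feminist")

consistent = p_teller_and_feminist <= p_teller
print(f"Consistent with classical probability? {consistent}")  # False
```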

[Image: Peter Bruza is exploring whether robotic decision-making can become more human-like.]

But understanding why decisions are made is an important part of humans and machines working together.

“The issue of trust is really important. I think we really need to understand much more from a psychological point of view how it is that human beings will form relationships of trust with autonomous systems and under what circumstances that trust is going to erode,” Bruza said.

“How is it that human beings make decisions of trust with autonomous systems? What does trust mean in situations and when is it lost and why is it lost?”

Bruza said this trust is important if humans and robots are to successfully work together and make shared decisions under extreme and uncertain conditions, such as during natural disasters like typhoons, cyclones or earthquakes.

In situations like this, teams of robots and human beings will need to work together, with trust a key part of this relationship.

“We all know that when we use technology and it does stupid things or things that we don’t expect, we become disenchanted fairly quickly. In those sorts of disaster situations, you can’t afford that to happen,” Bruza said.

Probability theory

Bruza’s research has included modelling human decision-making and cognition based on quantum theory, while robot decision-making can be modelled according to classical probability theory.
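One well-known reason quantum models appeal here is that they naturally produce question-order effects, which classical probability rules out: beliefs are represented as unit vectors, questions as projections, and when two projections don’t commute, the probability of answering “yes” to both depends on which question is asked first. The sketch below is a generic quantum-cognition toy model, not Bruza’s actual system; the state and projectors are assumptions chosen purely for illustration:

```python
# Toy quantum-cognition model (illustrative assumptions, not Bruza's
# actual model): non-commuting projectors make the probability of
# answering "yes, yes" depend on question order.
import numpy as np

# Belief state: a unit vector in a 2-D "decision space".
psi = np.array([0.8, 0.6])  # normalised: 0.64 + 0.36 = 1

# Question A: projector onto the first basis axis.
P_A = np.array([[1.0, 0.0],
                [0.0, 0.0]])

# Question B: projector onto an axis rotated 45 degrees from A's.
v = np.array([1.0, 1.0]) / np.sqrt(2)
P_B = np.outer(v, v)

def prob_yes_then_yes(first, second, state):
    """P(yes to `first`, then yes to `second`) = ||P2 @ P1 @ psi||^2."""
    return np.linalg.norm(second @ (first @ state)) ** 2

print(prob_yes_then_yes(P_A, P_B, psi))  # ~0.32 when A is asked first
print(prob_yes_then_yes(P_B, P_A, psi))  # ~0.49 when B is asked first
```

In classical probability P(A and B) always equals P(B and A), so an order effect like this is one way human judgements can align more naturally with a quantum-style model.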

His research will include looking at what happens when humans receive different stimuli, such as an image paired with text, and whether they trust that the text matches the situation depicted in the image. This is particularly important today, when images can be easily manipulated in software.

“Defence is interested in alternative theoretical perspectives around these sorts of things and seeing which ones work,” Bruza said.

“We feel that there is something to be gained to try and model human decision-making with this alternative framework, because it seems to align more naturally with the way humans do it.”

The task would then be to bridge the gap between the human and autonomous models, or even to endow robotic systems with quantum models so they would understand the decision-making that might come from humans.

“Then in that way you might be able to smooth over any potential dissonance that may occur,” he said.

Bruza believes one reason defence is interested in his research is that it wants to make machine-learning algorithms more scrutable, which is important when it comes to decisions such as whether to fire on or kill someone.

“Defence simply don’t want to have these systems come up with decisions and then not know how they were arrived at. There is a requirement now for this extra layer, and I think it’s a good thing,” he said.

Bruza’s models could also be used in disaster situations with high levels of uncertainty, where both robots and humans will be making decisions.

This is when shared decision-making becomes important, and humans and robots will need to work together.

“It’s about harmonising these models so that the shared decision-making becomes effective in order to deal with the situation at hand,” Bruza said.

“I think without trust it’s not going to work. Human beings need to have some trust in the autonomous systems that they’re dealing with. If there’s no trust, then what’s the point of having machines?”

The research team plans to use the grant to carry out crowdsourced experiments, surveying thousands of people to study the rationale behind their decisions.

The experiments will also show the team how well the theoretical models they’ve been developing can cater for particular situations.

“There’s more empirical work to do, but also I think really there’s some theoretical work to do in order to see how best to integrate these two different models. We’re only in the initial stages of thinking about how to do that,” Bruza said.

The promises and perils of thinking machines will be discussed and debated at this year’s Australian Engineering Conference in Sydney. To take part in the debate or to learn more, register for the conference.
