Engineering robots to talk to themselves could improve human/AI trust

Researchers have programmed Pepper the robot to ‘think out loud’ to help humans better understand the decision-making process of artificial intelligence (AI).

Humans use inner speech to provide clarity and guidance during decision making. When working with another person, we might vocalise this thought process to help them understand why we make certain choices. 

This layer of transparency helps build trust between cooperative partners, but until now the way AI makes decisions has been largely unclear to its human counterparts, apart from the engineers and AI experts who develop it, of course. This can create a sense of distrust.

Research Fellow Arianna Pipitone and Professor of Robotics Antonio Chella, from the University of Palermo in Italy, have programmed Softbank’s social robot Pepper to vocalise its thoughts. The team combined ACT-R software, which models human cognitive processes, with text-to-speech and speech-to-text processing.
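In broad strokes, such a pipeline wires speech recognition into a cognitive layer whose reasoning is then voiced aloud before the robot acts. The Python sketch below is illustrative only: the function names and bodies are hypothetical stand-ins, not the researchers’ actual ACT-R integration or Pepper’s speech services.

```python
# A minimal, hypothetical sketch of an inner-speech pipeline:
# speech-to-text feeds a cognitive layer, whose reasoning is
# voiced through text-to-speech before the robot acts.

def speech_to_text() -> str:
    """Stand-in for the robot's speech recognition."""
    return input("Human: ")  # simulate recognition with keyboard input

def cognitive_layer(request: str) -> tuple[str, str]:
    """Stand-in for the cognitive model (ACT-R in the study).
    Returns the robot's inner speech alongside the action it settles on,
    so the reasoning can be surfaced rather than hidden."""
    inner = f"I heard '{request}'. Let me check it against my task rules."
    return inner, f"carry out: {request}"

def text_to_speech(text: str) -> None:
    """Stand-in for the robot's voice output."""
    print(f"Pepper (aloud): {text}")

def interaction_step() -> None:
    request = speech_to_text()
    inner_speech, action = cognitive_layer(request)
    text_to_speech(inner_speech)  # the robot 'thinks out loud'
    print(f"[{action}]")

if __name__ == "__main__":
    interaction_step()
```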

“If you were able to hear what the robots are thinking, then the robot might be more trustworthy,” Chella said in an interview with Science Daily.

In a paper titled “What robots want? Hearing the inner voice of a robot”, Pipitone and Chella report that Pepper achieved a higher task-completion rate when engaging in self-dialogue and outperformed international standards for collaborative robots.

"If you were able to hear what the robots are thinking, then the robot might be more trustworthy."
Professor Antonio Chella

An exercise in trust

As part of their study, Pipitone and Chella asked human participants to work with Pepper to set a table for dinner. In one instance, Pepper’s human partner asked it to place a napkin in the incorrect spot.

Rather than immediately accepting or ignoring the request, Pepper vocalised its unease with the situation and went so far as to confirm with the participant what they meant.

“Ehm, this situation upsets me. I would never break the rules, but I can’t upset him, so I’m doing what he wants,” Pepper said to itself, ultimately deciding to place the napkin where the participant asked it to.

Director of UNSW’s Data Dynamics Lab and recent recipient of a Women in AI award, Scientia Associate Professor Lina Yao told create that the transparency of self-talk helps users trust Pepper’s final decision.

“When a human asks a robot to violate a set of rules, it may just refuse to honour the request, making the human think the robot failed the task. By talking out loud, the human is able to understand the robot’s reasoning behind its decision,” Yao said.
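Yao’s contrast between a silent refusal and a voiced decision can be sketched in code. The example below is hypothetical: the rule table and function names are invented for illustration and do not come from the study.

```python
# Hypothetical contrast between a robot that silently refuses a
# rule-breaking request and one that voices its reasoning first.

ETIQUETTE_RULES = {"napkin": "left of the plate"}  # invented rule table

def silent_robot(item: str, requested_spot: str) -> None:
    """Refuses rule-breaking requests without any explanation."""
    if ETIQUETTE_RULES.get(item) != requested_spot:
        return  # does nothing; the human may think the robot failed
    print(f"[places {item} {requested_spot}]")

def self_talking_robot(item: str, requested_spot: str) -> None:
    """Voices the conflict before complying, as Pepper did in the study."""
    if ETIQUETTE_RULES.get(item) != requested_spot:
        print("Pepper (aloud): This breaks the table-setting rules, "
              "but I can't upset them, so I'm doing what they want.")
    print(f"[places {item} {requested_spot}]")

silent_robot("napkin", "on the fork")        # prints nothing at all
self_talking_robot("napkin", "on the fork")  # explains, then complies
```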

This trust may also improve human satisfaction with the robot’s work, she added.


AI makes decisions for us all the time, but we’re not always privy to why it makes the choices it does. For example, how many times has your GPS taken you on some strange backstreet route when there appears to be a perfectly good main road that would take you to the same location?

In this instance, the AI likely used various sources of data to decide that the back streets were a better option. If a driver could see the reason for the decision, they may be more likely to trust and understand its choice.

“There are two ways AI can explain a decision, during the decision making or post procedure,” Yao said.

“In this study, the robot explains as it goes so the human can know every step along the way. Another way is the AI will generate text or a visual explanation after.”  
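The difference between the two styles can be made concrete with a toy example. The sketch below is hypothetical, loosely modelled on the GPS scenario above rather than on any code from the study.

```python
# Hypothetical sketch of in-process versus post-hoc explanation,
# using a toy GPS-style route planner.

def plan_route(explain):
    """Toy planner that reports each reasoning step via `explain`."""
    explain("Traffic data shows the main road is congested.")
    explain("The backstreet route saves an estimated four minutes.")
    return ["turn onto backstreet A", "rejoin the main road at junction B"]

# 1. In-process explanation: each step is surfaced as it happens,
#    the way Pepper thinks out loud during the task.
plan_route(explain=print)

# 2. Post-hoc explanation: reasoning is collected silently and a
#    summary is produced only after the decision is made.
reasons = []
route = plan_route(explain=reasons.append)
print(f"Chosen route: {route}. Rationale: {' '.join(reasons)}")
```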

Study co-author Chella said the team chose to vocalise Pepper’s decision making because it helps everyday users understand the robot’s choices. Yao said that for AI engineers and data scientists, a certain level of explanation helps with analysis and improvement.

Although those interacting with Pepper reported better cooperation with the robot, spelling out its thoughts did slow it down. 

Our AI partners

Yao believes trust is an important part of future AI development as our reliance on robots increases.

“People forget how much AI is used every day in things such as our phones, and a lot of new cars like Tesla have AI technology,” Yao said. In fact, as she speaks, an AI tool recording our conversation transcribes what she says.

“I think many of these applications can be refined by integrating this kind of cognitive process or inner speech to make them more robust.”


While adding transparency to the decision-making process may help some people to better trust robots, there are still ethical and privacy issues relating to AI use.

“We all have a digital footprint, and if an AI reads that footprint it probably knows you better than you know yourself,” Yao said.

“So we need to be putting the right policies in place to protect that data and make sure it’s not misused.” 

That said, Yao doesn’t think we should be suspicious of AI; rather, we should see them as our partners.

“I think AI could achieve a lot of social good that could really improve human welfare and wellbeing,” she said.

In fact, AI has played a role in everything from helping to tackle climate change to assisting in the creation of COVID-19 vaccines.  

“We’re actually collaborating with Stanford at the moment to use AI in early COVID-19 detection,” Yao said. “We hope to detect the virus from chest scans and be able to predict the progression of the disease.”

Even Pepper is playing its part in helping humanity. In the past year, Pepper robots have been deployed as social distancing enforcers and have even been used in aged care facilities to ease loneliness.

Perhaps if we better understood how AI thinks, we would stop fearing robots as potential overlords and start seeing them as potential friends.
