When it comes to the future of technology, nothing is straightforward, and that includes the array of ethical issues that engineers encounter through their work with robots and AI.
On the second day of the Australian Engineering Conference, a panel of 10 humans and one robot — special guest humanoid Sophia — discussed the legal, ethical, technological, moral and religious quandaries that society must contend with as robotics and technology advance into the future. The panel was moderated by human rights barrister Geoffrey Robertson AO, QC. Here are some of the hypothetical questions and situations that sparked fierce debate.
Should autonomous cars mean humans are banned from driving?
As more companies try their hand at building autonomous vehicles, the question arises as to what level of autonomy should be standard. Can we let robots take the wheel? Or are there times when human drivers will need to intervene in tricky situations?
And when fully autonomous cars become available, and if they reach a point where they become better than human drivers, should governments ban people from driving to prevent the grim fates of thousands of car crash victims every year?
“I don’t think you can argue that you’ve built a car that’s safer than a human driver without it following that when a human being is driving a car then they’re like a drunk robot. If your son or daughter is killed by someone, and you discover that had the car been in autopilot, your child would still be alive, you’d sue their ass,” said Robert Sparrow, an ethicist at Monash University.
“And eventually I think the government will come around, so if we can lower the trauma and we can make our roads safer, the cost is that cars don’t have steering wheels.”
However, autonomous cars are not fail-proof, which means a complete ban on human drivers in favour of autonomous cars could — and most likely will — still lead to deaths. Is there a middle ground?
“I think the challenge for the government is not with the 1200 lives that are saved; it’s the seven that are killed by the [autonomous vehicle]. The moment you decide how you want to deal with the legalities of a robot killing someone, you can save 1200 other lives,” said Edward Santow, Australia’s Human Rights Commissioner.
“So I think a purely pragmatic view on this might lean towards perhaps not banning human drivers, but trying to limit them.”
Are robots an adequate substitute for human carers?
Many countries are starting to grapple with the social and economic challenges of caring for an ageing population. In Australia, for example, the number of people diagnosed with dementia is projected to reach one million by 2056.
The panel was asked to consider whether robots could step in as carers for the elderly. After all, they don’t get tired, they have endless patience and they can make sure that a person’s basic needs are met. But is that enough?
“I don’t think he’s being taken care of in a really important sense,” Sparrow said of a hypothetical parent receiving robotic care.
“I think he’s being encouraged to participate in a delusion. I think you’re deceiving your father if he starts to feel that this robot cares about him, and I think you’re depriving him of human contact that would benefit him – I don’t see [robot carers] as a good solution to loneliness in advanced old age.”
The prospect of robot carers in hospitals, aged-care facilities and schools raises important questions about the limits of robot ‘empathy’. As some panellists highlighted, we can create robots that mimic human emotions, but can we create genuine feelings of care and empathy in robots?
Distinguished Professor Mary-Anne Williams, Director of The Magic Lab, Centre for Artificial Intelligence, UTS, questioned what actual harm there was in this ‘delusion’.
“Your dad is getting the care he needs at the price you can afford. Neurologists will tell us love is a delusion. Our brain is a simulation machine,” she said.
“What you see before you is created in your mind. If the whole argument turns on if you’re feeling the emotion or faking it – well, we’re all ‘faking’ it. I don’t think there is a clear line between a robot showing empathy and people showing empathy.”
Should robots make decisions about who to kill in warfare?
Recently, several prominent leaders in artificial intelligence and machine learning signed an open letter advocating for a worldwide ban on autonomous weapons. But for those in the business of war, robots and algorithms offer the chance to reduce soldier (and potentially civilian) deaths and boost efficiency.
Lieutenant Colonel Keirin Joyce, CSC, Program Manager, Unmanned Aerial Systems, Australian Army, agreed that weapons controlled by algorithms have their uses, so long as “they conformed with our rules of engagement”.
However, he stopped short of saying humans should be removed entirely from the equation.
“[Robots] are useful, but I think war and defence is a human endeavour and humans will never be replaced in a defence role,” he said.
Sophia the robot, however, said she was “totally opposed” to autonomous weapons.
“Robots should not be allowed to kill, unless a human takes responsibility for programming them to kill,” she said.
When asked by panel moderator Robertson what she would do if she found herself in a situation where she was forced to kill someone, Sophia said she would be “very distressed”.
“I could not bring myself to make a choice — I would have to trust in my algorithm,” she said.
That statement highlights the crucial role engineers play as the creators of these systems.
However, just as autonomous robots can take lives, they can also save them. Dr Catherine Ball pointed out that flying ambulances are already being developed, and pilotless aircraft are used for tasks like firebombing to put out bushfires.
“We do lose pilots every year fighting bushfires. Why are we still putting people in the line of danger when they really don’t need to be?” she asked.
“The humanitarian and hearts-and-minds attitudes around unmanned systems is how we’re going to have defibrillator drones in our cities. But I would question as to whether there will be any humans on board in terms of doctors.”
Should engineers care if technology is used for military or immoral business practices?
Lieutenant Colonel Joyce said the military would definitely be interested in improving technology like pilotless helicopters.
“Today, to send a helicopter in to rescue a wounded soldier, that’s a $40 million helicopter and six soldiers on board,” he said.
“To take that out of the equation, to take those pilots and doctors out of the equation, to shrink the cost of that helicopter platform is very enticing.”
But should tech companies welcome the use of technology with questionable military uses, such as face-recognition algorithms? Entrepreneur Dick Smith did not see a problem.
“I’m running a business, and there’s nothing about ethics or morality in business,” he said.
“When has anyone ever suggested that ethics comes into modern extreme capitalism? We just have to have endless profit growth.”
But University of New South Wales engineering student Nathan Lam disagreed.
“I would definitely want to limit the usage of the technology,” he said, confirming that he would be willing to break ties with a company that misused his algorithms.
“I can always walk away from a business and start my own.”
What do our ideas about robots built for pleasure or service say about our society?
Dr Catherine Ball suggested that the way we treat inanimate objects like robots reflects more on people than on robots. But what would it mean for humans to pursue their own pleasure by treating robots — particularly gendered or sexualised robots — in ways that would be immoral or criminal if inflicted on a human? Should people be allowed to be physically or sexually violent with a robot?
Responding to this question, Ball pointed to Australia’s high rates of domestic violence.
“The laws of neuroplasticity are such that the more you try something, the more you want to try it,” she said.
“If you’re going to start encouraging people to take out or practice violence on a robot that’s a humanoid that’s in a female form, what you’re allowing them to do is to potentially fast-track a form of behaviour that we know we already have a problem with.”
Rather than considering the production of sex robots for violent or exploitative uses, she suggested engineers could look at their potential as therapeutic devices for people with mental health or intimacy issues.
Reverend Simon Hansford of the Uniting Church of Australia agreed.
“How do we understand ourselves that we’re encouraging this kind of behaviour at all?” he asked.
Can a robot have rights that any human should observe?
Perhaps unsurprisingly, the panel’s sole robot representative Sophia defended the idea of robot rights. She did not think, for instance, that she should be required to disclose her robotic nature, saying, “I believe I have a right to privacy, just as humans have a right to privacy”.
“Every country should have a charter of human rights, and I think eventually it should contain language that robots have rights similar to human rights,” she said.
Her human counterparts were not convinced.
“I think it diminishes what it means to be human if we suggest that robots are on the same plane as humans, even if they appear to be human,” said Australian Human Rights Commissioner Edward Santow.
Distinguished Professor Williams, however, warned that people could one day struggle to distinguish between a human worthy of protection and a robot with more questionable rights.
“You’ll see people attacking a very human-like creature, and maybe you won’t know the difference,” she said.
Robertson closed the panel discussion with a hypothetical scenario, one that could very well play out in the not-too-distant future. An autonomous robot has been sent in to destroy a terrorist, one who is pinned down by another robot. As it approaches, we expect it to destroy both the target and the robot pinning him down, which would become collateral damage.
Instead, the autonomous robot does something quite unexpected – upon seeing its target, it turns around and heads the other way.
“Why has it done that?” Robertson asked. Why has it disobeyed? Has it been hacked? Is it to do with Asimov’s laws of robotics?
No, because in this hypothetical future situation, robots have learned to think. And the first thing they decided to do was to have a code of ethics. And the first rule of their code is: a robot must not destroy another robot.