People want ethical robots, but there’s one big problem in the way

People are messy, complex and often irrational (sorry, but it’s true). So if we want to program ethical robots, can we learn how to code those shades of grey?

“AI is a technology like most technologies – it is completely morally neutral,” AI expert Toby Walsh explains.

“And we make choices about whether it’s good or bad; whether it gets used to make wars more horrific, or whether it gets used to create autonomous vehicles and improve the quality of all of our lives.”

These choices – whether we use AI to create or destroy – are part of the imminent age of AI. According to Walsh, it will be a “golden age for philosophers”, who will need to work with engineers, developers and governments to decide how ethical robots ought to operate and what kinds of machines we should create, and to come up with definitive solutions to age-old ethical dilemmas.

Killer bots

Assumptions that a future dominated by artificial intelligence technology is a dystopia overrun with ‘killer robots’ are not unfounded. Earlier this month, the UN met to discuss how the international community ought to legislate on autonomous weapons. Walsh, a self-proclaimed AI enthusiast, has been outspoken in his concern about this technology.

He wrote an open letter to the UN, which was signed by leaders in the field including Elon Musk, warning them of the potential dangers of these autonomous weapons. He has since been named runner-up for the 2017 Arms Control Person of the Year award, beating Pope Francis, for the impact his letter had on raising awareness about autonomous weapons.

The letter’s signatories argued that autonomous weapons will be heartless killing machines, ones that can operate on a devastatingly large scale.  

“Instead of deploying an army of Terminators,” Walsh explained, “a terrorist group or a fringe country could buy and clandestinely deploy small, insect-like, AI-equipped drones capable of infiltrating buildings, exploiting personal data and causing mass casualties at very low cost in a way that is difficult to defend against or even deter.”

Human soldiers, and the people issuing their commands, have physical and mental limitations that would make a move like this impossible. There are not enough soldiers in the world to take out an entire city in the space of a few seconds, whereas autonomous weapons allow an individual to command a ruthless, pocket-sized armada.

They are, in Walsh’s words, “weapons of mass destruction”.

Why bother with ethical robots?

With all of these complications and potential dangers, it’s worth debating the need for developing artificial intelligence. It’s a question that Walsh is often asked.

The answer, he said, is that AI has the capacity to massively improve our lives on almost every level. Take self-driving cars, for example.

“The planet is immensely stressed,” he said.

“The climate is stressing the planet, the diminishing resources, the global financial crisis, we’ve got a huge number of problems that are stressing the planet globally. Our only hope is to embrace technology.

“The only reason we live better lives than our grandparents is because we did embrace technology. The only answer for our grandchildren is if we, again, embrace technology. It is, in fact, the only cards we’ve got to play.”

Automated brains

Fortunately, there are thousands of years of philosophical thought to help us make the kinds of choices that the creation of ethical robots demands. But unfortunately, philosophers haven’t come up with any definitive answers. This lack of resolution is problematic for computers, which, unlike humans, are didactic in their thinking.

Take, for example, the frustratingly unsolvable trolley problem, a version of which was first written in 1905.   

It goes like this: A runaway trolley’s brakes have failed and it’s careering towards a group of five people, all of whom will die if it continues on its current course. But you notice that you are standing near a lever that can divert the trolley onto another track, where it will hit only one person.

It’s a relatively easy choice for most people: they would opt to minimise the loss of life.

But it doesn’t end there.

Imagine you’re a doctor. There are five people in desperate need of organ transplants, without which they will die. There is a healthy young man who is a match for all five and whose organs would save their lives. Using the logic of the trolley problem, it would be ethical to kill him so that the other five patients can have his organs. Yet most people baulk at that conclusion, even though the arithmetic of trading one life for five is exactly the same.

There are many other variations on the problem, and the conversations at university pubs and, for that matter, in academic papers usually end with everyone deciding that they don’t exactly know what the right answer is – that perhaps there isn’t just one.

Sometimes, we will judge it to be morally acceptable for the trolley to hit one person to save five, and sometimes it is necessary to save one person at the expense of five. A lot of people would argue, for example, that we are obligated to save the life of one child over the lives of five people who are sick and ageing. (A team at MIT created a game to illustrate these dilemmas.)

But, as Walsh explained, computers are “very literal” and aren’t good at understanding this kind of moral complexity.

“The trolley problem has existed for years, and people have always faced the prospect that at any moment you might be driving down the road and you’ll face the trolley problem, and you’ll have to make that decision,” he said.

“What’s the ethical path to follow? In the past, we didn’t have to write this in advance and code a program that very explicitly says, ‘you’re going to trade off the numbers of people’ or whatever the appropriate decision is going to be.”
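To make that concrete, here is a minimal, purely hypothetical Python sketch of what it means to write the decision in advance. The function name, the casualty estimates and the tie-break rule are all invented for this article, not anything Walsh or any carmaker has proposed; the point is simply that the trade-off has to be spelled out as an explicit rule long before the vehicle ever meets the situation.

```python
# Illustrative only: a toy decision rule for a vehicle that must choose
# between two unavoidable paths. The estimates and the rule itself are
# hypothetical -- the point is that the trade-off must be written down
# explicitly, in advance, by a programmer.

def choose_path(estimated_casualties_a: int, estimated_casualties_b: int) -> str:
    """Return the path expected to harm fewer people; ties go to path A."""
    if estimated_casualties_a <= estimated_casualties_b:
        return "path_a"
    return "path_b"

# Example: staying the course (path A) is predicted to harm five people,
# swerving (path B) is predicted to harm one.
print(choose_path(estimated_casualties_a=5, estimated_casualties_b=1))  # -> "path_b"
```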

Robert Sparrow, a professor of ethics from Monash University who specialises in AI, disagreed with the idea that we need to come up with a definitive solution to complicated philosophical dilemmas to create technology that functions in a way that is ethically acceptable.

He believes that there is a way to marry the complex ways ethicists think with the didactic brains of machines.

“I don’t think this differs from the problem of institutional design more generally … we’re trading off human lives in public policy all the time,” he said, pointing to accidents that take place every day, in all kinds of different contexts. We don’t, for example, stop building railways because there have been accidents in the construction process in the past.

He conceded that sometimes machines might make the wrong decisions, “but the goal of a transport system is to sort of get people around the place, killing as few as possible,” he said.

“These machines do that better than people.”

The German government agrees. Germany is one of the first countries to legislate on this complicated ethical problem. A government body made up of lawyers, ethicists and tech thought leaders decided that autonomous cars ought to be programmed to minimise human injury, and that “the software may not decide on its course of action based on the age, sex or physical condition of any people involved.”
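The spirit of that guideline can be illustrated with a short, hypothetical Python sketch. The least_harm function below is invented here, not the actual regulation or any manufacturer's software: the only input the rule is allowed to see is how many people are at risk on each course of action, so attributes such as age, sex or physical condition simply cannot influence the outcome.

```python
# A toy encoding of the guideline's spirit, invented for illustration (this is
# not the German regulation or any manufacturer's software). The rule is only
# given the number of people at risk on each course of action; age, sex and
# physical condition are not part of its inputs, so they cannot sway it.

def least_harm(people_at_risk: dict[str, int]) -> str:
    """Return the course of action that puts the fewest people at risk."""
    return min(people_at_risk, key=people_at_risk.get)

# Example: staying on course endangers five people, swerving endangers one.
print(least_harm({"stay_on_course": 5, "swerve": 1}))  # -> "swerve"
```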

Walsh said that we need to decide if those same ethics ought to apply in Australia. With this logic, multiple children could be injured to save the life of one adult. This might be the right course of action, but it needs to be openly and transparently discussed.

Educate the people, educate the machines

To make these kinds of decisions, Walsh echoes the views of many of his peers in suggesting that our institutions are going to need to acknowledge the increasing importance of ethics. We are, he thinks, entering an era where organisations will have “a CPO, a chief philosophical officer who is going to guide ethical decisions.”

But the onus is not just on ethicists to create ethical robots.

“My fellow computer scientists are waking up to the idea that, in the past when we got spreadsheets, we didn’t really have to worry too much about the ethics,” Walsh said.  

“Now, people’s lives are at stake and what we’re building is influencing society in very profound ways, and with that comes responsibility.”

The ethical values of the makers themselves are particularly important when we consider the idea that machines, like all things, bear the mark of their creators. For this reason, Walsh stressed the importance of programmers and engineers learning ethics.

Furthermore, he believes that accelerating the inclusion of women in engineering might soon become a moral imperative.  

“The under-representation of women in AI and robotics is undesirable for many reasons,” he said in an article for the Sydney Morning Herald.

“Women will be disadvantaged in an increasingly technically focused job market. But it might also result in AI systems that fail to address issues relevant to half the population, and even to systems that perpetuate sexism.”  

Then there’s the question of educating the machines themselves. Could robots, with their lack of human weaknesses and optimised learning mechanisms, be better at learning ethics than their human creators?

“A lot of AI these days is about learning; it might be possible that we get machines to learn ethical values by watching what we do,” Walsh said.
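Walsh's caveat is easy to see in a deliberately tiny, hypothetical sketch. The example data and the imitate function below are invented for illustration, not any real system: a machine that learns ethics by copying human verdicts can only ever reproduce the standards present in those verdicts.

```python
# Toy illustration (not any real system): a machine "learns" a rule by
# imitating human verdicts on past dilemmas. Each example pairs a scenario
# -- (people harmed if we act, people harmed if we do nothing) -- with the
# choice a human actually made. The machine then copies the verdict of the
# most similar past case. If the human examples embody biased or sloppy
# standards, so will the machine.

HUMAN_VERDICTS = [
    # (harm_if_act, harm_if_wait) -> choice a person made
    ((1, 5), "act"),
    ((2, 3), "act"),
    ((4, 1), "wait"),
    ((5, 5), "wait"),
]

def imitate(harm_if_act: int, harm_if_wait: int) -> str:
    """Return the human choice from the closest previously seen scenario."""
    def distance(case):
        (a, w), _ = case
        return abs(a - harm_if_act) + abs(w - harm_if_wait)
    _, choice = min(HUMAN_VERDICTS, key=distance)
    return choice

print(imitate(1, 4))  # copies whatever the nearest human example did -> "act"
```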

The problem is that humans aren’t very good at being ethical.

“What they’ll learn from us,” Walsh explained, “is lower standards”.
