Autonomous weapons and the idea of ‘killer robots’ can conjure images of a dystopian future where robots roam the earth searching for their next victim.
But how scared should we be, and how should the weapons be regulated?
Toby Walsh (pictured above with Baxter the robot), a professor at the School of Engineering at the University of New South Wales, believes autonomous weapons with no human control would revolutionise war and mark a step-change in the way it is fought.
He says they would make the world a much more dangerous place and be the perfect weapons for terrorists.
“It would be disastrous if we actually allow war to be fought with these technologies,” he says.
Walsh believes that, just as with chemical and biological weapons, there is a moral line that shouldn’t be crossed, because autonomous weapons lack the moral capacity to decide who they should or shouldn’t kill.
“Certainly, at the end of the day, these will always be machines and therefore they can’t be held accountable, whether they be given the right moral judgement or not,” he says.
Walsh isn’t the only person to be concerned about the potential of autonomous weapons.
A majority of United Nations states, including Pakistan and Iraq, have proposed starting negotiations next year on a new treaty to ban lethal autonomous weapons systems, while some countries have recommended a new mandate under the Convention on Certain Conventional Weapons to ensure “meaningful human control” over weapons systems.
But some UN states – including Australia, the United States and Russia – have said talks on fully autonomous weapons should continue in order to explore the potential benefits of such systems.
One argument for the advantage of autonomous weapons is that machines can be more impartial than humans in making decisions due to their lack of emotions and subjectivity.
But Walsh says that argument is undermined by the fact that we don’t yet know how to build such machines, which is one reason the technology needs to be regulated.
“It also ignores the fact that, even if we could, we don’t know how to build machines that can’t be hacked, or to ensure that any ethical restraints we build into a machine couldn’t be removed by bad actors,” he says.
“And everybody ignores the argument that this would create a new type of weapon, one that would be far more destructive, one that would change the duration, the speed and accuracy of war and therefore this would make warfare a much more terrible thing.”
Fourth industrial revolution
While the debate over autonomous weapons continues, the social impacts of other applications of artificial intelligence are already being seen and felt.
For example, Walsh says companies such as Rio Tinto are forging ahead with automation, deploying it in areas such as automated trucks.
While that has meant the loss of some jobs, Rio Tinto has said these employees will be reskilled to work elsewhere in the mining industry.
But some people are concerned that automation is going to completely kill off some jobs.
“There’s going to be some disruption and therefore people should have a concern, but some of it is being over-exaggerated,” Walsh says.
“There have been some studies that have come out that have painted pessimistic views of the outcomes … [But] they don’t try and quantify the number of jobs that will be created by the technologies.”
Walsh also says while there is a huge amount of uncertainty, some of the predictions around jobs could be wrong.
“For example, one of the jobs they predict to be automated is a bicycle repair person. No one is trying to build a bicycle repair robot. Even if they could, it wouldn’t be economically worthwhile to do so,” he says.
Instead, Walsh points to the positive impacts of automation and AI, such as economic benefits and improved safety, and says Australia’s push into AI is driven by the economic necessity of competing on a global stage.
This is particularly visible in the mining industry: Walsh says Australia has some of the most automated mines in the world, which has given the country a competitive edge.
“We wouldn’t have actually been able to keep many of those mines open and financially competitive if it hadn’t been for the benefit of automation and the productivity that that’s brought,” he says.
Despite all the concerns about what a future with AI will look like, Walsh says eventually people will need to work out how to let the technology into their lives and how to get the maximum benefit from it.
“There’s always going to be a period of disruption. We saw that with the first industrial revolution. There was 50 years of pain before people’s quality of life actually improved,” he says.
“I think our challenge really is to make sure that we minimise the amount of pain before we come out of it on the other side with a better quality of life, which I’m sure we will.”