Artificial intelligence will be able to do many things – destroying the world won’t be one of them, says Professor Toby Walsh.
In the 2013 movie Her, a lonely man called Theodore (Joaquin Phoenix) falls in love with his new operating system, Samantha (voiced by Scarlett Johansson). Critically acclaimed, the movie won the Academy Award for Best Original Screenplay and was nominated for Best Picture.
However, the acclaim wasn’t limited to the arts community. According to one of Australia’s top artificial intelligence (AI) experts, Toby Walsh, the film resonated with his community too.
“Unfortunately, if you ask AI researchers which AI movie they like, they complain that most of them paint such a dystopian picture of what AI’s going to do to the planet,” he said.
“One that I like, and many of my colleagues have said they like as well, is the movie Her which is not a very dystopian picture at all, and gets something very right, which is that AI is the operating system of the future.”
Walsh said the way we interact with computers has evolved from plugging wires into the front panel of the machine, to machine-code programming, to MS-DOS with its command-line interface, and ultimately to the graphical user interface we are all used to today.
“The next layer is going to be this conversational one. You already see the beginnings of that in systems like Siri and Cortana,” he said.
“As we move more to the Internet of Things, your house is full of devices that are connected to the internet but don’t have screens or keyboards. The front door, the light switch, the fridge, all of these are going to be networked together. There’s only one interface you can have with these, which is a voice interface.
“You’ll have this ongoing conversation that follows you around, and authenticates you on the biometrics of your voice. It will learn everything about you and your preferences. It will be very much like the movie. People will get quite attached to this person they’re having the conversation with all the time.”
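This is speculation, but the architecture Walsh sketches reduces to a simple loop: identify the speaker by their voice, transcribe what they said, infer an intent and dispatch a command to a networked device. The fragment below is a purely illustrative Python toy under that assumption; transcribe and identify_speaker are hypothetical stubs standing in for real speech-to-text and voice-biometric services, and none of it is drawn from any product Walsh mentions.

# Illustrative sketch only: a toy "voice as the interface" loop for networked
# home devices. transcribe() and identify_speaker() are hypothetical stubs
# standing in for real speech-to-text and voice-biometric services.

DEVICES = {
    "front door": lambda action: print(f"front door: {action}"),
    "light": lambda action: print(f"light: {action}"),
    "fridge": lambda action: print(f"fridge: {action}"),
}

KNOWN_SPEAKERS = {"theodore"}


def transcribe(audio: bytes) -> str:
    """Placeholder for a speech-to-text service."""
    return audio.decode("utf-8")  # pretend the 'audio' is already text


def identify_speaker(audio: bytes) -> str:
    """Placeholder for voice-biometric identification."""
    return "theodore"


def handle_utterance(audio: bytes) -> None:
    speaker = identify_speaker(audio)
    if speaker not in KNOWN_SPEAKERS:
        print("Sorry, I don't recognise your voice.")
        return
    text = transcribe(audio).lower()
    # Extremely naive intent matching: look for a known device name in the text.
    for device, act in DEVICES.items():
        if device in text:
            action = "unlock" if "unlock" in text or "open" in text else "toggle"
            act(action)
            return
    print("I didn't catch which device you meant.")


if __name__ == "__main__":
    handle_utterance(b"please unlock the front door")

A real system would replace the stubs with streaming speech recognition, speaker verification and a far richer intent model; the point of the sketch is only that voice becomes the single interface to many screenless devices.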
He said it’s hard to think of an area that artificial intelligence is not going to touch in some way.
“It’s going to touch education, it’s going to touch healthcare, it’s going to touch pretty much every form of business you could imagine,” he said.
“Anything cognitive that we do, you can imagine it touching. It’s hard to begin to think about what it won’t change.”
Next move
Walsh said there are a lot of misconceptions out there about what artificial intelligence is able to do.
“If you summed up all the things that you read in the newspapers, then you’d imagine it’s only a matter of moments before the machines are going to be taking over, which is far from the truth,” he said.
“There are still a lot of significant hurdles to overcome before we can actually make machines as intelligent as us, and likely more intelligent than us. We recently saw the announcement of AlphaGo Zero, where they just gave it the rules of the game Go and it learned everything from scratch in just three days, then beat the program that beat Lee Sedol (World Go champion) 100-0.
“That was pretty impressive. But we still build only narrow intelligence, programs that can do one task. We have made almost no progress on this idea of artificial general intelligence, programs that can match the breadth of abilities of the human brain.”
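“Learned everything from scratch” refers to self-play: AlphaGo Zero was given only the rules and improved by playing against itself, guided by deep neural networks and Monte Carlo tree search. The sketch below is a deliberately tiny, hedged illustration of that self-play idea using the toy game Nim and simple tabular value learning; it bears no resemblance to AlphaGo Zero’s actual architecture, but it shows in miniature what learning from the rules alone looks like.

# Illustrative sketch: tabular self-play learning on a toy game (Nim), in the
# spirit of "give it only the rules and let it learn by playing itself".
# This is NOT AlphaGo Zero's algorithm (which pairs deep networks with Monte
# Carlo tree search); it only illustrates the self-play idea.
import random

N_STONES = 15          # starting pile; players remove 1-3 stones, last stone wins
ACTIONS = (1, 2, 3)
ALPHA, EPSILON, EPISODES = 0.1, 0.2, 50_000

# Q[(stones_left, action)] = estimated value for the player about to move
Q = {}

def legal(stones):
    return [a for a in ACTIONS if a <= stones]

def choose(stones):
    moves = legal(stones)
    if random.random() < EPSILON:
        return random.choice(moves)
    return max(moves, key=lambda a: Q.get((stones, a), 0.0))

for _ in range(EPISODES):
    stones = N_STONES
    while stones > 0:
        action = choose(stones)
        nxt = stones - action
        if nxt == 0:
            target = 1.0                       # taking the last stone wins
        else:
            # the opponent moves next; their best outcome is our worst
            target = -max(Q.get((nxt, a), 0.0) for a in legal(nxt))
        old = Q.get((stones, action), 0.0)
        Q[(stones, action)] = old + ALPHA * (target - old)
        stones = nxt

# Print the greedy policy the self-play learner has settled on.
for s in range(1, N_STONES + 1):
    best = max(legal(s), key=lambda a: Q.get((s, a), 0.0))
    print(f"{s:>2} stones -> take {best}")

Given enough episodes, the greedy policy it prints should match the classic Nim strategy of leaving the opponent a multiple of four stones, discovered purely by playing against itself.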
He suspects it will be at least 50 years, and possibly longer, before we get to machines that are as intelligent as us.
“I’m still hopeful it might happen in my lifetime; that would be a nice achievement. It’s not impossible, but it could easily not happen for 100 years, or 200 years. One should always have a healthy respect for the human brain. It is the largest, most complex system we’ve seen in the universe by orders of magnitude; nothing approaches the complexity of the billions of neurons and the trillions of connections the human brain has. Nothing!”
The awakening
Walsh was born in southeastern England, just outside London, and confesses that as a boy he read too much science fiction.
“From about the age of seven or eight I started to read about robots and intelligent machines,” he said.
“Maybe I didn’t have any imagination, but it’s what I decided I wanted to do in life – try and build those things that I read about. The more I thought about the problem as I got older and could understand a bit more about it, the more I realised it was actually one of those challenging problems that wasn’t going to go away anytime soon, like how did the universe come into existence?”
After studying maths and physics at Cambridge University, he did his PhD in artificial intelligence at the University of Edinburgh. There he met an Australian philosophy professor who invited him to Canberra to teach at a summer school each year for the next ten years or so.
“I would come out for a couple of weeks or a month in the middle of December and January, and escape the British winter,” he said.
“I learnt to love Australia in that time.”
Eventually, he landed a permanent position at National ICT Australia (NICTA), now part of the CSIRO’s data innovation group Data61, and at the University of NSW, where he is Scientia Professor of Artificial Intelligence.
He is particularly interested in the interface between distributed optimisation, social choice, game theory and machine learning and believes now is probably the most exciting time to be an AI researcher.
“I started as a postgraduate researcher at what was the tail end of the AI boom, the expert system boom,” he said.
“It was actually already on the downswing at that point. Then it was what was called the AI winter. We’re definitely in spring, if not summer by now. It’s a very exciting time. You can’t open the newspaper and not read several AI stories.”
Of course, this increasing interest opens the door to misinformation about AI as well. So, last year, Walsh decided he “had a duty” to write his own definitive guide to the field: It’s Alive! Artificial Intelligence from the Logic Piano to Killer Robots.
It’s Alive!
One big question, which takes up a large chunk of Walsh’s book, is what will happen to human jobs in the future if many tasks can be performed better by machines?
“We don’t really know the answer to this,” he said.
“Lots of new jobs will be created by technology, that’s always been the case. Most of us used to work out in the fields, farming. Now just three per cent of the world’s population is involved in farming. Lots of jobs were created in offices and factories that didn’t exist before the industrial revolution.”
However, he acknowledged there is a chance it could be different this time around.
“Previously when our brawn was replaced we still had a cognitive advantage over the machines,” he said.
“If we don’t have a cognitive advantage over the machines, what is the edge that humans have? We have social intelligence, emotional intelligence that machines don’t have. We have creativity. Machines are not as adaptable as humans yet. It could be the case that we end up with fewer people employed than before. That is possible. One thing is absolutely certain, that there will be jobs displaced and new jobs will be created. And the new jobs will require different skills to the old jobs.”
He said the caring, artistic and scientific professions should all survive: professions where there is no natural limit to the potential of the job, unlike, say, ploughing fields or assembling widgets, repetitive tasks that can be done by robots, after which humans are no longer needed in that role.
Interestingly, he feels some ancient jobs will grow in stature while some newer jobs might be very short-lived.
“One of the newest jobs on the planet is being an Uber driver. But Uber are already trialling autonomous taxis. The driver is the most expensive thing in the Uber. It’s clearly part of their business plan to get rid of them as quickly as possible. That’s probably one of the first jobs that’s going to completely disappear,” he said.
“Whereas one of the oldest jobs on the planet, with a very venerable history, is a carpenter. That is probably going to be one of the safest, in the sense that hand-carved objects are going to be increasingly valued. We’ll appreciate those things where we can see the touch of the human hand, and if we believe economists, their value will increase.
“In fact, if you look at hipster culture today, you can already see the beginnings of that: craft beers, artisan cheese, and hand-baked bread. It seems to me that there might be some beautiful symmetry, where we’ll actually all end up doing the jobs that we used to do 500 years ago when we were craft people.”
This is where the choices he mentioned previously come into play again.
“We need to think about how we might need to change education so that people are educated for whatever the new jobs are; whether we’re going to have more free time; whether income is going to be distributed well enough,” he said.
“We seem to be suffering from an increase in inequality within society and technology may amplify that. That’s certainly a worrying trend.”
Another area for discussion is how far we want AI to evolve. Do we want it to get to consciousness and what would the consequences of that be?
“Supposing machines become intelligent but not conscious, then we wouldn’t have to be troubled if, for example, we turn them off or make them do the most terrible, repetitive, dangerous or other activities that we wouldn’t ask a human to do,” he said.
“So we could be saved from some difficult ethical quandaries. Whereas, if they are conscious, maybe they could be thought of as suffering, and then maybe we’ll have to give them rights, so we’ll have to worry about these things. It could be useful if they’re not conscious.”
Killer robots
Walsh said there are issues regarding the use of artificial intelligence where we should be concerned. Most notable is its use by the military.
In 2015, he coordinated an open letter to the United Nations signed by more than 1000 leading researchers in artificial intelligence and robotics, including Apple co-founder Steve Wozniak and Tesla CEO Elon Musk, as well as other luminaries such as physicist Stephen Hawking and philosopher Noam Chomsky. The letter called for a universal ban on lethal autonomous weapons.
“Certainly today machines are not morally capable of following international humanitarian law,” he said.
“Even if we could build machines that were able to make the right moral distinctions, there are lots of technical reasons in terms of industrialising warfare, changing the scale at which you can fight warfare that would suggest to me that it would be a very bad road to go down.”
He said the world has agreed in the past to ban certain nuclear, chemical and biological weapons after seeing the horrific impact they can cause, and it also preemptively banned blinding lasers after realising their potential for horror.
His activism on the issue has seen him invited to the United Nations in both New York and Geneva to argue the case for a ban on autonomous weapons.
“It’s very surreal to find oneself in such an auditorium having conversations with ambassadors,” he said.
“It’s also gratifying how flat the world is. I had a meeting with the Under Secretary General, who’s the number two in the United Nations. He was asking my opinion about autonomous weapons. It’s been a very interesting ongoing journey, in fact.”
It has also opened his eyes to the reality of international diplomacy and how difficult it can be to get things done.
“Pleasingly, they have gone from the issue first being raised less than five years ago, to three years of informal discussions, to voting unanimously last year to begin formal discussions through what’s called a group of governmental experts,” he said.
“I’m told, for the United Nations, that is lightning speed. But this is very slow from a practical perspective as the technology is advancing very rapidly.”
He said they warned a couple of years ago in their open letter that there would be an arms race. Now that arms race has begun, with prototype weapons being developed by militaries around the world in every sphere of battle: in the air, on the sea, under the oceans and on land.
“There’s plenty of money to be made out of selling the next type of weapon to people. There’s a lot of economic and military pressure. You can see why the military would be keen to have assistive technologies,” he said.
And he acknowledged there are some arguments for autonomous weapons.
“You can see, certainly from an operational point of view, there are some obvious attractions to getting soldiers out of the battlefield, and having weapons that follow orders very precisely, weapons with super-human speed and reflexes, weapons that will fight 24/7, weapons that you can risk on the riskiest of operations, that you don’t have to worry about evacuating from the battlefield when they’re damaged,” he said.
“It’s not completely black and it’s not completely white. But I think the weight of evidence is strongly against having autonomous weapons.”
However, it is ethical questions such as this that make working in the field so interesting.
“It is like the famous Chinese curse, ‘May you live in interesting times’,” he said.
“It’s a very interesting time, because we’re starting to realise if we do succeed, then we have to worry about exactly how we use the technology. How do we make sure it doesn’t get misused? It’s a morally neutral technology, it can be used for good or for bad. We have to make the right choices so that it gets used for good.”
AI, robotics and the future of engineering is a key theme at this year’s Australian Engineering Conference.