Will the technological singularity change everything we know about AI?

As the technological singularity approaches, engineers might need to start thinking about what it means to be human.

The idea of a technological singularity — a point beyond which technological progress accelerates and runs away from us — gets traced back to computer scientist and polymath John von Neumann in the 1950s.

In recent years, the date at which artificial intelligence will outstrip human intelligence has become a question put to computer scientists, engineers and others better placed than the layperson to speculate.

Putting aside issues such as the difficulty in defining and measuring “intelligence”, here are some educated guesses.

In 2045? Or 2300?

According to Rodney Brooks, the iRobot and Robust.AI co-founder, the date for human-level AI might be 2300.

A survey of 300 colleagues by University of NSW Professor of Artificial Intelligence Toby Walsh produced a median prediction of 2062.

And Google futurist and Director of Engineering Ray Kurzweil predicts AI will pass a valid Turing Test in 2029 and reach “singularity” level in 2045.

Harry Turner, Aurecon’s Technical Director, Infrastructure, grew up with science fiction such as Arthur C. Clarke’s 2001: A Space Odyssey, and with “one of the first TRS-80s”. Science fiction and programming are old passions, and his interest in the possibility of machines smarter than humans goes way back.

The singularity will require thinking about what it is to be human, said Harry Turner.

That interest intensified, however, after he heard Kurzweil’s keynote at the Australian Engineering Conference in September 2018, “Thinking Machines — the Promise and the Peril”. Turner added that Walsh’s 2018 book, 2062: The World that AI Made, was another strong and recent influence.

Turner believes that “artificial general intelligence” and the singularity are inevitable, given enough time, though the outcomes can be shaped by us. He said it’s time to spend more effort on the ethical questions connected to a world shared with really smart machines.

“There’s just this huge disparity that you see between the optimists and the pessimists over the future of AI,” Turner told create, referring to the opinions put forward by experts in the field.

“The one thing they all tend to agree on is that the singularity is coming. It’s just a question of whether that’s going to be a positive or a negative thing for mankind.”

While the current and near-term effects of AI progress on people’s lives and careers get plenty of play in the media, that’s not the whole story, Turner believes.

“People focus in on the jobs that are going to be lost in the improvements, and autonomous vehicles and all this sort of stuff, and it’s quite right that we should focus and be concerned,” he said.

“But the next horizon isn’t actually that far behind it. And that’s a much bigger change for us all.”

Turner mentioned the unintended consequences attached to any significant technological change.

Among the influential names highlighting potentially negative scenarios is Nick Bostrom, a University of Oxford philosophy professor and author of Superintelligence: Paths, Dangers, Strategies.

His warnings include the risk of a misalignment of goals between human and non-human intelligences, illustrated by the “paperclip maximiser” thought experiment.

The paperclip maximiser

A King Midas-style fable, the thought experiment imagines a machine whose sole goal is to make as many paperclips as possible. Able to constantly self-improve, it thwarts everything standing between itself and its insatiable quest to turn as many molecules in the universe as it can into paperclips.

The singularity assumes that once an artificial intelligence reaches a certain point, it will self-improve exponentially, its capabilities pulling away from human-level intelligence at an increasing pace.

(The concept has its critics. A summary of six arguments against it is given in Walsh’s paper The Singularity May Never Be Near.)

Disagreements around this highly speculative topic can be heated, but Turner believes we owe it to ourselves to think about the future, as unknowable as it is. He added that, in the absence of facts, disagreements are sometimes based on faith and can therefore take on a kind of religious flavour.

“And religious argument is often fairly heated, as well, because it just assumes that sort of style,” he said.

“Which is not what we’re used to as engineers and scientific people.”

Turner is not alone in his concerns around super-intelligent machines. Entrepreneur Elon Musk, the late physicist Stephen Hawking and Microsoft co-founder Bill Gates are among those who have urged caution.

Proceeding successfully will mean paying as much attention to human factors as to technical ones. Engineering questions will need to encompass philosophy, ethics and other issues.

“It’s involving the right team approach, and not just letting the software programmer make all the choices that will potentially result in bad moral or ethical decisions … the consequence of getting this wrong are far more extreme [than elsewhere],” Turner said.
