New report asks what kind of relationship Australians want with AI in the future

All Australians should be asking themselves and their governments what role artificial intelligence will play in society, according to Chief Scientist Dr Alan Finkel.

Finkel made the comment while launching a recent report by the Australian Council of Learned Academies (ACOLA). The report urges the nation to reflect on what AI-enabled future it wants, as crucial decisions that will shape AI's impact are being made now.

“This report was commissioned by the National Science and Technology Council, to develop an intellectual context for our human society to turn to in deciding what living well in this new era will mean,” Finkel said.

The report’s findings stress the importance of a national strategy, a community awareness campaign, safe and accessible digital infrastructure, a responsive regulatory system, and a diverse and highly skilled workforce.

“By bringing together Australia’s leading experts from the sciences, technology and engineering, humanities, arts and social sciences, this ACOLA report comprehensively examines the key issues arising from the development and implementation of AI technologies,” said Professor Hugh Bradlow, Chair of the ACOLA Board.

Setting an example

Co-chair of the ACOLA expert working group, Professor Toby Walsh, said that AI offers great opportunities, provided we ensure it does not compromise our human values.

“As a nation, we should look to set the global example for the responsible adoption of AI,” he said.

Walsh himself has been active in setting an example, campaigning locally and internationally for a ban on autonomous weapons, or ‘killer robots’.

According to Walsh, his activism started when he realised how many of his colleagues in the AI field were dismissing killer robots as a problem of the distant future.

“From what I could see, the future was already here. Drone bombers were flying over the skies of Afghanistan. Though humans on the ground controlled the drones, it’s a small technical step to render them autonomous,” he explained.

To counter this apathy, Walsh organised a debate about autonomous weapons at a scientific conference, and was asked by the head of the Future of Life Institute to help circulate an open letter calling for the international community to ban emerging robot weaponry. Walsh gathered more than 5000 signatures, including those of Elon Musk and Steve Wozniak.

According to Walsh, the key issue is that we can’t let machines decide whether we live or die.

“Machines don’t have our moral compass, our compassion and our emotions. Machines are not moral beings,” he said, adding that unlike other banned weapons of mass destruction, autonomous weapons could use facial recognition to discriminate between victims.

Walsh has previously told create that engineers need to be aware of their responsibility to produce AI-enabled tools that meet the expectations of society. This applies not just to robotic killing machines, but also to more mundane applications such as smart home tech, facial recognition and news algorithms.

“There are some decisions we should make about where technology shouldn’t be in our lives, not just where it should be in our lives,” he said. 
