From face recognition to killer robots, engineers need to think about the human impacts of artificial intelligence, says one of Australia’s leading experts.
Professor Toby Walsh is a top artificial intelligence (AI) researcher and a vocal advocate for a national and international ban on autonomous weapons, or killer robots.
As the technology becomes a more important part of our lives, engineers also need to discuss the issues and impacts introduced by “more mundane” applications such as facial recognition and news algorithms, Walsh told create.
“There are some important ethical dimensions to the way [the technology] is being used and how it will impact our lives,” he said.
To this end, CSIRO is seeking input on a discussion paper titled Artificial Intelligence: Australia’s Ethics Framework. The paper, which Walsh helped review, identifies ways to “achieve the best possible results from AI, while keeping the well-being of Australians as the top priority”.
Walsh encouraged engineers and others in the tech industry to contribute their expertise and experience. He added that one of the real challenges of new technology is unexpected consequences, so knowledge sharing is important.
Using tech for good, not evil
Concerns over privacy and influence are affecting AI use around the world. For example, San Francisco recently became the first US city to ban police from using facial recognition software, and a team of election researchers is analysing how Facebook influences election results.
Like many technologies, AI could be used for “good or bad” purposes, Walsh said. For example, facial recognition algorithms are being put to work in scanning the internet for human trafficking victims.
“That’s an immense evil in our world that’s being tackled by the use of this technology. Equally, the same technology is being used by the Chinese authorities to help suppress religious minorities … It’s all about making the right choices,” he added.
Walsh pondered whether the downsides of facial recognition outweigh its benefits, and whether we should be prepared to “pay any price” for ‘Big Brother’-style surveillance.
“Authors like George Orwell have painted a very good picture of the sort of world we could end up in if we use the technology without respecting the values that we spent a long time trying to build,” Walsh said.
Ethical engineering
As engineers build the AI systems of the future, Walsh stressed that they need to be aware of their responsibility to produce tools that meet the expectations of society.
“There are some decisions we should make about where technology shouldn’t be in our lives, not just where it should be in our lives,” Walsh added.
Walsh said Australia is at the forefront of developing AI, and has been an early adopter of automation in areas such as mining, ports and airports. But if the technology is not regulated, it could undermine people’s trust and affect whether they adopt or accept it.
“We’ve given the tech companies a huge amount of freedom and they’ve brought really interesting innovations, but not all of it’s been good. We’re discovering now that we could actually stifle innovation if we don’t responsibly regulate,” Walsh added.
The good news is that there are a lot of initiatives underway to make sure our future tech does not go down the wrong track. As well as the CSIRO discussion paper, the Institute of Electrical and Electronics Engineers (IEEE) has released the first edition of its crowd-sourced standard Ethically Aligned Design after extensive feedback from engineers and the broader community. Tech companies such as Google are also making their ethical guidelines public.
Walsh said it was time to take the next step and think about how we put ethical principles into practice. This will include laws, regulations and standards, as well as education for engineers and others involved in design and production. Another suggestion, Walsh said, is a ‘Turing mark’, which would be displayed on systems made in accordance with ethical standards.
And to make sure new technology benefits the widest range of people, there needs to be a broad representation of ages, genders, ethnicities and cultures among engineers and AI designers.
“The technology offers a great potential to include everyone, but that doesn’t happen unless we make sure those voices are heard when we are building it,” Walsh said.
While concerns about how AI affects society need to be properly tackled, Walsh is confident that the technology will have a lasting positive impact if used responsibly.
“It can take away the dull, difficult and dangerous things from our lives and let us focus on the important things,” he said.