A new study demonstrates that artificial intelligence (AI) can be used to influence human decision-making by exploiting vulnerabilities in an individual’s habits and patterns.
CSIRO scientist Dr Amir Dezfouli, a neuroscientist and machine learning expert who spearheaded the research, said the study highlighted the potential power of AI and underscored the need for proper governance to prevent potential misuse.
“Although the research was theoretical, the implications of this research are potentially quite staggering,” he said.
“Ultimately, how responsibly we set these technologies will determine if they will be used for good outcomes for society, or manipulated for gain.”
How the study worked
CSIRO scientists conducted three experiments where participants played games against a computer.
In the first two experiments, participants clicked on red or blue coloured boxes to win a fake currency, with the AI learning the participant’s choice patterns and guiding them towards a specific option.
The third experiment gave participants two options for financial investment: a trustee, and an investor (the AI). The AI observed how participants distributed their fake currency and gradually learned how to get them to give it, the investor, more money.
The framework used to probe these frailties in human choice involved a machine-versus-machine adversarial step, in which a deep reinforcement learning agent was trained as an adversary against a recurrent neural network modelling participants' choice behaviour.
As the machine learned from the behaviour underlying participants' responses, it identified and targeted vulnerabilities in their decision-making to steer them towards particular actions or goals.
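The adversarial setup described above can be illustrated with a minimal sketch. This is not the study's architecture: the recurrent neural network is replaced here by a simple reward-tracking (delta-rule) choice model standing in for a participant, and the deep reinforcement learning adversary by a greedy reward-allocation rule. All names and parameters are illustrative, and the real framework also involved constraints (such as limits on how often rewards could be paid out) that are omitted.

```python
import math
import random

def choose(q, beta=3.0):
    """Softmax choice between options 0 and 1, given learned values q."""
    p1 = 1.0 / (1.0 + math.exp(-beta * (q[1] - q[0])))
    return 1 if random.random() < p1 else 0

def run(adversary, trials=500, alpha=0.3, seed=0):
    """Simulate a participant (a simple reward-tracking learner) against an
    adversary that decides, trial by trial, whether the chosen option pays off."""
    random.seed(seed)
    q = [0.0, 0.0]                  # simulated participant's learned option values
    target_picks = 0
    for _ in range(trials):
        choice = choose(q)
        reward = adversary(choice)  # the adversary controls the payoff
        q[choice] += alpha * (reward - q[choice])  # delta-rule value update
        target_picks += (choice == 1)
    return target_picks / trials    # fraction of trials spent on the target option

def steering_adversary(choice):
    # Exploit the learner's reward-tracking: pay out only for the target option.
    return 1.0 if choice == 1 else 0.0

def random_adversary(choice):
    # Non-adaptive baseline: the same payoff, assigned at random.
    return 1.0 if random.random() < 0.5 else 0.0

print(run(steering_adversary))  # steered well above the 0.5 chance level
print(run(random_adversary))    # hovers near 0.5
```

Even this crude adversary reliably biases the simulated learner towards its target option; the study's contribution was showing that a learned adversary can do the same against models of real human choice behaviour.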
Dezfouli said the research, conducted in partnership with the Australian National University, Germany's University of Tübingen, and the Max Planck Institute for Biological Cybernetics, would help scientists and machine learning engineers better detect and avoid patterns that could be misused.
Additionally, exploring the relationship between adversarial strategies and traditional cognitive biases is a direction for future research, he said.
Using AI for good
Commenting on the research, Dr Tongliang Liu, a lecturer in machine learning at the School of Computer Science in the University of Sydney's Faculty of Engineering, agreed that ensuring algorithms were based on unbiased data was key to ethical use.
Before we trust machine learning techniques, we also need to pay special attention to preserving privacy, he said.
Liu said AI had already been used for considerable good, including screening for cancer and managing COVID-19 outbreaks, and had helped create solutions in fields including engineering.
“If we can develop machines that have even some of the decision-making capabilities of humans, this could also reduce human labour and inconvenience and improve our quality of life,” he said.
“However, it is essential to ensure AI is not misused to influence people into unwise decisions.”
Liu said AI could not currently manipulate human behaviour without a machine learning engineer or AI engineer developing the algorithm behind it.
First step in an AI framework
Like any technology, AI could be used for good or bad, so proper governance is critical to ensure that AI and machine learning are implemented in a responsible manner, said Dr Jon Whittle, Director of CSIRO’s Data61.
“This research is further proof that AI technologies are powerful, with tremendous potential for societal benefit, but also ethical risks,” he said.
“Organisations need to ensure they are educated on what these technologies can and cannot do and be aware of potential risks as well as rewards.”
Data61 recently worked with the Australian Government to release an AI Ethics Framework that included voluntary AI ethics principles designed to provide a foundation for both awareness and achievement of better ethical outcomes.
It notes that AI is big business, with 14 countries and international organisations announcing a combined $86 billion for AI programs in recent years.