Can we ‘vaccinate’ artificial intelligence against adversarial attacks?


Researchers from CSIRO’s Data61 have developed a world-first ‘vaccine’ to protect AI algorithms from attacks. 

The technique, presented at the International Conference on Machine Learning in late June, works by feeding the algorithm distorted versions of its training data, such as images.

This exposes the system to a weaker version of an attack so it learns to recognise adversarial inputs, a process similar to how vaccinations build immunity to disease. Dr Richard Nock, machine learning group leader at Data61, said the team trains the algorithm on a small dose of distortion, resulting in a model that is more robust and immune to adversarial attacks.
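The article does not describe the method in detail, but the general idea of training on mildly distorted data can be sketched in a few lines. The PyTorch example below is a minimal, hypothetical illustration: the toy classifier, the random stand-in images and the bounded random noise are all assumptions, not the published technique.

```python
import torch
import torch.nn as nn

# Toy stand-ins (assumed): a tiny linear classifier and random "images".
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

images = torch.rand(64, 1, 28, 28)        # stand-in training batch
labels = torch.randint(0, 10, (64,))

# 'Vaccination' in miniature: train on slightly distorted copies of the
# data so the model tolerates small perturbations. Bounded random noise
# here is a simple stand-in for the paper's actual distortion procedure.
epsilon = 0.05                             # distortion budget (assumed)
for step in range(100):
    noise = torch.empty_like(images).uniform_(-epsilon, epsilon)
    distorted = (images + noise).clamp(0, 1)
    opt.zero_grad()
    loss = loss_fn(model(distorted), labels)
    loss.backward()
    opt.step()
```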

For example, by adding a layer of ‘noise’ over an image, attackers can deceive machine learning models into misclassifying the image. This has real-world implications for technology like autonomous cars.

“Adversarial attacks have proven capable of tricking a machine learning model into incorrectly labelling a traffic stop sign as a speed sign, which could have disastrous effects in the real world,” said Dr Nock.
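In the research literature, one common way to craft such noise is the fast gradient sign method (FGSM), which shifts each pixel a small step in the direction that most increases the model's loss. The sketch below assumes a toy classifier, a made-up label and an illustrative epsilon; the article does not say which attack the researchers studied.

```python
import torch
import torch.nn as nn

# Hypothetical trained classifier; an untrained stand-in keeps this runnable.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in image
label = torch.tensor([3])                              # its true class (assumed)

# FGSM: nudge every pixel a tiny step in the direction that increases the
# loss. The change is nearly invisible, but the prediction can flip.
loss = loss_fn(model(image), label)
loss.backward()
epsilon = 0.1                                          # perturbation size (assumed)
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("clean prediction:      ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```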

The researchers’ paper states that these ‘vaccination’ techniques are built from the worst possible adversarial examples, so the resulting models can withstand even the strongest attacks.
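In adversarial machine learning, ‘worst possible’ examples are typically approximated by an inner loss-maximisation step, such as projected gradient descent (PGD), with the model then trained against the result. The following is only a hedged sketch of that min-max recipe under assumed parameters, not the paper’s actual algorithm.

```python
import torch
import torch.nn as nn

def worst_case_perturbation(model, loss_fn, x, y, epsilon=0.1, steps=10):
    """Approximate the worst-case (loss-maximising) perturbation within an
    epsilon-ball via projected gradient ascent (a PGD-style inner loop)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += (epsilon / steps) * delta.grad.sign()  # ascend the loss
            delta.clamp_(-epsilon, epsilon)                 # project back
        delta.grad.zero_()
    return delta.detach()

# Robust ('vaccinated') training: at each step, train against the strongest
# perturbation found, not just mild random noise. Data and model are toys.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.rand(64, 1, 28, 28), torch.randint(0, 10, (64,))

for step in range(50):
    delta = worst_case_perturbation(model, loss_fn, x, y)
    opt.zero_grad()
    loss_fn(model(x + delta), y).backward()
    opt.step()
```

Training against the strongest perturbation an attacker could craft, rather than mild random noise, is what distinguishes a worst-case ‘vaccine’ from simple data augmentation.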

Data61 CEO Adrian Turner said this significant contribution to the growing field of adversarial machine learning will spur continued research, helping to ensure the positive use of transformative AI technologies.

“Artificial intelligence and machine learning can help solve some of the world’s greatest social, economic and environmental challenges, but that can’t happen without focused research into these technologies,” he said. 

To further this area of study, CSIRO has invested $19 million in an Artificial Intelligence and Machine Learning Future Science Platform, targeting AI-driven solutions for food security and quality, health and wellbeing, sustainable energy and resources, resilient and valuable environments, and Australian and regional security.

Dr Nock said Data61 is now scaling the vaccine approach to much larger computer vision problems, given how important machine learning has become to society.

“I would expect that within the next few months we are going to have something very concrete in this area,” he said. 

However, formulating a large-scale version of the vaccine to make algorithms more resilient is just one facet of the challenge; so far the technique has only scratched the surface of protecting computer vision systems.

Since attacks targeting machine learning are crafted by humans, Dr Nock said the AI must account for human adaptability in order for this area of science to progress to a larger scale.

“[With autonomous cars], the machine needs to make a decision, this decision needs to be accurate, and somehow some people can try to tamper with the decision. The problem I believe is much bigger in nature because it’s much more about autonomy and automatic decision making,” he said.

“We need to make sure that somehow this [autonomous] technology begins by learning to predict something automatically. We need to make sure that this technology essentially complies with the specifications that were initially put in the design of the algorithms.”
