Around the globe, businesses and governments are rejecting the use of artificial intelligence (AI) facial recognition technology, but in Australia the jury is still out.
The ABC revealed last month that the Australian Federal Police (AFP) was trialling a controversial facial recognition program, Clearview AI.
The news was likely a shock to many Australians, particularly following announcements from Silicon Valley giants IBM and Amazon that they would not be sharing facial recognition technology with police forces.
However, the AFP pushed back on initial concerns, saying the technology was being used by the Australian Centre to Counter Child Exploitation in its fight against child trafficking. Surely that’s a good thing, right?
This example highlights the fact that AI, and facial recognition technology in particular, occupies a moral grey area in the public consciousness. Can facial recognition technology be used for “good”? Does it infringe on privacy? Do its problems outweigh its benefits? Does Australia need tighter restrictions on its use?
It gets it wrong, a lot
Artificially intelligent facial recognition software works in a similar fashion to most other AIs. It must be trained to recognise faces, and this is where the problems begin.
Tim Miller, Associate Professor at the School of Computing and Information Systems at the University of Melbourne, explained the software needed to see many faces to learn the differences between each one.
“Facial recognition uses deep machine learning. It takes lots of images of different people’s faces and it learns how to map features,” Miller told create.
“So you have to have multiple images of individuals, and lots of those individuals, so the machine can learn what they look like and how they change if they’re wearing glasses or have different hair.”
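In practice, most modern systems implement this mapping as an “embedding”: a deep network converts each face into a list of numbers, and two photos are declared a match when those numbers are close enough. The sketch below is illustrative only, built on the open-source Python face_recognition library; the file names and the 0.6 match threshold are assumptions for the example, not details of any system mentioned in this article.

```python
# Illustrative sketch of a face-matching pipeline using the open-source
# face_recognition library (a wrapper around dlib). File names and the
# match threshold are hypothetical.
import face_recognition

# A deep network maps each detected face to a 128-dimensional embedding.
known_image = face_recognition.load_image_file("known_person.jpg")
cctv_frame = face_recognition.load_image_file("cctv_frame.jpg")

known_encoding = face_recognition.face_encodings(known_image)[0]

for encoding in face_recognition.face_encodings(cctv_frame):
    # Two faces are treated as the same person when the Euclidean distance
    # between their embeddings falls below a tolerance (0.6 by default).
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    print(f"distance={distance:.3f}:", "match" if distance < 0.6 else "no match")
```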
However, if the software isn’t trained on a sufficiently varied set of faces, it may learn to distinguish people with certain features or skin tones better than others.
“Facial recognition software has been proven to work much better with people of white skin, and this is down to the data it receives to learn,” Miller said.
“So if it’s seeing only white people, it will learn their features better than people of different skin tones.”
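One way this bias is made visible is in evaluation: auditors compute error rates separately for each demographic group and compare them. The sketch below uses synthetic numbers purely to illustrate the idea; a real audit, such as NIST’s Face Recognition Vendor Test, works from large labelled datasets rather than simulated distances.

```python
# Hypothetical evaluation sketch: surfacing demographic bias by computing
# the false match rate separately for each group. The distances below are
# synthetic; a real audit would use a large labelled dataset.
import numpy as np

rng = np.random.default_rng(seed=0)

def false_match_rate(distances, threshold=0.6):
    """Fraction of impostor pairs (different people) wrongly accepted as a match."""
    return float(np.mean(distances < threshold))

# Simulated embedding distances for impostor pairs. A model trained mostly
# on one group tends to separate that group's faces more cleanly, so its
# impostor distances sit further above the matching threshold.
impostor_distances = {
    "well_represented_group": rng.normal(0.9, 0.15, 10_000),
    "under_represented_group": rng.normal(0.7, 0.15, 10_000),
}

for group, distances in impostor_distances.items():
    print(f"{group}: false match rate = {false_match_rate(distances):.1%}")
```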
This racial bias can cause serious harm when facial recognition is used for policing.
In January this year, a man in the United States was arrested for an alleged robbery after facial recognition software identified him from a store’s surveillance video. Robert Williams was hauled in for questioning and kept in custody for 30 hours before being released on bail.
When he was shown the image from the video, Williams held it next to his face and asked officers, “You think all Black men look alike?” Although Williams had an alibi for the night of the robbery, officers relied on AI over human police work.
In 2018, the London Metropolitan Police trialled facial recognition software that flagged 104 “suspects”. Of these, 102 turned out to be false positives, a failure rate of 98 per cent. Despite this, the Met proceeded with installing facial recognition cameras in east London earlier this year.
It’s being used for oppression
Part of the issue with facial recognition software is around who uses it and why.
Professor Toby Walsh, Laureate Fellow and Scientia Professor of Artificial Intelligence at the University of New South Wales, said AI facial recognition technology would likely move past its racial biases, given time to learn. However, he said the harm it could do in the meantime made it hard to justify giving it that time.
“Humans are racially biased and very bad at human facial recognition, so it’s not surprising the computer is currently failing in that area,” Walsh told create.
“Given the time, it would likely learn and greatly surpass humans in that skill. But that’s not what worries me about facial recognition in the long term.
“The technology has the ability to remove some of our fundamental human rights. Our right to protest. Our right to disagree with politicians. This technology lets you scale surveillance nationwide. That’s something Orwell got wrong; it’s not people watching people, it’s computers watching people.”
Day to day, a regular person might use the software to unlock their phone or try a fun Snapchat filter. But they might not realise that the same software is watching and logging their movements via security cameras, sometimes for surprising reasons.
Recently the convenience chain 7-Eleven announced it would roll out facial recognition technology in 700 stores across the country to capture customer feedback.
Unfortunately, that same technology is allowing governments across the globe to arrest and punish people for speaking up.
“When you watch footage from Hong Kong protests, what’s the first thing they do? They take down the cameras,” Walsh said.
China is seen as the biggest deployer of facial recognition technology for government surveillance. Its use to curtail anti-government protests and persecute the Uighur minority has shocked human rights groups. The United Nations has expressed concern at its use as a policing tool and joined the calls for a moratorium on facial recognition software.
It isn’t well regulated
As with other technologies, Australia’s laws have failed to keep up with the rapid pace at which the software is being created and distributed.
However, any regulation in this area needs to be balanced against private business interests and must not stifle future AI research and development.
Miller said the government may need to reassess current laws before deciding to introduce new restrictions. He said there isn’t a solid argument to ban the use of facial recognition outright.
“I’m not in favour of new regulations,” he said.
“If the technology is being used to find missing people or stop trafficking, then it probably deserves our consideration, despite its failings in other areas.”
Walsh, on the other hand, said governments need to step up to the plate on the matter.
“We do need greater regulation … because as it stands it’s too easy to implement and anyone can use it,” he said.
“We have CCTV cameras everywhere and with a push of a button, they can be upgraded to facial recognition cameras. That’s very worrying.”
However, he contends that regulation shouldn’t halt research, and even sees areas where the technology can be useful.
“There are some benefits. The best story about the benefits comes out of New Delhi, India,” he said.
“The police went into orphanages across the city with facial recognition software and software for ageing up photos of children. With that, they managed to reunite about 3000 children with their families, which is amazing really.”
To hear more from Tim Miller about artificial intelligence, don’t miss Engineers Australia’s Introduction to Artificial Intelligence and Digital Ethics webinar on 27 August.