The Biometric Mirror asks users to reflect on the flaws of facial recognition tech

A new Black Mirror-like device at the University of Melbourne asks users to take a long, hard look at the implications and shortcomings of facial recognition technology.  

According to head researcher Niels Wouters, the machine, dubbed the Biometric Mirror, uses facial recognition technology similar to that already deployed by commercial shopping centres, private companies and some governments.

Currently located on the University of Melbourne campus, the ‘black mirror’ has been popular with students, who have their faces scanned and receive a profile of information in return. Some of it is banal, like gender and age, but some less so: the mirror rates the user’s kindness, happiness, level of responsibility and trustworthiness, and even their attractiveness.

Researchers hope the mirror will encourage discussion about the ethics of facial recognition technology by confronting users with the data that corporations and governments could be collecting without their knowledge or consent.

“We’ve seen an emergence of new technology, facial recognition predominantly, that’s being deployed across society. It’s being used in shopping malls, in airports, as part of hiring procedures. And the challenge is that these technologies are definitely not transparent,” Wouters said.  

“We typically don’t know what happens behind the camera, what analysis is being performed. Do we smile enough to get that job, for instance? Do we look aggressive according to the camera – and what are the consequences?”

Unreliable data

When researchers talk about AI holding a mirror up to humanity, they are often speaking about the way in which it learns from humans and takes on some of their traits – for better or for worse.

Like most AI, the Biometric Mirror makes judgements based on crowdsourced data, which often contains biases.

“If you happen to have a beard, you tend to turn out quite aggressive according to the system. And if you start taking into consideration ethnic origins, some of these values turn even more extreme,” Wouters said.

Major concerns have been raised about the use of such software by law enforcement agencies, where it has been known to produce a higher rate of false positives when attempting to identify people of colour.

In the case of the Biometric Mirror, the stakes are far lower, but even so, Wouters has been surprised by how willing users have been to agree with the data the machine produces, even when it seems wildly incorrect.

“We have this really subjective data set, or crowdsourced data set – the fact that we have fed that into a computer doesn’t mean that the analysis of the data all of a sudden becomes objective,” he said.

Trust test

The inclination to blindly trust AI is at the heart of the researchers’ concerns. They believe the lack of public knowledge about the mechanics of this kind of technology could have significant consequences.

With the Biometric Mirror, researchers hope to draw attention to this by creating output that is radical in its transparency.

“It really stays with you and takes you along the whole trajectory,” Wouters explained.

“It takes your photo, it shows you when it’s sending that photo for analysis and then when that analysis comes in, it shows each and every attribute that our system can distinguish from a photo. And it shows you how confident the algorithm is in its assumption.”  
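The article does not publish any of the Mirror’s code, but the flow Wouters describes – capture a photo, send it for analysis, then display every attribute alongside the algorithm’s confidence – can be illustrated with a minimal Python sketch. The analyse_photo service, attribute names and confidence values below are hypothetical stand-ins, not the Mirror’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Attribute:
    name: str          # e.g. "age", "happiness", "trustworthiness"
    value: str         # the system's guess for this attribute
    confidence: float  # how sure the algorithm is, from 0.0 to 1.0

def analyse_photo(photo: bytes) -> list[Attribute]:
    """Stand-in for the crowd-trained analysis service; returns
    canned results so the sketch runs without a real model."""
    return [
        Attribute("age", "32", 0.71),
        Attribute("happiness", "high", 0.64),
        Attribute("trustworthiness", "low", 0.41),
    ]

def show_analysis(photo: bytes) -> None:
    print("Photo captured; sending for analysis...")
    for attr in analyse_photo(photo):
        # Radical transparency: show each attribute the system can
        # distinguish, together with the algorithm's confidence in it.
        print(f"{attr.name}: {attr.value} (confidence {attr.confidence:.0%})")

show_analysis(b"<raw image data>")
```

The point of surfacing the confidence figure is exactly the one Wouters makes: a low-confidence guess presented as a bare label looks far more authoritative than it should.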

Timothy Miller, a researcher in the School of Computing and Information Systems at the University of Melbourne, thinks there is an ethical imperative to provide clear explanations for the way in which facial recognition technology works.

“A lot of the research has come from researchers and engineers themselves trying to understand what their models are doing. That’s naturally led to a series of explainable AI models that are probably good for other engineers, but they’re not good for the general public,” Miller said.

He thinks that both engineers and researchers need to make transparency in this area a priority.  

To that end, Wouters suggested an app that alerts people when facial recognition technology is in use, and gives them the opportunity to give or withdraw consent.

Miller thinks that a platform like this could also give people the opportunity to correct misleading information about themselves and improve the quality of the data set, although self-reflection could simply produce data with a different set of biases.  
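Neither researcher sketches an implementation, but the flow they describe – a notification when facial recognition is active, a recorded consent decision, and a channel for correcting misleading inferences – might look something like the following Python sketch. Every name here (notify_and_ask, consent_ledger, the venue string) is hypothetical.

```python
from enum import Enum

class Consent(Enum):
    GRANTED = "granted"
    WITHDRAWN = "withdrawn"

# Hypothetical in-memory stores; a real service would persist these.
consent_ledger: dict[str, Consent] = {}
corrections: dict[str, dict[str, str]] = {}

def notify_and_ask(user_id: str, venue: str) -> Consent:
    """Alert the user that facial recognition is in use at a venue
    and record their decision (simulated with a fixed answer here,
    standing in for a real prompt on the user's phone)."""
    print(f"[{user_id}] Facial recognition is in use at {venue}.")
    choice = Consent.WITHDRAWN  # placeholder for the user's response
    consent_ledger[user_id] = choice
    return choice

def submit_correction(user_id: str, attribute: str, value: str) -> None:
    """Let a user correct a misleading inference about themselves,
    feeding better labels back into the data set."""
    corrections.setdefault(user_id, {})[attribute] = value

if notify_and_ask("alice", "Example Shopping Centre") is Consent.GRANTED:
    print("Consent granted; scanning may proceed.")
else:
    print("Consent withdrawn; no analysis performed.")

submit_correction("alice", "trustworthiness", "high")
```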

Another solution is to contextualise facial recognition data. An employee from facial recognition vendor Vigilant Solutions told The Guardian that “upon reflection” the use of facial recognition technology in police investigations should be treated “no differently than someone calling in a possible lead from a dedicated tip line”.  

Data in the wrong hands

While the app described by Wouters and Miller goes some way towards solving the consent issues associated with facial recognition technology, it doesn’t entirely solve the disquieting problem of near-constant surveillance, or fears about what this technology could be capable of in the wrong hands.

Facial recognition already has some Orwellian applications in China, where it is a central part of the government’s controversial Social Credit System. Members of the public can lose credits when the technology picks up on minor indiscretions such as jaywalking. The loss of social credits can impede an individual’s ability to board flights or ride trains, and can restrict access to insurance and employment prospects.

The technology is also being used in Australia. Westfield, for example, uses it to track the age, gender and moods of shoppers.

However, despite its Big Brother reputation, facial recognition has an array of positive applications. In the Northern Territory, for example, it is used to identify unconscious patients admitted to hospital, as well as confused Alzheimer’s patients. But these applications need to be weighed against the right to privacy, and built on technology that is free from bias.

“I think there is a need in education to focus more on data consciousness, data awareness, privacy and so on,” Wouters said.  

“But I think if we were to do all of these things at the same time, we’re going towards a really exciting future where, indeed, facial recognition could benefit a lot of us.”

These are not questions with easy answers, but the transparency advocated by the Biometric Mirror – as well as the conversations that it inspires – is an important first step.
