Getting personal: The ethical engineering behind biometric technology

Image credit: Getty Images

Biometric technologies introduce entirely new levels of risk for individuals. What considerations are needed in the ethical engineering of these systems?

If someone’s credit card details are stolen through a cyberattack, a data leak, or simply as a result of poor management on the card owner’s part, that card can be cancelled and replaced by another one with a new number.

Similarly, if any type of online account is hacked, the password can be changed. And if sensitive contact details are obtained, phone numbers and emails can be swapped out. 

However, if biometric information about a person – their face, fingerprints, retina scans, or even their gait – falls into the wrong hands, there is nothing the individual can do. 

Their most personal and permanent identifiers can now be used against them without warning, and nothing can be done to change that.

That is the biggest challenge around the use of biometric technology, according to Laureate Fellow and Scientia Professor of AI Toby Walsh, who is Chief Scientist at the University of New South Wales’s new AI Institute. 

“You can’t change your face,” Walsh, who was recently announced as a member of the Australian Government’s new Artificial Intelligence Expert Group within the Department of Industry, Science and Resources, told create.

“If someone gets hold of any of your biometric identifiers, you can’t undo that or change it, as you can with a password.”

“If someone gets hold of any of your biometric identifiers, you can’t undo that.”
Toby Walsh

There are numerous other challenges for engineers designing biometric systems, and processes to follow to ensure those challenges are met and risks are mitigated. But at the core is the potentially devastating outcome for the individual, and for the organisation from which the data was breached.

If such enormous risk for individuals and organisations is to be assumed, Walsh said, it had better be worth it.

Act proportionately

The first question any engineer should ask when designing a system that could involve a biometric element is whether such technology is justified, said Samantha Floreani, privacy advocate and Head of Policy at Digital Rights Watch.

Samantha Floreani

“Before even discussing how to do it well, the first question is whether biometrics should even be used at all,” Floreani said. “Is it a reasonable use of this kind of technology, given that it does come with quite significant risks? Is it proportionate?”

Floreani illustrated her point with an example from the Office of the Australian Information Commissioner (OAIC), which found against retailer 7-Eleven in 2021.

The company, the OAIC reported, had “interfered with customers’ privacy by collecting sensitive biometric information that was not reasonably necessary for its functions and without adequate notice or consent”.

7-Eleven had requested customer feedback through surveys about the in-store experience, completed on tablets installed in 17 stores.

Those tablets contained front-facing cameras, through which the organisation harvested respondents’ facial images, purportedly to understand the demographic profile of those respondents.

“Before even discussing how to do it well, the first question is whether biometrics should even be used at all.”
Samantha Floreani

OAIC Commissioner Angelene Falk said entities must carefully consider whether they need to collect this sensitive personal information, and whether the privacy impacts are proportional to achieving the entity’s legitimate functions or activities.

“Was that a reasonable use of biometric technology?” Floreani asked. “It comes back to the question of proportionality. Was there a way 7-Eleven could have gone about collecting customer feedback in a way that was less privacy-invasive? Of course there was. Using biometric data was wildly disproportionate.”

More proportionate, Walsh said, is the use of biometric technology in environments that require high security, such as airports and military installations, as well as environments in which safety could be a factor, such as hospitals or mines.

But even then, there should always be three other essential ingredients: communication, consent and choice – or the ability to opt out.

Using the data

Clear communication around the purpose and process of biometric systems, and the management of the data collected, is a big part of the discussion around ethics in biometrics, said Nick Stanton FIEAust CPEng, Director of I2I Collaborative Executive Solutions.

Nick Stanton FIEAust CPEng

“Are you clearly informing people that you’re going to harvest this data?” asked Stanton. “And if so, how are you going to use it? What are you doing with the information and where is it held?”

If all an organisation offers is “we’re using this technology, and just trust us on how we use the data”, said Stanton, who is also a facilitator at Engineering Education Australia, then that is not a reassuring start.

“There must be absolute transparency around what you’re collecting, why you’re collecting it, where it is kept and for how long,” he said. “Is it legally or ethically correct behaviour by the organisation if you have no idea what you’ve signed up to through the simple act of entering their store? Of course it’s not.”

Walsh agreed, referring to a case study involving Bunnings and Kmart, which were also investigated by the OAIC. 

“There was controversy in 2022 with facial recognition used in those stores,” Walsh said. “Bunnings had apparently put up a small sign at the back of the store saying if you come into the store, you consent to the use of this technology.

“And actually, there was a good reason behind it. At Bunnings, there was an increasing amount of violence being committed against staff. The technology was being used to try to reduce this. But unfortunately, they didn’t explain it at all.”

“There are documented cases of people being wrongly arrested on the basis of facial recognition,” Toby Walsh said. Image credit: supplied

Whether it is for a positive purpose or not, there must also be a discussion around the fact that the technology is not foolproof, Walsh said. 

It makes mistakes, and these errors have disproportionately affected people of colour, Indigenous people and women, particularly those at the intersection of these groups, such as women of colour.

“There are documented cases of people being wrongly arrested on the basis of facial recognition, leading to some people losing their jobs,” Walsh said. “These cases invariably turn out to be around people of colour.”

“The big question is whether you really need to store or process that data on the cloud.”
Dr Niels Wouters

Dr Niels Wouters, principal researcher at Paper Giant, said the location in which data is held is a vital part of the conversation, both in terms of design and in what must be communicated to the market and to customers.

“Does that data live locally, on a small device that is installed in the building itself?” asked Wouters, who was previously an academic at the University of Melbourne’s School of Computer Science, where he conducted research into the ethics of artificial intelligence. 

“Or are we sending that data on to the cloud? This is a red flag. Is it stored for a couple of days, months or years? What if someone is able to access that environment maliciously? Who manages the data? And what damage could they, or others, do with the data?

“The big question is whether you really need to store or process that data on the cloud. There is always risk with that.”
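To make the distinction concrete, one common pattern is “match on device”: the raw capture and the derived template never leave local hardware, and only a yes/no result is ever transmitted. The sketch below uses invented function names and placeholder logic; it is not a real biometrics API.

import numpy as np

def extract_template(image: np.ndarray) -> np.ndarray:
    # Stand-in for an on-device model that reduces a face image
    # to a fixed-length numeric template. Placeholder logic only.
    rng = np.random.default_rng(seed=int(image.sum()) % 2**32)
    return rng.standard_normal(128)

def match_on_device(image: np.ndarray, enrolled_template: np.ndarray,
                    threshold: float = 0.6) -> bool:
    # Compare a fresh capture against the template stored locally.
    # Only this boolean result ever needs to leave the device.
    probe = extract_template(image)
    score = float(probe @ enrolled_template) / (
        np.linalg.norm(probe) * np.linalg.norm(enrolled_template))
    return score >= threshold

The architectural choice is the point: if matching happens locally, there is no central trove of templates on the cloud for an attacker to breach.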

“There’s often a sense that it’s relatively harmless stuff,” Dr Niels Wouters said. Image credit: supplied

Meaningful consent

The question of consent is a difficult one to answer if people do not have a good understanding of what they’re consenting to, Wouters said. 

That’s an issue, because “biometric data” is increasingly becoming a catch-all term, and we are losing sight of what it actually entails.

“There’s often a sense that it’s relatively harmless stuff,” he said. 

“For instance, with facial recognition technology, we often think that it’s just a photo. And often a photo of one’s face isn’t bad; we share photos on social media every day. But besides capturing a photo of your face, the technology is capturing the landmarks of your face, capturing it from various angles. It becomes a unique and living personal identifier.

“When we use the term biometric data, some people have already left the conversation because they don’t really know what it means.”

“When we use the term biometric data, some people have already left the conversation.”
Dr Niels Wouters
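As a rough illustration of Wouters’ point, here is the kind of record a “photo” can become once landmarks are extracted. Every field name and value below is invented for illustration; real systems differ.

# Illustrative only: the kind of data a face photo becomes after
# processing. All names and values here are made up.
face_template = {
    "landmarks": {                 # (x, y) positions, normalised to 0-1
        "left_eye":    (0.31, 0.42),
        "right_eye":   (0.63, 0.41),
        "nose_tip":    (0.47, 0.58),
        "mouth_left":  (0.36, 0.74),
        "mouth_right": (0.60, 0.73),
    },
    "embedding": [0.12, -0.87, 0.44],  # truncated numeric vector
    "captured_angles": ["frontal", "left_profile", "right_profile"],
}
# Unlike a password, none of these measurements can be reissued once leaked.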

But as organisations collect more biometric data covering deeply personal attributes, they, or malicious actors, can increasingly link that data to other data points, such as commercial behaviours, shopping preferences, and the locations and times of movements.

“Suddenly, they’re starting to build quite complex, detailed and intricate profiles of people, and starting to make assumptions,” Wouters said. “Even without any malicious intent, organisations need to be much more transparent about capturing such data, and much more open around obtaining meaningful consent.”

Another issue, Floreani said, is that the term “biometrics” often sounds “new and shiny, effective and advanced”, particularly to investors and stakeholders.

“But when you’ve got a problem at hand, you want to be using the right tools to address that problem,” Floreani said. “And biometrics, in many cases, is going to create more problems than it solves.

“Consent is one of the biggest problems, so I would recommend engineers ask from the outset whether it’s even the appropriate approach.

“If it is, how can you embed privacy by design into the entire process? Can the technology be designed to be more privacy enhancing? For example, if you must hold a bunch of biometric data, does it need to be centralised? Or can it be decentralised in a way that the individual retains some control over their own biometric data?”
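One hedged sketch of the decentralised approach Floreani describes: the organisation stores only ciphertext, while the decryption key stays with the individual, so any use of the template requires their participation. The flow below uses the real Python cryptography package, but the names and flow are illustrative, not a recommended production design.

# One possible decentralisation pattern (illustrative): the template is
# encrypted under a key only the individual holds, so the organisation
# stores a blob it cannot read on its own.
from cryptography.fernet import Fernet

user_key = Fernet.generate_key()        # kept by the individual, not the org
template_bytes = b"serialised-biometric-template"  # placeholder content

stored_blob = Fernet(user_key).encrypt(template_bytes)  # what the org holds

# Any later match requires the individual to supply their key:
recovered = Fernet(user_key).decrypt(stored_blob)
assert recovered == template_bytes

The trade-off is operational: the organisation gives up the convenience of unilateral access in exchange for a store that is worthless to an attacker without each individual’s key.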

Opting out

Nobody should ever be forced to give up their biometric data to simply do the grocery shopping, Walsh said, or to grab supplies from the hardware store. 

There should always be a way to opt out, just as people at the airport who don’t wish to go through the body scanner can choose a different method of security check. In some situations, such as high-security environments, opting out of the check itself may not be possible; in that case, the choice is between giving meaningful consent and opting out by not entering the facility at all.

Stanton said it will ultimately be up to the design engineers, legal fraternity, commercial industry and government to work together to meet the expectations of Australian society when it comes to the appropriate use and safeguarding of biometric data.

“Without choice, meaningful consent, transparency and best practice data management, all of which make up the ethical engineering of a biometric system, there are only consequences,” he said. 

“There are consequences for the individual and consequences for the organisation, and they can be disastrous for both parties.”
