Think artificial intelligence is unbiased? Think again. Researchers are finding the technology can reflect the flaws of the humans who build it, and are trying to counteract this effect.
Sometimes the issue isn’t with the algorithm itself but with the data used to train it, according to electronics engineer and computer scientist Professor Vasant Honavar, who directs the Artificial Intelligence Research Laboratory at Pennsylvania State University.
In a statement, Honavar said AI systems are trained on large data sets, but if the data is biased, this can affect the system’s recommendations.
For example, Amazon retired an experimental AI recruiting tool after finding it favoured men over women, because the tool had been trained on applications submitted over the previous decade, most of which came from men.
Honavar explained that in cases such as this, the machine learning algorithm is doing what it’s supposed to do, which is to identify good job candidates based on certain desirable characteristics.
“But since it was trained on historical, biased data it has the potential to make unfair recommendations,” Honavar explained.
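To see how that plays out, the toy sketch below (not Amazon’s system; the data, features and numbers are invented purely for illustration) trains a simple classifier on synthetic hiring records in which past decisions favoured men, then scores two equally skilled candidates who differ only in gender.

```python
# Illustrative sketch only: a toy model showing how historical bias in
# training labels leaks into a classifier's recommendations. This is not
# Amazon's system; the data and features are invented for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants: one genuine skill score and a gender flag.
skill = rng.normal(size=n)               # what hiring should depend on
is_male = rng.integers(0, 2, size=n)     # 1 = male, 0 = female

# Historical "hired" labels that favoured men regardless of skill.
hired = (0.8 * skill + 1.0 * is_male + rng.normal(size=n)) > 1.0

X = np.column_stack([skill, is_male])
model = LogisticRegression().fit(X, hired)

# Two applicants with identical skill, differing only in gender.
same_skill = 0.5
woman = [[same_skill, 0]]
man = [[same_skill, 1]]
print("P(hire | woman):", model.predict_proba(woman)[0, 1])
print("P(hire | man):  ", model.predict_proba(man)[0, 1])
# The man scores higher purely because the historical labels were biased.
```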
To address this issue, Honavar and a team of researchers have developed an AI tool to detect discrimination on the basis of characteristics such as race and gender.
Estimating fairness
The tool was designed to detect discrimination based on the principle of cause and effect.
Researcher Aria Khademi explained that the question of whether gender affects salaries can be reframed as “does gender have a causal effect on salary?”
“Or in other words, ‘Would a woman be paid more if she was a man?’,” added Khademi.
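A minimal sketch of that counterfactual question is shown below: it fits a simple model to invented salary data, then flips the gender attribute for every woman to see how the predicted chance of earning over US$50,000 changes on average. The researchers’ actual tool relies on formal causal-inference methods rather than this naive attribute flip, so treat it only as an illustration of the question being asked.

```python
# Minimal sketch of asking "would a woman be paid more if she were a man?"
# against a fitted predictive model. Illustration only; the researchers'
# tool uses causal-inference machinery, not a simple attribute flip.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5_000

# Hypothetical features: years of experience, education level, is_male flag.
experience = rng.uniform(0, 30, size=n)
education = rng.integers(1, 5, size=n)
is_male = rng.integers(0, 2, size=n)
# Synthetic salary labels in which gender has a real effect (for illustration).
over_50k = (0.1 * experience + 0.5 * education + 0.8 * is_male
            + rng.normal(size=n)) > 2.5

X = np.column_stack([experience, education, is_male])
model = LogisticRegression().fit(X, over_50k)

# Counterfactual query: the same women, with the gender attribute flipped.
women = X[X[:, 2] == 0]
women_as_men = women.copy()
women_as_men[:, 2] = 1

p_actual = model.predict_proba(women)[:, 1].mean()
p_counterfactual = model.predict_proba(women_as_men)[:, 1].mean()
print(f"Average P(>50K) for women as recorded:       {p_actual:.2f}")
print(f"Average P(>50K) with gender flipped to male: {p_counterfactual:.2f}")
# A substantial gap suggests gender is influencing the salary predictions.
```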
The researchers tested their tool on two data sets: US income data, and demographic data about drivers pulled over by the New York state police force.
In the income data they found evidence of gender-based discrimination in salary, with women about two-thirds less likely than men to earn more than US$50,000 (AU$71,000) per year. In the traffic-stop data they found some evidence of possible racial bias against Hispanic and African American drivers, but no evidence of discrimination against these groups on average.
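For a sense of what the income figure means, a rough associational check (not the causal analysis in the paper) is to compare how often women and men in an Adult-census-style table earn over US$50,000; the column names below are assumed, and the tiny sample is invented. A ratio of roughly one third corresponds to a “two-thirds less chance”.

```python
# Rough associational check on an Adult-census-style table.
# Column names 'sex' and 'income' and all rows are assumed for illustration.
import pandas as pd

df = pd.DataFrame({
    "sex":    ["Female", "Male", "Female", "Male", "Male", "Female", "Male", "Female"],
    "income": [">50K", ">50K", "<=50K", ">50K", "<=50K", "<=50K", ">50K", "<=50K"],
})

# Share of each group earning over $50K.
rates = df.assign(over_50k=df["income"].eq(">50K")).groupby("sex")["over_50k"].mean()
ratio = rates["Female"] / rates["Male"]
print(rates)
print(f"Women are {ratio:.2f}x as likely as men to earn over $50K in this toy sample.")
# A ratio of roughly 1/3 corresponds to the "two-thirds less chance" figure.
```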
The researchers’ findings were published in the Proceedings of the 2019 World Wide Web Conference in May.
Their paper stated there is a pressing need to make sure real-world algorithmic decision-making systems do not become vehicles of unfair discrimination, inequality and social injustice. Achieving this, Honavar said, requires effective tools for detecting discrimination.
“Our tool can help with that,” he added.
Inside the ‘black box’
Another pressing issue as industry and government continue to collect personal data – including biometric data such as facial images – is how this data will be used by AI algorithms. This means the engineers who develop AI technology also need to think about how their work will be put to use.
This issue was recently brought into the spotlight when Curtin University and the University of Technology Sydney announced they are reviewing their links to Chinese companies and research that use facial recognition technology to track and detain members of the Uyghur ethnic minority.
To spur discussion on the issue, researchers from the University of Melbourne developed a tool called Biometric Mirror, an interactive application that compares a user’s photo against thousands of facial images paired with crowd-sourced evaluations, in which large numbers of people have rated how they perceive each face’s personality.
Biometric Mirror uses this comparison to rate the user’s personality characteristics, including attractiveness, aggression, responsibility, emotional stability and ‘weirdness’ – and asks them to imagine a world where this information is shared with their employer or insurer.
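As a rough illustration of that kind of crowd-rating lookup (not the University of Melbourne implementation; the embeddings, ratings and trait list below are invented), one could find the database faces closest to a user’s photo and average their crowd-sourced scores:

```python
# Toy sketch of a crowd-rating lookup: find the database faces most similar
# to a user's photo (here represented by precomputed embedding vectors) and
# average their crowd-sourced personality ratings. All names and numbers
# are invented; this is not the Biometric Mirror implementation.
import numpy as np

# Hypothetical database: one embedding per rated face, plus crowd scores (1-9).
db_embeddings = np.random.default_rng(1).normal(size=(1000, 128))
db_ratings = np.random.default_rng(2).uniform(1, 9, size=(1000, 3))
traits = ["attractiveness", "aggression", "emotional stability"]

def rate_face(user_embedding, k=10):
    """Average the crowd ratings of the k most similar database faces."""
    dists = np.linalg.norm(db_embeddings - user_embedding, axis=1)
    nearest = np.argsort(dists)[:k]
    mean_scores = db_ratings[nearest].mean(axis=0)
    return {trait: round(float(score), 2) for trait, score in zip(traits, mean_scores)}

user_embedding = np.random.default_rng(3).normal(size=128)
print(rate_face(user_embedding))
```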
According to developers Dr Niels Wouters and Professor Frank Vetere, the application can be confronting.
“It starkly demonstrates the possible consequences of AI and algorithmic bias, and it encourages us [to] reflect on a landscape where government and business increasingly rely on AI to inform their decisions,” said Wouters.