This engineer’s research is teaching artificial intelligence to recognise when humans are approaching their cognitive limits.
Professor Fang Chen was doing software and analytics work for an emergency control centre when her manager asked her something that would change her career forever.
Chen had been designing a new interface for the centre and the manager confided that, when an emergency hit, he needed more than a pretty screen.
He asked if there was instead a way to know who on his team of 10 to 20 responders had the ability to take on more work. Who could he give an additional task to, even though everyone was flat out?
Chen — an artificial intelligence (AI) and data science expert at the University of Technology Sydney — didn’t know. But she was determined to find out.
Delving into the mind
Chen and her team started looking into critical factors that affect the human brain when it’s working.
They enlisted experts in psychology and education to help them understand brain structure and the complexities of human cognition.
“We started digging out all this cognitive load theory, working memory theory,” Chen said.
She explained that there is only so much a person’s brain can deal with before they start to make mistakes.
“Every single person has a very limited working memory … just like you have limited RAM on a computer,” she said.
“When your calculating capability is less than the tasks required, that means you overload and you will make mistakes. Because no human brain can sustain being heavily overloaded forever.”
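Chen's RAM analogy can be sketched as a toy model. A hedged illustration only: the capacity score, task demands and threshold below are invented for the example, not drawn from her research.

```python
# Toy model of the working-memory analogy: like RAM, a person has a
# fixed capacity, and overload occurs when the combined demand of
# their tasks exceeds it. All numbers are hypothetical.

def is_overloaded(task_demands, capacity):
    """Return True when total task demand exceeds available capacity."""
    return sum(task_demands) > capacity

# An operator with a nominal capacity of 1.0 handling three tasks:
print(is_overloaded([0.3, 0.4, 0.2], capacity=1.0))  # fits: False
print(is_overloaded([0.3, 0.4, 0.5], capacity=1.0))  # overload: True
```

The point of the analogy is the hard ceiling: once demand passes capacity, performance degrades rather than gracefully slowing down.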
Chen, an electrical engineer, then began designing AI systems that can recognise when humans are approaching their cognitive limit.
She researched how people perform and behave when they’re overloaded, and how to develop AI systems that can adapt.
One such adaptation could be as simple as blocking out low-priority information.
Artificial intelligence detects information overload
Chen has trialled the technology for bushfire management, where she said a balance needs to be struck to give operators just the right amount of information in an emergency.
“We need to safeguard those operators or commanders [so] they are equipped with good information; however, they’re not so overloaded by the information that they can’t think,” Chen said.
Chen said the mental load a person can take on varies considerably between individuals and can change from moment to moment.
“I think every single person has the moment you feel like you can’t think,” she laughed. “But [just because] you can’t think now doesn’t mean you can’t think in the next hour.”
For Chen, it is less important to compare between individuals than it is to understand someone’s brain capacity at a particular point in time.
“If you have five tasks to do [it might require] 50 per cent of your working memory — whether you have that or not,” she said.
Something as simple as a bad night’s sleep changes a person’s ability to take on tasks. Or perhaps, on a particular day, a bushfire manager might be worried about their own home.
“Those actually eat into some part of your capacity,” Chen said.
“You’re not feeling your brain is really clear; you feel clouded. At that time, it’s not the best of you … [and] systems need to recommend some help.”
Chen has also looked at air traffic control, where delayed flights and extreme weather can play havoc with landing schedules. She sees the technology being used to evaluate a team’s capacity in real time, and flag when help is needed — “whether someone can take looking after a few flights for you,” Chen explained.
“And then, when your capacity is over the roof, at least someone can observe or get a little bit of an alarm.”
Chen has also designed experiments involving several tasks at once.
She measured people’s performance on each of the tasks, as well as their overall ability to multitask.
As with any AI system, trust is a crucial component of Chen's work.
Last year, a University of Queensland and KPMG report found Australians don't know a lot about how AI is used, have little trust in AI systems and believe they should be carefully regulated.
Two in five people surveyed for the report were unwilling to rely on the recommendations of an AI system.
According to Chen, people intuitively trust AI systems more when their recommendations follow a human decision-making paradigm.
“Then your recommended suggestion is at least in the ballpark,” she said.
“It fits with the process of how humans make decisions.”
Chen uses the example of air traffic control, where there are step-by-step rules for controllers to follow in different scenarios.
She said any AI system for landing planes should use the pre-existing rules to give controllers information that can support them in making decisions.
“Rather than giving them the decision and saying ‘ok, you make your call, do you trust this or not trust this’,” Chen said.
While much of Chen’s research revolves around life-or-death decisions, she sees her technology as having use in day-to-day life.
Her team’s current focus is on bringing the technology to the classroom, where Chen said teachers could use AI to recognise when students have hit their cognitive limit.
She said students learn best when they have the brain capacity to absorb information and practise new skills.
“When a student is overloaded, he or she can’t study — it’s basically a waste of everyone’s time,” Chen said.
“So to understand … the student’s [workload] when they’re learning is super important to maximising the learning outcomes.”
The technology could also be valuable for those of us whose day-to-day emergencies are of a far less critical office variety. AI systems could be used to recognise our cognitive load and block non-urgent emails and messages when we’re busy.
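An office assistant of that kind might work along these lines. A minimal sketch under stated assumptions: the load score, the 0.8 threshold and the message fields are all hypothetical, invented for illustration rather than taken from any real system.

```python
# Hypothetical sketch: hold back non-urgent messages while estimated
# cognitive load is high, delivering only urgent ones immediately.
# The load estimate and threshold are invented for this example.

def triage(messages, load, threshold=0.8):
    """Split messages into (deliver_now, hold_for_later) based on load."""
    if load < threshold:
        return messages, []          # low load: deliver everything
    deliver = [m for m in messages if m["urgent"]]
    hold = [m for m in messages if not m["urgent"]]
    return deliver, hold

inbox = [
    {"subject": "Server down", "urgent": True},
    {"subject": "Team lunch poll", "urgent": False},
]
now, later = triage(inbox, load=0.9)   # busy: only "Server down" gets through
```

How the load score itself would be estimated is the hard research problem; this sketch only shows what a system might do with it once it exists.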
“Hopefully one day we’ll even make it as an app to say ‘ok, you lack brain capacity, don’t sign that critical contract now’,” Chen said.
The work could even be used to understand people’s focus and behaviour in Zoom meetings.
From an engineering perspective, it’s fascinating, Chen said.
“A human being is the most complex machine anywhere.”