Researchers from the University of South Australia have developed a computer vision system that can remotely monitor a premature baby’s vital signs and detect their face in a hospital bed.
While non-contact vital signs monitoring of adults has been studied in recent decades, research focusing on infants in NICUs has been limited.
Lead researcher and Engineers Australia member Javaan Chahl MIEAust told create that the artificial intelligence software was unique in its ability to monitor and detect an infant's skin and face even when the baby is covered in clothing and tubes, or undergoing phototherapy.
“The photoplethysmography approach has been known since the ‘30s, but there weren’t computers then. They were aware you could observe blood flowing under certain conditions through fingers and earlobes,” Chahl said.
“The challenge has been to get it to be viable. It’s one thing to observe a phenomenon, it’s another thing to be able to lock onto it [and] take a measurement … We forced ourselves into difficult situations to make it work.”
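The core of photoplethysmography is that blood volume changes under the skin subtly modulate reflected light, so a heart rate can be recovered as the dominant frequency of a skin-pixel intensity trace. As a rough illustration only (not the study's software, which must cope with far noisier clinical footage), a minimal sketch of that idea looks like this:

```python
import numpy as np

def estimate_heart_rate(intensity, fps, lo=0.7, hi=4.0):
    """Estimate pulse rate (bpm) from a mean skin-pixel intensity trace.

    Blood volume changes modulate reflected light; the strongest
    frequency inside the physiological band (lo..hi Hz) is taken
    as the heart rate. A simplified sketch, not clinical-grade code.
    """
    x = np.asarray(intensity, dtype=float)
    x = x - x.mean()                      # remove DC (ambient light level)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= lo) & (freqs <= hi)  # restrict to plausible pulse rates
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0                    # Hz -> beats per minute

# Synthetic demo: a 2.0 Hz (120 bpm) pulse buried in camera noise at 30 fps
rng = np.random.default_rng(0)
fps = 30
t = np.arange(0, 20, 1.0 / fps)
trace = 100 + 0.5 * np.sin(2 * np.pi * 2.0 * t) + 0.2 * rng.standard_normal(len(t))
print(round(estimate_heart_rate(trace, fps)))  # → 120
```

In practice the hard part is exactly what Chahl describes: locking onto a stable patch of skin long enough for the frequency estimate to be meaningful.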
Setting sights on software
More than 10 per cent of babies worldwide are born prematurely, and due to their vulnerability their vital signs need to be continuously monitored. Traditionally, this has been done with adhesive electrodes placed on the skin, which can cause skin tears and infections.
The study is part of the University of South Australia’s ongoing project to replace contact-based electrical sensors with non-contact methods such as the facial imaging video software.
The software was developed using videos of babies in intensive care, detecting faces and skin tones.
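Detecting skin in video is often bootstrapped with simple colour-space rules before a trained model takes over. As a crude stand-in for the learned skin and face detection in the study (which, as Chahl notes below, must handle masks, tubing and phototherapy lighting that break naive rules), a fixed-threshold sketch in the YCbCr colour space might look like:

```python
import numpy as np

def skin_mask(rgb):
    """Classify pixels as skin via fixed Cb/Cr thresholds.

    A rough illustration only: real NICU imagery needs a trained
    classifier, since lighting, phototherapy and skin-tone diversity
    defeat simple colour rules like this one.
    """
    rgb = np.asarray(rgb, dtype=float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # BT.601 RGB -> chroma conversion
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    # Commonly used chroma box for skin
    return (77 <= cb) & (cb <= 127) & (133 <= cr) & (cr <= 173)

pixels = np.array([[220, 170, 140],   # plausible skin tone
                   [30, 60, 200]])    # saturated blue (e.g. a blanket)
print(skin_mask(pixels))
```

The gap between a rule like this and a robust neural classifier is precisely where the occlusion problems Chahl describes arise.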
“In some cases, these babies can have a lot of instruments near their face. Things like masks, tubing, even a beanie can reduce the amount of face left to see,” Chahl explained.
“The weird one was if a baby had an object of some sort, like a toy, the neural network classifier wasn’t able to lock on properly. It was locking onto a teddy bear with a higher probability of being a person than the baby.”
Detecting faces and movements
To detect body movements that would ordinarily be invisible to the human eye, the team used high-resolution cameras to film infants at close range. These cameras were able to collect physiological data using neural networks that could outperform the conventional electrodes.
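Movements "invisible to the human eye" typically means sub-pixel displacements, such as the rise and fall of the chest during breathing, recovered by aggregating intensity over a region rather than tracking individual pixels. A minimal sketch of that principle, with simulated frames rather than the study's footage, could look like:

```python
import numpy as np

def dominant_rate(trace, fps, lo, hi):
    """Return the strongest frequency in [lo, hi] Hz, in cycles per minute."""
    x = np.asarray(trace, dtype=float) - np.mean(trace)
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fps)
    band = (freqs >= lo) & (freqs <= hi)
    return freqs[band][np.argmax(spec[band])] * 60.0

# Simulate a chest region whose brightness centroid shifts by only ~0.3 px
fps, secs = 10, 30
t = np.arange(0, secs, 1.0 / fps)
rows = np.arange(64)
centres = 32 + 0.3 * np.sin(2 * np.pi * 0.4 * t)   # 0.4 Hz = 24 breaths/min
frames = np.exp(-(rows[None, :] - centres[:, None]) ** 2 / 18.0)

# A sub-pixel intensity centroid per frame recovers motion far smaller
# than one pixel, which no human observer could see directly
centroids = (frames * rows).sum(axis=1) / frames.sum(axis=1)
print(round(dominant_rate(centroids, fps, lo=0.2, hi=1.5)))  # → 24
```

The robustness Chahl mentions comes from combining signals like this with the face and life-sign detection, so a measurement is only trusted when it is locked onto the baby.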
“We need to make a very robust system for this to be viable,” Chahl said. “We’ve started to realise that it won’t be enough to just look at the shape and form of the face, but also detect the presence of life signs of the baby.”
This technique may be applicable in clinical environments as an economical, non-contact and easily deployable monitoring system, and it also has a potential application in home health monitoring.
Chahl said the results are particularly relevant given the COVID-19 pandemic and the need for physical distancing.
“Babies don’t own smartphones, so the state of filters or classifiers is nowhere near as technologically mature as it is for adults. You could potentially end up with technology for people who don’t need it, and nothing for people who aren’t using devices on a daily basis,” Chahl said.
“When we were building this system for babies, we had to ensure it was built for a diverse population. This is the underlying theme of the paper; there are very reliable ways to detect adults, whereas babies are less likely to be detected.”