AI tool for smart speakers could save lives
10 July 2019
Researchers at the University of Washington have developed a new contactless artificial intelligence (AI) tool to monitor at-risk patients for cardiac arrest, even when they’re asleep.
The tool is designed to keep patients experiencing cardiac arrest safe when there are no bystanders around to help. During cardiac arrest, people have roughly an even chance of either becoming unresponsive or beginning to gasp for air – this distinctive gasping is known as agonal breathing.
The algorithm, which is designed to be compatible with smart home devices and mobile phones, detects the gasping sound of agonal breathing and automatically contacts people nearby or the emergency services for help.
The tool was trained using audio from calls to Seattle’s emergency services, and is able to detect agonal breathing 97% of the time from up to 6 metres away, according to findings published in npj Digital Medicine. As agonal breathing occurs in about 50% of cardiac arrest cases, the AI could detect up to 48.5% of cardiac arrests.
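The headline figure is simply the product of the two reported rates. A back-of-the-envelope check, using only the percentages cited above:

```python
# Back-of-the-envelope check of the reported overall detection rate,
# using the two figures cited from the npj Digital Medicine findings.
sensitivity = 0.97        # agonal breathing detected 97% of the time
agonal_prevalence = 0.50  # agonal breathing occurs in ~50% of cardiac arrests

overall = sensitivity * agonal_prevalence
print(f"{overall:.1%}")  # prints "48.5%"
```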
University of Washington associate professor Shyam Gollakota said: “We envision a contactless system that works by continuously and passively monitoring the bedroom for an agonal breathing event, and alerts anyone nearby to come provide CPR. And then if there’s no response, the device can automatically call 911.”
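The escalation policy Gollakota describes – alert bystanders first, call 911 only if nobody responds – can be sketched as simple decision logic. The function and action names below are illustrative assumptions, not the team's actual implementation:

```python
from enum import Enum, auto

class Action(Enum):
    ALERT_NEARBY = auto()
    CALL_911 = auto()

def escalate(agonal_detected: bool, response_received: bool) -> list[Action]:
    """Hypothetical sketch of the escalation policy in the quote:
    alert anyone nearby first, then call 911 if there is no response."""
    actions: list[Action] = []
    if agonal_detected:
        actions.append(Action.ALERT_NEARBY)
        if not response_received:
            actions.append(Action.CALL_911)
    return actions

# Agonal breathing heard, nobody responds: alert, then call 911.
assert escalate(True, False) == [Action.ALERT_NEARBY, Action.CALL_911]
# A bystander responds: no 911 call needed.
assert escalate(True, True) == [Action.ALERT_NEARBY]
# Nothing detected: do nothing.
assert escalate(False, False) == []
```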
The research team collected audio from 162 emergency calls made between 2009 and 2017, extracting 2.5-second clips from the start of each recorded agonal breath for a total of 236 clips. The recordings were then processed through an Amazon Alexa, an iPhone 5s and a Samsung Galaxy S4, along with various machine learning platforms, to generate 7,316 clips overall.
The team used this dataset to teach the AI to detect agonal breathing 97% of the time when the smart device housing it was placed up to 6 metres away from a speaker generating the sounds.
The audio clips were played at different distances to establish the range at which the software remained effective, and interfering noises such as cars honking and air conditioning were added to some clips so the AI could learn to detect agonal breathing despite them.
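The noise-augmentation step – overlaying interfering sounds on the clean clips – can be sketched as below. The mixing function and signal-to-noise handling are illustrative assumptions, not the team's actual pipeline:

```python
import numpy as np

def mix_in_noise(clip: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Overlay background noise (e.g. traffic, air conditioning) on a clip
    at a chosen signal-to-noise ratio. Illustrative sketch only."""
    noise = noise[: len(clip)]  # trim noise to the clip length
    clip_power = np.mean(clip ** 2)
    noise_power = np.mean(noise ** 2)
    # Scale the noise so the mixture hits the requested SNR in decibels.
    scale = np.sqrt(clip_power / (noise_power * 10 ** (snr_db / 10)))
    return clip + scale * noise

# Example: a 2.5-second clip at a 16 kHz sample rate, mixed with
# synthetic noise at 10 dB SNR (random data stands in for real audio).
rng = np.random.default_rng(0)
clip = rng.standard_normal(int(2.5 * 16_000))
noise = rng.standard_normal(int(2.5 * 16_000))
augmented = mix_in_noise(clip, noise, snr_db=10.0)
assert augmented.shape == clip.shape
```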
The algorithm was taught to avoid false positives using 83 hours of sleep lab data, yielding 7,305 sound samples, as well as audio from volunteers who had recorded themselves sleeping in their own homes. By analysing these it learned to differentiate agonal breathing from regular sleeping sounds like snoring.
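Conceptually, this step amounts to adding negative examples (ordinary sleep sounds) alongside the positives and fitting a binary classifier. A minimal sketch, using made-up two-dimensional features in place of real audio embeddings and a simple nearest-centroid rule rather than the team's actual model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 2-D feature vectors standing in for audio embeddings.
agonal = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(200, 2))   # positives
sleep = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(200, 2))  # negatives (snoring etc.)

X = np.vstack([agonal, sleep])
y = np.concatenate([np.ones(200), np.zeros(200)])

# Nearest-centroid rule: classify by whichever class mean is closer.
centroids = {label: X[y == label].mean(axis=0) for label in (0.0, 1.0)}

def predict(x: np.ndarray) -> float:
    dists = {label: np.linalg.norm(x - c) for label, c in centroids.items()}
    return min(dists, key=dists.get)

accuracy = np.mean([predict(x) == label for x, label in zip(X, y)])
print(f"training accuracy: {accuracy:.0%}")
```

With cleanly separated synthetic clusters the rule classifies essentially every sample correctly; real audio features overlap far more, which is why the team needed thousands of negative samples.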
Gollakota said: “Right now, this is a good proof of concept using the 911 calls in the Seattle metropolitan area. But we need to get access to more 911 calls related to cardiac arrest so that we can improve the accuracy of the algorithm further and ensure that it generalizes across a larger population.”
The researchers plan to commercialise the technology through University of Washington spinout Sound Life Sciences.
Published by Verdict Medical Devices on June 19, 2019