
Revolutionary AI Technology Identifies Asymptomatic COVID-19 Through Smartphone Cough Analysis


Individuals carrying COVID-19 without displaying symptoms present a unique challenge in pandemic control. These asymptomatic carriers show no visible signs of illness, making them less likely to seek testing while potentially transmitting the virus unknowingly to others.

However, groundbreaking research from MIT scientists reveals that even asymptomatic individuals exhibit subtle changes in their cough patterns that are imperceptible to the human ear but detectable through advanced artificial intelligence algorithms.

In a pioneering study published in the IEEE Journal of Engineering in Medicine and Biology, the research team details how their innovative AI model can distinguish between healthy individuals and asymptomatic COVID-19 carriers using forced-cough recordings submitted via smartphones, laptops, and other web-enabled devices.

The researchers trained their sophisticated AI system on tens of thousands of cough samples and spoken word recordings. When tested with new cough recordings, the system demonstrated remarkable accuracy, correctly identifying 98.5% of coughs from individuals with confirmed COVID-19 cases. Notably, the model achieved 100% accuracy in detecting coughs from asymptomatic individuals who had tested positive despite reporting no symptoms.

The MIT team is currently developing a user-friendly application incorporating this AI technology. If approved by the FDA and widely adopted, this app could serve as a free, convenient, non-invasive screening tool to identify potential asymptomatic COVID-19 carriers. Users would simply need to log in daily, cough into their smartphone, and receive immediate feedback on whether they should seek formal testing.

"The widespread implementation of this diagnostic tool could significantly curb pandemic transmission if used before entering classrooms, factories, or restaurants," explains Brian Subirana, a research scientist at MIT's Auto-ID Laboratory and co-author of the study.

Subirana's collaborators in this research include Jordi Laguarta and Ferran Hueto, also from MIT's Auto-ID Laboratory.


Vocal Biomarkers Analysis

Before the pandemic, research groups had already been training algorithms on smartphone-recorded coughs to diagnose conditions like pneumonia and asthma. Similarly, the MIT team had been developing AI models to analyze forced-cough recordings for potential Alzheimer's detection, as this neurological condition is associated not only with memory decline but also with neuromuscular deterioration, including weakened vocal cords.

Initially, the researchers trained a machine-learning algorithm, specifically a neural network known as ResNet50, to differentiate sounds associated with varying degrees of vocal cord strength. Previous studies had demonstrated that the quality of the "mmmm" sound can indicate the strength or weakness of a person's vocal cords. Subirana trained this neural network on an extensive audiobook dataset containing over 1,000 hours of speech to identify the word "them" among similar-sounding words like "the" and "then."

The team then developed a second neural network to identify emotional states expressed in speech. This approach was based on research showing that Alzheimer's patients, and individuals with neurological decline more broadly, tend to exhibit negative emotional states such as frustration or flat affect more often than positive ones like happiness or calmness. The researchers built this speech sentiment classifier by training it on a large dataset of actors expressing various emotional states, including neutral, calm, happy, and sad.

Next, the researchers trained a third neural network using a database of cough recordings to detect changes in lung and respiratory function.

Finally, the team integrated all three models and added an algorithm to detect muscular degradation. This algorithm works by simulating an audio mask or noise layer and distinguishing strong coughs—those audible over the noise—from weaker ones.
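The masking idea described above can be reduced to a simple energy test: a cough is "strong" only if it remains audible over a simulated noise floor. This is a rough sketch under that assumption, with a made-up threshold and synthetic signals, not the paper's implementation.

```python
import numpy as np

def is_strong_cough(cough: np.ndarray, noise_rms: float = 0.1) -> bool:
    """A cough counts as 'strong' only if its RMS energy rises above a
    simulated noise layer; weaker coughs are masked by the noise."""
    rms = float(np.sqrt(np.mean(cough ** 2)))
    return rms > noise_rms

rng = np.random.default_rng(0)
strong = 0.5 * rng.standard_normal(16000)   # loud synthetic "cough"
weak = 0.02 * rng.standard_normal(16000)    # faint synthetic "cough"
print(is_strong_cough(strong), is_strong_cough(weak))  # True False
```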

Using their novel AI framework, the team analyzed audio recordings, including samples from Alzheimer's patients, and discovered that their system could identify Alzheimer's cases more effectively than existing models. These results demonstrated that vocal cord strength, emotional expression, lung and respiratory performance, and muscular degradation collectively serve as effective biomarkers for diagnosing the condition.

As the coronavirus pandemic emerged, Subirana hypothesized that their AI framework developed for Alzheimer's detection might also be applicable to COVID-19 diagnosis, particularly as evidence grew that infected patients experienced similar neurological symptoms, including temporary neuromuscular impairment.

"The sounds produced during talking and coughing are both influenced by the vocal cords and surrounding organs. This means that when you speak, part of your vocalization resembles coughing, and vice versa. Consequently, aspects easily derived from fluent speech—such as a person's gender, native language, or emotional state—can also be identified by AI from cough patterns alone. There is indeed sentiment embedded in how one coughs," Subirana explains. "So we considered whether these Alzheimer's biomarkers might be relevant for COVID detection as well."

Remarkable Similarities

In April, the team began collecting as many cough recordings as possible, including samples from COVID-19 patients. They created a website where individuals could record a series of coughs using their smartphones or other web-enabled devices. Participants also completed a survey detailing their symptoms, COVID-19 status, and diagnosis method (official test, doctor's assessment, or self-diagnosis). Additionally, they could provide information about their gender, geographical location, and native language.

To date, the researchers have amassed over 70,000 recordings, each containing multiple coughs, totaling approximately 200,000 forced-cough audio samples—which Subirana describes as "the largest research cough dataset that we know of." Among these, around 2,500 recordings were submitted by individuals confirmed to have COVID-19, including asymptomatic carriers.

The team utilized these 2,500 COVID-19-associated recordings along with an additional 2,500 randomly selected recordings to create a balanced dataset. They used 4,000 of these samples to train the AI model, while the remaining 1,000 recordings served as test data to evaluate the model's ability to distinguish between coughs from COVID-19 patients and healthy individuals.
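The balanced split described above is a routine preprocessing step. As a minimal sketch with hypothetical recording IDs (the real dataset holds audio, metadata, and survey answers), the 5,000 samples can be shuffled and divided 4,000/1,000:

```python
import random

# Hypothetical IDs standing in for the 2,500 COVID-positive recordings
# and the 2,500 randomly drawn control recordings.
covid = [(f"covid_{i}", 1) for i in range(2500)]
control = [(f"ctrl_{i}", 0) for i in range(2500)]

random.seed(42)
pool = covid + control
random.shuffle(pool)                     # mix classes before splitting

train, held_out = pool[:4000], pool[4000:]   # 4,000 train / 1,000 test
print(len(train), len(held_out))             # 4000 1000
```

Balancing the classes before splitting keeps the model from simply learning that most coughs are negative, which matters when positives are rare in the raw data.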

Surprisingly, as documented in their paper, the researchers discovered "a striking similarity between Alzheimer's and COVID discrimination."

With minimal modifications to their AI framework originally designed for Alzheimer's detection, they found it could identify patterns in the four biomarkers—vocal cord strength, emotional expression, lung and respiratory performance, and muscular degradation—that are specific to COVID-19. The model successfully identified 98.5% of coughs from individuals with confirmed COVID-19, and among these, it accurately detected all coughs from asymptomatic carriers.
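The 98.5% figure quoted above is a sensitivity (true-positive rate): the fraction of confirmed COVID-19 coughs the model flags. A minimal sketch of that metric, with toy labels rather than the study's data:

```python
def sensitivity(y_true, y_pred):
    """Fraction of actual positives (label 1) that the model flags."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    return tp / sum(y_true)

# Toy labels for illustration only.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 1, 0, 0]
print(sensitivity(y_true, y_pred))  # 0.75
```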

"We believe this demonstrates that sound production changes when you have COVID-19, even if you're asymptomatic," Subirana states.

Detecting the Undetectable

Subirana emphasizes that the AI model is not intended to diagnose symptomatic individuals or determine whether their symptoms stem from COVID-19 or other conditions like influenza or asthma. The tool's primary strength lies in its ability to differentiate between coughs from asymptomatic carriers and those from healthy individuals.

The team is collaborating with a company to develop a free pre-screening application based on their AI model. They are also partnering with hospitals worldwide to collect a larger, more diverse set of cough recordings, which will help further train and enhance the model's accuracy.

As proposed in their paper, "Pandemics could become historical artifacts if pre-screening tools are always operational in the background and continuously improved."

Ultimately, the researchers envision that audio AI models like the one they've developed could be integrated into smart speakers and other listening devices, enabling people to conveniently assess their disease risk on a regular basis, potentially as part of their daily routine.

tags: AI, asymptomatic COVID detection, smartphone cough analysis, artificial intelligence, health screening, non-invasive virus detection, machine learning, pandemic control