NIH-backed research using AI to spot subtle signs of early dementia

Michigan State University announced that it is developing artificial intelligence (AI)-based technology that scans speech and vocabulary patterns to detect the early signs of Alzheimer's disease. The research, which is being undertaken in collaboration with Oregon Health & Science University and Weill Cornell Medicine, is supported by a $3.9 million grant from the US National Institutes of Health (NIH).

The aim of the work is to develop an easy-to-use smartphone app that can help assess whether a follow-up medical diagnosis is needed. Jiayu Zhou, who is leading the effort, said that "the final assessment will be done by a patient's physician."

Zhou explained that it is easy to confuse the mild cognitive impairment seen in early-stage Alzheimer's disease with the normal cognitive decline that comes with ageing, but he believes AI may be able to catch some of the subtler changes in speech and behaviour earlier, and more reliably, than human observers. Moreover, he said an app would make early detection far more affordable and accessible than medical diagnostics such as magnetic resonance imaging (MRI) scans and in vivo testing, which can be time-intensive, invasive and expensive.

Comparable accuracy to MRI

The researchers said preliminary tests have already shown the AI approach to be as accurate as MRI scans at recognising the early warning signs of Alzheimer's disease. These tests used data collected by Oregon Health & Science University, which is running a trial on whether conversations can help protect against cognitive decline and has provided many hours of interviews for testing the AI. The interviews were transcribed so that the algorithm could analyse patterns in the text, such as the variety of words used, for clues about a person's cognitive state.
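The article does not specify which text features the team's algorithm extracts, but a minimal sketch of one lexical measure of the kind described, the type-token ratio (how varied a speaker's vocabulary is), could look like the following Python; the function name and tokenisation rule are illustrative assumptions, not the MSU implementation:

```python
# Toy illustration of one lexical feature of the kind described above:
# the type-token ratio, i.e. the variety of words in a transcript.
import re

def type_token_ratio(transcript: str) -> float:
    """Ratio of distinct words (types) to total words (tokens)."""
    tokens = re.findall(r"[a-z']+", transcript.lower())
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

sample = "I went to the store and then I went home"
print(f"type-token ratio: {type_token_ratio(sample):.2f}")  # prints 0.80
```

A lower ratio over a long sample can indicate a narrower working vocabulary, which is one example of the kind of signal such text analysis might surface.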

Investigators have developed a prototype app that interviews a user and records their audio responses. The aim now is to refine the questions the app asks, and how it asks them, so that it obtains the data it needs from users more quickly. Zhou said that "if we want to develop an app that everyone can use, we don't want to have people talking to it for hours," adding that "we need to develop an efficient strategy so we can navigate the conversation and get the data we need as quickly as possible, within five to 10 minutes."
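The article does not describe how the app will steer the conversation, but a hypothetical sketch of such a budgeted interview loop might stop asking questions once enough speech has been collected or the time window closes; the question bank, word target and time budget below are invented for illustration:

```python
import time

# Hypothetical question bank; the MSU app's actual prompts are not public.
QUESTION_BANK = [
    "Tell me about your morning routine.",
    "Describe a trip you took recently.",
    "What did you have for dinner yesterday?",
]

def run_interview(ask, budget_seconds: float = 600, target_words: int = 300) -> list[str]:
    """Ask questions until enough speech is collected or the time budget runs out."""
    responses: list[str] = []
    start = time.monotonic()
    for question in QUESTION_BANK:
        if time.monotonic() - start > budget_seconds:
            break  # stay within the five-to-ten-minute window
        responses.append(ask(question))
        if sum(len(r.split()) for r in responses) >= target_words:
            break  # enough data collected; end the session early
    return responses

# Example: drive the loop from the console.
# transcripts = run_interview(input)
```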

Additionally, to help the AI make an assessment, the researchers may bring in the acoustic signal of a conversation, as well as video to analyse facial expressions alongside the words a user is saying. They are also working on integrating behaviour sensors to track sleep patterns.
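One common way to combine such signals, offered here purely as a sketch since the article gives no model details, is to concatenate per-session feature vectors from each modality before classification; the feature counts, labels and choice of classifier below are assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fuse(text_feats, acoustic_feats, facial_feats):
    """Concatenate per-session feature vectors from each modality."""
    return np.concatenate([text_feats, acoustic_feats, facial_feats])

# Toy data: four sessions with 3 text, 2 acoustic and 2 facial features each.
rng = np.random.default_rng(0)
X = np.stack([fuse(rng.normal(size=3), rng.normal(size=2), rng.normal(size=2))
              for _ in range(4)])
y = np.array([0, 1, 0, 1])  # toy labels: 1 = flag for clinical follow-up

clf = LogisticRegression().fit(X, y)
print(clf.predict(X[:1]))
```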
