Self-administered smartphone speech app may spot Alzheimer’s early on

Researchers have developed a self-administered smartphone app that analyzes speech for the telltale signs of early dementia

Researchers have developed a self-administered smartphone app that screens for neurodegenerative conditions such as Alzheimer’s disease and mild cognitive impairment by analyzing speech patterns. Because subtle speech disturbances are an early indicator of these conditions, the app may offer an easy way to obtain a diagnosis sooner.

Despite the worldwide prevalence of Alzheimer's disease (AD), an estimated 75% of people living with the condition have not been diagnosed. Language impairment is usually one of the first signs of AD. Early on, individuals may develop a stutter or halting speech, and they may have difficulty recalling words or finding the right word to convey what they’re trying to say.

Using technology to capture the often-subtle changes in a person’s voice is one way of helping doctors diagnose AD and mild cognitive impairment (MCI) early. The earlier the diagnosis, the better the chances of slowing the disease’s progression. However, automatically recognizing the speech patterns of older people can be difficult.

Researchers from the University of Tsukuba in Japan and IBM Research developed a prototype self-administered smartphone app that accurately analyzes a person’s speech for the telltale signs of early AD and MCI.

The researchers collected speech data from 114 participants: 25 diagnosed with AD, 46 with MCI, and 43 cognitively healthy participants. The age of participants ranged from 72 to 75 years. Participants sat in a quiet room and answered pre-recorded questions; their answers were recorded on an iPad.

The participants performed five speech tasks: counting backwards, subtraction, two verbal fluency tasks, and picture description. Their responses were transcribed using the IBM Watson Speech-to-Text automatic speech recognition service, and the recordings were analyzed for jitter (short-term variations in pitch), shimmer (short-term variations in loudness), speech rate, intonation, and pauses. A machine-learning model was then trained to classify the three groups (AD, MCI, and control), with the researchers inputting 92 speech features extracted from each task.
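The study’s exact feature set and classifier aren’t detailed above, so the following Python sketch is illustrative only: the synthetic recordings, their field names, and the random-forest classifier are all assumptions rather than the authors’ method, while jitter and shimmer follow their standard “local” definitions (mean absolute difference between consecutive glottal periods or peak amplitudes, normalized by the mean).

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def local_jitter(periods):
    # Short-term pitch variation: mean absolute difference between
    # consecutive glottal periods, normalized by the mean period.
    periods = np.asarray(periods, dtype=float)
    return np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def local_shimmer(amplitudes):
    # Short-term loudness variation: mean absolute difference between
    # consecutive peak amplitudes, normalized by the mean amplitude.
    amplitudes = np.asarray(amplitudes, dtype=float)
    return np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)

def extract_features(recording):
    # Collapse one recording into a fixed-length feature vector.
    # The study used 92 features per task; only four are sketched here.
    return np.array([
        local_jitter(recording["periods"]),
        local_shimmer(recording["amplitudes"]),
        recording["words_per_second"],   # speech rate
        recording["pause_fraction"],     # proportion of silence
    ])

# Synthetic stand-in data (the real study recorded 114 participants).
rng = np.random.default_rng(0)
def fake_recording():
    return {
        "periods": rng.normal(0.008, 0.0004, 200),  # ~125-Hz voice
        "amplitudes": rng.normal(1.0, 0.05, 200),
        "words_per_second": rng.normal(2.0, 0.5),
        "pause_fraction": rng.uniform(0.1, 0.4),
    }
recordings = [fake_recording() for _ in range(114)]
labels = rng.integers(0, 3, size=114)  # 0 = control, 1 = MCI, 2 = AD

X = np.vstack([extract_features(r) for r in recordings])
y = np.array(labels)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # chance-level on random labels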

The researchers found statistically significant differences between the speech patterns of the control participants and those with AD or MCI. Moreover, the machine-learning model detected AD and MCI with accuracies of 91% and 88%, respectively.
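The article doesn’t specify whether those figures come from two separate condition-versus-control classifiers or from a single three-way model; the sketch below, which reuses X, y, and the classifier setup from the previous snippet, assumes the former reading.

# Illustrative evaluation: score each condition against healthy controls
# as its own binary problem (one plausible reading of the reported numbers).
def detection_accuracy(X, y, condition_label, control_label=0):
    mask = np.isin(y, [condition_label, control_label])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    return cross_val_score(clf, X[mask], y[mask], cv=5).mean()

print("AD vs control: ", detection_accuracy(X, y, condition_label=2))
print("MCI vs control:", detection_accuracy(X, y, condition_label=1))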

To the researchers’ knowledge, this is the first study to show the feasibility of an automatic, self-administered tool that detects AD and MCI using speech as a marker. They propose further studies to test whether the speech variations picked up by the app coincide with the pathological changes seen in these conditions, such as tau and amyloid-beta levels.

The researchers acknowledge that their study has some limitations. First, the speech data was collected in a lab setting, which may have influenced how participants responded to the questions. Second, the sample size was small, which affects the generalizability of the findings.

Nonetheless, their research demonstrates the potential of using speech analysis via a self-administered smartphone app to screen for these debilitating diseases.

The study was published in the journal Computer Speech & Language.

Source: University of Tsukuba
