AI detects autism speech patterns across different languages

Summary: Machine learning algorithms help researchers identify speech patterns in children on the autism spectrum that are consistent across different languages.

Source: Northwestern University

A new study led by researchers at Northwestern University used machine learning – a branch of artificial intelligence – to identify speech patterns in children with autism that were consistent between English and Cantonese, suggesting that features of speech could be a useful tool for diagnosing the condition.

Conducted with collaborators in Hong Kong, the study yielded insights that could help scientists distinguish between the genetic and environmental factors that shape the communication abilities of people with autism, potentially helping them learn more about the condition’s origins and develop new treatments.

Children with autism often speak more slowly than typically developing children, and show other differences in pitch, intonation, and rhythm. But these differences (which researchers call “prosodic differences”) have been surprisingly difficult to characterize consistently and objectively, and their origins have remained unclear for decades.

However, a team led by Northwestern scientists Molly Losh and Joseph C.Y. Lau, along with Hong Kong-based collaborator Patrick Wong and his team, successfully used supervised machine learning to identify speech differences associated with autism.

The data used to train the algorithm were recordings of English- and Cantonese-speaking young people with and without autism telling their own version of the story depicted in a wordless children’s picture book called “Frog, Where Are You?”
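The supervised approach described above pairs acoustic features extracted from such recordings with diagnostic labels and trains a classifier on them. The sketch below illustrates the general technique only: the feature names, the synthetic data, and the choice of logistic regression are assumptions for illustration, not the study’s actual pipeline.

```python
# Illustrative sketch of supervised classification on prosodic features.
# Data and feature columns are synthetic stand-ins (speech rate, pitch
# variability, pause ratio); labels are toy diagnosis labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n = 100
X = rng.normal(size=(n, 3))            # 100 speakers x 3 prosodic features
y = rng.integers(0, 2, size=n)         # 1 = ASD, 0 = non-ASD (toy labels)

# Make the rhythm-like feature weakly predictive, loosely mimicking the
# reported finding that rhythm features carry the diagnostic signal.
X[:, 0] = X[:, 0] + 0.8 * y

clf = LogisticRegression()
scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validated accuracy
print(scores.mean())                        # above chance on this toy data
```

On real data the features would come from acoustic analysis of the narrative recordings, and cross-validation would guard against overfitting to individual speakers.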

The results were published in the journal PLOS ONE on June 8, 2022.

Losh is the Jo-Ann G. and Peter F. Dolle Professor of Learning Disabilities at Northwestern University.

“But what is also interesting is the variability we observed, which could point to speech characteristics that are more malleable and would potentially be good targets for intervention.”

Lau added that the use of machine learning to identify the key elements of speech predictive of autism is an important step forward for researchers, who have been limited by the English-language bias of autism research and by human subjectivity when it comes to classifying speech differences between autistic and non-autistic people.

“Using this method, we were able to identify speech characteristics that could predict an autism diagnosis,” said Lau, a postdoctoral researcher working with Losh in the Roxelyn and Richard Pepper Department of Communication Sciences and Disorders at Northwestern.

“The most notable of these features is rhythm. We hope this study can form the basis for future work on autism that leverages machine learning.”

The researchers believe that their work has the potential to contribute to a better understanding of autism. Lau said AI has the potential to make autism diagnosis easier, help reduce the burden on healthcare professionals and make autism diagnosis more accessible to more people. It could also provide a tool that could one day transcend cultures, due to a computer’s ability to analyze words and sounds quantitatively, regardless of language.


Because the speech characteristics identified through machine learning include features common to English and Cantonese as well as language-specific features, Losh said, machine learning could be useful for developing tools that not only identify aspects of speech suitable for therapeutic intervention, but also measure the impact of those interventions by assessing a speaker’s progress over time.

Finally, the results of the study could inform efforts to identify and understand the role of specific genes and brain-processing mechanisms involved in genetic susceptibility to autism, the authors said. Ultimately, their goal is to build a more comprehensive picture of the factors underlying the speech differences of people with autism.

“One of the brain networks involved is the auditory pathway at the subcortical level, which is closely tied to differences in how speech sounds are processed in the brain by individuals with autism compared with those who are typically developing, across cultures,” Lau said.

“The next step will be to determine whether these differences in brain processing lead to the behavioral speech patterns that we observe here, and what their underlying neural genetics are. We’re excited about what’s ahead.”


About this AI and ASD research news

Author: Max Witynski
Source: Northwestern University
Contact: Max Witynski – Northwestern University
Image: The image is in the public domain

Original Research: Open access.
“Cross-linguistic patterns of speech prosodic differences in autism: A machine learning study” by Joseph C. Y. Lau et al. PLOS ONE


Abstract

Cross-linguistic patterns of speech prosodic differences in autism: A machine learning study

Differences in speech prosody are a widely observed feature of autism spectrum disorder (ASD). However, it is unclear how prosodic differences in ASD manifest across different languages that demonstrate cross-linguistic variability in prosody.

Using a supervised machine learning approach, we examined acoustic features relevant to the rhythmic and intonational aspects of prosody, derived from narrative samples elicited in English and Cantonese, two typologically and prosodically distinct languages.

Our models revealed successful classification of ASD diagnosis using rhythm-relative features within and across both languages. Classification with intonation-relevant features was significant for English but not for Cantonese.

Results highlight differences in rhythm as one of the key prosodic features of ASD, and also reveal important variability in other prosodic properties that appear to be modulated by language-specific differences, such as intonation.
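The within- and across-language classification described in the abstract can be sketched as training on one language’s features and testing on the other’s. The data below are synthetic, and the assumption that the rhythm signal is language-independent is built in by construction; this illustrates the evaluation scheme, not the study’s actual models or features.

```python
# Hedged sketch of cross-language generalization: train a linear
# classifier on one language's toy rhythm features, test on the other's.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def make_language_sample(n, shift, rng):
    """Toy rhythm features with a diagnosis-linked shift shared across languages."""
    y = rng.integers(0, 2, size=n)          # toy diagnosis labels
    X = rng.normal(size=(n, 2))
    X[:, 0] = X[:, 0] + shift * y           # rhythm signal, assumed shared
    return X, y

X_en, y_en = make_language_sample(120, 1.0, rng)   # "English" speakers
X_ca, y_ca = make_language_sample(120, 1.0, rng)   # "Cantonese" speakers

clf = SVC(kernel="linear").fit(X_en, y_en)   # train within one language
cross_acc = clf.score(X_ca, y_ca)            # evaluate across languages
print(round(cross_acc, 2))                   # above chance on this toy data
```

Above-chance accuracy on the held-out language is what would indicate a language-independent diagnostic signal; a feature that only classifies within one language (as reported for intonation in English) would fail this cross-language test.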
