Researchers Are Using Machine Learning to Screen for Autism in Children
Parents and doctors face a difficult dilemma when it comes to detecting and treating autism spectrum disorder (ASD) in children. It’s critically important to diagnose ASD as early in a child’s development as possible. Starting treatment for ASD at 18 to 24 months of age can increase a child’s IQ by up to 17 points—in some cases moving them into the “average” child IQ range of 90-110 (or above it)—and, in turn, significantly improve their quality of life. What’s more, early intervention can save someone with ASD up to $1.2 million in lifetime medical costs, according to Geraldine Dawson, director of the Duke Center for Autism and Brain Development.
But the current process of early screening isn’t very accurate because it relies on a questionnaire that parents answer about their child’s behavior (usually at the child’s 18-month checkup). These questionnaires often produce false positives. In fact, of the children whose parents report early signs of ASD on the questionnaire, Dawson says only 50 percent have that diagnosis confirmed by a licensed ASD clinician. And because so few licensed ASD clinicians are qualified to follow up with the many parents who report suspected ASD through the questionnaire, children may not receive a diagnosis until well after their third birthday, delaying treatment past the ideal window for improving outcomes.
An interdisciplinary team of researchers at Duke University, led by Dawson and electrical and computer engineering professor Guillermo Sapiro, hopes to tackle the difficult problem of early ASD detection by combining the ubiquity of mobile devices with the power of machine learning and computer vision. Together, they hypothesized that these technologies could create a faster, less expensive, more reliable, and more accessible system to screen children for ASD.
The method they developed has not only led to new insights about ASD, but it could also enhance how doctors evaluate patients for other behavioral disorders.
Developing an ASD Screening App
The Duke team’s method took the form of a mobile device app, which allows caregivers and practitioners to test children for ASD-related behaviors at home or in the clinic based on specific symptoms.
“Babies who go on to develop autism typically don’t pay attention to social cues,” Dawson says. “They’re more interested in non-social things, like toys or objects. They’re also less emotionally expressive. They smile less, particularly in response to positive social events.”
The in-app test screens for these behaviors by playing a movie with a social stimulus (a woman reciting nursery rhymes, for example) on one half of the screen and a non-social stimulus (a spinning top) on the other. Meanwhile, the device’s front-facing camera tracks the child’s gaze, head movements, smile, and other facial expressions.
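The core measurement behind this split-screen test can be sketched in a few lines. The snippet below is purely illustrative (not the Duke team's code): it assumes per-frame gaze x-coordinates normalized to the screen width, with the social stimulus on the left half, and estimates how much of the time the child attends to it. The function name and the midpoint threshold are made-up assumptions.

```python
# Hypothetical sketch of the split-screen attention measure: the social
# stimulus occupies the left half of the screen, the non-social stimulus
# the right. Gaze x-coordinates are normalized to 0..1 across the screen;
# None marks frames where face/gaze tracking failed.

def social_attention_ratio(gaze_x, midpoint=0.5):
    """Fraction of tracked frames where gaze falls on the social (left) half."""
    valid = [x for x in gaze_x if x is not None]  # drop untracked frames
    if not valid:
        return None  # no usable tracking data for this clip
    social = sum(1 for x in valid if x < midpoint)
    return social / len(valid)

# Example: a child who mostly watches the spinning top on the right half
frames = [0.8, 0.9, 0.7, None, 0.85, 0.3, 0.95]
ratio = social_attention_ratio(frames)  # low ratio = little social attention
```

In a real pipeline this ratio would be one feature among many (head movement, smiling, expression dynamics) feeding the downstream models, rather than a diagnostic signal on its own.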
For Sapiro, the challenge was to convert the facial-movement data into meaningful information. This is where the machine learning comes in.
Sapiro (a recipient of AWS Machine Learning Research Awards, which gives leading researchers easy access to AWS computing infrastructure and machine learning services) and his team used Amazon Web Services and tools called TensorFlow and PyTorch to build machine learning algorithms that connect children’s facial expressions and eye movements to the appropriate human emotions and attention patterns. The group is also using these cloud computing tools to develop new machine learning algorithms for privacy filters for the images and videos they collect—an issue most developers run into when dealing with sensitive healthcare data.
“When doing the facial analysis, we had a team of trained ASD clinicians label by hand the emotions associated with different facial expressions, and used that data to teach and validate the algorithm,” Sapiro says.
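The workflow Sapiro describes is standard supervised learning: clinicians hand-label facial-expression features with emotions, and those labels are used to fit and then validate a model. The toy sketch below illustrates only that idea; a simple nearest-centroid classifier stands in for the team's actual TensorFlow/PyTorch models, and all feature values, labels, and function names are invented for the example.

```python
# Toy illustration of the clinician-labeled supervised setup: average the
# feature vectors for each labeled emotion ("fit"), then assign new frames
# to the nearest centroid ("predict"). Not the Duke team's method.
from collections import defaultdict
import math

def fit_centroids(features, labels):
    """Compute one mean feature vector (centroid) per emotion label."""
    sums = {}
    counts = defaultdict(int)
    for x, y in zip(features, labels):
        if y not in sums:
            sums[y] = list(x)
        else:
            sums[y] = [a + b for a, b in zip(sums[y], x)]
        counts[y] += 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def predict(centroids, x):
    """Label a new frame with its nearest centroid (Euclidean distance)."""
    return min(centroids, key=lambda y: math.dist(centroids[y], x))

# Made-up per-frame features: [mouth-corner lift, brow raise],
# with emotion labels standing in for the clinicians' annotations.
train_x = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.7], [0.2, 0.8]]
train_y = ["smile", "smile", "surprise", "surprise"]
model = fit_centroids(train_x, train_y)
```

Validation in this setup means holding out some clinician-labeled frames and checking the model's predictions against them, which is how the team could quantify agreement between the algorithm and human experts.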
Dawson and Sapiro went back-and-forth for months, refining the algorithms until they were able to detect signs for ASD.
Studying ASD at an Unprecedented Scale
Machine learning and computer vision technology allowed the researchers to study ASD more quickly, and in more children. Typically, ASD studies involve 50 to 100 children. With the app, the Duke team collected behavioral data from about 1,700 children in a single study.
The research team tested the app results against evaluations conducted by ASD specialists, and found the app to be almost 90 percent accurate for some subsets of behaviors—a step forward from the 50 percent accuracy of the traditional questionnaire.
Beyond improving the initial assessment, the team hopes the app could become a useful complementary tool for licensed ASD clinicians. The app provides frame-by-frame analysis of a child’s reactions to stimuli—far more detail than the human eye can capture—and can detect signs of ASD instantly and objectively.
“We’re actually finding new biomarkers associated with autism,” Dawson says. “We learned that when children with autism watch these movies, they make very subtle head movements, almost as if they have trouble keeping their bodies still. And we expect to discover even more now that we can analyze behavior in greater detail.”
The potential for this kind of at-home, app-based screening method is promising, Dawson says. Her team is already exploring using it to detect signs of ADHD, but Dawson says the method could potentially be applied to other behavioral disorders, such as dementia and Alzheimer’s, and possibly anxiety and depression.
“Whether we’re looking at motor behavior, attention, emotional expression, or cognitive processing — there are so many aspects of behavior you can measure through computer vision analysis,” Dawson says. “I see this as a tool that could potentially be used for a wide range of conditions.”
[Originally posted by Wired — August 13, 2019]