AI-Based Software Accurately Identifies Autism

A software program based on artificial intelligence (AI) is effective for distinguishing young children with autism spectrum disorder (ASD) from those with other conditions, according to results of a pivotal trial presented at a meeting held by Current Psychiatry and the American Academy of Clinical Psychiatrists.

The AI-based software, which will be submitted for regulatory approval as a medical device, employs an algorithm that combines inputs from a caregiver questionnaire, a video, and a clinician questionnaire, according to Sharief Taraman, MD, associate clinical professor of neurology at Children’s Hospital of Orange County, University of California, Irvine.

Although the device could be employed in a variety of settings, it is envisioned for use by primary care physicians, which would circumvent the need for specialist evaluation except in challenging cases. Currently, nearly all children with ASD are diagnosed in specialty care, according to data cited by Taraman.

“The lack of diagnostic tools for ASD in primary care settings contributes to an average delay of 3 years between first parental concern and diagnosis and to long wait lists for specialty evaluation,” he reported at the virtual meeting, presented by MedscapeLive.

The trial data suggest that, when used with clinical judgment and criteria from the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-5), the diagnostic tool in the hands of primary care physicians “could efficiently and accurately assess ASD in children 18 to 72 months old,” Taraman said.

The AI-assisted software was evaluated in 425 children at 14 sites in 6 states. The study population was reflective of U.S. demographics. Although only 36% of the children were female, this is consistent with ASD prevalence. Sixty percent of the subjects were White, nearly 30% were Black or Latinx, and other populations, such as those of Asian heritage, were also represented.

Children between the ages of 18 and 72 months were eligible if both a caregiver and a health care professional were concerned that the child had ASD. About the same time that a caregiver completed a 20-item questionnaire and the primary care physician completed a 15-item questionnaire on a mobile device, the caregiver uploaded two videos of 1-2 minutes in length.

This information, along with a 33-item questionnaire completed by an analyst of the submitted videos, was then processed by the software algorithm. It provided a patient status of positive or negative for ASD, or it concluded that the status was indeterminate.

“To reduce the risk of false classifications, the indeterminate status was included as a safety feature,” Taraman explained. However, Taraman considers an indeterminate designation potentially actionable. Rather than a negative result, this status suggests a complex neurodevelopmental disorder and indicates the need for further evaluation.
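Conceptually, an indeterminate band of this kind can be implemented by thresholding a model's risk score at two cutoffs rather than one. The sketch below is a hypothetical illustration only; the actual algorithm, score scale, and cutoff values used by the device are not described in this report.

```python
def classify(risk_score: float, low: float = 0.2, high: float = 0.8) -> str:
    """Map a model risk score in [0, 1] to a three-way decision.

    Scores falling between the two cutoffs are deliberately left
    indeterminate rather than forced into a low-confidence
    positive or negative call.
    """
    if risk_score >= high:
        return "positive"
    if risk_score <= low:
        return "negative"
    return "indeterminate"

print(classify(0.91))  # a high score yields "positive"
print(classify(0.50))  # a mid-range score is flagged for further evaluation
```

With this design, widening the gap between the two cutoffs trades coverage (fewer determinate results) for confidence in the calls that are made, which mirrors the safety rationale Taraman described.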

The reference standard diagnosis, completed in all participants in this study, was a specialist evaluation completed independently by two experts. The presence or absence of ASD was confirmed if the experts agreed. If they did not, a third specialist made the final determination.

The software produced a determinate result in 52% of the children. In comparison with the specialist determinations, it produced a single false-negative diagnosis. A diagnosis of ASD was reached in 29% of the study participants.

For those with a determinate designation, the sensitivity was 98.4% and the specificity was 78.9%. This translated into positive predictive and negative predictive values of 80.8% and 98.3%, respectively.
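These predictive values follow from sensitivity, specificity, and the prevalence of ASD in the group being tested, via Bayes' rule. The sketch below assumes an illustrative prevalence of 47% among children with determinate results; that figure is consistent with the reported numbers but was not itself stated in the presentation.

```python
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Derive PPV and NPV from test characteristics using Bayes' rule."""
    tp = sensitivity * prevalence              # true positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    return tp / (tp + fp), tn / (tn + fn)

ppv, npv = predictive_values(0.984, 0.789, 0.47)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")  # close to the reported 80.8% and 98.3%
```

Because predictive values depend on prevalence, the same sensitivity and specificity would yield different PPV and NPV in a population where ASD is rarer or more common than in this referred sample.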

Of those identified as indeterminate by the AI-assisted algorithm, 91% were ultimately considered by specialist evaluation to have complex issues. In this group, ASD was part of the complex clinical picture in 20%. The others had non-ASD neurodevelopmental conditions, according to Taraman.

When accuracy was evaluated across age, ethnicity, and factors such as parental education or family income, the tool performed consistently, Taraman reported. This is important, he said, because ASD is frequently misdiagnosed or missed in underserved populations.

The focus on developing a methodology specific for use in primary care was based on evidence that the delay in the diagnosis of ASD is attributable to long wait times for specialty evaluations.

“There will never be enough specialists. There is a need for a way to streamline the diagnosis of ASD,” Taraman maintained. A faster diagnosis is helpful not only to parents concerned about their children, he said; there are also data to suggest that early intervention improves outcomes.

A specialist in ASD, Paul Carbone, MD, medical director of the child development program at the University of Utah, Salt Lake City, agreed. He said early diagnosis and intervention should be a goal.

“Reducing the age of ASD diagnosis is a priority because early entry into autism-specific interventions is a strong predictor of optimal developmental outcomes for children,” Carbone said.

Although he is not familiar with this experimental AI-assisted diagnostic program, he has published on the feasibility of ASD diagnosis at the primary care level. In his study, Carbone examined the Modified Checklist for Autism in Toddlers (M-CHAT) as one of several methodologies that might be considered.

Diagnosis of ASD “can be achieved through systematic processes within primary care that facilitate universal development surveillance and autism screening followed by prompt and timely diagnostic evaluations of at-risk children,” Carbone said.

MedscapeLive and this news organization are owned by the same parent company. Taraman reported a financial relationship with Cognoa, the company that is developing the ASD software for clinical use. Carbone reported that he has no conflicts of interest.

This article originally appeared on MDedge.com, part of the Medscape Professional Network.