With autism numbers rising every few years, some researchers in the field are sharply questioning the reliability of the government statistics.
An editorial published this month in the journal Autism takes the U.S. Centers for Disease Control and Prevention to task for the methods it uses to assess the prevalence of the developmental disorder.
The current approach is based on data collected on 8-year-olds in multiple communities across the country. Researchers review medical and education records for the children to identify any existing diagnosis of autism or symptoms that suggest a child is on the spectrum.
Since the CDC began releasing national prevalence estimates based on this surveillance method in 2007, reported rates of autism have jumped from 1 in 150 in 2007, to 1 in 110 in 2009, to 1 in 88 in 2012.
The most recent figures released in March indicate that 1 in 68 American children are on the spectrum, reflecting a 30 percent rise in just two years.
“We believe that this apparent increase should raise as many concerns about the study methods themselves as it does about other reasons for the observed change in prevalence,” reads the editorial authored by two of the journal’s editors, David Mandell of the University of Pennsylvania School of Medicine and Luc Lecavalier of Ohio State University.
Specifically, the researchers question how autism prevalence could be determined through a records review alone without assessing a single child.
What’s more, they say the wide variation in results across the different study sites should be a red flag. In the latest CDC data, for example, 1 in 45 kids in New Jersey were said to have the developmental disorder compared with 1 in 175 in Alabama. Rates also varied by race and ethnicity across the study’s 11 sites, as did the number of children found to have co-occurring intellectual disability.
“Simply put, without direct assessments of children, we will not know the extent to which the CDC-determined ‘cases’ include false positives, or the extent to which children determined not to have autism are really false negatives,” wrote Mandell and Lecavalier, adding that they believe it would be a “mistake” to continue relying on the CDC figures as “meaningful estimates of prevalence.”
For its part, the CDC did not respond to an interview request but defended its current approach in a statement to Disability Scoop.
“CDC is committed to scientific integrity and a high standard of quality for the autism data that we report. There are different methods to estimate the number of children with autism, each with its strengths and limitations. CDC stands behind the (Autism and Developmental Disabilities Monitoring) Network’s autism tracking method for providing the most complete picture of autism in communities across the United States,” the statement said.
Furthermore, a CDC spokeswoman pointed to a 2011 study published in the Journal of Autism and Developmental Disorders that, the agency said, backs its current surveillance methods by finding that the reported autism rate is likely a conservative estimate.