Evidence-based practice is the process of identifying the best available evidence to make decisions about practices that should be deployed to support individuals in a given population (McKibbon, 1998; see Vivanti, 2022, for a review in relation to autism). Practices that meet a predefined set of evidentiary criteria are labeled “evidence-based practices” (EBPs), to promote their adoption by service providers. A tenet of EBP is that the research used to designate EBPs should be rigorous, with the fewest risks of bias possible (Slavin, 2008).

We must improve the low standards underlying “evidence-based practice” – Kristen Bottema-Beutel, 2023

I think we’re just going to have to let the term “evidence-based” go. There seems to be an inverse relationship between the extent to which a practice is described as evidence-based, and the quality of evidence supporting its use.

Dr. Kristen Bottema-Beutel on Twitter

Excited to share my first editorial as a @journalautism Associate Editor! In it, I argue that frameworks for categorizing autism interventions as ‘Evidence-based Practices’ (EBPs) rely on research quality standards that are far too low 🧵 1/7


Low standards have consequences for future research and for autistic people. For research, they have likely contributed to the fact that after more than a half century of autism intervention research 2/7

there are still few high quality studies— studies w/ minimal risks of bias & adequate adverse event monitoring, & that are produced by researchers w/out COIs. 3/7

If interventions can be declared EBPs w/out studies with those important features, what’s the incentive for producing high quality research? Researchers may mistakenly interpret EBP standards as ‘high quality’ metrics, but for autism research this is not the case. 4/7

For autistic people, it means that the infrastructure for making interventions available to them is built around low quality evidence— so those interventions might not provide any benefit, and might actually be harmful. 5/7

Raising EBP standards so that EBPs must be backed by studies w/ minimal risks of bias & adequate adverse event monitoring, & be produced by researchers w/out COIs will likely improve future research quality, & ensure that there are services that benefit autistic people. 6/7

Thank you to @autismcrisis for extensive feedback on this editorial and for your significant work on these topics, and Micheal Sandbank, @ShanCLaPoint, and the @journalautism Editor team for their helpful comments! 7/end

Originally tweeted by Kristen Bottema-Beutel (@KristenBott) on January 2, 2023.

Critics of autism EBP frameworks have argued that they: do not consider the scope of change indexed by outcome measures so that broad, developmental change and narrow, context-bound change are conflated (Sandbank et al., 2021); lead to an overestimation of effectiveness by tallying studies that show effects while ignoring gray literature, studies showing null effects, and studies showing iatrogenic effects (Sandbank et al., 2020; Slavin, 2008); and use taxonomies for categorizing practices that confuse practices and specific components of those practices (Ledford et al., 2021). The aim of this editorial is to point out another limitation of autism EBP frameworks, which is that research quality thresholds are much too low for making determinations about which interventions are likely to be efficacious. Low standards result in practices with questionable efficacy being labeled EBPs and promoted for use, and perpetuate the continued production of low-quality autism intervention research.

Crucially, none of these EBP frameworks considers whether intervention researchers measure or report on adverse events, which are unintended negative consequences of interventions that can cause short- or long-term harms. This is problematic because selecting interventions should involve appropriate weighting of the potential for benefit against the potential for harm. The pairing of low standards with insufficient consideration of adverse events that is common to each of these frameworks could mean that researchers routinely recommend interventions that confer little or no benefit, while also inadvertently putting autistic people at risk of harm.

Across these two reviews, we found that adverse events were rarely mentioned (they were mentioned in 7% of studies in our review on young children, and in only 2% of studies in our review on transition-age youth), but there is nevertheless evidence that they do occur (Bottema-Beutel et al., 2021a, 2022).

The conclusions from these two quality reviews starkly contrast with findings from EBP reports. For example, nearly half of the 28 practices designated as “evidence-based” in the most recent NCAEP report were behavioral (i.e. practices that rely on manipulating behavioral antecedents and consequences to shape new behavior). Similarly, Smith and Iadarola’s (2015) report concluded that behavioral practices either alone or in combination with developmental practices were “well established,” and the National Autism Center (2015) considered a variety of behaviorally-based interventions to be “established.” However, in Sandbank et al. (2020), we showed that there were too few randomized controlled trials of behavioral interventions to make any conclusions about their efficacy for autistic children. In our review of interventions for transition-age autistic youth (Bottema-Beutel et al., 2022), we found that although 70% of the interventions tested were behaviorally-based, quality concerns prevented us from considering any intervention practice to have sufficient evidence. Because autism EBP frameworks do not distinguish between research that adheres to some quality standards but is still designed with significant risks of bias, and research with minimized risks of bias, the reports may mislead researchers, practitioners, and commissioners of services to conclude that behavioral interventions are better supported by research evidence than other kinds of interventions, given the high number of behavioral strategies labeled as EBPs. In reality, behavioral intervention research has more risks of bias relative to research examining other types of interventions (Sandbank et al., 2020).

We must improve the low standards underlying “evidence-based practice” – Kristen Bottema-Beutel, 2023

And if it turns out that, contrary to widespread assumptions, behavior modification techniques aren’t supported by solid data even when used with autistic kids, why would we persist in manipulating anyone with positive reinforcement? A rigorous new meta-analysis utterly debunks the claim that applied behavior analysis (ABA) therapy is the only intervention for children with autism that’s “evidence-based.” In fact, it raises serious questions about whether ABA merits that description at all.

You might assume that those who use the phrase “evidence-based practice” (EBP) are offering a testable claim, asserting that the practices in question are supported by good data. In reality, the phrase is more of an all-purpose honorific, wielded to silence dissent, intimidate critics, and imply that anyone who criticizes what they’re doing is rejecting science itself. It’s reminiscent of the way a religious leader might declare that what we’ve been told to do is “God’s will”: End of discussion.

Moreover – and it took me awhile to catch on to this – behaviorists often use “EBP” just as a shorthand for the practices they like, in contrast to the (progressive or humanistic) approaches they revile. It doesn’t matter if the evidence is actually weak or ambiguous or even if it points in the other direction. They’ll always come up with some reason to dismiss those inconvenient findings because their method is “evidence-based” by definition. (On social media and elsewhere, you can get a glimpse of how modern behaviorism resembles a religious cult – closer to Scientology than to science – with adherents circling the wagons, trading ad hominem attacks on their critics, and testing out defensive strategies to employ when, for example, people with autism speak out about how ABA has harmed them. Or when scholarship shows just how weak the empirical case for ABA really is.)

Which brings us back to that new research review. The work of eleven authors – including, interestingly, an ABA therapist – representing the University of Texas, Boston College, Vanderbilt, and Mount Holyoke, it was published in January 2020 in Psychological Bulletin (PB), a prestigious social science journal that specializes in lengthy integrative research reviews. The article is not a polemic. It does not consider, and appears not even to be informed by, any of the broader objections to ABA that are raised by autistic people or that I’ve raised here. It confines itself to describing peer-reviewed research. The authors cast a wide net, looking for every English-language study in the last half-century that compared an intervention group with a control group in treating children up to age 8 who had been diagnosed with Autism Spectrum Disorder. This yielded 1,615 separate results from 150 reports representing 6,240 participants.

The most striking finding in this research review is how few high-quality assessments of “the primary approach used in clinical practice” – that is, ABA – have ever been conducted. In fact, the great majority of ABA studies were so poorly designed that they didn’t merit inclusion in this review. Rather than comparing the results of different treatments for groups of children, behaviorist journals commonly publish single-subject studies, in which one child is assessed before and after treatment. (This method was invented by behaviorists back when their behavior-shaping efforts were limited to lab rats.) You don’t have to be a trained data analyst to see the serious limitations of this method in terms of the results’ lack of generalizability. For the authors of the PB review, these limitations were so glaring that it didn’t even make sense for them to bother with the results of single-subject studies. Yet those dubious results are the primary basis for behaviorists’ claims that ABA is “evidence-based.”

Autism and Behaviorism – Alfie Kohn

Perspectives that lack knowledge are often dangerously misinformed.

You would think that would be a pretty obvious statement and perhaps you might think that there are certain contexts where that should be a mantra imprinted in the brains of everyone involved.

Naively, when I was much younger, less knowledgeable about myself and much less worldly-wise, I used to think that Autism Research would be one of those contexts.

How wrong I was and how terrifying it is when I look around and see so many Autistic people invested in Autism research like it’s written in the holy scripture of [insert religion here].

Autism research is incredibly flawed in an enormous number of ways. One example is the fact that the sum total of all knowledge of Autism in academia is based on the work of two incredibly flawed men, both with incredibly flawed ideas and practice from the 1940s. Everything we know professionally and societally about Autism is underpinned by their work. As I’ve said so many times in talks and trainings the whole of Autism research is built on a foundation of sand.

Why is it a foundation of sand? Well, right from day one the narrative of Autism research has been this:

  • ‘Expert’ looks at Autistic person (usually child; usually white child; usually white boy child; usually white boy child that presents in a particular way).
  • ‘Expert’ takes notes.
  • ‘Expert’ forms opinion.
  • ‘Expert’ writes it up.
  • Another ‘expert’ nods wisely.
  • ‘Expert’ publishes.
  • ‘Experts’ applaud ‘Experts’.
  • Whole world believes ‘Expert’.
  • Services are developed around ‘Expert’ knowledge.
Autistic Masking: a new Academic Paper – Kieran Rose

It’s now often just marketing jargon. Practices that are accepted as evidence-based generally don’t have to try to sell themselves as evidence-based.

I’d be curious how many things labeled “evidence-based” are for profit.

Noah Sasson on Twitter


Published by Ryan Boren

#ActuallyAutistic retired technologist turned wannabe-sociologist. Equity literate education, respectfully connected parenting, passion-based learning, indie ed-tech, neurodiversity, social model of disability, design for real life, inclusion, open web, open source. he/they