Evidence-based practice is the process of identifying the best available evidence to make decisions about practices that should be deployed to support individuals in a given population (McKibbon, 1998; see Vivanti, 2022, for a review in relation to autism). Practices that meet a predefined set of evidentiary criteria are labeled “evidence-based practices” (EBPs), to promote their adoption by service providers. A tenet of EBP is that the research used to designate EBPs should be rigorous, with the fewest risks of bias possible (Slavin, 2008).
We must improve the low standards underlying “evidence-based practice” – Kristen Bottema-Beutel, 2023
I think we’re just going to have to let the term “evidence-based” go. There seems to be an inverse relationship between the extent to which a practice is described as evidence-based, and the quality of evidence supporting its use.
Dr. Kristen Bottema-Beutel on Twitter
Evidence-based Practice and Autism Research
https://journals.sagepub.com/doi/10.1177/13623613221146441
Low standards have consequences for future research and for autistic people. For research, it has likely contributed to the fact that after more than a half century of autism intervention research 2/7
there are still few high quality studies— studies w/ minimal risks of bias & adequate adverse event monitoring, & that are produced by researchers w/out COIs. 3/7
If interventions can be declared EBPs w/out studies with those important features, what’s the incentive for producing high quality research? Researchers may mistakenly interpret EBP standards as ‘high quality’ metrics, but for autism research this is not the case. 4/7
For autistic people, it means that the infrastructure for making interventions available to them is built around low quality evidence— so those interventions might not provide any benefit, and might actually be harmful. 5/7
Raising EBP standards so that EBPs must be backed by studies w/ minimal risks of bias & adequate adverse event monitoring, & be produced by researchers w/out COIs will likely improve future research quality, & ensure that there are services that benefit autistic people. 6/7
Thank you to @autismcrisis for extensive feedback on this editorial and for your significant work on these topics, and Micheal Sandbank, @ShanCLaPoint, and the @journalautism Editor team for their helpful comments! 7/end
Originally tweeted by Kristen Bottema-Beutel (@KristenBott) on January 2, 2023.
Critics of autism EBP frameworks have argued that they: do not consider the scope of change indexed by outcome measures so that broad, developmental change and narrow, context-bound change are conflated (Sandbank et al., 2021); lead to an overestimation of effectiveness by tallying studies that show effects while ignoring gray literature, studies showing null effects, and studies showing iatrogenic effects (Sandbank et al., 2020; Slavin, 2008); and use taxonomies for categorizing practices that confuse practices and specific components of those practices (Ledford et al., 2021). The aim of this editorial is to point out another limitation of autism EBP frameworks, which is that research quality thresholds are much too low for making determinations about which interventions are likely to be efficacious. Low standards result in practices with questionable efficacy being labeled EBPs and promoted for use, and perpetuate the continued production of low-quality autism intervention research.
Crucially, none of these EBP frameworks considers whether intervention researchers measure or report on adverse events, which are unintended negative consequences of interventions that can cause short- or long-term harms. This is problematic because selecting interventions should involve appropriate weighting of the potential for benefit against the potential for harm. The pairing of low standards with insufficient consideration of adverse events that is common to each of these frameworks could mean that researchers routinely recommend interventions that confer little or no benefit, while also inadvertently putting autistic people at risk of harm.
Across these two reviews, we found that adverse events were rarely mentioned (they were mentioned in 7% of studies in our review on young children, and in only 2% of studies in our review on transition-age youth), but there is nevertheless evidence that they do occur (Bottema-Beutel et al., 2021a, 2022).
The conclusions from these two quality reviews starkly contrast with findings from EBP reports. For example, nearly half of the 28 practices designated as “evidence-based” in the most recent NCAEP report were behavioral (i.e. practices that rely on manipulating behavioral antecedents and consequences to shape new behavior). Similarly, Smith and Iadarola’s (2015) report concluded that behavioral practices either alone or in combination with developmental practices were “well established,” and the National Autism Center (2015) considered a variety of behaviorally-based interventions to be “established.” However, in Sandbank et al. (2020), we showed that there were too few randomized controlled trials of behavioral interventions to make any conclusions about their efficacy for autistic children. In our review of interventions for transition-age autistic youth (Bottema-Beutel et al., 2022), we found that although 70% of the interventions tested were behaviorally-based, quality concerns prevented us from considering any intervention practice to have sufficient evidence. Because autism EBP frameworks do not distinguish between research that adheres to some quality standards but is still designed with significant risks of bias, and research with minimized risks of bias, the reports may mislead researchers, practitioners, and commissioners of services to conclude that behavioral interventions are better supported by research evidence than other kinds of interventions, given the high number of behavioral strategies labeled as EBPs. In reality, behavioral intervention research has more risks of bias relative to research examining other types of interventions (Sandbank et al., 2020).
We must improve the low standards underlying “evidence-based practice” – Kristen Bottema-Beutel, 2023
And if it turns out that, contrary to widespread assumptions, behavior modification techniques aren’t supported by solid data even when used with autistic kids, why would we persist in manipulating anyone with positive reinforcement?

A rigorous new meta-analysis utterly debunks the claim that applied behavior analysis (ABA) therapy is the only intervention for children with autism that’s “evidence-based.” In fact, it raises serious questions about whether ABA merits that description at all.
You might assume that those who use the phrase “evidence-based practice” (EBP) are offering a testable claim, asserting that the practices in question are supported by good data. In reality, the phrase is more of an all-purpose honorific, wielded to silence dissent, intimidate critics, and imply that anyone who criticizes what they’re doing is rejecting science itself. It’s reminiscent of the way a religious leader might declare that what we’ve been told to do is “God’s will”: End of discussion.
Autism and Behaviorism – Alfie Kohn
Perspectives that lack knowledge are often dangerously misinformed.
You would think that would be a pretty obvious statement and perhaps you might think that there are certain contexts where that should be a mantra imprinted in the brains of everyone involved.
Naively, when I was much younger, less knowledgeable about myself and much less worldly-wise, I used to think that Autism Research would be one of those contexts.
How wrong I was and how terrifying it is when I look around and see so many Autistic people invested in Autism research like it’s written in the holy scripture of [insert religion here].
Autism research is incredibly flawed in an enormous number of ways. One example is the fact that the sum total of all knowledge of Autism in academia is based on the work of two incredibly flawed men, both with incredibly flawed ideas and practice from the 1940s. Everything we know professionally and societally about Autism is underpinned by their work. As I’ve said so many times in talks and trainings, the whole of Autism research is built on a foundation of sand.
Why is it a foundation of sand? Well, right from day one the narrative of Autism research has been this:
- ‘Expert’ looks at Autistic person (usually child; usually white child; usually white boy child; usually white boy child that presents in a particular way).
- ‘Expert’ takes notes.
- ‘Expert’ forms opinion.
- ‘Expert’ writes it up.
- Another ‘expert’ nods wisely.
- ‘Expert’ publishes.
- ‘Experts’ applaud ‘Experts’.
- Whole world believes ‘Expert’.
- Services are developed around ‘Expert’ knowledge.

Autistic Masking: Kieran Rose a new Academic Paper
It’s now often just marketing jargon. Practices that are accepted as evidence-based generally don’t have to try to sell themselves as evidence-based.
Noah Sasson on Twitter
I’d be curious how many things labeled “evidence-based” are for profit.
Evidence-based Practice and Education
First, the causal assumptions of medicine do not translate into education, because education is not a physical process and “being a student is quite different from that of being a patient — being a student is not an illness, just as teaching is not a cure.”
“Since education does not operate through physical forces – unlike, say, the closed, deterministic systems that are relied upon in medicine – education is an open, recursive system that depends upon mutual interpretation and exchange as students make sense of new information and teachers respond to student sense-making.”
In other words, learning is inherently complex and messy.
Schools Are Not Labs: Why “What Works” May Hurt
Why “What Works” Won’t Work: Evidence-Based Practice and the Democratic Deficit in Educational Research
Why ‘What Works’ Still Won’t Work: From Evidence-Based Education to Value-Based Education
Education professor Yong Zhao argues that evidence-based education presumes a causal and technological model of professional action.
Zhao outlines the ways international ranking of educational systems “imposes a monolithic and West-centric view of societies on the rest of the world” and “distorts the purpose of education”, and he makes the important case that understanding what he calls the “side-effects” of “what works” helps us see that “what works may hurt.”
Schools Are Not Labs: Why “What Works” May Hurt – YouTube
Medical research is required to investigate both the intended effects of any medical interventions and their unintended adverse effects, or side effects. In contrast, educational research tends to focus only on proving the effectiveness of practices and policies in pursuit of ‘what works’. It has generally ignored the potential harms that can result from ‘what works.’
What works may hurt: Side effects in education | SpringerLink
So what if we knew a particular instructional intervention was effective for knowledge transmission yet stifled creativity and problem solving, constrained exploration and discovery, and inhibited curiosity? Would we prescribe that intervention in every classroom for every child?
Zhao documents a number of side-effects associated with Direct Instruction models, and argues that direct instruction proponents’ failure to convince critics is not “because of their lack of data or rigorous research methods.”
He doubts the effectiveness of DI (direct instruction) and believes that the opposition “stems from a different set of concerns such as the rigidity and prescriptiveness of the approach, inconsistency with developmental theories, inappropriateness for certain groups of children and contexts, sustainability of the effects over time, suppression of learner autonomy and development of creativity, and other potential damaging side effects.”
Schools Are Not Labs: Why “What Works” May Hurt – YouTube
Complexity Reduction
To make evidence-based practices “work” – that is, to create the kind of order needed to replicate the experimental conditions of the randomized controlled trial – would mean transforming learning from an open, recursive system of sense-making into the closed, deterministic system of inputs and outputs valued in the clinical laboratory, a process Biesta refers to as complexity reduction.
Systems use complexity reduction to limit the number of possible actions and options that are available, in order to make processes more efficient and to control outcomes.
Schools are an example of complexity reduction in education.
Making decisions about which interventions are implemented when, at what levels, to what extent, for whom, who is excluded, and even deciding who makes these decisions, are all exercises in power. As Biesta writes,
“Since any attempt to reduce the number of available options for action for the ‘elements’ within a system is about the exertion of power, complexity reduction should therefore be understood as a political act.”
Schools Are Not Labs: Why “What Works” May Hurt – YouTube
This becomes deeply problematic in those cases in which it is argued that professionals should only be allowed to do those things for which there is positive research evidence available, an approach which [has been], in my view, correctly identified as a form of totalitarianism.
Gert Biesta
Adverse Events and Unintended Consequences
But how can educators make a professional decision about “what works” without also understanding the potential for unintended consequences?
Schools Are Not Labs: Why “What Works” May Hurt – YouTube
Crucially, none of these EBP frameworks considers whether intervention researchers measure or report on adverse events, which are unintended negative consequences of interventions that can cause short- or long-term harms.
The pairing of low standards with insufficient consideration of adverse events that is common to each of these frameworks could mean that researchers routinely recommend interventions that confer little or no benefit, while also inadvertently putting autistic people at risk of harm.
Across these two reviews, we found that adverse events were rarely mentioned (they were mentioned in 7% of studies in our review on young children, and in only 2% of studies in our review on transition-age youth), but there is nevertheless evidence that they do occur (Bottema-Beutel et al., 2021a, 2022).
We must improve the low standards underlying “evidence-based practice” – Kristen Bottema-Beutel, 2023
Zhao argues that we need “both effective ways to transmit knowledge and foster creativity” and that by presenting side-effects alongside measures of effectiveness we can begin to bridge the gap between proponents and opponents of direct instruction as we work together to build a pedagogy that balances our desire for effective instruction with the need to protect curiosity, motivation, engagement, and creativity.
Schools Are Not Labs: Why “What Works” May Hurt – YouTube
Further, it’s important to recognize that the researchers themselves must be contextualized.
Schools Are Not Labs: Why “What Works” May Hurt – YouTube
Lived Experience Informed Practice
Lived Experience Informed Practice.
✨ lived experience is the foundation of practice.
✨ we can draw on research and clinical evidence but we prioritise lived experience over research.
✨ EBP says best available evidence must be utilised but LEIP says if the evidence doesn’t align with lived experience or reflect what the community is saying then we don’t have to use it.
✨ EBP asks, what does the research say?
✨ LEIP asks, what does lived experience say?
I mean, LEIP seems way more affirming than EBP when it comes to neurodivergence but that’s just me so 🤷🏻‍♂️🫠
Sonny Jane Wise on Instagram
Further reading: