Health systems claim to make decisions based on evidence. The GRADE Evidence to Decision (EtD) framework is the standard they’re supposed to use. When you apply it honestly to mainstream autism intervention, the results are damning. When you apply it to Stimpunks’ approach, the alignment is striking.

This page explains the framework, shows where conventional autism practice fails it, and shows how our work embodies what the framework actually requires.

What Is the GRADE EtD Framework?

GRADE stands for Grading of Recommendations Assessment, Development and Evaluation. It’s a widely adopted international standard for clinical and public health decision-making, used by the World Health Organization, Cochrane, and most major health bodies. The Evidence to Decision component addresses a specific question: even when you have evidence, how do you decide what to do with it?

The EtD framework requires decision-makers to weigh nine factors:

  1. Priority of the problem — Is the problem a priority? Who defined it, and whose experience shapes that definition?
  2. Desirable effects — What are the probable benefits, and how large?
  3. Undesirable effects — What are the probable harms, and how large?
  4. Certainty of evidence — How confident can we actually be in the estimates of benefit and harm?
  5. Values — How do the people affected value the main outcomes? Do they agree those outcomes matter?
  6. Balance of effects — When you weigh benefits against harms, which way does it tip?
  7. Resource use — What does this cost, and who bears that cost?
  8. Acceptability — Would the people affected accept this recommendation?
  9. Feasibility — Can this be implemented equitably?
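
The nine criteria above can be sketched as a structured checklist. This is a hypothetical illustration, not GRADE’s official tooling (such as GRADEpro); the criterion names follow the framework, but the data structure and judgment labels are assumptions for illustration only.

```python
from dataclasses import dataclass, field

# The nine GRADE EtD criteria. This checklist sketch is illustrative;
# the judgment labels below are assumptions, not official GRADE terms.
ETD_CRITERIA = [
    "priority of the problem",
    "desirable effects",
    "undesirable effects",
    "certainty of evidence",
    "values",
    "balance of effects",
    "resource use",
    "acceptability",
    "feasibility",
]

@dataclass
class EtDAssessment:
    intervention: str
    # criterion -> "favors" | "against" | "unclear" (assumed labels)
    judgments: dict = field(default_factory=dict)

    def record(self, criterion: str, judgment: str) -> None:
        if criterion not in ETD_CRITERIA:
            raise ValueError(f"unknown EtD criterion: {criterion}")
        self.judgments[criterion] = judgment

    def unassessed(self) -> list:
        # Criteria with no explicit judgment. Under EtD, silence on a
        # criterion (e.g. harms never monitored) is itself a finding.
        return [c for c in ETD_CRITERIA if c not in self.judgments]

assessment = EtDAssessment("example intervention")
assessment.record("values", "against")
print(assessment.unassessed())  # the other eight criteria remain unassessed
```

The point of the sketch is the `unassessed` check: a recommendation that has only addressed one or two criteria has not completed the framework, no matter how strong the evidence on those criteria looks.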

These aren’t philosophical preferences. They’re the criteria that rigorous, ethical health decision-making is supposed to meet. Autism researcher Kristen Bottema-Beutel of Boston College applied this framework to the current autism policy landscape in a 2026 editorial in Autism (SAGE) and found that both US federal guidance and mainstream autism intervention research fail most of them — in predictable, ableist ways.

“Autism intervention researchers and providers often select intervention outcomes that are insufficiently (or not at all) aligned with autistic community values, and they rarely consult autistic communities about which problems should be prioritized in intervention research.”

Bottema-Beutel, 2026

The Through-Line: Autism Itself Is Positioned as Harm

Bottema-Beutel identifies two ableist assumptions that drive intervention failures in both federal policy and mainstream research:

  1. Avoiding autism is so desirable that any intervention that might reduce its occurrence is worth attempting, regardless of unknown harms and absent evidence of benefit.
  2. Any intervention possibly associated with increased autism incidence should be avoided, even in the face of robust evidence of benefit.

These assumptions are incoherent as evidentiary standards — one waives any need for evidence of benefit, the other overrides robust evidence of benefit. But they share a root: autism is positioned as harm. Every criterion in the EtD framework bends around that root assumption. Problem definition, outcome selection, harm monitoring, community values — all distorted by the prior that being autistic is the worst possible outcome.

This is the clinical vocabulary for what we mean by “broken systems, not broken people.” The system’s framing is the problem. The people are not.

Where Mainstream Autism Intervention Fails the EtD

Criterion 1: Is the problem a priority?

The EtD asks who defined the problem in the first place. Autism intervention research has historically defined the problem as autistic traits — stimming, communication differences, executive function profiles — rather than hostile, inaccessible environments.

Stimpunks starts from a different problem definition: the mismatch between environments and neurodivergent bodyminds. That’s not an ideological stance. Under EtD, it’s a structural requirement. The problem must be defined by those experiencing it.

Criteria 2 and 3: Desirable and undesirable effects

Many autism interventions — including Applied Behavior Analysis (ABA) — were designed to return autistic children to “normalcy” (Lovaas et al., 1974). Many researchers and providers still define the best outcome as an autistic child who no longer meets diagnostic criteria or shows fewer autistic traits. That is not what most autistic people want from intervention.

Research by Kapp et al. (2019) documents that stimming — often a primary ABA target — is important to autistic people’s communication and self-regulation. Suppressing it is not a benefit from the perspective of the person receiving the intervention. It is a harm.

Meanwhile, undesirable effects are almost never monitored or reported in autism intervention research (Bottema-Beutel et al., 2021; Sandbank et al., 2023). Emerging correlational research is beginning to document long-term mental health harms associated with widely used interventions (Aguirre Mtanous et al., 2026; Anderson, 2023). The failure to track adverse events is not an incidental gap. It is an ethical failure.

Criterion 4: Certainty of evidence

Bottema-Beutel’s 2023 paper argues that autism researchers routinely communicate more confidence about benefits than the evidence warrants. The evidence base that many interventions cite is weaker than funders and families typically assume.

This matters. Funders allocating resources to autism intervention often do so under the impression that the evidence is stronger than it is. Honest communication of certainty — a core EtD requirement — is systematically absent.

Criterion 5: Values

This is the criterion most directly at stake in everything Stimpunks does.

The EtD framework requires explicit consideration of how the people affected by an intervention value its outcomes. Most autism intervention research simply doesn’t do this. Outcomes are selected by researchers and clinicians, not by autistic people. Sulek et al. (2025) found that what autistic communities prioritize for professional supports differs substantially from what providers focus on. Autistic people and their families want interventions that contribute to well-being. Intervention providers more often focus on compliance and reducing autistic traits (McConachie et al., 2018; Sulek et al., 2025).

When values are excluded from decision-making, the whole framework fails. You can have strong evidence for a precisely measured outcome that nobody wanted measured.

Criterion 6: Balance of effects

When you apply the EtD honestly to the dominant intervention landscape, the balance is unclear at best and potentially negative. Benefits are uncertain and defined by others. Harms are unmonitored and beginning to emerge in the literature. The precautionary logic runs in the opposite direction from how it’s typically applied: given unknown harms and uncertain benefits for outcomes the community didn’t define, the burden of proof should be higher, not lower.

Criterion 7: Resource use

Intensive ABA can cost tens of thousands of dollars per child per year, often drawing from the same funding pools that could support community-led mutual aid, accessible design, peer support networks, and capacity-building for autistic-led organizations.

Resource use under EtD isn’t only about financial cost. It includes opportunity cost. Every dollar spent on compliance-based intervention is a dollar not spent on environments designed to actually fit neurodivergent bodyminds.

Criterion 8: Acceptability

Interventions imposed without community consent, or whose primary outcomes autistic communities have repeatedly said they don’t want, fail this criterion regardless of efficacy evidence. The double bind — that autistic people who object to ABA are often dismissed as lacking insight — is itself an ableist override of the acceptability requirement.

Criterion 9: Feasibility

Even where accommodations are theoretically endorsed, the infrastructure to deliver them equitably often doesn’t exist. There are few formal “official” accommodations for autistic adults. As one participant in Rose and Lupton’s 2026 research put it: people understand you need glasses to see, but they don’t understand the need for sensory aids. The supports autistic people rely on appear improvised — ad hoc — because systems were never designed to formalize them.

Feasibility isn’t just about whether something can be done. It’s about whether it can be done equitably. That requires design.

What Stimpunks Actually Does Under This Framework

We don’t frame our work in EtD language. But our practice maps directly onto what the framework requires.

Problem definition by the community. We start from the neurodiversity paradigm: broken systems, not broken people. The problem is hostile environments, inaccessible design, and systems that were never built for neurodivergent bodyminds. Autistic and neurodivergent people define what needs to change.

Outcomes defined by participants. Our Pattern Library, Design Recipes, and ARLES methodology center what actually helps neurodivergent people learn, regulate, and connect — not what makes us look more neurotypical. Our knowledge-building process draws on lived experience as authoritative, not supplementary.

Harm minimization as a design principle. Regulation-first design, Cavendish Space, and our commitment to low-demand, low-stakes learning environments are harm-aware by construction. We ask: what does this cost the person? What might go wrong? These are EtD questions.

Community values in outcome selection. Our human-authored glossary, our insistence on lived experience testimony over AI-generated simulation, and our “nothing about us without us” orientation are, in EtD terms, the correct methodological approach to the values criterion. It isn’t ideology. It’s what valid decision-making requires.

Transparent communication of uncertainty. We don’t overclaim. Where evidence is uncertain, we say so. Where something works for some people in some contexts, we say that, rather than presenting it as universal.

Mutual aid as an alternative use of resources. Our mutual aid model — peer support, accessible learning, direct resource sharing — has a fundamentally different harm profile from compliance-based intervention. It doesn’t suppress traits. It doesn’t require outcomes the community didn’t define. The opportunity cost of not funding it is real.

A Framework for Funders

Funders allocating resources to autism-related work often ask about evidence. That’s the right question. The GRADE EtD framework is how evidence should be evaluated.

Applying it honestly means asking:

  • Who defined the problem this intervention addresses?
  • Are the outcomes things autistic people actually want?
  • Have harms been monitored and reported?
  • How certain is the evidence, and is that certainty communicated accurately?
  • Do the people this serves find it acceptable?
  • What is the opportunity cost of this investment compared to alternatives?
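
The questions above can be sketched as a simple screening pass. This is a hypothetical illustration — the question wording comes from this page, but the function and its gap-flagging logic are assumptions, not an official GRADE or funder instrument.

```python
# Hypothetical funder screening sketch. The questions mirror the checklist
# on this page; the screening logic is an illustrative assumption.
FUNDER_QUESTIONS = [
    "Who defined the problem this intervention addresses?",
    "Are the outcomes things autistic people actually want?",
    "Have harms been monitored and reported?",
    "How certain is the evidence, and is that certainty communicated accurately?",
    "Do the people this serves find it acceptable?",
    "What is the opportunity cost of this investment compared to alternatives?",
]

def screen(answers: dict) -> list:
    """Return the questions still unanswered — any gap flags further review."""
    return [q for q in FUNDER_QUESTIONS if not answers.get(q)]

print(screen({}))  # with no answers recorded, all six questions remain open
```

The design choice matters: the screen does not score answers, it only surfaces silence. As with the EtD framework itself, an unexamined criterion is a finding, not a pass.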

These questions don’t lead away from evidence. They lead toward it. And they lead toward models — like mutual aid, community-led design, and regulation-first support — that mainstream intervention research has systematically avoided asking about.

The evidence base is not neutral. Who defines the problem, selects the outcomes, and monitors the harms is a design choice. The GRADE EtD framework makes those choices legible. Applying it to autism is long overdue.


References

Aguirre Mtanous, N. G., Koenig, J., Nikahd, M., et al. (2026). Mental health outcomes associated with applied behavior analysis in a US national sample of privately insured autistic youth. Autism, 30, 484–494. https://doi.org/10.1177/13623613251390604

Alonso-Coello, P., Schünemann, H. J., Moberg, J., et al.; GRADE Working Group. (2016). GRADE Evidence to Decision (EtD) frameworks: A systematic and transparent approach to making well informed healthcare choices. BMJ, 353, i2016. https://doi.org/10.1136/bmj.i2016

Anderson, L. K. (2023). Autistic experiences of applied behavior analysis. Autism, 27(3), 737–750. https://doi.org/10.1177/13623613221118216

Bottema-Beutel, K. (2023). We must improve the low standards underlying “evidence-based practice”. Autism, 27(2), 269–274. https://doi.org/10.1177/13623613221146441

Bottema-Beutel, K. (2026). When ableism supplants evidence: Federal autism guidance in the United States. Autism. https://doi.org/10.1177/13623613261439928

Bottema-Beutel, K., Crowley, S., Sandbank, M., & Woynaroski, T. G. (2021). Adverse event reporting in intervention research for young autistic children. Autism, 25(2), 322–335. https://doi.org/10.1177/1362361320965331

Kapp, S. K., Steward, R., Crane, L., et al. (2019). ‘People should be allowed to do what they like’: Autistic adults’ views and experiences of stimming. Autism, 23(7), 1782–1792. https://doi.org/10.1177/1362361319829628

Lovaas, O. I., Schreibman, L., & Koegel, R. L. (1974). A behavior modification approach to the treatment of autistic children. Journal of Autism and Childhood Schizophrenia, 4(2), 111–128.

McConachie, H., Livingstone, N., Morris, C., et al. (2018). Parents suggest which indicators of progress and outcomes should be measured in young children with autism spectrum disorder. Journal of Autism and Developmental Disorders, 48, 1041–1051.

Rose, M. C., & Lupton, D. (2026). Crafting livable worlds: Sensory, creative, and nonhuman supports in autistic adults’ everyday lives. Autism in Adulthood. https://doi.org/10.1177/25739581261426633

Sandbank, M., Bottema-Beutel, K., LaPoint, S. C., et al. (2023). Autism intervention meta-analysis of early childhood studies (Project AIM): Updated systematic review and secondary analysis. BMJ, 383, e076733.

Sulek, R., Edwards, C., Monk, R., et al. (2025). Community priorities for outcomes targeted during professional supports for autistic children and their families. Journal of Autism and Developmental Disorders, 55, 1890–1901.