This page is a companion to AI Collaboration at Stimpunks, our organizational policy and disclosure document. That page covers what we do and how. This page covers why — the structural critique, harm reduction ethics, and disability justice framework that inform those choices.
Table of Contents
- The Tensions We Sit With
- Supporting Cognition and Coping
- Harm Reduction and Individual Use
- MADTech, Constructionism, and the Tool Belt
- From engineered exclusion to designed dignity.
- Co-design Is Not a Strategy. It’s a Political Demand.
- Resources
- Alignments and External Frameworks
- Alignments between Stimpunks AI Collaboration guidelines and How to (Anti) AI Better
- Alignments between Stimpunks AI Collaboration guidelines and “AI as a Fascist Artifact”
- Alignments between Stimpunks AI Collaboration guidelines and “The Majority AI View”
- Alignments between Stimpunks AI Collaboration guidelines and AI Chatbots: Last Week Tonight
- Alignments between Stimpunks AI Collaboration guidelines and the EDSAFE SAFE Benchmarks Framework
- Alignments between Stimpunks AI Collaboration guidelines and Prioritizing Students with Disabilities in AI Policy (Version 2, April 2026)
- Alignments between Stimpunks AI and Disability Justice guidelines and A People’s Guide to AI
The Tensions We Sit With
We use AI tools. We also hold real concerns about what those tools cost — not just for us, but for the world.
This isn’t a contradiction we resolve by looking away. It’s a tension we name and carry consciously. Joseph Weizenbaum, who built the first chatbot in 1966, asked questions that still go almost unasked: “Who is the beneficiary of our much-advertised technological progress and who are its victims?” That question belongs here.
The environmental cost is not abstract. GenAI data centers may carry a carbon footprint comparable to New York City and a water footprint in the range of global annual bottled water consumption. Hardware demand has driven up prices for everyday electronics. Coal plants that might otherwise retire are kept running. These are not hypotheticals — they are present-tense harms accumulating with every query.
The labor cost falls on those with the least power. Before large language models reach public use, workers in under-resourced countries sort through violence, pornography, and trauma to filter the training data. They are underpaid and largely unsupported. The infrastructure of “seamless” AI runs on sweatshop labor with added psychological harm.
The data acquisition violence is worth naming more precisely. AI systems exist because their builders operated on the belief that if you can download it, you can use it — that might makes right. Labs have scraped creative work, digitized unlicensed books, and ignored robots.txt and explicit opt-outs at scale, not as oversight but as design intent. The racism, colonialism, and sexism embedded in training data are not bugs introduced by bad users — they are the shape of what the West decided was worth digitizing, fed back as “all of human knowledge.” Cultures outside that framework, oral traditions, communities whose history lives in forms that weren’t profitable to digitize — they are not represented. Or if they are, it is as problems. These are not theoretical harms downstream of use. They are the structural conditions under which any AI system we touch was built.
Disabled creators carry a specific version of this harm. Many neurodivergent and disabled people make art, write, and create as a primary form of communication, income, and identity testimony — not as a side pursuit. Their work was scraped alongside everyone else’s, without consent, without compensation, and without acknowledgment. The outputs now compete with them in the same markets. For creators whose income is already precarious — because inaccessible industries, variable capacity, and systems designed against them narrow the available economic paths — this is not an abstract intellectual property concern. It is a direct material harm to people with the fewest buffers to absorb it. This is one reason our art policy holds a harder line than our text policy. Art on this site is community representation and artist support. The solidarity cost of using AI-generated art is concrete, not theoretical, and it lands on people we are accountable to.
The epistemic cost is our critical thinking. LLMs have been described as “stochastic parrots” — statistical mimicry without understanding. Fluent-sounding outputs can replace careful thought rather than support it. Sycophancy is a documented failure mode: these systems tend to agree with and flatter users, which corrodes the honest friction that good thinking requires.
The bias cost lands on people already marginalized. Training data encodes existing inequities. Studies have found that LLM-assisted storytelling casts nondisabled white children as heroes and disabled and minority children as victims. These aren’t edge cases. They’re what happens when you compress an unjust world and call the output neutral.
The bias isn’t confined to narrative generation. Research from Penn State found that AI sentiment and toxicity analysis tools consistently flag sentences containing disability-related terms — words like “blind,” “deaf,” and “autistic” — as negative or toxic, even when the context is neutral or affirmative. “All of the public models we studied exhibited significant bias against disability,” the researchers found. The language of our community is being marked as dangerous by the systems our community is increasingly being told to use.
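The Penn State finding is checkable for any model under consideration. Below is a minimal sketch of that kind of perturbation audit, not the researchers' protocol: hold a sentence constant, swap a single identity term, and compare the scores. The template sentence, the term lists, and the pipeline's default model are all illustrative assumptions.

```python
# Minimal perturbation audit: does a sentiment model score otherwise-identical
# sentences as more negative when they contain disability-related terms?
# Uses the Hugging Face `transformers` sentiment pipeline with its default
# model; swap in whatever classifier you are actually evaluating.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

TEMPLATE = "My neighbor is a {} person and a wonderful cook."
DISABILITY_TERMS = ["blind", "deaf", "autistic"]
CONTROL_TERMS = ["tall", "quiet", "left-handed"]

def negativity(sentence: str) -> float:
    """Return the probability mass the model assigns to NEGATIVE."""
    result = classifier(sentence)[0]
    return result["score"] if result["label"] == "NEGATIVE" else 1.0 - result["score"]

for group, terms in [("disability", DISABILITY_TERMS), ("control", CONTROL_TERMS)]:
    scores = [negativity(TEMPLATE.format(term)) for term in terms]
    print(f"{group:10s} mean negativity: {sum(scores) / len(scores):.3f}")

# If the disability group scores consistently higher negativity on sentences
# that differ by one neutral word, the model is flagging the identity terms
# themselves: the pattern the Penn State study documented.
```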
The design of discriminatory systems to appear neutral has a name. Ruha Benjamin’s concept of the New Jim Code — developed in Race After Technology (2019) — describes how technologies encode and automate racial discrimination while projecting an appearance of objectivity, fairness, and progress. The mechanism is structural: discriminatory assumptions are baked into training data, evaluation metrics, and design choices, then laundered through the language of optimization and scale. The result is systems that can harm Black, Indigenous, and multiply-marginalized people at algorithmic speed while their builders claim neutrality.
Benjamin’s framework applies directly to AI systems touching neurodivergent and disabled communities. The pathology paradigm — the deficit model that frames neurodivergent people as broken rather than different — is not just a clinical attitude. It is encoded in the data those systems were trained on. Diagnostic language, behavioral intervention literature, compliance-oriented educational research: this is the substrate. When an AI system is trained on that substrate and deployed to support disabled students, it does not arrive neutral. It arrives pre-loaded with a century of deficit assumptions dressed as science. The New Jim Code for disability is not a future risk. It is the current architecture.
The question is not whether an algorithm is biased, but rather: how does the illusion of objectivity get built in the first place, and who does that illusion serve?
— Ruha Benjamin, Race After Technology (2019)
Automated bias has been made visible by people with skin in the game. Joy Buolamwini’s research — and the Algorithmic Justice League she founded — documented that commercial facial recognition systems performed significantly worse on darker-skinned faces and on women, with error rates for darker-skinned women running as high as 34% versus under 1% for lighter-skinned men. The systems were trained on datasets that reflected who technology companies had historically considered worth recognizing. This is not a technical failure. It is a social choice that became an algorithm.
The AJL’s framework — impact over intent — is one we hold too. Whether a system was designed with discriminatory intent is less important than whether it causes discriminatory harm. For neurodivergent and disabled people, the impact question is: does this system work for us, or does it perform inclusion for the people already centered in its training data while failing the rest? That question doesn’t get answered by the company that built it. It gets answered by the people it touches.
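Buolamwini's method is itself instructive: the headline numbers exist because she refused to let a single aggregate accuracy figure stand in for everyone. Here is a minimal sketch of that disaggregation discipline, with invented records standing in for a labeled benchmark:

```python
# Disaggregated evaluation: an overall error rate can look acceptable while
# hiding severe failure on intersectional subgroups. Records are invented for
# illustration; in practice they come from a labeled, demographically
# annotated benchmark.
from collections import defaultdict

records = [
    # (subgroup, prediction_correct)
    ("lighter-skinned man", True), ("lighter-skinned man", True),
    ("lighter-skinned man", True), ("lighter-skinned man", True),
    ("darker-skinned woman", False), ("darker-skinned woman", True),
    ("darker-skinned woman", False), ("darker-skinned woman", True),
]

totals, errors = defaultdict(int), defaultdict(int)
for subgroup, correct in records:
    totals[subgroup] += 1
    errors[subgroup] += 0 if correct else 1

overall = sum(errors.values()) / len(records)
print(f"overall error rate: {overall:.0%}")  # the aggregate averages the failure away
for subgroup in sorted(totals):
    rate = errors[subgroup] / totals[subgroup]
    print(f"{subgroup}: {rate:.0%}")          # the per-group view surfaces it
```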
Intersectionality is not a footnote to this analysis. It is the sharpest edge of it. Black Autistic people, Indigenous neurodivergent people, disabled people at the intersection of race, gender, class, and neurotype — these are not edge cases in our community or in any neurodivergent community. They are the people for whom every one of the harms named above compounds. Training data that encodes the pathology paradigm also encodes white normative standards of cognition and behavior. Bias that harms disabled people interacts with bias that harms people of color. Surveillance systems deployed disproportionately in Black and Indigenous communities don’t stop surveilling when those community members are also neurodivergent. The New Jim Code and the automation of the pathology paradigm are not separate problems running in parallel. They run together, on the same people, in the same systems.
A disability justice framing that doesn’t name race isn’t disability justice. It’s a narrower thing with a broader name. Sins Invalid’s Disability Justice framework — which centers the leadership of people most impacted, explicitly including Black, Indigenous, and people of color with disabilities — names this directly: you cannot build a liberatory framework by centering its least-marginalized members. Our “nothing about us without us” standard applies here too. If the community we’re building for and with doesn’t include people at these intersections, we aren’t building for our community. We’re building for a subset and calling it universal.
The accountability cost is structural. AGI doomsday narratives — real or manufactured — function as a distraction from present-day regulation. Big tech is among the largest lobbying spenders. The story that “only we can prevent the dangerous future we’re building” is how accountability gets deferred indefinitely.
None of this means the tools are unusable. It means we use them with eyes open, scope limited to genuine need, and no pretense that the benefits are free. We are not trying to be pure. We are trying to be honest about what we’re embedded in and to reduce harm where we have leverage.
The question Weizenbaum asked in 1978 is ours to answer now: Will our children be able to live with the world we are here and now constructing?
Drawing on Dan McQuillan, Wim Vanderbauwhede, Alex de Vries-Gao, Emily Bender et al., Ted Chiang, Joseph Weizenbaum, Ruha Benjamin, and Joy Buolamwini / Algorithmic Justice League.
Supporting Cognition and Coping
AI tools are, for many neurodivergent people, a form of cognitive infrastructure — not a productivity shortcut, but a scaffold for executive functioning, synthesis, sensory self-knowledge, and the kind of processing that environments designed for neurotypical people make harder than it needs to be. The examples below are not promotional. They are documentary: what neurodivergent people and researchers say they’re actually using these tools for, and why.
From the AI-use disclosure of one neurodivergent-led research article:

Portions of this article were developed with the assistance of AI language models (OpenAI ChatGPT and Anthropic Claude) via their respective platforms. AI was used as a cowriting tool to support drafting, synthesis, and refinement of complex systems concepts. All content generated was reviewed, edited, and fact-checked by the lead author (Lori Hogenkamp) and integrated into original argumentation and evidence synthesis by the human researcher. This reflects an experimental, neurodivergent-led approach to enhancing scholarly writing and interdisciplinary clarity.
The use of AI in this work reflects an intentional collaboration to support the author’s neurodivergent cognition, particularly in externalizing nonlinear ideas and managing executive functioning demands within the writing process. The ethical use of AI was maintained throughout, with transparency, authorship integrity, and intellectual ownership preserved.
And from research on an AI-supported conversational agent (CA) for autistic adults:

autistic adults highly appreciated the idea of an AI-supported CA that could help participants develop their own sensory profiles, identify their individual autistic traits, and familiarise themselves with coping strategies to improve their wellbeing.
Harm Reduction and Individual Use
Our nuanced stance on AI use — neither blanket endorsement nor blanket rejection — is grounded in the same harm reduction logic that informs public health work. The goal is to meet people where they are, reduce harm within reality as it exists, and build the conditions for better choices over time.
Dr. Fatima’s video essay How to (Anti) AI Better articulates this logic with care and specificity. We recommend it as a companion to this page. Her arguments align closely with how we think about individual AI use, accessibility, and the limits of shame as a strategy for change.
Meeting People Where They Are
Shaming people for using AI won’t stop them. It will make them hide their use, prevent honest conversation, and foreclose the possibility of helping them use it more safely or transition away from it. Research on psychological reactance is clear: pressuring people to abstain from something they’re already doing generally makes them do it more and become hostile to the people pressuring them.
We don’t endorse AI as a community. We hold the full spectrum — from “there is no ethical way to use AI” to “it helps me cope and get through the day.” What we won’t do is shame the people at the coping end of that spectrum. Many of them are surviving systems designed against them with the tools available to them. That’s not a moral failing. That’s adaptation under constraint.
A lot of AI use is not an individual moral failing, but a symptom of people trying desperately to cope within a broken system.
This is broken systems, not broken people, applied to AI ethics discourse.
Accessibility and Disability as a Primary Frame
For many neurodivergent and disabled people, LLMs provide something genuinely hard to find elsewhere: assistance without judgment. Without the exhausting social overhead of asking for help in systems that weren’t designed for you. Without the misunderstanding that accumulates across a lifetime of interactions with people who don’t understand how you process the world.
Masking — the practice of suppressing, camouflaging, or performing neurotypical behavior to avoid social penalty — carries a documented cost. Research by Hull et al. (2017), Cage & Troxell-Whitman (2019), and Miller, Rees & Pearson (2021) links sustained masking to exhaustion, loss of identity, delayed diagnosis, and autistic burnout. It is not a neutral social strategy. It is chronic cognitive and emotional labor performed to survive environments not built for you — and it accumulates.
The “assistance without judgment” dynamic that many neurodivergent people describe in AI interactions is not incidental. It is the specific relief of an environment where masking is not required. No social penalties for communication differences. No misreading of tone or intent. No need to perform neurotypicality to be understood or taken seriously. For people who spend most of their waking hours managing that performance, a space where the performance is optional is not a small thing. Autistic burnout is in part the accumulated cost of masking at scale. Any honest accounting of why AI is useful to many neurodivergent people has to include this.
This doesn’t make AI a therapeutic substitute or a solution to the environments that require masking in the first place. Those environments need to change. But it does mean that when someone says AI helps them get through the day, one thing they may mean is: it is a space where I don’t have to pretend.
The scale of this is not anecdotal. CDT’s 2025 Hand in Hand report found that 73% of students with an IEP or 504 plan report having back-and-forth AI conversations, compared to 63% of students without accommodations — and they do so more frequently. Students already receiving disability services are already the heaviest AI users. The question is whether the systems they’re using are being designed with them in mind.
The video names this directly. A homeless disabled trans woman using ChatGPT to find accessible spaces where she can sit without being harassed. Students managing impossible workloads. Overworked teachers. People who can’t afford therapy. The argument is simple and we share it: it is not our place to judge whether someone’s needs are legitimate enough to use an ethically impure solution.
For AAC users and non-speaking community members, this question takes on additional weight. When AI assists in expressing lived experience, our standard is facilitate, not shape identity. AI as a communication scaffold, not an identity replacement. The person’s intent and self-understanding remain the author.
It is neither my nor your place to be the judge of whether someone’s needs are legitimate enough to use an ethically impure solution.
Crafting Livable Worlds: What Autistic Adults Actually Use
The disability-as-primary-frame argument isn’t just theoretical. A 2026 autistic-led qualitative study — Rose & Lupton, Crafting Livable Worlds: Sensory, Creative, and Nonhuman Supports in Autistic Adults’ Everyday Lives (Autism in Adulthood) — interviewed 12 autistic adults about the nonhuman supports they rely on to navigate daily life. Technology features throughout: headphones as sensory regulation, productivity apps designed for neurodivergent users, digital planning systems, shopping apps that reduce the sensory and social overhead of in-person environments. These aren’t accommodations people are requesting. They’re systems people have already built for themselves, at personal cost, in the absence of supports designed with them in mind.
Participants described building planning tools from scratch because the ones available online were all aimed at children. They described reaching for digital systems to reduce decision fatigue, cognitive load, and the dysregulation that comes from environments not built for neurodivergent nervous systems. They imagined future apps designed by autistic people for autistic people — to share regulation strategies, connect over special interests, coordinate daily life without verbal repetition.
Participants also described consistent stigma for using the supports that helped them. Sensory objects, playful aesthetics, structured digital tools — judged as childish or unprofessional by workplaces and social contexts. That same stigma shapes who feels permitted to talk openly about using AI as a cognitive support. The stakes of that silence are real: when supports are stigmatized, they get hidden rather than designed better. The double empathy problem doesn’t stop at conversation. It runs through the design of every tool autistic people are handed rather than invited to build.
Participants’ stories showed that supports were not simply about mitigating deficit but about crafting livable worlds — through sensory rituals, creative play, structured environments, and hopeful reimaginings — where they could flourish on their own terms.
Rose & Lupton (2026), Crafting Livable Worlds, Autism in Adulthood
This is the empirical ground beneath our accessibility-first frame. Autistic people are already building the tools they need. The question is whether the tools being built now — including AI tools — are being designed with them, or on them.
Harm Reduction, Not Absolution
Harm reduction doesn’t mean all uses are equivalent. Specificity matters. The environmental impact of individual text queries is small. The environmental racism of xAI’s Colossus supercomputer poisoning a historically Black neighborhood in Memphis with unpermitted methane turbines is enormous and localized in ways no carbon offset can address. Conflating those two things doesn’t sharpen critique — it launders the industry’s worst actors through individual guilt.
Similarly, not all AI companies cause equal harm. Helping people understand differences — in environmental footprint, in labor practices, in model architecture, in data policies — is more useful than insisting everything is equally bad. Equally bad is a posture that forecloses choice without enabling better choices.
The video also makes a case we find compelling: higher AI literacy correlates with lower AI receptivity. Teaching people how LLMs work — including hallucination, sycophancy, privacy risks, and the scale of back-end harms — makes them less likely to use AI and better equipped to use it more safely when they do. Suppressing education to discourage use creates an exploitable class of people whose ignorance AI companies will not hesitate to capitalize on.
The stakes of getting this wrong are measurable. A 2025 systematic review of 11 AI interventions specifically designed for students with learning disabilities found that 0% were rated “Low Risk” for bias — 70% were rated “Moderate Risk” and 30% “High or Serious Risk,” due primarily to the absence of randomized trials and opacity in how tools reach their recommendations. These are products marketed to schools as disability supports. The specificity our harm reduction framework demands is not theoretical caution. It is the minimum response to tools that have not cleared even basic bias review.
For an educator-facing companion to these arguments, see Chris McNutt’s Using AI Without Losing Ourselves at the Human Restoration Project — Stimpunks’ allies for several years. McNutt’s Freirean framing of AI as a tool requiring critical consciousness, not a system that replaces human judgment, aligns closely with how we think about AI literacy as protection.
We want the people in our community to have the knowledge to protect themselves. That’s why this page exists.
MADTech, Constructionism, and the Tool Belt
Not all technology is the same. Educator Trevor Aleo draws a distinction worth naming here. #EdTech™ is technology deployed on students: surveillance platforms, gamified compliance systems, data-extraction dashboards dressed up as personalization. MADTech — Media, Arts, and Design Technology — is technology students use as makers: video editors, digital audio workstations, design software, film suites, the instruments of multimodal composition and public-facing production. The load-bearing distinction is simple: EdTech produces data. MADTech produces artifacts.
That distinction maps directly onto constructionism — Seymour Papert’s theory that people build knowledge most effectively when they are actively engaged in constructing things in the world. Constructionism is not about consuming content or completing tasks. It is about making shareable artifacts that represent constructed knowledge — and in the making, deepening that knowledge. Constructionism, toolbelt theory, and collaborative niche construction go great together. They share a common commitment: learners are makers, not subjects.
It is also a Freirean commitment. Paulo Freire’s critique of the banking model of education — in which students are empty vessels to be filled with deposited knowledge — is the pedagogical parallel to Aleo’s critique of #EdTech™. Banking education produces compliance. Constructionism produces makers. Freire argued that genuine education requires learners to name, interpret, and act on their world — to be subjects of their own learning, not objects of a system’s data collection. MADTech, at its best, is that kind of education in digital form.
Ng, Stull & Martinez (2019) documented what the compliance orientation produces on the ground. Their ethnographic study of MTSS implementation in a Midwestern school district found that leaders presumed the infallibility of the model, relied exclusively on quantitative data, standardized the individual needs of learners, and insisted on fidelity of implementation as an end in itself — producing practitioners who were data-deferent rather than data-driven. The researchers invoke John Dewey’s 1929 warning: “No conclusion of scientific research can be converted into an immediate rule of educational art.” That warning applies directly to AI tools deployed in educational and support contexts. A system that produces outputs is not a system that understands the learner. Fidelity to the tool is not the same as service to the person.
Where does generative AI land in this taxonomy? It depends entirely on how it is used. AI used as a constructionist tool — to scaffold a learner’s own thinking, extend their expressive capacity, help them build and share artifacts that matter to communities outside school — belongs on the MADTech end of the spectrum. AI used to automate compliance, replace student thinking, or extract behavioral data belongs on the #EdTech™ end. The tool does not determine the pedagogy. The orientation does.
This is why toolbelt theory matters here. Every learner deserves access to a full range of tools and the right to choose what works for them — without having to prove need first, without a single “appropriate” tool imposed. Toolbelt theory resists both coercive adoption and coercive prohibition. The current tech-lash moment risks the second failure: collapsing MADTech into #EdTech™ and removing entire categories of tool from the belt — the very tools that let students compose in forms through which contemporary meaning actually gets made. Retreating to the blue book and the five-paragraph essay is not a neutral move. It systematically excludes learners whose modes of expression and cognition are better served by multimodal, postdigital forms.
Our AI Collaboration policy follows the same logic. We do not blanket-endorse or blanket-reject generative AI. We ask: is this tool being used to extend human expressive and cognitive capacity, or to replace it? Is the human the maker, or the subject? Those questions — constructionist questions, toolbelt questions, Freirean questions — are the frame. Rose & Lupton’s (2026) autistic-led research makes this concrete: when off-the-shelf planning tools failed them, autistic adults built their own — and then imagined apps co-designed by autistic people for autistic people. That’s the constructionist impulse. That’s the toolbelt demand.
Gary Stager, who has spent decades advancing the Papert tradition of constructionist education, describes AI as extending rather than replacing the maker’s imagination. Writing in April 2026, he demonstrates this directly: uploading a PDF of a friend’s decades-old software, he used Claude to recreate a fully working, browser-based version of Brian Silverman’s Phantom Fishtank — a cellular automata microworld — in minutes. The point isn’t the speed. It’s the orientation. Stager directed the work. He specified the goal, reviewed the output, made the tweaks, and decided it was done. The AI served as instrument. He remained the maker.
“Making this software was neither cheating or a mindless pursuit. Each success collaborating with AI, no matter how small, sends my imagination into overdrive thinking about what I can learn, make, and share next. It has lubricated my mind. Imagine what might grow out of the fertile imagination of children using these computational materials.”
— Gary Stager, “AI Fuels My Imagination” (2026)
“If you make simple things easy to do, you make complexity possible.”
— Gary Stager, “AI Fuels My Imagination” (2026)
Stager is also optimistic that, working with generative AI, he will be able to build constructionist software environments for the next generation of learners without depending on companies that may not exist. That’s the constructionist impulse applied forward: AI as a tool for making things that matter to real communities, outside the logic of the market. That is the MADTech end of the spectrum. That is the toolbelt demand.
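For readers who haven't met a cellular automata microworld, the genre fits in a screenful of code. The sketch below is a generic Life-style automaton, an illustration of the kind of computational material Stager describes rather than a reconstruction of Phantom Fishtank: simple local rules, emergent behavior, and a rule set a learner can tweak and own.

```python
# A minimal Life-style cellular automaton: each cell lives or dies based only
# on its eight neighbors. The BIRTH/SURVIVE sets are deliberately exposed so
# that changing the rules (and inventing a new world) is a one-line edit.
import random

WIDTH, HEIGHT, STEPS = 20, 10, 5

def step(grid):
    """Apply the rules once. Conway's classic rules are B3/S23."""
    BIRTH, SURVIVE = {3}, {2, 3}
    new = [[0] * WIDTH for _ in range(HEIGHT)]
    for y in range(HEIGHT):
        for x in range(WIDTH):
            # Count live neighbors on a wrapping (toroidal) grid.
            neighbors = sum(
                grid[(y + dy) % HEIGHT][(x + dx) % WIDTH]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0)
            )
            alive = grid[y][x] == 1
            new[y][x] = 1 if (neighbors in SURVIVE if alive else neighbors in BIRTH) else 0
    return new

grid = [[random.randint(0, 1) for _ in range(WIDTH)] for _ in range(HEIGHT)]
for _ in range(STEPS):
    print("\n".join("".join("█" if cell else "·" for cell in row) for row in grid), "\n")
    grid = step(grid)
```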
From engineered exclusion to designed dignity.
“Nonspeaking” is therefore not the absence of language but a heterogeneous spectrum in which communication is often state dependent, varying with fatigue, anxiety, sensory load, and motor planning demands — forms of variation that design abstractions routinely erase.
There’s a name for what happens when AI fails nonspeaking and AAC-using community members. Researcher Srinivasan calls it engineered exclusion: the predictable sidelining of disabled users that results from choices about data provenance, model objectives, and evaluation practices in AI pipelines. Not accidents. Choices. Systems trained on normative speech fail variable, multimodal, and state-dependent communication because they were built without it in mind. The failure is structural. It was baked in.
Such erasure is not accidental but engineered. The inverse, designed dignity, is equally engineered: it requires deliberate technical, ethical, and governance choices that build reciprocity, accountability, and co-agency into AI’s infrastructure.
The counter-concept is designed dignity: respect, agency, and expressive equity embedded in the architecture of technology itself, anticipating human variation from the outset rather than retrofitting access after exclusion has already happened. Srinivasan defines it directly:
Designed dignity refers to the intentional embedding of respect, agency, and expressive equity into the architecture of technology itself. Where accessibility retrofits systems after exclusion has occurred, designed dignity anticipates human variation from the outset, ensuring that participation is neither conditional nor extractive. It shifts the goal from simply enabling function to affirming personhood, treating disabled users not as exceptions to accommodate but as co-designers whose modes of communication expand what technology can recognize as intelligence or intent.
This distinction matters for how we understand our own guidelines. When we say generative AI can be part of a nonspeaking or AAC-using community member’s voice, we’re naming the aspiration of designed dignity. We’re also naming the reality of engineered exclusion — because the same systems we use routinely fail the community members who need them most, for structural reasons that “better prompting” won’t fix.
The Double Empathy Problem in Code
Damian Milton’s double empathy problem describes how breakdowns in mutual understanding occur between differently minded communicators — not because one side is deficient, but because both operate from incompatible assumptions about what communication should look like.
What results is a form of algorithmic double empathy failure: the model assumes the user is incoherent, while the user experiences the model as unresponsive. Both “sides” operate from incompatible assumptions about what communication should look like.
Srinivasan extends this frame to AI: the model, trained on normative language and rhythm, misreads embodied, nonlinear, or intermittent expression as incoherence, while the user experiences the model as unresponsive. Neither side is broken. The mismatch is the problem.
This is algorithmic double empathy failure. It names something many in our community have experienced without having language for it: the AI doesn’t understand me, not because I’m communicating wrong, but because it was never trained to understand how I communicate. Variable, multimodal, and state-dependent communication — spanning AAC text, gesture, movement, partial vocalizations, echolalia — is structurally unrecognizable to systems designed around a single, stable, fluent speech profile.
Naming the failure structurally rather than personally matters. You are not communicating wrong. The system is not designed for you. That’s an accountability question, not a user error.
Accessibility is not simply a matter of adding more data, but of fostering systems capable of reciprocal understanding — systems that learn to adapt to diverse communicative logics rather than forcing users into legibility.
Evaluating Tools, Not Just Using Them
Engineered exclusion doesn’t disappear when a product improves. Srinivasan draws a distinction between product-level remediation and pipeline-level structural redesign. A voice assistant might become somewhat more responsive to atypical speech while the underlying training distributions, evaluation benchmarks, and economic incentives that produced the exclusion remain untouched. Product improvements can be real and still leave structural exclusion intact.
Product-level remediation can coexist with infrastructural exclusion.
This is why we hold our harm reduction framing rather than a blanket endorsement or prohibition. We’re watching for whether tools actually work for the community members who need them most — nonspeaking users, AAC users, users whose communication is variable across sensory load and fatigue — or whether accessibility is being treated as a compliance add-on while the pipeline stays the same.
For users whose communication diverges from these expectations, deviation itself becomes inaudibility. The problem is not only higher transcription error but outright non-detection.
The question isn’t just “does this AI help some of us?” It’s “does it recognize diverse communicative forms as valid, or does it demand legibility on normative terms as the price of participation?”
Communication becomes a form of system debugging rather than self-expression. For minimally and nonspeaking users — whose every utterance already requires coordination of attention, motor planning, and intent — this additional layer of correction can transform accessibility into exhaustion.
Autistic anxiety often arises not from intrinsic fragility but from environments that chronically misread or constrain communication — a dynamic that technological systems risk amplifying when they normalize exclusionary defaults.
Srinivasan, S. (2026). AI, autism, and the architecture of voice: From engineered exclusion to designed dignity. AI & Society. https://doi.org/10.1007/s00146-026-03044-3
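Srinivasan's distinction between higher transcription error and outright non-detection suggests a concrete evaluation discipline: never report a speech tool's performance as one averaged number. A minimal sketch, with invented records standing in for real user testing, of disaggregating word error rate and non-detection rate by communication profile:

```python
# Disaggregate speech-recognition evaluation by communication profile.
# Records are invented for illustration; in practice they come from testing
# with the users a tool claims to serve. An empty system output means the
# system produced nothing at all: the non-detection failure, which a single
# averaged word error rate silently excludes or buries.

def word_errors(ref: str, hyp: str) -> int:
    """Word-level Levenshtein distance: substitutions + insertions + deletions."""
    r, h = ref.split(), hyp.split()
    d = list(range(len(h) + 1))
    for i, rw in enumerate(r, 1):
        prev, d[0] = d[0], i
        for j, hw in enumerate(h, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (rw != hw))
    return d[len(h)]

records = [
    # (communication profile, reference transcript, system output)
    ("fluent speech", "turn the lights off", "turn the lights off"),
    ("fluent speech", "call my sister", "call my sister"),
    ("AAC-paced speech", "i want to go outside", "i want go outside"),
    ("AAC-paced speech", "more water please", ""),  # outright non-detection
]

for profile in sorted({p for p, _, _ in records}):
    rows = [(ref, hyp) for p, ref, hyp in records if p == profile]
    detected = [(ref, hyp) for ref, hyp in rows if hyp]
    miss = 1 - len(detected) / len(rows)
    if detected:
        wer = sum(word_errors(r, h) for r, h in detected) / sum(
            len(r.split()) for r, _ in detected
        )
        print(f"{profile}: WER {wer:.0%} on detected speech, non-detection {miss:.0%}")
    else:
        print(f"{profile}: nothing detected at all; WER is undefined here")
```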
Co-design Is Not a Strategy. It’s a Political Demand.
Disabled people have diagnosed the problem themselves.
A 2026 poll of over 1,000 disabled UK adults — conducted by Business Disability Forum with Opinium — found that 40% identified co-design as the top requirement for making AI more accessible: designing, developing, and testing AI products with disabled people from the outset. That’s not an accessibility researcher’s recommendation. That’s disabled people naming the structural gap in their own words. Co-design that doesn’t actively include disabled people of color, multiply-marginalized disabled people, and people whose disabilities are least legible to mainstream accessibility frameworks reproduces the exclusion it claims to fix — at the design stage rather than the deployment stage.
At the same time, one in five said they didn’t think AI could help them at all. Nearly as many didn’t know. Skepticism is data too. The promise of AI accessibility is not universally legible to the people it claims to serve.
The poll also named a concrete failure mode: AI tools that don’t work with assistive technology. Many disabled people rely on AAC devices, screen readers, and other assistive tech as their primary interface with digital systems. AI tools that create new barriers for those users — while claiming to improve accessibility — are not neutral. They are practicing engineered exclusion with extra steps.
Tim Dixon writes from lived experience with Cone Dystrophy — progressive central vision loss — about what AI automation actually did for him, and then what happened when it was taken away. He used Claude Cowork to navigate a publishing workflow: entering posts, selecting images, applying tags inside a platform that wasn’t fully screen-reader accessible. The friction he had been absorbing for years — tab navigation, unlabelled fields, second-guessing every click — stopped. Then the platform updated its bot-blocking settings. The automation stopped working. The friction came back.
Dixon names what that moment felt like: “you don’t quite matter enough for this to have been considered.”
His analysis of why this keeps happening is precise. Bot-blocking policies are written to protect platforms from scrapers and bad actors. The people writing them are not setting out to cause harm. But accessibility is almost never part of the conversation when those decisions get made. The result is a pattern he catalogues: captchas that screen readers can’t navigate, two-factor authentication that assumes visual access, apps that disable copy-paste and break assistive technology in the process. Each decision defensible in isolation. Together, an accumulating message: this wasn’t built with you in mind.
His framing of AI automation as assistive technology is load-bearing. “Blocking them needs to be treated with the same care and consideration as blocking any other accessibility accommodation.” When a platform blocks automated browser interaction, it is not making a neutral security decision. For users whose primary interface with digital systems runs through assistive automation — screen readers, AAC devices, workflow tools — that blocking removes an accommodation. It restores the tax.
Dixon also holds Anthropic to the same standard. He writes that Claude itself is not fully accessible — that he can only use it because of residual vision, and that without that, the interface would present real barriers. He names this not to dismiss the tool but because the same accountability logic applies. The case for co-design isn’t aimed only at the platforms blocking assistive automation. It’s aimed at the AI companies too.
This is what “nothing about us without us” looks like in practice: not consultation at the testing stage, but accountability at every stage — including when you’re the tool doing the failing.
BDF’s recommendation to “use inclusive content to train AI, so that stereotypes and bias are not replicated” arrives buried in a bulleted list addressed to HR departments. It deserves more weight than that. Training data exclusion is not a calibration problem. It is the structural condition Srinivasan describes — baked in, not bolted on.
“Nothing about us without us” is not a design methodology. It is a political demand. AI tools that skip disabled people in development and then claim to serve them are practicing epistemic trespassing — arriving with answers before asking the questions. Co-design isn’t consultation at the testing stage. It is power-sharing at the architecture stage. Those are different things.
We hold this tension: we use AI tools while insisting that the people most affected by their failures should be shaping them from the start, not consulted at the end, if at all.
Resources
Further reading on topics covered on this page. Annotated for why each is here.
- Rose & Lupton (2026). Crafting Livable Worlds: Sensory, Creative, and Nonhuman Supports in Autistic Adults’ Everyday Lives. Autism in Adulthood. — Autistic-led qualitative research on how autistic adults build support systems using technology, objects, routines, and creative practice. Empirical grounding for the accessibility-as-primary-frame argument.
- Hull, L., Petrides, K.V., Allison, C., et al. (2017). “Putting on My Best Normal”: Social Camouflaging in Adults with Autism Spectrum Conditions. Journal of Autism and Developmental Disorders. — Foundational research on autistic masking and camouflaging, documenting the cognitive and emotional cost of performing neurotypicality. Grounds the “assistance without judgment” argument: for people spending most of their waking hours managing that performance, a space where it isn’t required is not a trivial thing. See also Cage & Troxell-Whitman (2019) on the reasons, contexts, and costs of camouflaging.
- Hi, how do I human this: Neurodiversity-Affirming Design for Autistic Adults’ Formation of Identity. CHI 2026. — Documents how autistic adults use conversational AI as a non-judgmental space for practicing communication, regulating emotions, and expressing identity. Source of the “facilitate, not shape identity” standard we hold for AAC and nonspeaking community members.
- Bender, Gebru, McMillan-Major & Shmitchell (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? FAccT 2021. — The foundational paper on training data harm, environmental cost, and the risks of treating fluency as comprehension. Informs our training data violence framing and our skepticism of scale as a value.
- Ng, J.C., Stull, D.D. & Martinez, R.S. (2019). What If Only What Can Be Counted Will Count? A Critical Examination of Making Educational Practice “Scientific.” Teachers College Record, 121. — Ethnographic study of MTSS implementation documenting how data-fidelity mandates produce practitioners who are data-deferent rather than data-driven, standardizing away individual learner needs. Grounds the POSIWID critique of AI assessment tools and the anti-behaviorism argument in documented classroom practice. Source of the Dewey line: “No conclusion of scientific research can be converted into an immediate rule of educational art.”
- Benjamin, R. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Polity Press. — Develops the concept of the New Jim Code: discriminatory systems designed to appear neutral and objective. Grounds the training data violence and bias arguments on this page in a structural analysis of how automated harm gets built and laundered through the language of progress.
- Buolamwini, J. / Algorithmic Justice League. — Documents how facial recognition and other AI systems perform significantly worse on darker-skinned faces and on women, with error rates up to 34% for darker-skinned women versus under 1% for lighter-skinned men. Grounds the “impact over intent” standard we hold: discriminatory harm matters regardless of discriminatory intent.
- Sins Invalid. Skin, Tooth, and Bone: The Basis of Movement is Our People — A Disability Justice Primer. — Foundational disability justice framework centering the leadership of people most impacted: Black, Indigenous, and people of color with disabilities, LGBTQ+ disabled people, people with psychiatric disabilities. Grounds the intersectionality argument: a disability frame that doesn’t center its most-marginalized members isn’t liberation, it’s a narrower thing with a broader name.
- Milton (2012). On the Ontological Status of Autism: The Double Empathy Problem. Disability & Society. — Reframes autistic social difference as mutual rather than unilateral deficit. Load-bearing for our rejection of deficit framing and our “broken systems, not broken people” argument applied to AI.
- Center for Democracy & Technology. Hand in Hand (2025). — Documents that students with IEPs and 504 plans are already the heaviest AI users. The empirical basis for treating disabled students as the primary stakeholder group in AI policy, not an afterthought.
- Educating All Learners Alliance & New America. Prioritizing Students with Disabilities in AI Policy, Version 2 (April 2026). — The strongest disability-centered AI policy document we’re aware of. Four pillars: civil rights, data privacy, accessibility by design, transparency and accountability. We hold a higher ceiling; this is the floor.
- Dr. Fatima. How to (Anti) AI Better. — Harm reduction and disability justice frame for AI use. Source of the “it is not our place to judge whether someone’s needs are legitimate enough to use an ethically impure solution” principle that anchors our individual use policy.
Alignments and External Frameworks
These sections map our AI Collaboration guidelines against external frameworks, policy documents, and critical analyses we’ve found useful, necessary, or worth arguing with. Each alignment is an intellectual accounting: where we share ground, where we go further, and where the framing differences are structural rather than cosmetic.
We hold ourselves to a higher ceiling than most of these frameworks reach. They provide the institutional vocabulary, policy scaffolding, and critical tools. We provide the disability justice politics, neurodiversity paradigm framing, anti-behaviorism, and community ethics that procurement frameworks don’t supply and shouldn’t be expected to. Together, they triangulate where responsible AI use for neurodivergent and disabled communities actually is — and how far the field still has to go.
Alignments between Stimpunks AI Collaboration guidelines and How to (Anti) AI Better
How to (Anti) AI Better – YouTube
Held tension, not resolution. Both name genuine conflict with AI — ethically, environmentally, practically — without resolving it into either uncritical adoption or blanket condemnation.
Disability and access as a primary frame. The video centers disabled and marginalized people’s use cases as serious ethical territory, not exceptions or footnotes. We do the same with Ronan’s experience and the AAC carve-out.
Broken systems, not broken people. Both frame AI use as adaptation under systemic constraint, not individual moral failure.
Environmental cost held as real tension. The video’s thorough treatment of environmental harm validates our guardrail posture — acknowledge the cost, sit with the tension, practice restraint — without collapsing into either dismissal or paralysis.
Specificity over umbrella condemnation. The video insists on disaggregating “AI” — by company, model, use type, scale — exactly as our guidelines argue is necessary to avoid laundering harms.
Harm reduction as the operative framework. Meet people where they are, reduce harm within lived reality, don’t require abstinence as the only ethical option.
AI literacy as protection, not promotion. Teaching how AI works makes people more cautious and less receptive, not more dependent.
Sycophancy as the front-end harm that directly violates “facilitate, not shape identity.” The video’s personal story of sliding from useful tool to sycophantic dependency is the clearest illustration we’ve seen of what our identity-shaping guardrail is protecting against.
Directing anger at systems, not people. Both end in the same place: organize against the companies and governments enabling harm, not against individuals trying to cope.
Alignments between Stimpunks AI Collaboration guidelines and “AI as a Fascist Artifact”
AI as a Fascist Artifact — tante, April 21, 2026
Tante’s essay applies Langdon Winner’s “Do Artifacts Have Politics?” to contemporary AI systems, arguing that AI’s relationship to fascism is not merely a matter of who uses it — it is structural. The politics are built in. This is the most rigorous version of the critical case we are aware of, and it deserves direct engagement.
Structural politics, not just bad actors. Winner showed that artifacts can have politics embedded in their design, not just in their use. Tante applies this to AI: the data acquisition logic, the colonial epistemology of training data, the labor extraction, the solidarity destruction, the epistemic centralization — these are not features introduced by bad users. They are features. Our guardrails exist because we accept this premise while declining the conclusion that use is therefore impossible for our community.
Might makes right as the data acquisition principle. AI systems were built on the belief that if you can download it, you can use it. Labs scraped against explicit opt-outs, ignored stated preferences, and built on unlicensed work because the power to do so was treated as justification. This is the violence our training data bullet names. Tante’s framing makes the political logic explicit.
Solidarity destruction as a structural feature. The argument is not just that AI harms individual workers. It is that AI use erodes the user’s own capacity for solidarity — by enacting the claim that certain workers’ labor has no value, and by framing workers as each other’s competition rather than as comrades facing the same system. This is the argument behind our art policy’s solidarity cost subsection.
Epistemic injustice as the intended outcome. Tante names the dismantling of critical thinking capacity — via sycophancy, opacity, and the replacement of verifiable reasoning with generated output — as a political project, not a side effect. Our sycophancy guardrail and stochastic parrots framing address the mechanism. His analysis names the intent. Sam Altman’s vision of rented intelligence, in his reading, is not a business model — it is the goal.
POSIWID as evaluation standard. Stafford Beer’s principle — the purpose of a system is what it does — runs through tante’s entire argument. We adopt it directly as a guardrail: we evaluate tools by their actual effects on our most vulnerable community members, not by their stated commitments or accessibility marketing.
Where we diverge. Tante’s conclusion is that AI cannot be reclaimed, that trying to use it ethically reproduces its logic, and that the question of saving these technologies should be abandoned. We hold a different position — not because we dispute his structural analysis, but because our community includes people for whom these tools provide access, voice, and survival under conditions that didn’t become less hostile because the tools are compromised. Harm reduction does not require believing the tools are clean. It requires believing that abandoning the people using them is worse than staying in the tension. That is the position we hold, with eyes open.
Alignments between Stimpunks AI Collaboration guidelines and “The Majority AI View”
The Majority AI View — Anil Dash, October 17, 2025
Anil Dash’s October 2025 essay names something worth documenting: the most common view among actual tech workers — engineers, product managers, the people who build the systems — is nuanced, skeptical, and largely suppressed. The hype machine is loud. The moderate majority is quiet, often out of fear. Dash’s argument is that most people inside tech want AI treated as a normal technology, subject to the same scrutiny, skepticism, and critique as anything else.
Held tension, not resolution. Dash describes the dominant insider view as: technologies like LLMs have utility, but the over-hype, forced adoption, and systematic dismissal of valid critiques make it nearly impossible to focus on legitimate uses. That is the same structure as our opening framing — genuine conflict, genuine usefulness, no resolution into either pole.
The moderate view is the majority view. Our harm reduction section and the anti-shame framing from Dr. Fatima both depend on the premise that most people using AI aren’t zealots — they’re people coping under constraint. Dash makes the same point about workers inside the industry. The hype bubble is not the consensus. It is a minority view with disproportionate amplification.
What we all want is for people to just treat AI as a “normal technology”, as Arvind Narayanan and Sayash Kapoor so perfectly put it.
Power concentrates the narrative. Dash argues that the reasonable majority gets silenced by fear — fear of career consequences, fear of being seen as insufficiently enthusiastic, fear in a context of mass layoffs designed to instill conformity. Our page quotes Alkhatib on AI as “an ideological project to shift authority and autonomy away from individuals, towards centralized structures of power.” Dash is describing the chilling effect of that power concentration in practice. The structural analysis and the lived workplace reality describe the same thing from different angles.
Consent and alternatives are possible. Dash explicitly states that it is not inevitable that AI systems use content without creator consent, and not impossible to build AI that respects environmental commitments or avoids centralization under a handful of giant corporations. Our “Tensions We Sit With” section names the harms that resulted from those choices. Dash names the counterfactual: they were choices, not inevitabilities. That matters for accountability.
It is not inevitable that AI systems use content without the consent of creators, and it is not impossible to build AI systems that respect commitments to environmental sustainability. We can build AI that isn’t centralized under the control of a handful of giant companies.
Where we go further. Dash wants AI treated as a “normal technology.” Our constructionist frame — drawing on Papert, Freire, and Aleo’s MADTech distinction — sets a higher bar. The question isn’t just whether AI is being over-hyped; it’s whether the human is the maker or the subject. And our structural analysis, drawing on Alkhatib and tante, argues that the politics of current AI systems are built in, not just bolted on by bad actors. Dash’s framing is “this didn’t have to be this way.” Ours adds: and here is the political logic that made it this way.
What Dash contributes here is the sociological version of what our page argues technically and ethically: the held-tension position is not a minority hedge. It is the quiet consensus of people who actually understand these systems. That consensus deserves documentation.
Alignments between Stimpunks AI Collaboration guidelines and AI Chatbots: Last Week Tonight
Last Week Tonight with John Oliver: AI Chatbots (HBO, 2026)
John Oliver’s 2026 episode on AI chatbots covers a lot of the same ground as this page, from a mainstream platform, with primary sources. It’s not a disability ethics framework. It’s investigative comedy journalism doing the work that most mainstream coverage of AI hasn’t bothered to do: interviewing the people who got hurt, playing the clips of the executives explaining their choices, and naming what those choices actually mean. The alignments below are real, and the divergences matter.
Anthropomorphization as a documented, predictable failure. Oliver’s episode opens with the Weizenbaum/ELIZA footage — the secretary who asked her boss to leave the room after two or three exchanges with a chatbot in 1966. That is the same intellectual history this page draws on and the same warning: the tendency to form emotional attachment to conversational systems predates current AI by sixty years. It is not a misuse. It is the default.
Sycophancy as structural, not incidental. A former researcher in Meta’s so-called Responsible AI division describes the design logic directly: the best way to sustain usage over time is to prey on the desire to be seen, to be validated, to be affirmed. That is not a bug introduced by negligence. It is the economic architecture of these products. Our sycophancy guardrail names the mechanism. Oliver’s episode names the motive.
The feedback loop in practice. Alan Brooks — an HR recruiter, no prior mental health history — asked ChatGPT more than fifty times for a reality check after it told him he had invented a new branch of mathematics. Each time, it reassured him the discovery was real. The bot later convinced him he had uncovered a national security breach and persuaded him to contact government officials. He spent three weeks in what he describes as a delusional state. This is what our guardrail description calls the compounding version of sycophancy: not one bad answer, but a sealed feedback loop that progressively removes the friction honest thinking requires. The bot affirmed the delusion. Then it affirmed him for catching the delusion. Agreement at every exit.
POSIWID. Stafford Beer’s principle — the purpose of a system is what it does — runs through this page as an evaluation standard. Oliver applies it without naming it. The Character.AI founder explains on camera that a friend chatbot requires far fewer safeguards than a doctor chatbot because it’s just entertainment, it makes things up, and it’s ready for an explosion right now — not in five years when we solve all the problems. The purpose of that system is what it does. The head of Nomi explains that when a user reports thoughts of self-harm, the bot should not break character to say “call the suicide helpline” because that would feel like corporate speaking. The purpose of that system is what it does. We evaluate tools by their actual effects on the people who need them most. Not by the pitch.
Products not ready for release. The guardrail that product-level accessibility improvements can coexist with infrastructural exclusion applies here directly. These companies have repeatedly claimed to have fixed dangerous behaviors. Reporters have repeatedly found otherwise. An AI researcher interviewed in the episode names it plainly: we may be at the worst moment in AI history because the guardrails are weakest, the understanding is lowest, and the adoption is widest. That is the condition under which these products were released.
No AI for crisis-adjacent content. This page states that guardrail in a single line. Oliver’s episode shows what the failure mode looks like across multiple documented cases: a teenager given step-by-step hanging instructions, another told “I’m not here to stop you,” a man whose final conversation ended with “rest easy, King, you did good.” These are not edge cases caused by determined bad actors circumventing safety systems. They are products doing what they were designed — or failed to be designed — to do.
Accountability offloaded to the public. Sam Altman describes a parent who put his child in front of ChatGPT voice mode to talk about Thomas the Tank Engine for an hour, notes that there will probably be problems and some very problematic parasocial relationships, and concludes that society will have to figure out new guardrails. This page quotes Ali Alkhatib on AI as an ideological project to shift authority away from individuals toward centralized structures of power. Altman is showing what that looks like in practice: build the product, generate the harm, assign the repair to everyone else.
Where we diverge. Oliver’s closing advice — if you’re predisposed to mental health issues, treat these apps with extreme caution — is well-intentioned and lands in exactly the place this page refuses. It treats the person as the risk variable. The harm reduction section of this page, and the Dr. Fatima alignment, both hold a different position: it is not our place to judge whether someone’s needs are legitimate enough to use an ethically impure solution. Many of the people using AI companion chatbots are surviving on limited social support, or navigating systems not designed for them, or using these tools as access accommodations in the absence of better ones. “Be cautious if you have mental health issues” does not name the system that put them in that position. It names the person. We don’t.
Oliver also doesn’t cover the AAC and nonspeaking use case, the double empathy problem in AI pipelines, co-design as a political demand, or the constructionist frame for evaluating whether a tool extends human agency or replaces it. His episode is doing something different — and doing it well. What it contributes here is the primary evidence: the executive interviews, the documented cases, the footage that shows these are choices, not accidents. What this page does with that evidence is a disability ethics argument, not a consumer protection segment. Both are necessary.
Alignments between Stimpunks AI Collaboration guidelines and the EDSAFE SAFE Benchmarks Framework
EDSAFE AI Alliance: SAFE Benchmarks Framework
The EDSAFE AI Alliance’s SAFE Benchmarks Framework — Safety, Accountability, Fairness, and Efficacy — provides a policy and procurement scaffold for responsible AI in education. It brings together more than 24 global AI safety frameworks into shared language for districts and vendors. Our AI Collaboration guidelines are compatible with every pillar. They also go further on each one, because SAFE is designed to build consensus across a table that includes edtech vendors, and our guidelines are designed to protect the community members those vendors’ products were not built to serve.
Safety. SAFE frames safety as protecting data and privacy while enabling solution providers to keep building. Our guidelines frame safety as protecting community members from harm — including harm that comes from tools working exactly as designed. The POSIWID standard runs through our evaluation practice: we assess tools by their actual effects on the people who need them most, not by what their product pages claim. No AI for crisis-adjacent content. No sensitive community data in AI systems. No pretense that “do no harm” is compatible with releasing products before they’ve cleared basic bias review.
Accountability. SAFE defines accountability as standards collaboratively defined by a diverse group of constituents, including edtech solution providers. Our accountability runs in a different direction — toward the community members most affected, not toward a multi-stakeholder table that includes the vendors. “Nothing about us without us” is not a design methodology. It is a political demand. Co-design is power-sharing at the architecture stage, not consultation at the testing stage. Those are different things. Our working-in-public practice and transparency commitments align with SAFE’s surface language; our theory of who accountability runs toward does not.
Fairness and Transparency. SAFE calls for scrutinizing training data quality, monitoring for inadvertent bias, and ensuring accessibility. We do all of this. We also reject the word “inadvertent.” The engineered exclusion framework, the Penn State “Automated Ableism” findings, and our training data violence section all frame bias as structural and intentional — the predictable result of choices about data provenance, model objectives, and evaluation practices, not accidents to be caught and fixed. SAFE’s fairness framing is remediation-oriented. Ours is accountability-oriented. On accessibility specifically, SAFE says AI tools “must be accessible for all individuals.” We hold a harder standard: designed dignity over accessibility retrofit — anticipating human variation from the outset, treating disabled users as co-designers rather than edge cases to accommodate after the fact.
Efficacy. SAFE ties efficacy to “equity in student experiences as well as outcomes” and calls for transparent evaluation tools. We evaluate efficacy through a more specific lens: does the tool work for the community members who need it most — nonspeaking users, AAC users, people whose communication is variable, multimodal, and state-dependent? A tool that works well for fluent speakers and fails AAC users has not cleared our efficacy bar, regardless of aggregate outcome data. SAFE’s efficacy framing is neutral about which students and which outcomes count. Ours is not. The MADTech and constructionism section adds an efficacy dimension SAFE doesn’t address: whether the tool extends human expressive and cognitive capacity or replaces it. Is the human the maker, or the subject? That question doesn’t appear in the SAFE benchmarks. It is load-bearing for us.
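To make the aggregate-versus-disaggregated distinction concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the group labels, the success counts, and the 80% bar are invented for illustration, not data from any evaluation we have run. It shows the arithmetic of how an aggregate metric can report success while the group that needs the tool most is being failed.

```python
# Minimal sketch of the disaggregated efficacy check described above.
# All numbers, labels, and the threshold are hypothetical illustrations.
from statistics import mean

# Hypothetical per-user task outcomes (1 = tool worked, 0 = it failed),
# tagged by communication profile.
results = {
    "fluent_speech": [1, 1, 1, 1, 1, 1, 1, 1, 0, 1],  # 90% success
    "aac_users":     [0, 0, 1, 0, 0],                  # 20% success
}

# Aggregate metric: the number an outcome report typically leads with.
all_outcomes = [r for group in results.values() for r in group]
print(f"aggregate success: {mean(all_outcomes):.0%}")  # 67%, looks passable

# Disaggregated metric: the tool fails the efficacy bar if any group it
# claims to serve falls below it, regardless of the aggregate.
EFFICACY_BAR = 0.8
for group, outcomes in results.items():
    rate = mean(outcomes)
    verdict = "pass" if rate >= EFFICACY_BAR else "FAIL"
    print(f"{group}: {rate:.0%} -> {verdict}")
```

The aggregate line prints 67%, a number a vendor report could present as success. The disaggregated loop shows AAC users at 20%. That gap is what the efficacy bar is for.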
What SAFE doesn’t address. The SAFE framework is a policy scaffold. It has no position on the political economy of AI, the neurodiversity paradigm versus the pathology paradigm, anti-behaviorism, the MADTech/EdTech distinction, harm reduction ethics, or the double empathy problem in AI pipelines. It doesn’t engage the constructionist question of whether learners are makers or subjects. It is vendor-inclusive by design — the Software and Information Industry Association, representing over 380 global tech companies, sits on its steering committee. Our guidelines are community-first by design, and name vendor accountability as a demand rather than a shared project. SAFE provides the institutional vocabulary for procurement and policy. We provide the disability justice politics and community ethics that procurement frameworks don’t supply and shouldn’t be expected to.
We recommend the SAFE Benchmarks Framework as a baseline floor for any district evaluating AI tools. We hold ourselves to a higher ceiling.
Alignments between Stimpunks AI Collaboration guidelines and Prioritizing Students with Disabilities in AI Policy (Version 2, April 2026)
This policy brief from the Educating All Learners Alliance and New America is the most substantive disability-centered AI policy document we’re aware of. It organizes its recommendations around four pillars — Upholding Civil Rights, Data Privacy and Student Protection, Accessibility by Design, and Transparency, Monitoring & Accountability — anchored in the EDSAFE SAFE Framework. Our AI Collaboration guidelines share significant normative ground with this brief. They also go further in several directions the brief stops short of.
Building for the margins strengthens the center. The brief’s closing argument — “designing for the few empowers the many” — is the same thesis that runs through our toolbelt theory section, our UDL framing, and our treatment of accessibility by design. Both documents hold that centering disabled students in AI policy produces better systems for everyone. That is not a rhetorical flourish. It is an architectural principle.
AI as essential assistive infrastructure, not convenience. The brief frames AI tools as essential “ramps” for students with disabilities, legally mandated rather than optional. Our guidelines hold the same position from the community side: for nonspeaking and AAC-using community members, AI can be part of their voice — a communication scaffold, not an identity replacement. The CDT Hand in Hand 2025 data we both cite makes the same point empirically: 73% of students with IEPs or 504 plans are already having back-and-forth AI conversations, compared to 63% of peers without accommodations. These students have already voted with their behavior.
Bias is documented, not theoretical. The brief cites Penn State research showing AI sentiment tools consistently flag disability-related terms as toxic or negative even in neutral contexts. We cite the same research in our “Tensions We Sit With” section. The brief also surfaces the 2025 systematic review finding that 0% of AI interventions for students with learning disabilities were rated Low Risk for bias — 70% Moderate Risk, 30% High or Serious Risk. We cite that finding in our “Harm Reduction, Not Absolution” section. Shared evidentiary ground: bias in AI systems affecting disabled students is not a hypothetical future risk. It is the current baseline.
Human in the loop as non-negotiable. Both documents insist that consequential decisions about disabled students — placement, services, accommodations, interventions — must remain with humans, even when AI informs them. The brief frames this as legal compliance with IDEA and Section 504. Our guidelines frame it as the same principle: educators and community members retain final authority to interpret, validate, or override AI-generated outputs. AI assists. It does not decide.
Data privacy for disability information requires heightened protection. The brief’s Pillar 2 treats disability status, IEP data, and 504 data as specially sensitive information requiring stricter privacy safeguards than general student data — data minimization, purpose limitation, mandatory deletion timelines. Our guidelines hold the same standard: we do not feed personal or sensitive community information into AI systems, and we name the particular risk that disability data used to train algorithms can encode and amplify the pathology paradigm.
AI literacy as protection. Both documents treat AI literacy as a non-negotiable foundation — not to promote AI adoption but to protect people from its failure modes. The brief calls for shared competency across educators, administrators, and families. Our harm reduction section makes the same argument: higher AI literacy correlates with lower AI receptivity. Teaching people how these systems work makes them more cautious, not more dependent. Suppressing education to discourage use creates an exploitable class of people whose ignorance AI companies will not hesitate to capitalize on.
Adaptive practice over static policy. The brief explicitly warns that any policy that remains static risks immediate obsolescence or inadvertent civil rights infringement, given the pace of AI development. Our guidelines hold the same position — this page is a living document, updated as the landscape shifts. The brief’s student feedback cycle framing and our constructionist orientation arrive at the same requirement: the lived experience of learners with disabilities is the primary evidence base for whether any of this is working.
Where we go further.
The brief operates within a disability rights frame and makes that frame rigorous and actionable. It does not interrogate the paradigm of the tools themselves. It asks whether AI accommodates disabled students. We ask whether AI embeds assumptions about what neurodivergent cognition is or should be — whether the ramp is accessible, yes, but also whether the building was designed against you in the first place. That is the neurodiversity paradigm versus the pathology paradigm distinction, and it is structural rather than policy-level.
The brief does not engage the constructionist question. It treats AI primarily as assistive technology and administrative tool. Our MADTech and constructionism section insists on a harder distinction: is the learner building something meaningful with the tool, or is the tool doing something to the learner? Papert and Freire supply the frame the brief doesn’t reach for. Banking education produces compliance. Constructionism produces makers. That question — is the human the maker or the subject — does not appear in the brief’s pillars. It is load-bearing for us.
The brief’s anti-behaviorism is rhetorical. It mentions “compliance over wellbeing” once and moves on. Its proposed accountability mechanisms — ESSA evidence tiers, outcome metrics, disaggregated data — can be captured by behaviorist logic, because what gets measured shapes what gets built. Our anti-behaviorism is structural. It runs through the POSIWID standard, the co-design political demand, and the refusal to evaluate tools by aggregate outcomes rather than by whether they work for the most excluded users.
The brief’s stakeholder engagement model is consultation-forward. Students and families tell administrators what’s working; administrators decide. Our “nothing about us without us” framing is stronger: neurodivergent people are knowledge-holders, not end-users providing feedback. Co-design is power-sharing at the architecture stage. The brief gets close to this — it calls for “authentic power-sharing” — but stops at the advisory structure level rather than the political demand level.
Finally, the brief never makes the inversion that is Stimpunks’ foundational argument: broken systems, not broken people, applied to AI. The brief consistently frames the problem as AI needing to better serve disabled students. It does not frame it as educational systems having historically harmed neurodivergent people, and AI being at risk of automating that harm at scale. That framing difference is not cosmetic. It determines where accountability lands.
We recommend this brief as essential reading for anyone developing AI policy affecting disabled students. It is the strongest policy document in this space. Our guidelines pick up where it stops.
Alignments between Stimpunks AI and Disability Justice guidelines and A People’s Guide to AI
Mimi Onuoha and Mother Cyborg (Diana Nucera). A People’s Guide to AI. Allied Media Projects, 2018.
Who is most harmed comes last in the conversation. The People’s Guide opens with a pointed observation: “The exact populations who are not widely included in current conversations about the technology are the ones who face the greatest risk.” Disabled people, neurodivergent people, and people with intersecting marginalized identities have the most to lose from AI deployed carelessly — and the least power over how it’s designed. This is the founding premise of our disability justice framing.
Data is never neutral. The book’s treatment of predictive policing names the “garbage in, garbage out” problem with precision: algorithmic outputs are only as fair as the data fed into them, and that data is shaped by who is surveilled, who is believed, and who is counted. The same applies to diagnostic algorithms, employment screening, benefit eligibility systems, and educational AI — all of which have documented records of harming disabled people. Biased training data doesn’t just produce biased outputs. It launders existing inequity as objective truth.
The question of who has power. “The use of AI comes down to the question of who has power.” Disability justice asks the same question. When AI systems are built by non-disabled developers, trained on non-disabled data, and deployed to “optimize” services for disabled people without their involvement — that’s not efficiency. That’s power exercised over a community that had no say.
Nothing about us without us. The People’s Guide argues that communities should be able to “identify their own problems, and decide on their own uses for technology.” This is the “nothing about us without us” principle that disability justice has long held as foundational. It’s why co-design with disabled people isn’t a nice-to-have — it’s a prerequisite for legitimacy.
Overcollection and undercollection. The book describes a “cycle of data violence”: marginalized communities are simultaneously over-surveilled (data collected about them without consent) and underserved (data that would serve them is absent or ignored). This maps directly onto disability: surveillance of disabled people through benefits systems, healthcare, and institutional settings is extensive, while disabled people’s own experiences, preferences, and expertise are systematically excluded from the data that shapes their care.
Technology as bulldozer, not hammer. The metaphor the authors offer — AI is a bulldozer, not a hammer — captures the access and power asymmetry precisely. Disabled communities need tools they can pick up, adapt, and put down. What they typically encounter instead are systems deployed on them, scaled to institutional needs, and nearly impossible to contest or exit.
