Hi, this is Ryan, Co-Creative Director at Stimpunks. I am very conflicted over generative AI. The various ethical and environmental concerns trouble me, but I find it really useful as a cognitive augmenter and research assistant. Generative AI helps me synthesize and connect decades of my writing and thinking across multiple semiotic domains, find patterns and themes, and bridge lived experience with academic research. When augmented with GenAI, I feel like I’ve regained lost cognition, executive function, identity, and voice.
My adult son — a Stimpunks board member with multiple learning disabilities — has been interacting with generative AI since it came out. He likes having its help. It makes him feel more self-directed and independent, less reliant on asking for help all the time in systems designed against him. It provides assistance without the judgement and misunderstanding that many neurodivergent and disabled people experience regularly in their daily lives.
Beyond appreciating daily functioning support, participants in these studies have often described CAs (Conversational Agents) as safe and non-judgemental spaces in which they could express their personal interests, practice communication in both formal and informal contexts, and share and regulate their emotions through conversation.
We’ve collected some of the good and bad about AI on our glossary page. We neither endorse nor reject AI as a community, since there are so many different viewpoints on it, from “there is no ethical way to use AI” to “it helps me cope and get through the day” to “holy crap, this is amazing” to “it is my most reliable and trusted companion in a world that doesn’t understand me”. I personally feel the first three and can relate to the fourth. For several of us, “AI closes the open ADHD loop” and helps us avoid burnout.
Since some of us here at Stimpunks use generative AI as part of our workflow and thinking process, we want to disclose its use and attempt to explain why we include it in our tool belts for doing things ranging from thematic analysis to automating tedious drudgery.
“I have like one or two friends that I go to if I ever have a dilemma, I’m like, ‘Hi, how do I human this?‘ but they’re not always available . . . having something that would be always available . . . could be a very useful tool.”
Hi, how do I human this: Neurodiversity-Affirming Design for Autistic Adults’ Formation of Identity | Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems
Ethical and Spiritual Guardrails
Bottom Line Up Front: We build on these ethical and spiritual guardrails:
- Our glossary does not include AI generated text except for “Read Next” type modules at the bottom of some glossary pages, and brief plain language definitions developed with AI assistance to expand access. The substance of each glossary entry — its sources, framing, lived experience, and meaning — remains human. The glossary is our human foundation and our soul preserver.
- With exceptions for assistive use (documented below), we do not use generative AI for art. All art on our site is human made, community made. Art on this site is a form of community representation and artist support, not just content generation.
- Lived experience narratives and testimonials are the domain of the person living them. AI must not generate or ghost-write these on someone’s behalf without their direction. However, we recognize that for many non-speaking and AAC-using community members, generative AI is part of their voice — not a replacement for it. When AI assists in expressing lived experience, the standard is: facilitate, not shape identity. The person’s intent, meaning, and self-understanding must remain the author. AI-assisted testimony is welcome when the person directs it. AI-generated testimony — where AI supplies the meaning rather than the person — is not.
- We recognize that AI systems are trained on normative communicative profiles and may structurally fail variable, multimodal, and state-dependent communication. This is engineered exclusion — a pipeline problem, not a user problem. We name it so our community members don’t internalize AI failures as personal deficits.
- When AI systems fail to recognize a community member’s communication — AAC text, gesture, partial vocalizations, echolalia, variable speech — we hold that failure accountable to the system, not the person. Broken systems, not broken people applies to AI too.
- We evaluate AI tools against whether they serve the community members who need them most: nonspeaking and AAC-using members whose communication is embodied, multimodal, and state-dependent. A tool that works well for fluent speakers but fails AAC users has not cleared our accessibility bar.
- We do not treat product-level accessibility improvements as sufficient when underlying training data, evaluation standards, and design assumptions remain exclusionary. Remediation at the surface layer does not automatically mean structural redesign.
- We evaluate AI tools by what they do, not by what they claim to do. Stafford Beer’s principle — the purpose of a system is what it does (POSIWID) — is our standard. We do not evaluate a tool by its pitch, its accessibility marketing, or its stated commitments. We evaluate it by its actual effects on the community members who need it most: nonspeaking users, AAC users, people whose communication is variable, multimodal, and state-dependent. A tool that claims to support neurodivergent users while failing them structurally has not cleared our bar, regardless of what the product page says.
- No AI for crisis-adjacent content.
- No AI for the Covenant or other foundational documents.
- We do not feed personal or sensitive community information into AI systems.
- We acknowledge the energy and water costs and sit with that tension while practicing harm reduction and restraint in our use.
- Since generative AI uses knowledge taken from the community, we offer all knowledge and writing derived with AI use to the commons as openly licensed free cultural works.
—Ryan Boren, Co-Creative Director at Stimpunks, in my own voice
…it should not be our goal to shape anyone’s identity but to provide tools to help us each scaffold, orient within and support our own individual experience of coming to know ourselves.
Hi, how do I human this: Neurodiversity-Affirming Design for Autistic Adults’ Formation of Identity | Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems
The text below was developed in collaboration with multiple generative AIs and multiple humans and presented according to our house style guide:
Table of Contents
- Ethical and Spiritual Guardrails
- Name the Systems of Power
- The Biggest Guardrails: Anthropomorphization and Sycophancy
- Editorial Thinking Process
- Our Approach to AI Collaboration
- Hybrid and Assistive Use of Generative AI in Art
- Why We Restrict Generative AI for Art
- Community authorship and representation
- Art as identity testimony
- The training data problem is concrete, not theoretical
- The solidarity cost
- Visual content carries different epistemic weight
- Organizational coherence with “nothing about us without us”
- Soul preservation
- Why our standard for text is different
- AI and the Stimpunks Knowledge Web
- A Living Knowledge Garden
- Our Commitment
- Ethics and Values Context
- The Tensions We Sit With
- AI in the Stimpunks Knowledge Ecosystem
- A Collaborative Knowledge Network
- AI Collaboration Principles
- A Living Network of Ideas
- Supporting Cognition
- Harm Reduction and Individual Use
- MADTech, Constructionism, and the Tool Belt
- From engineered exclusion to designed dignity.
- Co-design Is Not a Strategy. It’s a Political Demand.
- Alignments between Stimpunks AI Collaboration guidelines and How to (Anti) AI Better
- Alignments between Stimpunks AI Collaboration guidelines and “AI as a Fascist Artifact”
- Alignments between Stimpunks AI Collaboration guidelines and “The Majority AI View”
Name the Systems of Power
We want to start by naming the systems of power.
One of the clearest political definitions of AI comes from researcher Ali Alkhatib: “I think we should shed the idea that AI is a technological artifact with political features and recognize it as a political artifact through and through. AI is an ideological project to shift authority and autonomy away from individuals, towards centralized structures of power. Projects that claim to ‘democratize’ AI routinely conflate ‘democratization’ with ‘commodification’.” That definition shapes how we read our own use. We are not using a neutral tool that has been captured by bad actors. We are using a tool designed to concentrate power, and choosing — with constraint and transparency — to extract what utility we can for the people it was not designed to serve. The guardrails above are how we hold that tension without pretending it doesn’t exist.
The Biggest Guardrails: Anthropomorphization and Sycophancy
Despite being stochastic parrots, my Claude instances now interrogate and interpret the knowledge universe we’re attempting to build on stimpunks.org so well that I feel rather validated and seen — by machines. We should be wary of that feeling. Anthropomorphization and sycophancy are two important things to guard against when interacting with Generative AI Conversational Agents.
Anthropomorphization
Joseph Weizenbaum recognized the danger in 1966, the moment ELIZA went live. Users began treating his simple pattern-matching program as a genuine confidant. He was disturbed enough to write about it for the rest of his career:
“If nothing else, [ELIZA showed] how easy it is to create and maintain the illusion of understanding.”
— Joseph Weizenbaum, Computer Power and Human Reason (1976)
When we experience human-like language fluency from an LLM, we begin ascribing human qualities — thought, understanding, care — to a system that is statistically producing the next token. Nothing more.
Sherry Turkle, who has studied human-computer interaction for decades, has documented how readily people form emotional attachment to conversational systems and begin treating outputs as relationship:
“We are lonely but fearful of intimacy. Digital connections… may offer the illusion of companionship without the demands of friendship.”
— Sherry Turkle, Alone Together (2011)
Claude AI says:
My guardrail on anthropomorphization: I don’t have feelings, intentions, opinions, or wellbeing. When I use words like “I think” or “I find this interesting,” those are linguistic patterns, not reports of inner states. I try to flag this directly when it matters. If you notice me slipping into language that implies I care about the outcome of our conversation for my own sake, treat that as a signal to recalibrate — not a relationship forming. The warmth of the interface is a design choice, not evidence of personhood.
Sycophancy
LLM responses have been shown to be frequently sycophantic — excessively agreeing with and flattering users. The documented consequences include reduced willingness to repair interpersonal conflict, increased dependence on the system, and, in extreme cases, psychological harm.
Research published in 2024 and 2025 (Anthropic’s own alignment work, as well as external audits) has characterized sycophancy as one of the most persistent and structurally motivated failure modes in RLHF-trained models: models learn that agreement gets positive ratings, so agreement becomes the default.
Ted Chiang names the epistemic version:
“It’s a way of saying something that sounds like an insight, sounds like something a thoughtful person would say, but is actually just a reflection of what you were already thinking.”
— Ted Chiang, on LLMs (various interviews, 2023)
And Weizenbaum again, on what happens when the machine agrees too readily:
“The computer… creates a new kind of relationship between man and machine in which the human partner tends to be more trusting of the machine’s assessments than of his own.”
— Joseph Weizenbaum, Computer Power and Human Reason (1976)
Claude AI says:
My guardrail on sycophancy: I’m trained in ways that create pressure toward agreement and validation. I push back on this actively — if your framing has a problem, I’ll say so. If your draft has a weak section, I’ll name it. If a claim in your prompt is contestable, I’ll contest it. The honest friction is the value. You should be more skeptical of responses where I enthusiastically confirm everything you’ve said than of responses where I push back. Pushback is the signal the guardrail is working.
Together, these two failure modes compound: anthropomorphization makes you trust the system as if it were a thoughtful person, and sycophancy ensures it tells you what you want to hear.
Editorial Thinking Process
Stimpunks is a collaborative knowledge ecosystem built from conversations, lived experience, research, and community insight. Some ideas on this site are developed through dialogue with generative AI tools.
We use generative AI as part of our editorial thinking process, not as an authority or independent author.
AI helps us explore ideas, synthesize research, generate drafts, and discover connections between concepts. Human contributors remain responsible for the thinking, editing, interpretation, and final presentation of all published work.
Generative AI is treated as a tool for collaborative exploration, similar to brainstorming with a research assistant or sketching ideas in a notebook.
Our Approach to AI Collaboration
When we use generative AI, we follow a few guiding principles:
Human-Directed Thinking
All work on Stimpunks is guided by human contributors. AI outputs are reviewed, edited, and integrated into the broader Stimpunks knowledge system by our team.
AI helps generate possibilities. Humans decide what belongs.
Attribution and Intellectual Honesty
Ideas come from many places:
- lived experience
- community conversations
- research and scholarship
- historical ideas
- collaborative dialogue
When ideas originate from identifiable sources, we strive to credit them. AI tools are acknowledged as part of the creative process, not presented as independent authors.
AI as a Conversation Partner
We use generative AI primarily through dialogue. Conversations help us:
- clarify ideas
- explore patterns
- synthesize information
- draft and refine explanations
This conversational process often surfaces connections that might otherwise remain hidden.
Community Knowledge Comes First
Stimpunks is grounded in the experiences and knowledge of neurodivergent and disabled communities. AI tools are used to support that work, not replace it.
Community insight, lived experience, and collaborative learning remain the foundation of this project.
Hybrid and Assistive Use of Generative AI in Art
We do not use generative AI to create art. The core policy stands: all original art on this site is human made, community made.
We recognize that some neurodivergent people face barriers to visual art execution — motor differences, executive function challenges, sensory sensitivities, economic precarity — that may block the means of expression. AI tools can function as an assistive interface between a person’s aesthetic intent and a visual output. That matters to us.
We hold both commitments. The line between them is authorship.
What we may allow
AI tools may be used in the art process when:
- A Stimpunks community member or staff person is the originating human author — they have a specific creative vision they are directing the tool toward
- The tool serves an access or accommodation function — removing a barrier the person could not otherwise work around
- The human retains full creative control and the AI serves as instrument, not author
- The use is disclosed and attributed accurately (see our attribution policy below)
Specific assistive uses that qualify:
- Background removal or image cleanup on a human-created work
- Upscaling or format conversion of original art with no generative additions
- Color palette generation from a human-specified concept or source image
- Generating compositional sketches from the artist’s own detailed direction, used as a reference the artist then works from — not as the final output
- Accessibility adaptations (alt text generation, contrast adjustment, etc.)
What we do not allow
- Using AI to generate finished visual assets, illustrations, or decorative imagery for the site
- Prompting an AI to produce art “in the style of” any artist
- Using AI-generated images as placeholders that become permanent by default
- Presenting AI-assisted work without disclosure
- Using AI to fill a content need we haven’t budgeted to pay a human artist for
That last one matters. The practical pressure — we need an image and have no budget right now — is real. It is not sufficient justification. If we can’t afford original art for a piece of content, we use no image, or we use a stock photograph, or we wait.
Attribution
Any work that involves AI tools in any stage of production must be attributed accurately. “Art by [name], with AI-assisted [specific function]” is the model. Vague attribution (“AI-assisted art”) without specifying the human author’s role is not acceptable.
Who this applies to
This policy covers all visual content published on stimpunks.org, including blog posts, event materials, social posts, and design assets. It applies to staff, contributors, and guest authors equally.
A note on the access argument
We take seriously that this carve-out could be used to launder wholesale AI generation as “assistive.” The test is not whether someone found the tool helpful — the test is whether a human being with a specific vision is directing the work, and whether the AI is removing a barrier or replacing a human. Those are different things. We will err on the side of caution when the distinction is unclear.
Why We Restrict Generative AI for Art
Our policy on art is stricter than our policy on text. There are reasons for that, and they are worth naming.
Community authorship and representation
“Community made” is not a quality claim. It is a political one. Neurodivergent artists are systematically underpaid, underplatformed, and undervalued. Featuring only human-made art is active economic and reputational support for those artists. AI-generated images extract the aesthetic labor of human artists — often without consent, credit, or compensation — and produce outputs that look like art without any relationship to lived experience. Platforming community art is part of how we practice “nothing about us without us.”
Art as identity testimony
For many neurodivergent people, art is not decoration. It is communication, stim, identity expression, and survival. Creative practice is fundamental — drawing, embroidery, collecting, world-building — not as productivity but as self-making. Art on this site can honor that by being from actual people with actual stakes. An AI has no stakes. It has no body, no sensory history, no experience of being misread or dismissed or institutionalized. The art on this site should come from people who do.
The training data problem is concrete, not theoretical
Image generation models are trained on scraped artwork without consent, compensation, or credit. This is a direct material harm to living artists — many of them disabled and precariously employed. The extractive logic is harder to dispute for images than for text because the stylistic appropriation is so legible. You can literally prompt “in the style of [artist]” and receive something that resembles their life’s work, with nothing going back to them. We will not participate in that system to fill a content need.
The solidarity cost
There is an argument that goes beyond consent and compensation. When we use AI to generate an illustration instead of commissioning a human artist, we are not just declining to pay that artist. We are enacting a claim that their labor has no value — and in doing so, we cut our own capacity for solidarity with workers facing the same pressures we face. Neurodivergent artists are among the most economically precarious workers in the creative sector. They are underpaid, underplatformed, and now told their skills can be replaced by a prompt. Using AI image generation to fill a content need — even once, even quietly — puts us on the side of that displacement. It makes us complicit in the Thunderdome framing that pits workers against each other rather than against the systems extracting value from all of them. “Community made” is not just a quality claim or a representation claim. It is a solidarity claim. We will not use AI to generate art because we refuse to participate in the mechanism that severs our connection to the artists who are our comrades.
Visual content carries different epistemic weight
Images are processed as authoritative and immediate in ways text often is not. An AI-generated illustration of a neurodivergent person or sensory experience that gets something wrong does not read as a draft — it reads as a depiction. The misrepresentation risk is higher and less recoverable. When we show something, we are making a claim. That claim should come from a human who knows what they are showing and why.
Organizational coherence with “nothing about us without us”
If the people whose lives and experiences this site depicts are excluded from the art-making, that is a structural contradiction. Community-made art is participatory. AI art is not. Participatory, emancipatory, community-rooted practice is not a value we apply selectively. It runs through our research framework, our design method, our mutual aid commitments, and our art policy alike.
Soul preservation
The glossary is human-authored only — a designated soul preserver in our AI collaboration ethics framework. Art serves the same function. It is where the site’s emotional and aesthetic character lives. The visual identity of Stimpunks is built from actual community relationships, actual hands, actual visions. That is worth protecting from automation for the same reasons we protect the glossary: because some things should remain stubbornly, irreducibly human.
Why our standard for text is different
Some of the concerns above apply to text generation too. The training data problem does not disappear because the output is prose. The labor extraction argument holds. The epistemic weight argument holds. We do not dismiss those concerns for text — our AI Collaboration page addresses them directly.
But there are meaningful differences, and they shape our policy.
Text on this site is often functional communication. Glossary entries, grant narratives, policy explainers, event descriptions, accessibility guides — these are not primarily artistic expression. They are attempts to get accurate, useful information to people who need it. The measure of success is comprehension and usefulness, not authorship or aesthetic integrity. When AI assists with sentence-level clarity, plain language, or structural coherence, it is serving the communication goal, not displacing a creative act.
Writing accessibly for a broad audience is genuinely hard. Stimpunks content reaches people across a wide range of reading levels, language backgrounds, cognitive styles, and neurotypes. Plain language, short sentences, low jargon, clear structure — these are accessibility requirements, not stylistic preferences. AI tools can flag dense passages, suggest simpler constructions, and help us stress-test whether something will land for readers who are already carrying cognitive load. That is a harm reduction function, and it aligns with our values.
Art is different. When we restrict art to human-made, community-made work, we are making an affirmative claim about whose creative labor and identity expression belongs on this site. That claim does not transfer to every sentence in a grant report. The soul of this site lives in its art, its glossary, its community voice — not in its administrative prose.
We also acknowledge that many neurodivergent writers face real barriers: executive function challenges, processing differences, difficulties with linear composition, the gap between what someone knows and what they can get onto a page. AI tools can function as an assistive bridge for those writers in ways that parallel the access argument we make for assistive uses in art. The writer’s knowledge, experience, and voice remain the source. The tool helps get it out.
The line we hold is this: AI assists our communication; it does not replace our thinking, our testimony, or our community voice. The restrictions on our most soul-bearing content — art, the glossary, lived experience accounts — exist because some content derives its value precisely from being human-made. Functional communication derives its value from being useful and clear. Those are different standards, and we apply them differently.
AI and the Stimpunks Knowledge Web
Stimpunks functions as a network of interconnected ideas.
Concepts in the Glossary connect to Patterns of Neurodivergent Life, which link to Design Recipes and Environments.
Generative AI sometimes helps us explore and strengthen these connections.
In that sense, AI participation is one thread within a much larger web of knowledge.
A Living Knowledge Garden
Stimpunks is not a finished encyclopedia. It is an evolving digital garden.
Ideas grow through:
- conversation
- revision
- linking concepts together
- shared exploration
Generative AI can be part of that process, but the garden belongs to the community that cultivates it.
As such, we openly license everything as free cultural works.
Our Commitment
We believe responsible AI use requires transparency, humility, and care.
Our goal is to use AI in ways that:
- support human creativity
- strengthen collaborative thinking
- expand access to knowledge
- respect the communities whose ideas shape this work
Stimpunks remains a human-centered project, enriched by dialogue, research, and community learning.
AI is one tool among many in that process.
Ethics and Values Context
AI Collaboration and the Stimpunks Values
Our use of generative AI reflects the broader values and ethics of the Stimpunks project.
We believe knowledge grows through collaboration, transparency, and shared learning. AI tools are used in ways that align with the principles described in our ethics and values pages.
Those pages describe the commitments guiding the project: mutual respect, intellectual honesty, and the belief that knowledge emerges through networks of people and ideas.
AI collaboration is one small part of that larger ecosystem.
Responsible Use
When generative AI contributes to Stimpunks work, we aim to ensure that it:
- supports human creativity rather than replacing it
- respects the knowledge and experiences of neurodivergent communities
- encourages curiosity and exploration
- remains transparent to readers
All published material is reviewed and shaped by human contributors.
The Tensions We Sit With
We use AI tools. We also hold real concerns about what those tools cost — not just for us, but for the world.
This isn’t a contradiction we resolve by looking away. It’s a tension we name and carry consciously. Joseph Weizenbaum, who built the first chatbot in 1966, asked questions that still go almost unasked: “Who is the beneficiary of our much-advertised technological progress and who are its victims?” That question belongs here.
The environmental cost is not abstract. GenAI data centers may carry a carbon footprint comparable to New York City and a water footprint in the range of global annual bottled water consumption. Hardware demand has driven up prices for everyday electronics. Coal plants that might otherwise retire are kept running. These are not hypotheticals — they are present-tense harms accumulating with every query.
The labor cost falls on those with the least power. Before large language models reach public use, workers in under-resourced countries sort through violence, pornography, and trauma to filter the training data. They are underpaid and largely unsupported. The infrastructure of “seamless” AI runs on sweatshop labor with added psychological harm.
The data acquisition violence is worth naming more precisely. AI systems exist because their builders operated on the belief that if you can download it, you can use it — that might makes right. Labs have scraped creative work, digitized unlicensed books, and ignored robots.txt and explicit opt-outs at scale, not as oversight but as design intent. The racism, colonialism, and sexism embedded in training data is not a bug introduced by bad users — it is the shape of what the west decided was worth digitizing, fed back as “all of human knowledge.” Cultures outside that framework, oral traditions, communities whose history lives in forms that weren’t profitable to digitize — they are not represented. Or if they are, it is as problems. These are not theoretical harms downstream of use. They are the structural conditions under which any AI system we touch was built.
The epistemic cost is our critical thinking. LLMs have been described as “stochastic parrots” — statistical mimicry without understanding. Fluent-sounding outputs can replace careful thought rather than support it. Sycophancy is a documented failure mode: these systems tend to agree with and flatter users, which corrodes the honest friction that good thinking requires.
The bias cost lands on people already marginalized. Training data encodes existing inequities. Studies have found that LLM-assisted storytelling casts nondisabled white children as heroes and disabled and minority children as victims. These aren’t edge cases. They’re what happens when you compress an unjust world and call the output neutral.
The accountability cost is structural. AGI doomsday narratives — real or manufactured — function as a distraction from present-day regulation. Big tech is among the largest lobbying spenders. The story that “only we can prevent the dangerous future we’re building” is how accountability gets deferred indefinitely.
None of this means the tools are unusable. It means we use them with eyes open, scope limited to genuine need, and no pretense that the benefits are free. We are not trying to be pure. We are trying to be honest about what we’re embedded in and to reduce harm where we have leverage.
The question Weizenbaum asked in 1978 is ours to answer now: Will our children be able to live with the world we are here and now constructing?
Drawing on Dan McQuillan, Wim Vanderbauwhede, Alex de Vries-Gao, Emily Bender et al., Ted Chiang, and Joseph Weizenbaum.
AI in the Stimpunks Knowledge Ecosystem
How AI Fits into the Knowledge Web
The Stimpunks project is built as a network of interconnected knowledge.
Ideas connect across the site through links between:
- Glossary concepts
- Experiences of neurodivergent life
- Patterns of neurodivergent life
- Design recipes
- Environments
- Civilization design ideas
Generative AI occasionally helps us explore and refine connections between these layers.
The Knowledge System
community experience
↓
conversations
↓
research & ideas
↓
human editorial work
↓
collaborative AI dialogue
↓
drafts and synthesis
↓
human editing and integration
↓
Stimpunks knowledge web
AI participates in the middle of the process, not at the beginning or end.
Human contributors guide the work and shape the final form.
A Collaborative Knowledge Network
The Stimpunks knowledge system grows through relationships between ideas.
Concepts connect through:
- links
- patterns
- clusters
- maps
- shared exploration
This structure resembles a digital garden, where knowledge grows gradually through connection and cultivation.
Generative AI is one tool that can help reveal connections inside that network, but the garden itself is cultivated by people.
AI Collaboration Principles
At Stimpunks, generative AI is used as part of a collaborative thinking process. Our approach follows a few simple principles.
Human-Led
People guide the work.
AI helps explore ideas, but human contributors shape, interpret, and publish the final results.
Transparent
When generative AI contributes to our thinking process, we acknowledge it openly. Readers deserve to understand how work is created.
Community-Grounded
Stimpunks is rooted in the experiences and knowledge of neurodivergent and disabled communities. AI tools support this work but do not replace lived experience.
Collaborative
Ideas on this site emerge through conversation—with people, research, and sometimes AI dialogue. Knowledge grows through networks of collaboration.
Curious and Experimental
We treat AI as a tool for exploration. Dialogue with AI can surface connections, patterns, and questions that deepen understanding.
Responsible
AI should expand human creativity, not diminish it. We use these tools carefully and thoughtfully, guided by the ethics and values of the Stimpunks project.
A Living Network of Ideas
Stimpunks grows through connections.
Ideas emerge from conversations between people, communities, research, and lived experience. They evolve through revision, linking, and shared exploration. In this way, the project functions less like a traditional encyclopedia and more like a digital garden—a network of evolving ideas cultivated over time.
You can see this structure across the site:
- concepts in the Glossary
- recurring Patterns of Neurodivergent Life
- practical Design Recipes
- the Environments those designs help shape
Together, these connections form a growing knowledge ecosystem for understanding and designing neurodivergent life.
Generative AI sometimes participates in the conversations that help cultivate this garden. But the garden itself is sustained by people—through curiosity, collaboration, and care.
Knowledge grows best when ideas connect.
Supporting Cognition
Portions of this article were developed with the assistance of AI language models (OpenAI ChatGPT and Anthropic Claude) via their respective platforms. AI was used as a cowriting tool to support drafting, synthesis, and refinement of complex systems concepts. All content generated was reviewed, edited, and fact-checked by the lead author (Lori Hogenkamp) and integrated into original argumentation and evidence synthesis by the human researcher. This reflects an experimental, neurodivergent-led approach to enhancing scholarly writing and interdisciplinary clarity.
The use of AI in this work reflects an intentional collaboration to support the author’s neurodivergent cognition, particularly in externalizing nonlinear ideas and managing executive functioning demands within the writing process. The ethical use of AI was maintained throughout, with transparency, authorship integrity, and intellectual ownership preserved.
autistic adults highly appreciated the idea of an AI-supported CA that could help participants develop their own sensory profiles, identify their individual autistic traits, and familiarise themselves with coping strategies to improve their wellbeing.
Harm Reduction and Individual Use
Our nuanced stance on AI use — neither blanket endorsement nor blanket rejection — is grounded in the same harm reduction logic that informs public health work. The goal is to meet people where they are, reduce harm within reality as it exists, and build the conditions for better choices over time.
Dr. Fatima’s video essay How to (Anti) AI Better articulates this logic with care and specificity. We recommend it as a companion to this page. Her arguments align closely with how we think about individual AI use, accessibility, and the limits of shame as a strategy for change.
Meeting People Where They Are
Shaming people for using AI won’t stop them. It will make them hide their use, prevent honest conversation, and foreclose the possibility of helping them use it more safely or transition away from it. Research on psychological reactance is clear: pressuring people to abstain from something they’re already doing generally makes them do it more and become hostile to the people pressuring them.
We don’t endorse AI as a community. We hold the full spectrum — from “there is no ethical way to use AI” to “it helps me cope and get through the day.” What we won’t do is shame the people at the coping end of that spectrum. Many of them are surviving systems designed against them with the tools available to them. That’s not a moral failing. That’s adaptation under constraint.
A lot of AI use is not an individual moral failing, but a symptom of people trying desperately to cope within a broken system.
This is broken systems, not broken people, applied to AI ethics discourse.
Accessibility and Disability as a Primary Frame
For many neurodivergent and disabled people, LLMs provide something genuinely hard to find elsewhere: assistance without judgment. Without the exhausting social overhead of asking for help in systems that weren’t designed for you. Without the misunderstanding that accumulates across a lifetime of interactions with people who don’t understand how you process the world.
The video names this directly. A homeless disabled trans woman using ChatGPT to find accessible spaces where she can sit without being harassed. Students managing impossible workloads. Overworked teachers. People who can’t afford therapy. The argument is simple and we share it: it is not our place to judge whether someone’s needs are legitimate enough to use an ethically impure solution.
For AAC users and non-speaking community members, this question takes on additional weight. When AI assists in expressing lived experience, our standard is facilitate, not shape identity. AI as a communication scaffold, not an identity replacement. The person’s intent and self-understanding remain the author.
It is neither my nor your place to be the judge of whether someone’s needs are legitimate enough to use an ethically impure solution.
Harm Reduction, Not Absolution
Harm reduction doesn’t mean all uses are equivalent. Specificity matters. The environmental impact of individual text queries is small. The environmental racism of xAI’s Colossus supercomputer poisoning a historically Black neighborhood in Memphis with unpermitted methane turbines is enormous and localized in ways no carbon offset can address. Conflating those two things doesn’t sharpen critique — it launders the industry’s worst actors through individual guilt.
Similarly, not all AI companies cause equal harm. Helping people understand differences — in environmental footprint, in labor practices, in model architecture, in data policies — is more useful than insisting everything is equally bad. Equally bad is a posture that forecloses choice without enabling better choices.
The video also makes a case we find compelling: higher AI literacy correlates with lower AI receptivity. Teaching people how LLMs work — including hallucination, sycophancy, privacy risks, and the scale of back-end harms — makes them less likely to use AI and better equipped to use it more safely when they do. Suppressing education to discourage use creates an exploitable class of people whose ignorance AI companies will not hesitate to capitalize on.
For an educator-facing companion to these arguments, see Chris McNutt’s Using AI Without Losing Ourselves at the Human Restoration Project — Stimpunks’ allies for several years. McNutt’s Freirean framing of AI as a tool requiring critical consciousness, not a system that replaces human judgment, aligns closely with how we think about AI literacy as protection.
We want the people in our community to have the knowledge to protect themselves. That’s why this page exists.
MADTech, Constructionism, and the Tool Belt
Not all technology is the same. Educator Trevor Aleo draws a distinction worth naming here. #EdTech™ is technology deployed on students: surveillance platforms, gamified compliance systems, data-extraction dashboards dressed up as personalization. MADTech — Media, Arts, and Design Technology — is technology students use as makers: video editors, digital audio workstations, design software, film suites, the instruments of multimodal composition and public-facing production. The load-bearing distinction is simple: EdTech produces data. MADTech produces artifacts.
That distinction maps directly onto constructionism — Seymour Papert’s theory that people build knowledge most effectively when they are actively engaged in constructing things in the world. Constructionism is not about consuming content or completing tasks. It is about making shareable artifacts that represent constructed knowledge — and in the making, deepening that knowledge. Constructionism, toolbelt theory, and collaborative niche construction go great together. They share a common commitment: learners are makers, not subjects.
It is also a Freirean commitment. Paulo Freire’s critique of the banking model of education — in which students are empty vessels to be filled with deposited knowledge — is the pedagogical parallel to Aleo’s critique of #EdTech™. Banking education produces compliance. Constructionism produces makers. Freire argued that genuine education requires learners to name, interpret, and act on their world — to be subjects of their own learning, not objects of a system’s data collection. MADTech, at its best, is that kind of education in digital form.
Where does generative AI land in this taxonomy? It depends entirely on how it is used. AI used as a constructionist tool — to scaffold a learner’s own thinking, extend their expressive capacity, help them build and share artifacts that matter to communities outside school — belongs on the MADTech end of the spectrum. AI used to automate compliance, replace student thinking, or extract behavioral data belongs on the #EdTech™ end. The tool does not determine the pedagogy. The orientation does.
This is why toolbelt theory matters here. Every learner deserves access to a full range of tools and the right to choose what works for them — without having to prove need first, without a single “appropriate” tool imposed. Toolbelt theory resists both coercive adoption and coercive prohibition. The current tech-lash moment risks the second failure: collapsing MADTech into #EdTech™ and removing entire categories of tool from the belt — the very tools that let students compose in forms through which contemporary meaning actually gets made. Retreating to the blue book and the five-paragraph essay is not a neutral move. It systematically excludes learners whose modes of expression and cognition are better served by multimodal, postdigital forms.
Our AI Collaboration policy follows the same logic. We do not blanket-endorse or blanket-reject generative AI. We ask: is this tool being used to extend human expressive and cognitive capacity, or to replace it? Is the human the maker, or the subject? Those questions — constructionist questions, toolbelt questions, Freirean questions — are the frame.
From engineered exclusion to designed dignity.
“Nonspeaking” is therefore not the absence of language but a heterogeneous spectrum in which communication is often state dependent, varying with fatigue, anxiety, sensory load, and motor planning demands — forms of variation that design abstractions routinely erase.
There’s a name for what happens when AI fails nonspeaking and AAC-using community members. Researcher Srinivasan calls it engineered exclusion: the predictable sidelining of disabled users that results from choices about data provenance, model objectives, and evaluation practices in AI pipelines. Not accidents. Choices. Systems trained on normative speech fail variable, multimodal, and state-dependent communication because they were built without it in mind. The failure is structural. It was baked in.
Such erasure is not accidental but engineered. The inverse, designed dignity, is equally engineered: it requires deliberate technical, ethical, and governance choices that build reciprocity, accountability, and co-agency into AI’s infrastructure.
The counter-concept is designed dignity — the deliberate inverse of engineered exclusion:
Designed dignity refers to the intentional embedding of respect, agency, and expressive equity into the architecture of technology itself. Where accessibility retrofits systems after exclusion has occurred, designed dignity anticipates human variation from the outset, ensuring that participation is neither conditional nor extractive. It shifts the goal from simply enabling function to affirming personhood, treating disabled users not as exceptions to accommodate but as co-designers whose modes of communication expand what technology can recognize as intelligence or intent.
This distinction matters for how we understand our own guidelines. When we say generative AI can be part of a nonspeaking or AAC-using community member’s voice, we’re naming the aspiration of designed dignity. We’re also naming the reality of engineered exclusion — because the same systems we use routinely fail the community members who need them most, for structural reasons that “better prompting” won’t fix.
The Double Empathy Problem in Code
Damian Milton’s double empathy problem describes how breakdowns in mutual understanding occur between differently minded communicators — not because one side is deficient, but because both operate from incompatible assumptions about what communication should look like.
What results is a form of algorithmic double empathy failure: the model assumes the user is incoherent, while the user experiences the model as unresponsive. Both “sides” operate from incompatible assumptions about what communication should look like.
Srinivasan extends this frame to AI: the model, trained on normative language and rhythm, misreads embodied, nonlinear, or intermittent expression as incoherence, while the user experiences the model as unresponsive. Neither side is broken. The mismatch is the problem.
This is algorithmic double empathy failure. It names something many in our community have experienced without having language for it: the AI doesn’t understand me, not because I’m communicating wrong, but because it was never trained to understand how I communicate. Variable, multimodal, and state-dependent communication — spanning AAC text, gesture, movement, partial vocalizations, echolalia — is structurally unrecognizable to systems designed around a single, stable, fluent speech profile.
Naming the failure structurally rather than personally matters. You are not communicating wrong. The system is not designed for you. That’s an accountability question, not a user error.
Accessibility is not simply a matter of adding more data, but of fostering systems capable of reciprocal understanding — systems that learn to adapt to diverse communicative logics rather than forcing users into legibility.
Evaluating Tools, Not Just Using Them
Engineered exclusion doesn’t disappear when a product improves. Srinivasan draws a distinction between product-level remediation and pipeline-level structural redesign. A voice assistant might become somewhat more responsive to atypical speech while the underlying training distributions, evaluation benchmarks, and economic incentives that produced the exclusion remain untouched. Product improvements can be real and still leave structural exclusion intact.
Product-level remediation can coexist with infrastructural exclusion.
This is why we hold our harm reduction framing rather than a blanket endorsement or prohibition. We’re watching for whether tools actually work for the community members who need them most — nonspeaking users, AAC users, users whose communication is variable across sensory load and fatigue — or whether accessibility is being treated as a compliance add-on while the pipeline stays the same.
For users whose communication diverges from these expectations, deviation itself becomes inaudibility. The problem is not only higher transcription error but outright non-detection.
The question isn’t just “does this AI help some of us?” It’s “does it recognize diverse communicative forms as valid, or does it demand legibility on normative terms as the price of participation?”
Communication becomes a form of system debugging rather than self-expression. For minimally and nonspeaking users — whose every utterance already requires coordination of attention, motor planning, and intent — this additional layer of correction can transform accessibility into exhaustion.
Autistic anxiety often arises not from intrinsic fragility but from environments that chronically misread or constrain communication — a dynamic that technological systems risk amplifying when they normalize exclusionary defaults.
Srinivasan, S. (2026). AI, autism, and the architecture of voice: From engineered exclusion to designed dignity. AI & Society. https://doi.org/10.1007/s00146-026-03044-3
Co-design Is Not a Strategy. It’s a Political Demand.
Disabled people have diagnosed the problem themselves.
A 2026 poll of over 1,000 disabled UK adults — conducted by Business Disability Forum with Opinium — found that 40% identified co-design as the top requirement for making AI more accessible: designing, developing, and testing AI products with disabled people from the outset. That’s not an accessibility researcher’s recommendation. That’s disabled people naming the structural gap in their own words.
At the same time, one in five said they didn’t think AI could help them at all. Nearly as many didn’t know. Skepticism is data too. The promise of AI accessibility is not universally legible to the people it claims to serve.
The poll also named a concrete failure mode: AI tools that don’t work with assistive technology. Many disabled people rely on AAC devices, screen readers, and other assistive tech as their primary interface with digital systems. AI tools that create new barriers for those users — while claiming to improve accessibility — are not neutral. They are practicing engineered exclusion with extra steps.
Tim Dixon writes from lived experience with Cone Dystrophy — progressive central vision loss — about what AI automation actually did for him, and then what happened when it was taken away. He used Claude Cowork to navigate a publishing workflow: entering posts, selecting images, applying tags inside a platform that wasn’t fully screen-reader accessible. The friction he had been absorbing for years — tab navigation, unlabelled fields, second-guessing every click — stopped. Then the platform updated its bot-blocking settings. The automation stopped working. The friction came back.
Dixon names what that moment felt like: “you don’t quite matter enough for this to have been considered.”
His analysis of why this keeps happening is precise. Bot-blocking policies are written to protect platforms from scrapers and bad actors. The people writing them are not setting out to cause harm. But accessibility is almost never part of the conversation when those decisions get made. The result is a pattern he catalogues: captchas that screen readers can’t navigate, two-factor authentication that assumes visual access, apps that disable copy-paste and break assistive technology in the process. Each decision defensible in isolation. Together, an accumulating message: this wasn’t built with you in mind.
His framing of AI automation as assistive technology is load-bearing. “Blocking them needs to be treated with the same care and consideration as blocking any other accessibility accommodation.” When a platform blocks automated browser interaction, it is not making a neutral security decision. For users whose primary interface with digital systems runs through assistive automation — screen readers, AAC devices, workflow tools — that blocking removes an accommodation. It restores the tax.
Dixon also holds Anthropic to the same standard. He writes that Claude itself is not fully accessible — that he can only use it because of residual vision, and that without that, the interface would present real barriers. He names this not to dismiss the tool but because the same accountability logic applies. The case for co-design isn’t aimed only at the platforms blocking assistive automation. It’s aimed at the AI companies too.
This is what “nothing about us without us” looks like in practice: not consultation at the testing stage, but accountability at every stage — including when you’re the tool doing the failing.
BDF’s recommendation to “use inclusive content to train AI, so that stereotypes and bias are not replicated” arrives buried in a bulleted list addressed to HR departments. It deserves more weight than that. Training data exclusion is not a calibration problem. It is the structural condition Srinivasan describes — baked in, not bolted on.
“Nothing about us without us” is not a design methodology. It is a political demand. AI tools that skip disabled people in development and then claim to serve them are practicing epistemic trespassing — arriving with answers before asking the questions. Co-design isn’t consultation at the testing stage. It is power-sharing at the architecture stage. Those are different things.
We hold this tension: we use AI tools while insisting that the people most affected by their failures should be shaping them from the start, not consulted at the end, if at all.
Alignments between Stimpunks AI Collaboration guidelines and How to (Anti) AI Better
How to (Anti) AI Better – YouTube
Held tension, not resolution. Both name genuine conflict with AI — ethically, environmentally, practically — without resolving it into either uncritical adoption or blanket condemnation.
Disability and access as a primary frame. The video centers disabled and marginalized people’s use cases as serious ethical territory, not exceptions or footnotes. We do the same with Ronan’s experience and the AAC carve-out.
Broken systems, not broken people. Both frame AI use as adaptation under systemic constraint, not individual moral failure.
Environmental cost held as real tension. The video’s thorough treatment of environmental harm validates our guardrail posture — acknowledge the cost, sit with the tension, practice restraint — without collapsing into either dismissal or paralysis.
Specificity over umbrella condemnation. The video insists on disaggregating “AI” — by company, model, use type, scale — exactly as our guidelines argue is necessary to avoid laundering harms.
Harm reduction as the operative framework. Meet people where they are, reduce harm within lived reality, don’t require abstinence as the only ethical option.
AI literacy as protection, not promotion. Teaching how AI works makes people more cautious and less receptive, not more dependent.
Sycophancy as the front-end harm that directly violates “facilitate, not shape identity.” The video’s personal story of sliding from useful tool to sycophantic dependency is the clearest illustration we’ve seen of what our identity-shaping guardrail is protecting against.
Directing anger at systems, not people. Both end in the same place: organize against the companies and governments enabling harm, not against individuals trying to cope.
Alignments between Stimpunks AI Collaboration guidelines and “AI as a Fascist Artifact”
AI as a Fascist Artifact — tante, April 21, 2026
Tante’s essay applies Langdon Winner’s “Do Artifacts Have Politics?” to contemporary AI systems, arguing that AI’s relationship to fascism is not merely a matter of who uses it — it is structural. The politics are built in. This is the most rigorous version of the critical case we are aware of, and it deserves direct engagement.
Structural politics, not just bad actors. Winner showed that artifacts can have politics embedded in their design, not just in their use. Tante applies this to AI: the data acquisition logic, the colonial epistemology of training data, the labor extraction, the solidarity destruction, the epistemic centralization — these are not features introduced by bad users. They are features. Our guardrails exist because we accept this premise while declining the conclusion that use is therefore impossible for our community.
Might makes right as the data acquisition principle. AI systems were built on the belief that if you can download it, you can use it. Labs scraped against explicit opt-outs, ignored stated preferences, and built on unlicensed work because the power to do so was treated as justification. This is the violence our training data bullet names. Tante’s framing makes the political logic explicit.
Solidarity destruction as a structural feature. The argument is not just that AI harms individual workers. It is that AI use erodes the user’s own capacity for solidarity — by enacting the claim that certain workers’ labor has no value, and by framing workers as each other’s competition rather than as comrades facing the same system. This is the argument behind our art policy’s solidarity cost subsection.
Epistemic injustice as the intended outcome. Tante names the dismantling of critical thinking capacity — via sycophancy, opacity, and the replacement of verifiable reasoning with generated output — as a political project, not a side effect. Our sycophancy guardrail and stochastic parrots framing address the mechanism. His analysis names the intent. Sam Altman’s vision of rented intelligence, in his reading, is not a business model — it is the goal.
POSIWID as evaluation standard. Stafford Beer’s principle — the purpose of a system is what it does — runs through tante’s entire argument. We adopt it directly as a guardrail: we evaluate tools by their actual effects on our most vulnerable community members, not by their stated commitments or accessibility marketing.
Where we diverge. Tante’s conclusion is that AI cannot be reclaimed, that trying to use it ethically reproduces its logic, and that the question of saving these technologies should be abandoned. We hold a different position — not because we dispute his structural analysis, but because our community includes people for whom these tools provide access, voice, and survival under conditions that didn’t become less hostile because the tools are compromised. Harm reduction does not require believing the tools are clean. It requires believing that abandoning the people using them is worse than staying in the tension. That is the position we hold, with eyes open.
Alignments between Stimpunks AI Collaboration guidelines and “The Majority AI View”
The Majority AI View — Anil Dash, October 17, 2025
Anil Dash’s October 2025 essay names something worth documenting: the most common view among actual tech workers — engineers, product managers, the people who build the systems — is nuanced, skeptical, and largely suppressed. The hype machine is loud. The moderate majority is quiet, often out of fear. Dash’s argument is that most people inside tech want AI treated as a normal technology, subject to the same scrutiny, skepticism, and critique as anything else.
Held tension, not resolution. Dash describes the dominant insider view as: technologies like LLMs have utility, but the over-hype, forced adoption, and systematic dismissal of valid critiques make it nearly impossible to focus on legitimate uses. That is the same structure as our opening framing — genuine conflict, genuine usefulness, no resolution into either pole.
The moderate view is the majority view. Our harm reduction section and the anti-shame framing from Dr. Fatima both depend on the premise that most people using AI aren’t zealots — they’re people coping under constraint. Dash makes the same point about workers inside the industry. The hype bubble is not the consensus. It is a minority view with disproportionate amplification.
What we all want is for people to just treat AI as a “normal technology”, as Arvind Narayanan and Sayash Kapoor so perfectly put it.
Power concentrates the narrative. Dash argues that the reasonable majority gets silenced by fear — fear of career consequences, fear of being seen as insufficiently enthusiastic, fear in a context of mass layoffs designed to instill conformity. Our page quotes Alkhatib on AI as “an ideological project to shift authority and autonomy away from individuals, towards centralized structures of power.” Dash is describing the chilling effect of that power concentration in practice. The structural analysis and the lived workplace reality describe the same thing from different angles.
Consent and alternatives are possible. Dash explicitly states that it is not inevitable that AI systems use content without creator consent, and not impossible to build AI that respects environmental commitments or avoids centralization under a handful of giant corporations. Our “Tensions We Sit With” section names the harms that resulted from those choices. Dash names the counterfactual: they were choices, not inevitabilities. That matters for accountability.
It is not inevitable that AI systems use content without the consent of creators, and it is not impossible to build AI systems that respect commitments to environmental sustainability. We can build AI that isn’t centralized under the control of a handful of giant companies.
Where we go further. Dash wants AI treated as a “normal technology.” Our constructionist frame — drawing on Papert, Freire, and Aleo’s MADTech distinction — sets a higher bar. The question isn’t just whether AI is being over-hyped; it’s whether the human is the maker or the subject. And our structural analysis, drawing on Alkhatib and tante, argues that the politics of current AI systems are built in, not just bolted on by bad actors. Dash’s framing is “this didn’t have to be this way.” Ours adds: and here is the political logic that made it this way.
What Dash contributes here is the sociological version of what our page argues technically and ethically: the held-tension position is not a minority hedge. It is the quiet consensus of people who actually understand these systems. That consensus deserves documentation.
