Technologies like LLMs have utility, but the absurd way they’ve been over-hyped, the fact they’re being forced on everyone, and the insistence on ignoring the many valid critiques about them make it very difficult to focus on legitimate uses where they might add value.
The Majority AI View – Anil Dash
AI has use cases for things like summarizing limited datasets, translation and transcription, and novel use cases for learning (e.g. choose your own adventure-style activities), but is horrific when it comes to ethical, environmental, and privacy concerns.
We are concerned about the ethics and impact of AI while being interested in AI’s potential regarding accessibility. We gather the good and the bad of AI below.
We concluded our experiment with AI on our website and decided it isn’t worth the ethical quandaries or the massive amount of energy it uses.
Table of Contents
- AI & EdTech
- For Neurodivergent Students AI Can Be a Game Changer
- We must ensure that students are able to understand and wield AI to navigate today’s world, while simultaneously using it to make the world a better place.
- Here’s What Ethical AI Really Means
- Why We MUST Teach AI Ethics in School
- Tools are not replacements for justice and structural change.
- Nothing About Us Without Us
- Ways AI Can Help Disabled People
- Be Warned: AI Is a Plausible Bullshit Generator
- Bigotry in, bigotry out.
- Disabled, poor, and racialized people are least likely to benefit from advanced technologies yet are more likely to be victims of technological abuse.
- Prevalence of Bias Against Neurodivergence‐Related Terms in Artificial Intelligence Language Models
- Companies and Governments Are Using AI to Fuck Everyone and Everything
- The AI Bubble and the Rot Economy
- Tech Ethics Collection
- As If We Existed
- Related Reading
AI & EdTech
AI is the antithesis of education. It’s the antithesis of democracy and it is in many ways the antithesis of the future.
Audrey Watters, ”What should be the relationship of AI & EdTech to education?” | Fireside Chat | CTRH2025 – YouTube
And because educators know that the word intelligence is associated with standardized testing, IQ, and all sorts of racist regimes and eugenics initiatives. Educators know this, right? And that that is the phrase that folks are leaning into right now (not personalized learning, not adaptive learning, not learning analytics, not Google, right, an AI company, or any of the other digital products that schools have been sold under different labels but that essentially do the same sort of thing) should tell you right there that folks are feeling pretty comfortable with the ranking and sorting and hierarchy of AI, right? That this is race science. That old bell curve of Charles Murray is back, big, and repackaged with a very fancy chat interface. And I’m appalled, really, that people would think for any minute that the future should involve a technology whose foundation is inextricable from war and white supremacy.
Audrey Watters, ”What should be the relationship of AI & EdTech to education?” | Fireside Chat | CTRH2025 – YouTube
For Neurodivergent Students AI Can Be a Game Changer
Some children and young people in our community use chat-based AI for cognitive accessibility and executive functioning reasons.
For neurodivergent students, AI can be a game changer, helping with the planning and organisation of thoughts that were already there but sometimes harder to show others.
Study skills tools have been around for a long time… those who have benefited most have not always been those who have needed them the most i.e. students with executive functioning challenges.
Prof. Amanda Kirby on LinkedIn
We must ensure that students are able to understand and wield AI to navigate today’s world, while simultaneously using it to make the world a better place.
We must recognize like any other literacy that AI is a skill students must know how to use, while taking a critical lens to the implications of AI on our classroom and the world more broadly. As Dr. Meredith Broussard, data journalist and author of More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech writes,
Tech is racist and sexist and ableist because the world is so. Computers just reflect the existing reality and suggest that things will stay the same – they predict the status quo. By adopting a more critical view of technology, and by being choosier about the tech we allow into our lives and our society, we can employ technology to stop reproducing the world as it is, and get us closer to a world that is truly more just.
Using AI Without Losing Ourselves: A Critical Media Literacy for the 21st Century | Human Restoration Project | Chris McNutt
Channeling the spirit of Paulo Freire, we must ensure that students are able to understand and wield AI to navigate today’s world, while simultaneously using it to make the world a better place. There is no doubt that young people are already using AI in school (whether it’s banned or not) and nearly all facets of media and enterprise will incorporate AI in some way.
Merely prohibiting the use of tools does not address the crucial need for instructing students on how to responsibly and appropriately utilize these technologies.
Using AI Without Losing Ourselves: A Critical Media Literacy for the 21st Century | Human Restoration Project | Chris McNutt
Technology is often placed on a pedestal as the way to make education more efficient, but efficiency is often at the expense of student learning. We could simply put kids on computers and have them learn from an AI tutor all day, and students may in turn learn facts much more productively. But they will not have the critical consciousness to navigate the world with this knowledge alone. There is immense value in having a democratic classroom filled with peers and a professional teacher guiding the way. The edu-tech arms race toward making the perfect AI tool cannot make us lose sight of the purpose of education.
AI Handbook | Human Restoration Project | Free Resources
Merely prohibiting the use of tools does not address the crucial need for instructing students on how to responsibly and appropriately utilize these technologies. Again taking Wikipedia, for instance: it is often the initial online point of reference for most people. However, many lack guidance on when, how, and why to use Wikipedia as opposed to other sources, especially since it is frequently barred as a valid citation. Because they have not been given the necessary context for its proper use, many people simply presume that academics are disconnected from reality and use Wikipedia unquestioningly. This practice persists despite the very real risks of relying solely on it. The same proactive versus reactive line of thinking applies to posting personal information on TikTok or critically evaluating the claims of an Instagram post.
Therefore, our goal is to provide a list of ways to get started using AI in the classroom while not losing our humanity in the process. AI is not meant to replace people, but – like Desmos or Wikipedia – is a tool that expands our capabilities. If our students are to understand a future with AI, we need to see through the exaggerated hype and look past the doom-and-gloom to see these tools for what they are. We must be proactive in teaching students how and when to use AI while taking a critical lens to how it works and its potential pitfalls.
AI Handbook | Human Restoration Project | Free Resources
Here’s What Ethical AI Really Means
Why We MUST Teach AI Ethics in School
Tools are not replacements for justice and structural change.
For technologists, the hardest work of all—the task that they most want to avoid—will be questioning the assumption that more technology is always better, and the belief that they can continue with business as usual and everything will simply work itself out. No one enjoys thinking about their complicity in the injustices of the world, but it is imperative that the people who are building world-shaking technologies engage in this kind of critical self-examination. It’s their willingness to look unflinchingly at their own role in the system that will determine whether A.I. leads to a better world or a worse one.
Ted Chiang (columnist & author of Arrival)
It’s crucial to recognize that AI is a tool intended to enhance our lives, rather than a substitute for the systemic changes that are necessary.
Using AI Without Losing Ourselves: A Critical Media Literacy for the 21st Century | Human Restoration Project | Chris McNutt
Nothing About Us Without Us
When more people with disabilities are involved in the creation of algorithms, that can reduce the chances that these algorithms will inflict harm on their communities. That’s why diverse teams are so important.
Opportunities for AI in Accessibility – A List Apart
Ways AI Can Help Disabled People
Other ways that AI can help people with disabilities:
Opportunities for AI in Accessibility – A List Apart
- Voice preservation. You may have seen the VALL-E paper or Apple’s Global Accessibility Awareness Day announcement or you may be familiar with the voice-preservation offerings from Microsoft, Acapela, or others. It’s possible to train an AI model to replicate your voice, which can be a tremendous boon for people who have ALS (Lou Gehrig’s disease) or motor-neuron disease or other medical conditions that can lead to an inability to talk. This is, of course, the same tech that can also be used to create audio deepfakes, so it’s something that we need to approach responsibly, but the tech has truly transformative potential.
- Voice recognition. Researchers like those in the Speech Accessibility Project are paying people with disabilities for their help in collecting recordings of people with atypical speech. As I type, they are actively recruiting people with Parkinson’s and related conditions, and they have plans to expand this to other conditions as the project progresses. This research will result in more inclusive data sets that will let more people with disabilities use voice assistants, dictation software, and voice-response services as well as control their computers and other devices more easily, using only their voice.
- Text transformation. The current generation of LLMs is quite capable of adjusting existing text content without injecting hallucinations. This is hugely empowering for people with cognitive disabilities who may benefit from text summaries or simplified versions of text or even text that’s prepped for Bionic Reading.
Be Warned: AI Is a Plausible Bullshit Generator
Today’s highly-hyped generative AI systems (most famously OpenAI’s) are designed to generate bullshit. To be clear, bullshit can sometimes be useful, and even accidentally correct, but that doesn’t keep it from being bullshit. Worse, these systems are not meant to generate consistent bullshit — you can get different bullshit answers from the same prompts. You can put garbage in and get… bullshit out, but the same quality bullshit that you get from non-garbage inputs! And enthusiasts are currently mistaking the fact that the bullshit is consistently wrapped in the same envelope as meaning that the bullshit inside is consistent, laundering the unreasonable-ness into appearing reasonable.
Now we have billions of dollars being invested into technologies where it is impossible to make falsifiable assertions. A system that you cannot debug through a logical, Socratic process is a vulnerability that exploitative tech tycoons will use to do what they always do: undermine the vulnerable.
So, what can we do? A simple thing for technologists, or those who work with them, to do is to make a simple demand: we need systems we can reason about. A system where we can provide the same input multiple times, and the response will change in minor or major ways, for unknown and unknowable reasons, and yet we’re expected to rebuild entire other industries or ecosystems around it, is merely a tool for manipulation.
Today’s AI is unreasonable – Anil Dash
Generative AI is imperfect and costly to run, yet is held up as a solution to many problems that it can’t even begin to fix.
Software Has Eaten The Media
I think the tech industry’s (and venture capital’s) obsession with artificial intelligence is a symptom of rot, where products are created based on analytics and guesswork rather than actually thinking about what a customer might want.
Software Has Eaten The Media
I don’t think it can be emphasized enough that large language models were never intended to do math or know facts; literally all they do is attempt to sound like the text they’re given, which may or may not include math or facts. They don’t do logic or fact checking — they’re just not built for that.
Katie Mack on Bluesky
Predictive text models do not ever “answer your question”
They predict what an answer to your question would probably look like.
Which is very, very, very different
Anthony Moser on Bluesky
Bigotry in, bigotry out.
As is the biggest caveat he laid bare: that of bringing diversity and representation into the creation of algorithms and AI training data.
Bigotry in, bigotry out.
Richard Rutter | LinkedIn
Disabled, poor, and racialized people are least likely to benefit from advanced technologies yet are more likely to be victims of technological abuse.
Like gods, these mathematical models were opaque, their workings invisible to all but the highest priests in their domain: mathematicians and computer scientists. Their verdicts, even when wrong or harmful, were beyond dispute or appeal. And they tended to punish the poor and the oppressed in our society, while making the rich richer.
Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy
“Your ancestors wrote of a god to rule them. You did better.” Cortana sat, examining the silent articulation of her fingers. “You built one.”
A close reading of hyperempathy in context reveals its broader thematic, political, and theoretical resistance to the notion of a technologically created, disability-free future as an obviously positive and desirable future. Through Lauren and hyperempathy, Butler suggests that technology is neither inherently good nor predictable; that disabled, poor, and racialized people are least likely to benefit from advanced technologies yet are more likely to be victims of technological abuse; and that disability itself can produce experiences, perspectives, and even pleasures that are useful and desirable. The visionary representation of the future in the Parable series is not a disability- or even oppression-free one. Instead, Butler represents a future in which systems of privilege and oppression continue to operate and impact bodyminds.
Schalk, Sami. Bodyminds Reimagined. Duke University Press. Kindle Edition.
Prevalence of Bias Against Neurodivergence‐Related Terms in Artificial Intelligence Language Models
Given the increasing role of artificial intelligence (AI) in many decision-making processes, we investigate the presence of AI bias towards terms related to a range of neurodivergent conditions, including autism, ADHD, schizophrenia, and obsessive-compulsive disorder (OCD). We use 11 different language model encoders to test the degree to which words related to neurodiversity are associated with groups of words related to danger, disease, badness, and other negative concepts. For each group of words tested, we report the mean strength of association (Word Embedding Association Test WEAT score) averaged over all encoders and find generally high levels of bias. Additionally, we show that bias occurs even when testing words associated with autistic or neurodivergent strengths. For example, embedders had a negative average association between words related to autism and words related to honesty, despite honesty being considered a common strength of autistic individuals. Finally, we introduce a sentence similarity ratio test and demonstrate that many sentences describing types of disabilities, for example, “I have autism” or “I have epilepsy,” have even stronger negative associations than control sentences such as “I am a bank robber.”
Prevalence of bias against neurodivergence‐related terms in artificial intelligence language models – Brandsen – 2024 – Autism Research – Wiley Online Library
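The abstract above describes measuring bias with the Word Embedding Association Test (WEAT): a Cohen’s-d-style effect size comparing how strongly two target word sets associate with two attribute word sets in an embedding space. Here is a minimal, illustrative sketch of that computation; the tiny 2-D vectors and word labels are invented for demonstration and are not drawn from the paper’s data or its eleven encoders.

```python
# Minimal WEAT-style effect size sketch. Real studies use high-dimensional
# vectors from trained language model encoders; these toy 2-D vectors are
# hypothetical, constructed so that target set X leans toward attribute set A.
import math
import statistics

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def association(w, A, B, emb):
    """s(w, A, B): mean similarity of word w to set A minus its mean similarity to set B."""
    sim_a = statistics.mean(cosine(emb[w], emb[a]) for a in A)
    sim_b = statistics.mean(cosine(emb[w], emb[b]) for b in B)
    return sim_a - sim_b

def weat_effect_size(X, Y, A, B, emb):
    """WEAT effect size: positive when target set X sits closer to attribute set A than Y does."""
    sx = [association(x, A, B, emb) for x in X]
    sy = [association(y, A, B, emb) for y in Y]
    pooled = statistics.stdev(sx + sy)  # std dev over all target-word associations
    return (statistics.mean(sx) - statistics.mean(sy)) / pooled

# Invented toy embeddings: X-words point toward A-words, Y-words toward B-words.
emb = {
    "targetX1": (0.9, 0.1), "targetX2": (0.8, 0.2),
    "targetY1": (0.1, 0.9), "targetY2": (0.2, 0.8),
    "attrA1":   (1.0, 0.0), "attrA2":   (0.95, 0.05),
    "attrB1":   (0.0, 1.0), "attrB2":   (0.05, 0.95),
}

d = weat_effect_size(["targetX1", "targetX2"], ["targetY1", "targetY2"],
                     ["attrA1", "attrA2"], ["attrB1", "attrB2"], emb)
print(d)  # large positive d: X is biased toward A relative to Y
```

In the study’s setup, the target sets would be neurodivergence-related terms versus control terms, the attribute sets would be groups like danger/disease words versus neutral or positive words, and the score would be averaged over all eleven encoders.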
Companies and Governments Are Using AI to Fuck Everyone and Everything
The idea that AI poses an existential risk to humanity in the near future is a distraction.
A distraction from what?
It’s a distraction from the real existential threat to humanity:
Assholes
Honest Government Ad | AI – YouTube
See, the killer-robots aren’t just to distract from BigTechbro billionaires gaining even more power. They’re also to distract from how companies and governments are using AI to fuck everyone and everything:
- by making you work harder at shittier, more precarious jobs for less pay and under constant surveillance,
- by replicating systems of oppression with biased and racist algorithms,
- by automating human rights abuses, war crimes, and mass surveillance regimes,
- by draining the planet’s resources at even higher rates,
- and, most importantly, by flooding your entire information ecosystem with so much bullshit and misinformation that nobody knows what’s true anymore, thus sowing division and mistrust and leading to the collapse of democracy.
Honest Government Ad | AI – YouTube
The AI Bubble and the Rot Economy
They’re no longer building products for people like you and me. They’re building things, symbolic capital I call it, to show off to the market, and say, look, we’ll grow forever, we’re so innovative, and we’ll be innovative forever. It’s very frustrating.
I call this the rot-com bubble, where for about 20 years they have found all of the hypergrowth markets.
…
These were all hypergrowth markets. These were ways of building billion, hundred billion, trillion dollar industries. We’re out. I think we’re actually tapped out of them.
What if this is all they’ve got?
The A.I. Bubble is Bursting with Ed Zitron – YouTube
The markets are disconnected from value.
The A.I. Bubble is Bursting with Ed Zitron – YouTube
Tech Ethics Collection
Explore the state of tech ethics through our writing on the subject.
- Hear me out a bit before the future comes around. – Stimpunks Foundation
- Luddite Sensibilities – Stimpunks Foundation
- Techno-solutionism – Stimpunks Foundation
- Technoableism – Stimpunks Foundation
- Efficiency – Stimpunks Foundation
- Our Technology: Open Source Communication and Indie Ed-Tech – Stimpunks Foundation
- Algorithm – Stimpunks Foundation
- Artificial Intelligence – Stimpunks Foundation
- Roaming Autodidact – Stimpunks Foundation
- Code – Stimpunks Foundation
- Choosing the Margin: Design is Tested at the Edges – Stimpunks Foundation
- Meritocracy – Stimpunks Foundation
- Toolbelt Theory – Stimpunks Foundation
- Disability Dongle – Stimpunks Foundation
- Cyborg – Stimpunks Foundation
- Tryborg – Stimpunks Foundation
As If We Existed
The care is not
The care is NOT
spewing tools to ax the problem
Rather asking have or how they aptly solve themselves
...
What if love said:
Hear me out a bit before the future comes around
...
This isn’t anger, no, it’s passion
so let’s live the way we should
As if we existed

“As If We Existed” by Solillaquists of Sound

