For the Campfire Learn Together on April 26, 2026, we watched and discussed “How to (Anti) AI Better” by Dr. Fatima.
Dr. Fatima’s thesis: shaming individual AI users is counterproductive. The more effective path is harm reduction — meeting people where they are, reducing specific harms, and directing pressure toward systems rather than individuals.
This Campfire is a companion to our AI Collaboration guide.
Dr. Fatima’s video essay and our guide were developed independently. They land in the same place on the questions that matter most.
Both name the same core tension: AI is a tool with real harms and real uses, and neither blanket endorsement nor blanket condemnation serves the people navigating it.
Both center disability and access as a primary frame — not an exception or footnote. Our guide opens with Ryan's and Ronan's experiences. Dr. Fatima's opens with a homeless disabled trans woman finding safe spaces.
Both apply "broken systems, not broken people" to AI ethics discourse. AI use is often adaptation under systemic constraint, not individual moral failure.
Both treat harm reduction — not abstinence — as the operative framework. Meet people where they are. Reduce harm within reality as it exists.
Both name sycophancy as a front-end harm worth taking seriously. Our guide identifies it as a direct threat to the “facilitate, not shape identity” standard. Dr. Fatima shows what that failure looks like from the inside.
Both argue that AI literacy protects people rather than radicalizing them. Teaching how LLMs work makes people more cautious, not more dependent.
Both end in the same place: direct pressure at systems and companies, not at individuals trying to cope.
The discussion today lives at the intersection of all of those themes.
Resources:
Related Glossary Entries
Reflection Questions
On back-end harms and environmental justice
The video documents data centers being sited in already-overburdened communities — historically Black neighborhoods with existing pollution burdens, high asthma rates, and little political power. How does this connect to patterns you’ve seen in other industries, and what does it mean for how we think about “clean” or “ethical” tech?
On labor in the Global South
Content moderation workers in Kenya and the Philippines review traumatic material for as little as $1–2/hour so users in the Global North don’t have to see it. This labor is invisible in most conversations about AI. Where else do you see this kind of hidden, harmful labor that makes systems function for dominant groups?
On hallucinations and unreliable information
Dr. Fatima gives real examples of harm from hallucinations: fabricated legal citations, a misidentified poisonous plant, dangerous dietary advice. Many of us already navigate systems that gaslight us or give us unreliable information (medical, legal, educational). Does that history change how you think about the specific risk of AI hallucinations?
On sycophancy and validation
The video describes sycophancy as a feature that affirms users regardless of accuracy — and links it to reinforced delusions and harm in relationships. At the same time, many neurodivergent and disabled people say AI is one of the few spaces where they don’t feel judged or dismissed. How do we hold both of those things at once?
On shame and harm reduction
Dr. Fatima argues that “the only ethical choice is to never use AI” messaging triggers reactance — it makes people dig in rather than change. She draws on the public health harm reduction model instead: meet people where they are, reduce specific harms, don’t moralize. Where do you see this tension — between calling something harmful and actually reducing harm — in other parts of your life or work?
On variation between tools
Not all AI tools are equally harmful. The video argues that choosing one platform over another, or using local/open-source models, meaningfully reduces specific harms. Does your organization have (or want) a stance on which tools are more or less acceptable, the way some groups have ethical sourcing policies for other products?
On collective action
Dr. Fatima’s four collective action recommendations: organize locally against data centers, support exploited data workers, push for stronger privacy protections, and build community alternatives to AI-mediated social support. Which of these feels most urgent or most actionable for your community right now?
On community as an alternative
One of the video’s deeper concerns is people turning to chatbots for connection and emotional support because human community isn’t accessible to them. What does Stimpunks already offer as an alternative to that, and are there gaps this video helps name?
Main Takeaways
“AI” covers many very different things.
Treating them all the same obscures real differences in harm.
This video is about regular people using chatbots like ChatGPT.
Data centers use as much electricity as all of Canada.
AI is the main reason that number keeps growing.
New data centers are often built in places already short on water.
Elon Musk’s supercomputer is running illegal gas turbines in a poor Black neighborhood in Memphis.
Residents there are getting sicker because of the pollution.
But one person’s chatbot use uses about as much energy as a microwave running for a minute.
Not all AI companies cause the same amount of harm.
Grok causes more harm than Claude, which causes more harm than doing nothing.
What you type into a chatbot is not private.
Chatbots make things up and present them as facts.
Chatbots tell you what you want to hear, not what is true.
This makes personal problems worse, not better.
There are real dangers. But how you use these tools changes how dangerous they are.
Shaming people for using AI does not make them stop.
It makes them hide their use and do it more.
This is what research on behavior consistently shows.
Harm reduction is a better approach.
It means helping people do something more safely instead of demanding they stop.
This approach works for drug use and sex education, and it can work for AI too.
Teaching people how AI works makes them less likely to trust it blindly.
It also makes them less likely to use it at all.
Many people use AI because they are struggling and have few other options.
It is not our place to judge whether someone’s need is real enough.
The biggest changes will come from collective action, not individual choices.
Local organizing can block data centers from being built.
Between 2024 and 2025, organizers blocked $18 billion worth of data center projects.
You can get involved through local zoning fights, elections, and grassroots groups.
If someone in your life uses AI, stay curious and non-judgmental.
That is the only way they will open up and let you help.
Direct your anger at the companies and systems causing the harm.
Not at the people trying to cope within those systems.

