engineered exclusion—the predictable sidelining of disabled users resulting from choices about data provenance, model objectives, and evaluation practices within AI systems.
Engineered exclusion differs from adjacent concepts in AI ethics such as algorithmic bias and algorithmic oppression (Noble 2018; Benjamin 2019) in that it does not primarily describe disparate error rates or representational harms across demographic groups. Rather, it names a patterned infrastructural misalignment between a system's assumptions and communicative forms that fall outside dominant data distributions. Where bias frameworks typically focus on inequitable outputs across predefined categories, engineered exclusion highlights how entire communicative modalities—multimodal, state-dependent, AAC-mediated—can be structurally unrecognized at the level of training data, model objectives, and evaluation benchmarks. A speech model trained and benchmarked solely on fluent, continuously paced speech, for instance, has no representational category for dysarthric or AAC-paced input; such input is not so much mis-scored as never modeled at all. In this sense, engineered exclusion extends fairness discourse but is not reducible to it, emphasizing pipeline architecture rather than outcome disparities alone (Benjamin 2019; Whittaker et al. 2021).
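To make the training-data stage of this argument concrete, consider a minimal, hypothetical sketch of a routine corpus "quality" filter. Everything here is an illustrative assumption rather than any deployed system's code: the Utterance record, the toy_fluency_score heuristic (a stand-in for the perplexity- or fluency-based filters common in corpus curation), and the 0.5 threshold are invented for exposition.

```python
# Hypothetical sketch: how a generic "quality" filter in a data pipeline
# can enact engineered exclusion. All names, thresholds, and the toy
# fluency heuristic are illustrative assumptions, not a real system's code.

from dataclasses import dataclass


@dataclass
class Utterance:
    text: str
    source: str  # e.g., "forum", "aac_device", "transcript"


def toy_fluency_score(u: Utterance) -> float:
    """Stand-in for a perplexity- or fluency-based quality score.

    Rewards longer, conventionally structured sentences; telegraphic
    AAC-style utterances ("want water now") score low by construction.
    """
    words = u.text.split()
    has_function_words = any(w.lower() in {"the", "a", "is", "to"} for w in words)
    return min(len(words) / 10, 1.0) * (1.0 if has_function_words else 0.4)


def filter_corpus(corpus: list[Utterance], threshold: float = 0.5) -> list[Utterance]:
    """The exclusion happens here: one innocuous-looking comparison."""
    return [u for u in corpus if toy_fluency_score(u) >= threshold]


corpus = [
    Utterance("The meeting is scheduled for three o'clock tomorrow.", "forum"),
    Utterance("want water now", "aac_device"),
    Utterance("pain leg bad help", "aac_device"),
]

# Only the conventionally "fluent" sentence survives; every AAC-mediated
# utterance is silently dropped before training, so the model never sees
# this communicative register at all.
for u in filter_corpus(corpus):
    print(u.source, "|", u.text)
```

The point of the sketch is that no one decides to exclude AAC users: the exclusion falls out of a generic threshold applied to a heuristic that encodes normative fluency. This is what it means for exclusion to be engineered at the level of the pipeline rather than expressed as an output disparity.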
What results is a form of algorithmic double empathy failure: the model treats the user as incoherent, while the user experiences the model as unresponsive. Both “sides” operate from incompatible assumptions about what communication should look like. Reframing engineered exclusion in this way highlights that accessibility is not simply a matter of adding more data but of designing systems capable of reciprocal understanding—systems that learn to adapt to diverse communicative logics rather than forcing users into legibility.
