The Face, the Brain, the Lie: When AI Diagnosis Scans for Autism but Sees No One
A new paper by B. Shanmathi, S. Kannagi, N. Thamilarasi, Ferin Kingsly M, Neerukattu Madhan Kumar and Sunnapugunta Venkatesh from the Velammal Institute of Technology claims to have built a diagnostic AI system that can detect autism using facial scans, electroencephalogram (EEG) signals and magnetic resonance imaging (MRI). They call it efficient. Objective. Scalable. A “promising step forward.”
It’s not. It’s a step further down the road of erasure, just paved with better math.
The Promise: Automation Without Subjectivity
The researchers use a convolutional neural network (EfficientNet) to combine three data streams: brain imaging (MRI and functional MRI, or fMRI), EEG signals and facial expressions. The model sits behind a web-based platform built with Django, where clinicians upload data and receive real-time predictions of ASD likelihood.
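The paper does not publish code, so the exact architecture is not public. As a rough illustration of what a multimodal fusion classifier of this kind typically looks like, here is a minimal sketch, assuming Keras, an EfficientNetB0 backbone shared across the imaging branches and a small dense encoder for pre-extracted EEG features. The function name `build_fusion_model`, the input sizes and the layer widths are placeholders, not details from the paper.

```python
# Hypothetical sketch of a multimodal "ASD likelihood" classifier, assuming
# Keras; this is an illustration of the general technique, not the authors' code.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_fusion_model(img_shape=(224, 224, 3), eeg_features=64):
    # Imaging branch: one EfficientNetB0 backbone, reused (shared weights)
    # for MRI slices and face images; the paper may use separate backbones.
    backbone = tf.keras.applications.EfficientNetB0(
        include_top=False, weights=None, input_shape=img_shape, pooling="avg"
    )

    mri_in = layers.Input(shape=img_shape, name="mri")
    face_in = layers.Input(shape=img_shape, name="face")
    eeg_in = layers.Input(shape=(eeg_features,), name="eeg")

    mri_vec = backbone(mri_in)    # embedding of the MRI slice
    face_vec = backbone(face_in)  # embedding of the face image
    eeg_vec = layers.Dense(128, activation="relu")(eeg_in)  # EEG encoder

    # Late fusion: concatenate the three embeddings and classify.
    fused = layers.Concatenate()([mri_vec, face_vec, eeg_vec])
    fused = layers.Dense(256, activation="relu")(fused)
    prob = layers.Dense(1, activation="sigmoid", name="asd_likelihood")(fused)

    model = Model(inputs=[mri_in, face_in, eeg_in], outputs=prob)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_fusion_model()
model.summary()
```

A Django view would then wrap a call like `model.predict()` behind an upload endpoint. The point of the sketch is how thin the layer is between "scan uploaded" and "label returned."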
In other words: you upload a scan of someone’s brain or face and a machine tells you if they’re autistic.
The team reports 91.46 percent accuracy, evaluated with CNN models on publicly sourced datasets. They frame this as a leap forward: faster, more consistent and less "subjective" than conventional behavioral assessments.
The Reality: A Pipeline of Precision Harm
Let’s not be distracted by the stats. The problem here isn’t the model’s performance.
It’s the assumptions it encodes — and the systems it serves.
1. Autism is framed as defect, not difference.
The paper opens with this: “ASD is described as a neurodevelopmental disorder with impairments in communication, social interaction and behavior.”
That’s not a hypothesis. It’s a declaration. One the model is built to enforce — at scale and without friction.
There’s no room in this framework for autistic selfhood, culture, adaptation or consent. Just symptoms to be detected. Patterns to be flagged. Brains to be sorted.
2. Multimodal surveillance becomes medical certainty.
Facial recognition, EEG scans and MRIs are not neutral tools. They are deeply invasive technologies with long histories of misuse — especially against marginalized bodies.
To fuse them into a single classification system is not just innovative. It’s dangerous.
Because when machines read our expressions as “flat,” our EEGs as “atypical,” our brains as “deviant” — and label us accordingly — they aren’t just describing. They’re deciding.
3. There is no autistic perspective. Anywhere.
This paper does not ask what these traits mean. It asks only how well they can be modeled.
There are no autistic co-authors, no consideration of lived experience, no ethics section and no questions about misclassification, coercion or consent.
Just a pipeline: input the brain, output the label. Autistic or not. High risk or low.
This isn’t a diagnostic tool. It’s a sorting mechanism — and it works best when we don’t speak.
What’s Being Optimized?
Let’s be honest: these models aren’t built for autistic people. They’re built for systems that want faster, cheaper and more scalable answers.
And when efficiency becomes the metric, humanity becomes the cost.
Toward a Better Question
We don’t oppose innovation. But we oppose innovation that erases the people it claims to help.
So here’s what we ask:
- Who defines what autism looks like in these datasets?
- Who gets to decide what counts as “real” autism?
- And what happens to those of us whose brains — or faces — don’t conform?
If a tool can scan a face but not see the person behind it, it doesn’t belong in care. It belongs in a report to the Ethics Board — and a reckoning with the systems that built it.
And if we’re building systems to detect autism without autistic people at the center, we’re not diagnosing.
We’re disappearing.