Autism Answers Back

When Autism Becomes a Classification Problem

What AI-driven diagnosis forgets about human dignity

In the language of machine learning, there is a new success story. In terms of autistic lived experience, though, it's another case of being modeled before being heard.

A new study by Tathagat Banerjee of the Department of Computer Science and Engineering at the Indian Institute of Technology Patna, published in the International Journal of Developmental Neuroscience, introduces a deep learning model (AKAttNet) paired with a feature selection algorithm (EIA) to classify autism from publicly available screening data. It boasts up to 98% accuracy across multiple datasets and promises a future where autism can be "detected" faster, earlier, more objectively.

It doesn’t use the word cure. It doesn’t cite RFK Jr. It doesn’t make grand philosophical claims. It simply optimizes a system and reports the metrics.

And that’s exactly the problem.

Because when autism becomes a classification task, the human stakes get buried beneath the model architecture. The study never asks what this kind of system might do to the people it claims to serve. It just asks how well it performs.

And performance, in this paradigm, means: how efficiently we can label the autistic child.

The Frame: Accuracy Over Autonomy

The paper is obsessed with metrics: accuracy, kappa scores, Jaccard similarity. It never once mentions consent, the cost of misclassification, or the autistic people behind the data points.
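For readers outside machine learning, those are standard classification scores. A minimal sketch, using toy labels and ordinary scikit-learn calls rather than anything from the study itself, shows what they summarize and what they leave out:

```python
# Toy labels only; none of these numbers come from the paper.
from sklearn.metrics import accuracy_score, cohen_kappa_score, jaccard_score

# 1 = flagged "autistic" by the screening tool, 0 = not flagged
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]   # hypothetical "ground truth"
y_pred = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]   # a model's guesses; two people misread

print("accuracy:", accuracy_score(y_true, y_pred))      # 0.8
print("kappa:   ", cohen_kappa_score(y_true, y_pred))   # chance-corrected agreement
print("jaccard: ", jaccard_score(y_true, y_pred))       # overlap of positive labels

# Each score averages away the two individuals who were misread.
# Nothing in these numbers records what that error costs them.
```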

It treats behavioral screening data — mostly multiple-choice questionnaires from children and adults — as clean input, not context-dependent communication. It frames current diagnostic tools as flawed because they rely on "human interpretation," then offers AI as the solution — a faster, cheaper, data-driven way to decide who is autistic.

But when you replace human interpretation with automated certainty, you don’t eliminate bias. You obscure it. You bury it in training data, abstract it through neural weights and scale it across institutions without ever asking: What if the problem isn’t speed, but framing?

Autism as a Cost, Not a Culture

Before the algorithm is even introduced, the paper lays the groundwork for urgency. It cites CDC prevalence rates. It lists the financial burden of autism on families and systems — up to $2.4 million per person — without asking whether those "costs" reflect systemic inaccessibility more than individual deficit.

This is not incidental. It is the logic of classification at scale. If autism is framed as a rising prevalence figure, a financial burden, a cost to be contained, then AI diagnosis becomes a form of optimization: not for autistic people, but for the systems that manage them.

The goal is not understanding. It’s sorting. Efficiently. Permanently.

The Digital Scarlet Letter "A"

The model is trained on four datasets, including adult and child screening tools from UCI and Kaggle. These tools are based on binary answers to questions like "Does the person find it hard to make eye contact?" or "Does the child enjoy social games?"

They’re imperfect instruments — built around neurotypical norms — yet here, they become the ground truth for a machine. There is no discussion of nuance. No accounting for race, class, gender or trauma. No margin for cultural context, internalized masking or different modes of expression.
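To make that concrete, here is a purely hypothetical illustration of what a model like this actually receives. The column names loosely echo the public UCI and Kaggle screening sets; every value is invented:

```python
import pandas as pd

# One hypothetical row of "clean input". Column names loosely follow the
# public AQ-10-style screening datasets; the values are made up.
person = pd.DataFrame([{
    "A1_Score": 1,   # e.g. "finds it hard to make eye contact"
    "A2_Score": 0,   # e.g. "enjoys social games"
    "A3_Score": 1,
    # ... seven more yes/no items ...
    "age": 6,
    "gender": "f",
}])

print(person)

# This row is all the model ever sees. Masking, culture, trauma, and how
# the questions were asked are not columns, so they can never be features.
```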

And if the algorithm gets it wrong?

There is no feedback loop. No voice from the autistic person whose identity is being algorithmically interpreted. No ethical reflection on what it means to be misclassified, misrepresented or medically labeled by a black box.

Because in this paradigm, the model is the expert. The autistic person is the data.

From Diagnosis to Infrastructure

This isn’t just about one paper. It’s about the infrastructure it helps build.

A tool like this will be marketed as efficiency. It will be adopted by overwhelmed clinics, under-resourced schools and health systems hungry for automation. And every time it gets one "right," it will reinforce the idea that autism is best understood through pattern recognition — not conversation.

But who decides which patterns count?

Who interprets the signal?

Who benefits from this kind of early detection — and who bears the cost when the system is wrong?

These are not engineering questions. They are ethical ones. And this paper doesn’t ask a single one.

A Better Frame

Autism classification is not inherently evil. But it is never neutral.

If we build tools that define autism without autistic input, we’re not creating solutions. We’re hardcoding exclusion. Even with an optimized feature-selection process like EIA, reducing dimensionality doesn’t reduce ethical responsibility. You can refine input without reflecting on whose patterns you’re modeling — or why.
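For illustration only, a minimal sketch of a generic feature-selection step (a stock scikit-learn selector standing in, since EIA itself is not reproduced here) shows how mechanical that refinement is:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Synthetic stand-in: 200 people, 10 yes/no screening items, made-up labels.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 10))
y = (X[:, 0] | X[:, 3]).astype(int)

# Keep the 5 items that best predict the existing labels.
selector = SelectKBest(mutual_info_classif, k=5).fit(X, y)
print("items kept:", selector.get_support(indices=True))

# The selector ranks items by how well they predict the labels it is given.
# It has no way to ask whether those items, or the labels themselves,
# encode anything beyond neurotypical expectations.
```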

If we chase accuracy without understanding what’s being measured, we’re not helping. We’re optimizing for compliance. If we describe autism only through deficit and burden, then no matter how elegant our algorithms, the story we’re telling is the same one we’ve always told:

That autism is a problem to detect, not a person to include, to support when needed, and to respect without erasing their agency.

This study doesn’t speak cruelty. But it speaks convenience. And in the wrong hands, that’s just as dangerous.

We don’t need faster classification.
We need slower science.
We need participatory ethics.
We need systems that don’t just sort us — but see us.

Because the next time an autistic child gets flagged by a neural net, I want someone in the room who remembers:
They are not a pattern.
They are a person.

#AI-and-autism #autism-research #diagnostic-industrial-complex #narrative-justice #pathology-as-default #research-ethics #surveillance-technologies