Autism Answers Back

Classification Is Not Care — Especially When the System Calls It That

Autism isn’t a disease. But to read a recent study published in the SAIEE Africa Research Journal, you’d think it was a pathology just waiting for the right algorithm.

The study, titled “Autism spectrum disorder detection using parallel DCNN with improved teaching learning optimization feature selection scheme,” was authored by Trupti Dhamale, Sonali Bhandari, Vishal Harpale, Priyanka Sakhi, Kavita Napte and Vikas Karandikar of Pune Institute of Computer Technology in India.

The paper proposes a neural network pipeline to detect autism from fMRI data in the ABIDE-I dataset. The authors call their model “optimized.” They say it improves early diagnosis. They link detection to health, well-being and care.

What they never mention is a single autistic person.

Not in the data. Not in the design. Not in the discussion of impact.

And that absence isn’t a mistake. It’s the design.

What the Paper Actually Does

The study claims to do the following: train a parallel deep convolutional neural network on fMRI scans from the ABIDE-I dataset, select features with an “improved teaching learning optimization” scheme, and classify autistic and non-autistic brains with a reported accuracy of about 96%.

The math is clean. The harm is buried in the framing.
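
To see how little a pipeline like this actually says, here is a minimal, hypothetical sketch of a two-branch (“parallel”) CNN classifier over preprocessed fMRI-derived features. It is not the authors’ model; the branch design, layer sizes and input format are assumptions made only for illustration. The point is what it outputs: one label per scan, and nothing else.

```python
# Illustrative sketch only: a generic two-branch ("parallel") 1D CNN classifier
# over preprocessed fMRI-derived feature vectors. This is NOT the published
# model; layer sizes, branch design, and input format are hypothetical.
import torch
import torch.nn as nn


class ParallelCNNClassifier(nn.Module):
    def __init__(self, in_channels: int = 1, num_classes: int = 2):
        super().__init__()
        # Two convolutional branches with different kernel sizes,
        # concatenated before the classification head.
        self.branch_small = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.branch_large = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = self.branch_small(x).flatten(1)          # (batch, 16)
        b = self.branch_large(x).flatten(1)          # (batch, 16)
        return self.head(torch.cat([a, b], dim=1))   # (batch, num_classes)


if __name__ == "__main__":
    # Stand-in batch: 4 subjects, 1 channel, 200 preprocessed features each.
    model = ParallelCNNClassifier()
    logits = model(torch.randn(4, 1, 200))
    print(logits.shape)  # torch.Size([4, 2]): a label per subject, nothing more
```

Whatever the real architecture looks like, its output is the same kind of object: a binary decision about a person, with no support, context or consent attached.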

Who Gets to Define “Well-Being”?

To exhausted moms still fighting for services, this may sound like hope. Faster detection? Earlier help? Maybe this tool will see what the school doesn’t.

But here’s the truth: systems that promise efficiency without inclusion don’t deliver help. They deliver sorting.

You’ve waited 18 months for someone to take your child seriously. This model offers 96% certainty — but certainty of what? That a scan can capture your kid’s future? That a label ensures access? Or that the system is one step closer to screening without listening?

Families are promised care — and delivered a label with no support behind it.

Autism detection without autistic input doesn’t create support. It creates silence dressed as certainty.

To the Researchers Who Wrote This

You describe autism as a disease that “impairs mental strength.”

You write that it “prevents learning, developing, controlling, interacting.”

You claim your work “improves quality of life” — without ever asking what quality means to those of us you’re trying to diagnose.

Your system is accurate. But accurate classification of a population you’ve erased from your design isn’t innovation.

It’s diagnostic extraction.

You extracted patterns from our scans like there was no one on the other end.

But someone was.

Someone is.

If your pipeline can predict autism but not name a single autistic collaborator — what exactly have you optimized?

To the Researchers in the Room Watching This Happen

Maybe you’re not like them. Maybe your paper has a stakeholder section. Maybe you’ve thought about harm.

Then take this as a mirror: accuracy does not absolve you.

The tools you build will be used beyond your intentions — in school gatekeeping, in immigration risk models, in predictive sorting.

If you don’t say what your model isn’t for, someone else will decide.

To the Professionals Who Fund, Cite and Scale This Work

Ask yourself: who defines success here?

If well-being is measured in detection rates and not in dignity, you’re not funding care. You’re automating compliance.

You don’t need to be anti-tech. But you need to be pro-truth.

Because what gets built in these journals becomes infrastructure. And infrastructure decides who gets seen, who gets sorted and who gets shut out.

What Should Have Been Asked

Were autistic people part of the design, the data governance or the definition of success? Did anyone consent to their scans being used this way? What does well-being mean to the people being classified? And what happens to a child once your model flags them?

Until those questions are central, your model isn’t supporting us.

It’s scanning us.

And when it scans us without consent, without care and without context — it doesn’t just misread who we are.

It clears the way for systems that were never built to hold us at all.

#AI-ethics #autism-research #data-consent #diagnostic-bias #medical-framing #participatory-research