Autism Answers Back

When the Algorithm Sees a Problem in Your Face

A critique of "Deep learning for the identification of autism traits in children through facial expressions: a systematic review" by Daniella Romani Palomino, Christian Ovalle, and Heli Cordova-Berona (Universidad Tecnológica del Perú, Lima, Perú), published in the International Journal of Electrical and Computer Engineering (IJECE), June 2025.


A new paper claims to systematically review how deep learning can be used to "identify autism traits in children through facial expressions."

Let that sentence settle.

Not understand. Not engage. Not support.

Identify. Through faces. Using machines.

The authors — Palomino, Ovalle, and Cordova-Berona — argue that facial image analysis via neural networks offers a way to automate autism diagnosis without human oversight. They frame traditional diagnostic methods as inefficient, subjective and slow. In contrast, they praise AI as fast, scalable and more accurate than clinical observation.

This is not neutral science. This is surveillance with code.

The paper opens by defining autism as a neurological disorder that creates a lifelong burden on individuals, families, education and society. It does not cite autistic scholars. It does not acknowledge autistic life beyond diagnosis. It never considers the ethical, social or existential implications of turning children's faces into diagnostic targets.

What traits are being flagged? The paper doesn’t say. For what purpose? Early detection. And what comes after detection? Nothing about support. Only detection, accuracy and technological scalability.

According to the authors, “deep learning models have achieved detection rates of around 90%” using facial expression data. The models mentioned include ResNet, Xception and other convolutional neural networks (CNNs), trained on publicly available images of children — images likely scraped or used without meaningful consent.
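
And it is worth seeing how little machinery this takes. The sketch below is not taken from the paper; it is a generic, hypothetical illustration of the kind of pipeline such studies describe, assuming an off-the-shelf Xception backbone in Keras and an imagined folder of face photos sorted into two labels. Every name in it is mine, not theirs.

    # A hypothetical sketch of the kind of pipeline these studies describe:
    # fine-tune a pretrained convolutional network to output a binary
    # "autistic / non-autistic" score from a photograph of a child's face.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    # A folder of face crops sorted into two subfolders becomes a binary
    # classification dataset. No clinician, no consent step, anywhere.
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "faces/",                # hypothetical directory of face images
        image_size=(224, 224),
        batch_size=32,
        label_mode="binary",
    )

    # Off-the-shelf Xception backbone, pretrained on ImageNet.
    backbone = tf.keras.applications.Xception(
        include_top=False, weights="imagenet", pooling="avg"
    )
    backbone.trainable = False   # freeze it; train only the final layer

    model = models.Sequential([
        layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
        backbone,
        layers.Dense(1, activation="sigmoid"),     # a face becomes a score
    ])

    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(train_ds, epochs=5)

Roughly thirty lines, most of them borrowed. That is the distance between a scraped photograph and a number that gets called a diagnosis.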

This isn’t just about access. It’s about power.

A study like this doesn’t need to say the word "cure" to be dangerous. It feeds a pipeline that turns difference into deviance, and deviance into detection. And once something can be detected, it can be flagged. Removed. Prevented.

The authors present this as progress. But a system that uses facial features to classify human beings is not progress. It is regression disguised as innovation.


And here’s the deeper irony:

For decades, autistic people have been told we lack empathy. That we can’t read faces. That we misinterpret emotion.

But now researchers, and the systems they build, are doing exactly that to us.

They point a camera at our expressions, and call the result a "trait."

They train an algorithm on neurotypical assumptions, and call it insight.

They flatten human behavior into datasets, and call it understanding.

Autistic people know what this is.

It’s not curiosity. It’s containment.

And when that containment is published without our input and built without our consent? That’s not just exclusion.

It’s erasure.


Here’s the truth:

You don’t need to see our faces to understand us. You need to listen.

You don’t need to detect autistic traits. You need to stop treating them like errors.

And if your science can’t bear public scrutiny — if it can’t stand the eyes of the people it studies — then maybe the problem isn’t in our expressions. Maybe it’s in yours.

#autism-research #bioethics #facial-recognition #participatory-research #surveillance #systemic-inequality