Autism Answers Back

When Motion Becomes Diagnosis: Kinematics, AI, and the Rise of Algorithmic Ableism

Imagine you're in a pre-screening for a job interview. You're answering questions clearly. You've done your research. You're qualified.

What you don’t know is that your motion is being recorded — subtle shifts in your hand, the arc of your wrist, the timing of your reach for a glass of water. A hidden AI system is analyzing those movements in real time. And before you even finish speaking, it has flagged you:

"Not the right fit."

Not because of your résumé. Not because of your answers. But because your movement didn’t match the neurotypical baseline.

This isn’t science fiction or a scene from a 1984 sequel called Big Brother Goes Corporate. It’s a foreseeable future. And the latest study from Khoshrav et al. — "Deep learning diagnosis plus kinematic severity assessments of neurodivergent disorders" — is building the scaffolding.

What the Study Claims

The researchers attached motion sensors to participants performing a simple touchscreen-reaching task. Using deep learning, they trained a model to identify whether someone was autistic, had ADHD, had both, or was neurotypical. It worked, they say, with about 71% accuracy.

But they didn’t stop at prediction. They also measured tiny fluctuations in motion — and assigned those fluctuations statistical values like entropy and Fano Factor. They claim these are biometrics of severity. More variability? More severe.
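For readers unfamiliar with these terms: the severity metrics the paper leans on are ordinary descriptive statistics, not autism-specific biomarkers. A minimal sketch of how they are typically computed (the interval data below is invented for illustration, not taken from the study):

```python
import numpy as np

def fano_factor(x):
    """Fano Factor: variance-to-mean ratio of a series.
    Values above 1 mean more variability than a Poisson process."""
    x = np.asarray(x, dtype=float)
    return x.var(ddof=1) / x.mean()

def shannon_entropy(x, bins=10):
    """Shannon entropy (in bits) of a histogram of the series."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

# Hypothetical inter-touch intervals (seconds) from a reaching task
intervals = np.array([0.42, 0.45, 0.40, 0.48, 0.43, 0.52, 0.41, 0.46])
print(fano_factor(intervals))
print(shannon_entropy(intervals, bins=4))
```

Note what the sketch makes plain: these numbers measure only how spread out a signal is. The leap from "more variability" to "more severe" is an interpretive choice, not something the math itself delivers.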

From these metrics, they draw sweeping conclusions, and they suggest the approach could be used for early screening in schools, clinics and non-medical settings.

What They Never Ask

Here's the risk: if your movement becomes your metric, then every space becomes a test.

The Return of Biological Essentialism

This study revives a dangerous idea with new math: that who you are — and how much you matter — can be reduced to how your body moves.

Biological essentialism is the belief (or assumption) that a person's identity, behavior or worth can be fully or primarily explained by their biology — especially their brain, genes, or physiology. The idea in this study is biological essentialism by algorithm.

It reboots the same pathologizing logic that once labeled autistic children "disordered" for how they walked, sat or failed to make eye contact. But now it’s automated. And hidden behind metrics.

The researchers don’t engage with the ethical implications. They don’t include autistic co-authors. They never ask what the participants themselves think about being classified by entropy.

From Clinical Tool to Employment Filter

Let’s follow this trajectory to its logical end:

A tech startup adopts this system to streamline hiring. Applicants perform a "cognitive motor task" during a pre-interview screening. The company never tells them what’s being measured. They just use motion patterns to "predict fit."

If your micro-movements deviate from the training set? You’re filtered out. No explanation.

Not because you lacked skill. But because your body didn’t conform to neurotypical standards.

This is algorithmic ableism.
This is bio-behavioral surveillance masquerading as support.

Prediction Is Not Progress

Yes, the model predicts. But it learns from labels created in a diagnostic-industrial complex that treats difference as defect. Replicating that system isn’t accuracy. It’s assimilation.

Worse, the model invites a future where autistic people are flagged, filtered and sorted by how their bodies move, not by what they can do.

What We Actually Need

If you want to support autistic people, build systems that listen. That include. That reflect the complexity of real people.

And if you're studying us, start with our voices. Not our wrists.

Because when prediction becomes policy and policy becomes practice, it's not just a study anymore.

It's a filter. It's a gate. It's a closed door.

And autistic people? We're not stepping aside.
We're stepping through.

#algorithmic-ableism #autism-research #hiring-bias #medical-model #participatory-research #surveillance