Autism Answers Back

A Blood Test Says You’re Autistic. What Could Go Wrong?

What if your child’s autism diagnosis came not from a person — but from an app, a hospital server and a blood test based on a retracted dataset?

That’s not science fiction. That’s the design.

In Neural Computing and Applications, a 2025 study by Warda M. Shaban of the Nile Higher Institute for Engineering and Technology (Mansoura, Egypt) introduces ASA — Automatic Screening Autism — a system that takes blood biomarker data and feeds it through a machine-learning pipeline. The final output? A diagnosis. Delivered by algorithm. No clinician required.

They call it "infected patient detection." And they say it’s 99.1% accurate.

The dataset? It was retracted in 2024 due to analytical errors. ASA uses it anyway — and cites it as a clinical foundation.

The Frame, Unmasked

This isn’t just a tech paper. It’s a full stack of assumptions:

ASA feeds user-entered biomarker data through an app, pushes it to a hospital server, classifies it with an optimized deep neural net and stores the results via a fog/cloud system. The diagram says a doctor may follow up. But the decision is made.

It’s biometric sorting dressed up as early care.

What the Study Actually Does

Let’s start with the data: 154 boys, aged 18 months to 8 years, screened for 1,125 blood proteins. That dataset, from a 2021 study by Hewitson et al., was retracted in 2024 due to analytical errors.

Still, this paper uses it. It trains a model on that data. And it builds an entire diagnostic architecture around it.

ASA isn’t only tested on proteomics. It also uses standard questionnaire-style datasets (like UCI/Kaggle screenings for toddlers and adults). But it’s the blood test that headlines the system. The diagrams center it. The optimization workflow is built on it.

And the figures repeatedly label autistic children as "infected patients."

Not once. Across multiple sections.

It sounds like a zombie movie, not a scientific study.

That’s not metaphor. That’s design language.

They Mention Ethics — But Only Abstractly

The paper lists the usual: consent, bias, privacy. But no opt-out is coded into the model. No participatory design is mentioned. No mitigation appears in the diagrams. There is no mechanism for challenge, for ambiguity, for refusal.

ASA doesn’t diagnose with care. It diagnoses with automation. And it treats consent as a sidebar.

Why This Matters

Because there is no blood test that can diagnose autism. Because retracted data cannot be the basis for clinical tools. Because "infected patient" is not a neutral term. It’s a structural one.

This system isn’t just premature. It’s dangerous. Not because the code is faulty — but because the frame is. And if you think language like that won’t shape policy, access, insurance, and stigma, you haven’t been paying attention.


Final Line

You can optimize the algorithm all you want. You can tune the weights, strip the outliers, run your cross-validation until the numbers gleam.

But if the training set was retracted,
And the framing calls autistic kids "infected patients,"
Then what you’ve built isn’t a screening tool.

It’s a warning system. And the warning is ASA.