Autism Answers Back

Faster, But Still Not Ours

A critique of “Predictors of Caregiver Satisfaction with a Hybrid Care Model for Autism Diagnostic Evaluations” by Michele Kilmer, Emily Shah, Danielle Randolph, and Lauren Quetsch (University of Arkansas), Journal of Pediatric Nursing, 2025
Read the study


Caregivers liked it.

That’s the core finding of this paper — and it’s not a small one. In a hybrid autism evaluation model combining telehealth and in-person visits, most families reported positive experiences. Few barriers. Shorter waits. Respectful, competent staff.

In a system built on delay and distrust, those things matter.

Especially for rural families, where providers are scarce and diagnostic timelines can stretch from 1 to 3 years, this model delivered care in just under 9 months. Most caregivers said the process felt clear and compassionate. Many felt seen.

That’s real.

But satisfaction is not justice.


What the Study Measured — and What It Didn’t

This paper focuses on caregiver satisfaction — not clinical validity, and definitely not autistic experience. No autistic participants were included. There is no indication autistic researchers were involved as co-authors. The diagnostic framework itself goes unquestioned.

The authors did verify diagnostic accuracy in a subset of cases (n=29), all of which were later confirmed through double-provider evaluations. But the study’s primary concern wasn’t whether the diagnosis was meaningful, only whether the process was workable.

The question wasn’t: “Did this process honor your child’s humanity?” It was: “Was the video clear, and were the staff polite?”

Important? Yes. Sufficient? Not even close.


Whose Relief Gets Measured?

It’s easy to praise this model as “accessible.” And in many ways, it is:

Shorter waits, under 9 months instead of years.
Fewer barriers, even for rural families far from any provider.
Staff that caregivers described as respectful and competent.

This is care in a landscape where most families are told to wait — or give up.

But here’s the structural tension:

The model reduces the logistical burden. Not the diagnostic one.

It doesn’t reform how autism is defined.
It doesn’t question what diagnosis will later cost the child — in school, in care, in autonomy.
It just makes it easier to step onto a track whose rules were written without us.

So yes, caregivers were satisfied. But what were they satisfied with?

The format.
The tone.
The speed.

Not the frame.

Let me be clear: this paper doesn’t set out to harm, and it succeeds in doing something valuable within its scope. But the system it works inside still defines success by speed, not transformation. And when diagnosis is streamlined but never reimagined, the deeper questions remain untouched. That’s not on these researchers. It’s on the system they work inside.

What This Study Gets Right

It cuts the diagnostic wait from years to under 9 months.
It reaches rural families that clinic-only models leave behind.
It pairs speed with accuracy: every diagnosis in the verified subset (n=29) held up under double-provider evaluation.
It treats caregivers with clarity, respect, and competence.

There’s nothing small about that.

And in a field where “access” often means waitlists, trauma, and bureaucratic exhaustion, this model does more than fill gaps; it repairs some of the harm.


What It Still Doesn’t Do

This model is faster.
More convenient.
More human.

But it’s not designed to transform the diagnostic system — only to deliver its outcomes more smoothly.

And if the diagnostic process itself is still shaped by deficit framings, insurance incentives, or school compliance systems — then speeding it up just means:

Harm, faster.

That’s not a flaw of this study.
But it is the boundary of its imagination.


So What Now?

Use this study to build better infrastructure.
Use it to push for hybrid options in Medicaid.
Use it to demand systems that meet families where they are.

But don’t stop there.

Because faster is good.

But faster isn’t justice.
Not unless we change what we’re speeding toward.

#autism-diagnosis #healthcare-access #participatory-research #systemic-inequality #telehealth