r/science Professor | Medicine 5d ago

Neuroscience New study finds online self-reports may not accurately reflect clinical autism diagnoses. Adults who report high levels of autistic traits in online surveys may not show the same social behaviors or clinical profiles as those who have been formally diagnosed with autism spectrum disorder.

https://www.psypost.org/new-study-finds-online-self-reports-may-not-accurately-reflect-clinical-autism-diagnoses/
7.8k Upvotes

892 comments

137

u/[deleted] 5d ago

[deleted]

77

u/mancapturescolour 5d ago edited 5d ago

As a general but oversimplified rule of thumb (via the central limit theorem), the distribution of the sample mean starts to look approximately normal at around n=30 observations, even when the underlying population isn't normally distributed.

It's possible to draw conclusions from 56 samples but, of course, having more observations reduces the variability of the sample mean and thus increases confidence that your observations reflect the true population mean.
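You can check the n=30 rule of thumb with a quick simulation. This is just an illustrative sketch (not anything from the study): draw repeated samples of 30 from a heavily skewed exponential population and look at the distribution of the sample means.

```python
import random
from statistics import mean, stdev

random.seed(0)

# Population: exponential(1), heavily right-skewed; true mean 1, true sd 1.
def draws(n):
    return [random.expovariate(1.0) for _ in range(n)]

# Means of 5000 samples of size 30 each.
sample_means = [mean(draws(30)) for _ in range(5000)]

# The CLT predicts these means cluster around the true mean (1.0)
# with spread close to 1/sqrt(30) ≈ 0.18, roughly bell-shaped.
print(round(mean(sample_means), 2))   # close to 1.0
print(round(stdev(sample_means), 2))  # close to 0.18
```

Even though individual exponential draws are nothing like a bell curve, the sample means already are at n=30, which is all the rule of thumb claims.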

I'd argue it's enough for an initial study, which can then be expanded with more samples. But, to your point, you'd have to design the study carefully to avoid bias and so on, whose effects can be more pronounced with a smaller sample size.

Edit: Took out an incorrect statement to avoid confusion.

Edit 2: Thanks to the comment below, adding in the group designs via info from the OP link.

The researchers compared three groups of adults.
 
• One group included 56 individuals who were diagnosed with autism after undergoing in-person clinical evaluations.
 
• The second group consisted of 56 people recruited online who reported high levels of autistic traits using a standard survey.
 
• A third group, also recruited online, reported low levels of autistic traits and served as a comparison.
 
All participants were matched in age and gender to allow for fair comparisons.

Thus, the total sample size seems to be 56 + 56 + (however many participants are in group 3), giving a minimum of n > 112, unless I'm mistaken. If so, the sample size is not as inadequate as suggested. That they also matched the groups on age and gender adds robustness to their initial findings.

9

u/roccmyworld 5d ago

It was 56 in each group.

3

u/telionn 5d ago

What you're saying is that the authors used two completely different sample selection methods, one for the formally diagnosed group and another for everyone else. This completely invalidates the statistical analysis that follows. Nobody can control for the inherent differences between sampling medical records and sampling people on the internet.

1

u/mancapturescolour 5d ago

You know what? That's a good catch.

Would it be possible to conclude anything about the respective groups independent of one another, since there is at least some matching going on?

Meaning, rather than generalizing results across the study, they can say "for group 1, we found X. Due to limitations in the study design, we were unable to conclude the same in group 2 but that is something to investigate further"

33

u/Plenkr 5d ago

at least it's indicative that there's something there to be researched further.

30

u/MajorityCoolWhip 5d ago

There are 3 groups of 56, for a total sample of 168, which is fairly large.

13

u/roccmyworld 5d ago

Why do you think that? Did you run the numbers? Whether a sample size is adequate depends on the effect size you observe. 56 people in each group could absolutely be a highly valid study.
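To put rough numbers on that: a standard power calculation (here a normal approximation to the two-sample t-test, just an illustrative sketch and not the study's actual power analysis) gives the per-group n needed to detect a given standardized effect size (Cohen's d).

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sample comparison (normal
    approximation): detect a standardized mean difference `effect_size`
    at significance level `alpha` with the given power."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = z.inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

print(n_per_group(0.5))  # medium effect -> 63 per group
print(n_per_group(0.8))  # large effect  -> 25 per group
```

Under this approximation, 56 per group has 80% power down to roughly d ≈ 0.53, so the study is reasonably powered for medium-to-large effects; only small effects would be underpowered.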

8

u/diarmada 5d ago

56 is a good start, in my experience.

2

u/Sandstorm52 5d ago

Depends on effect size. In some cases, you can make strong conclusions from a handful of individuals.

-5

u/laziestmarxist 5d ago

Yes, but if the mods didn't post sloppy, poorly researched pseudoscience with completely speculative conclusions five times a day, what would this sub dissolve into flame wars over?

4

u/wildbergamont 5d ago

The comment you're replying to isn't even accurate.