submitted 5 months ago* (last edited 5 months ago) by Ragdoll_X@lemmy.world to c/dataisbeautiful@lemmy.ml

https://fivethirtyeight.com/features/science-isnt-broken/


Another study with the same goal of comparing the results from different research teams found similar disparities, though the graphs aren't quite as pretty.

[-] Eheran@lemmy.world 7 points 5 months ago

If we only look at those with p < 0.05 (green) and a 95% confidence interval, then there are 17 teams left. And they all(!) agree with more than 95% confidence.
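
To make that filter concrete, here is a minimal sketch with made-up numbers (the team names, estimates, and the ratio-style null of 1.0 are illustrative assumptions, not values from the study): keep only the teams reporting p < 0.05, then check whether all of their 95% confidence intervals land on the same side of the null.

```python
# Minimal sketch with made-up numbers, not the study's actual estimates.
# Each entry: (team, point_estimate, ci_low, ci_high, p_value),
# where the null effect is assumed to be 1.0 (ratio-style estimate).
results = [
    ("Team A", 1.31, 1.10, 1.56, 0.002),
    ("Team B", 1.28, 1.04, 1.57, 0.020),
    ("Team C", 1.10, 0.95, 1.28, 0.210),  # not significant, dropped below
    ("Team D", 1.39, 1.12, 1.72, 0.003),
]

NULL_EFFECT = 1.0

# Keep only the teams reporting p < 0.05 (the "green" points).
significant = [r for r in results if r[4] < 0.05]

# "They all agree": every significant team's 95% CI lies above the null,
# i.e. they all point in the same direction at the 95% level.
all_agree = all(ci_low > NULL_EFFECT for _, _, ci_low, _, _ in significant)

print(f"{len(significant)} significant teams; all CIs exclude the null: {all_agree}")
```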

[-] BearOfaTime@lemm.ee 3 points 5 months ago

And you missed the point in the very article about how the p-value isn't really as useful as it's been touted.

[-] Eheran@lemmy.world 4 points 5 months ago

That's not the point I was making. My point is that the results are actually mostly very similar, unlike what OP claims.

I never said that only looking at p-values is a good idea, or anything like that.

[-] bhmnscmm@lemmy.world 2 points 5 months ago

So ignore all the non-significant results? What's to say the methods that produced significant findings are any closer to the truth than the methods that didn't?

The issue is that so many seemingly legitimate methods produce different findings with the same data.
