[–] dogsoahC@lemm.ee 4 points 3 months ago (1 children)

That's the point: to make the low-population area more intense, because relative to its population, there were 100 times as many sightings. Or what am I missing?

[–] Fermion@feddit.nl 3 points 3 months ago

I'm not saying normalization is a bad strategy, just that it, like any other processing technique, comes with limitations and requires extra attention to avoid incorrect conclusions when interpreting the results.

> Because relative to its population, there were 100 times as many sightings. Or what am I missing?

If you were to attempt to trap and tag bigfoots in both areas, would you end up with 100 times as many angry people in gorilla suits in the small town? No. You would end up with 1 in both areas. So while the tiny town does technically have 100x the sightings per capita, each region has only one observable suit wearer.
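A quick back-of-the-envelope sketch of that gap (the populations here are made up purely for illustration), with exactly one sighting in each area:

```python
# Hypothetical numbers for illustration: one observed suit wearer in each
# area, but very different populations.
town_pop, city_pop = 100, 10_000
town_sightings, city_sightings = 1, 1

town_rate = town_sightings / town_pop    # 0.01 sightings per resident
city_rate = city_sightings / city_pop    # 0.0001 sightings per resident

print(f"per-capita ratio: {town_rate / city_rate:.0f}x")       # 100x
print(f"suit wearers: {town_sightings} vs {city_sightings}")   # 1 vs 1
```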

Assuming the distribution of gorilla suit wearers is uniform, you would expect approximately 99 tiny towns with no bigfoot sightings for every 1 town with a sighting. So if you were to sample random small towns because the map says bigfoots live near small towns, you would actually see fewer hairy beasts than a peer who decided to sample areas with higher population density.
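And a minimal simulation sketch of that expectation, assuming every resident has the same small (made-up) chance of reporting a sighting:

```python
import random

# Toy model: sightings are uniformly distributed over people, so each
# resident has the same tiny chance of reporting a bigfoot.
random.seed(0)
p_sighting = 1 / 10_000          # assumed per-person sighting probability
town_pop, n_towns = 100, 1_000   # hypothetical small towns

towns_with_sighting = 0
for _ in range(n_towns):
    sightings = sum(random.random() < p_sighting for _ in range(town_pop))
    if sightings > 0:
        towns_with_sighting += 1

# With 100 people per town and p = 1/10,000, the expected number of
# sightings per town is 0.01, so roughly 1 town in 100 gets a sighting
# and the other ~99 stay empty.
print(f"{towns_with_sighting} of {n_towns} towns had a sighting")
```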

If we could have fractional observations, then all this would be a lot more straightforward, but the discrete nature of the subject matter makes the data inherently noisy. Interpreting data involving discrete events is a whole art and usually involves a lot of filtering.