No experiment, no proof. But, taken with a grain of salt, a good survey can be better than pure speculation where an experiment is impossible or unethical. On the other hand, experiments can prove something, but depending on how reduced or artificial the context is, they may not prove as much as you hope, either. Science is just difficult in general.
Exactly. Luckily I'm in a field where true experiments are possible, but I have many colleagues who can't ethically run true experiments. It's surveys or nothing for the most part. They have very advanced statistics to account for the lack of control in their research.
And even if you can carry out a proper experiment, it might be useful to see if there’s already a survey on the same topic. If there is, you can use that data to design your experiment, and hopefully you’ll be able to take important variables into account.
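For instance, here’s a rough sketch (in Python, with made-up numbers, and statsmodels assumed available) of how summary statistics from an earlier survey could feed a sample-size calculation for the follow-up experiment:

```python
# Sketch only: using summary statistics from a hypothetical earlier survey
# to size a follow-up experiment. Requires statsmodels.
from statsmodels.stats.power import TTestIndPower

# Suppose the prior survey suggests the outcome has SD ~ 12 and that a
# practically meaningful difference between groups is ~ 4 points.
prior_sd = 12.0
min_effect = 4.0
effect_size = min_effect / prior_sd  # Cohen's d implied by the survey data

# Solve for the per-group sample size needed to detect that effect.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=effect_size,
                                   alpha=0.05, power=0.8)
print(f"~{n_per_group:.0f} participants per arm")
```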
> Luckily I’m in a field where true experiments are possible, but I have many colleagues who can’t ethically run true experiments. It’s surveys or nothing for the most part. They have very advanced statistics to account for the lack of control in their research.
There is a whole discipline of causal inference with observational data that is more than a hundred years old (e.g. John Snow's cholera work, effectively a diff-in-diff strategy). Usually it boils down not to controlling for every detail, but to getting plausibly exogenous variation in your treatment, either from a policy implemented in only one group (state), a regulatory threshold, or some other "natural experiment". Social scientists typically need to rely on such replacements for true experiments.

A good survey is only the first step before you even think about how you could get at the effects of interest. Looking at some correlations in a survey is usually just a first descriptive pass to find interesting patterns.

Survey design itself is a whole different problem. There you also have experiments, and people try to work out how non-response and wrong answers behave. For example, there are surveys in Scandinavia, the Netherlands, France and Germany that can easily be linked to social security records (or even individual credit card data in the Danish case) to validate answers or directly use high-quality administrative data.
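As a toy illustration of the diff-in-diff idea mentioned above, here is a minimal Python sketch on made-up two-group, two-period data; the column names and numbers are hypothetical, not from any real study:

```python
# Minimal difference-in-differences sketch on made-up observational data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "outcome": [10, 11, 12, 13, 10, 12, 15, 17],
    "treated": [0, 0, 1, 1, 0, 0, 1, 1],   # group exposed to the policy
    "post":    [0, 0, 0, 0, 1, 1, 1, 1],   # 0 = before the policy, 1 = after
})

# The coefficient on treated:post is the DiD estimate of the policy effect,
# valid only under the (untestable) parallel-trends assumption.
model = smf.ols("outcome ~ treated * post", data=df).fit()
print(model.params["treated:post"])
```

The point is that you never control for every confounder directly; you lean on the assumption that the two groups would have moved in parallel absent the policy.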
I think you should generate a survey to see what people think...
I think this is more of a misunderstanding - surveys on their own, in raw form, are not science.
There’s all kinds of bs that can come up like:
- selection bias
- response bias
- general recollection errors/noise (especially for scary or traumatic experiences - there’s a bunch of papers on this behavior)
But data scientists can account for these by looking at things like sample selection (randomly selected so the sample represents the nation/region/etc.), pilot runs, transparency (fucking huge dude, tell everyone and anyone exactly what you did so we can help point out the bullshit), and stuff like adjusting for non-responses.
Non-responses are basically the idea that some people simply don’t give a fuck enough to do the survey. Think about a survey your Human Resources team at work might send out - people who fuckin hate working there and don’t see it changing anytime soon might not respond, which means fewer people expressing their distaste, which leads to a false narrative: that people like working there.
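To make the "adjusting for non-responses" part concrete, here’s a toy Python sketch of one common fix: re-weighting respondents so the sample matches known population shares. All the numbers and group labels below are made up, in the spirit of the HR example:

```python
# Toy re-weighting sketch for the HR-survey example (made-up numbers).
import pandas as pd

responses = pd.DataFrame({
    "tenure":    ["<2yr", "<2yr", "2-5yr", "2-5yr", "2-5yr", "5yr+"],
    "satisfied": [1, 1, 1, 0, 1, 0],
})

# Known company-wide shares (e.g. from HR records) vs. shares among respondents.
population_share = pd.Series({"<2yr": 0.40, "2-5yr": 0.30, "5yr+": 0.30})
sample_share = responses["tenure"].value_counts(normalize=True)

# Weight = population share / sample share, so under-represented groups
# (here, the long-tenured people who mostly didn't respond) count for more.
responses["weight"] = responses["tenure"].map(population_share / sample_share)

raw = responses["satisfied"].mean()
weighted = (responses["satisfied"] * responses["weight"]).sum() / responses["weight"].sum()
print(f"raw satisfaction: {raw:.2f}, weighted: {weighted:.2f}")
```

In this toy data the weighted estimate comes out lower than the raw one, which is exactly the correction you'd hope for when the unhappy group is the one that didn't bother responding.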
Hope this makes sense! Stay curious!!
PS/EDIT: Check out the SAGE method for data science for some more info! (There’s probably a YouTube vid instead of the book if you’d prefer I’m sure!)
I deal with the fallout of this, or something closely related to it, frequently in my industry.
Manufacturers think focus groups represent the needs and opinions of the general public. What they categorically fail to realize is that what focus groups actually represent is the types of people who attend focus groups.
The kind of people who respond to surveys are the kind of busybodies who respond to surveys, not an actual cross-section of the populace.
If you look into the other methods, they're also filled with flaws and biases.
Shoddy use of them is normal, that is true.
Don't toss out the baby with the bathwater, tho, eh?
Training people in critical thinking, & having quality standards for doing surveys, would help our world more than removing a method of discovery would.
_ /\ _
Survey says, you need more data.