[old scientist, pointing at some data] After decades of research, thousands of experiments, a massive amount of peer reviewing, we can finally confidently conclude…
[smug dude with a ridiculous hairstyle] Uh yeah, but this TikTok by PatriotEagle1776 says your research is wrong
Researchers who already thought of all this and it’s in the study: -_-
A lot of mass media is full of bullshit, and people showing skepticism, asking further questions, and wanting second opinions is generally a good, healthy response. Particularly in an era of Dr. Oz professional bullshit and blaring “Head On, Apply Directly To The Forehead!” style advertisements.
Tbf, research based on a survey is much less valuable than a double-blind randomized study
You might need a larger sample, and sometimes a blind study is just not possible.
Even then, the error bars are usually huge. If we’re talking about cigarette smoke causing lung cancer (which has a relative risk increase of 10,000%), those error bars aren’t an issue. But if you’re surveying people about their diet over the past 30 years to connect it to colon cancer and you get a relative risk increase of ~5%, the whole thing should be thrown out, because the error bars are more like ±100%.
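To put rough numbers on that (the counts are completely made up, just to show the shape of the problem), here’s a quick sketch of the standard log-RR confidence interval for two hypothetical cohorts of 10,000 people each:

```python
import math

def rr_ci(exposed_cases, exposed_total, unexposed_cases, unexposed_total, z=1.96):
    """95% confidence interval for a relative risk (standard log-RR / Katz method)."""
    rr = (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)
    se = math.sqrt(1 / exposed_cases - 1 / exposed_total
                   + 1 / unexposed_cases - 1 / unexposed_total)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical smoking-sized effect: 1% risk in exposed vs 0.1% in unexposed (RR ~ 10)
print(rr_ci(100, 10_000, 10, 10_000))   # CI roughly (5, 19), nowhere near 1.0

# Hypothetical diet-survey-sized effect: 4.2% vs 4.0% (RR ~ 1.05)
print(rr_ci(420, 10_000, 400, 10_000))  # CI roughly (0.9, 1.2), straddles 1.0
```

With those made-up counts the 10x effect is unambiguous, while the 1.05x estimate is statistically indistinguishable from no effect at all.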
Thus the larger sample, to get something statistically significant. Which might not be practical due to cost.
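For a sense of scale (again with made-up baseline risks and the standard two-proportion sample-size formula, so treat this as a sketch rather than a real power analysis), detecting a ~5% relative bump takes a couple of orders of magnitude more participants than detecting a 10x effect:

```python
import math

def n_per_arm(p1, p2):
    """Rough sample size per group to tell p1 from p2
    (two-sided test at alpha = 0.05 with 80% power)."""
    z_alpha, z_beta = 1.959964, 0.841621
    var_sum = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * var_sum / (p1 - p2) ** 2)

# Big effect (RR ~ 10): 0.1% baseline risk vs 1% in the exposed group
print(n_per_arm(0.001, 0.010))   # on the order of 1,000 people per arm

# Tiny effect (RR ~ 1.05): 4.0% baseline risk vs 4.2% in the exposed group
print(n_per_arm(0.040, 0.042))   # on the order of 150,000 people per arm
```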
Some methods suck no matter how much data you throw at them.
The study I was referencing had thousands of people taking its survey, and the data quality was still terrible, because that’s what you get when you ask people to recall what they ate over the past 20-30 years. Adding yet more people won’t clean up the data, and it would pile on enough cost that it’d be cheaper to do close-observation studies of 100 people, which would actually produce usable results.
The general guideline for epidemiological studies (which both of my examples are) is that you can’t draw conclusions from a relative risk increase of less than 100%, i.e., the relative risk needs to be at least 2x.
So please stop with the blanket statement of “more data means better results”. It’s not true, and it’s the same claim that AI tech bros keep making to fleece gullible investors.
More data does mean better results.
So when I can’t get a useful trendline on a graph of % of redheads born per number of bananas eaten by the mother, you’re saying it’s because I didn’t collect enough data? Why didn’t I think of that?
No trend is also a result; more data, more confidence.
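A minimal sketch of what that looks like, assuming a hypothetical 4% baseline risk in both groups and a true effect of exactly zero: the estimate stays at “no effect”, and the 95% interval around it just keeps tightening as n grows.

```python
import math

# True effect fixed at zero: both groups share the same hypothetical 4% baseline risk.
# The point estimate stays ~0; only the 95% interval around "no effect" gets narrower.
p = 0.04
for n in (100, 1_000, 10_000, 100_000):
    half_width = 1.96 * math.sqrt(2 * p * (1 - p) / n)  # CI half-width on the risk difference
    print(f"n per arm = {n:>7}: risk difference = 0.000 ± {half_width:.4f}")
```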