Not statistically significant…
Most people have no idea what “not statistically significant” means, and I don’t see the media being too eager to fix this.
Say you read the following piece in a newspaper:
A study done at the University of Washington showed that, after controlling for race and socioeconomic class, there was no statistically significant difference in athletic performance between those who stretched for 5 minutes before running and those who did no stretching at all.
What do you conclude from that? Stretching is useless? WRONG.
Here’s what the hypothetical study actually was: I picked four random guys on campus and asked two of them to stretch and two of them not to. The ones who stretched ran 10% faster.
Why, then, is this not statistically significant? Because the sample size was too small to infer anything useful and the study was poorly designed.
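In fact, you can replay this hypothetical study in a few lines of Python (the speeds are made up, but it’s the same two runners per group):

```python
# The hypothetical four-person study, with made-up running speeds (km/h).
# The two stretchers are ~10% faster on average than the two non-stretchers.
from scipy import stats

stretchers = [10.5, 11.5]      # mean 11.0
non_stretchers = [9.4, 10.6]   # mean 10.0

t_stat, p_value = stats.ttest_ind(stretchers, non_stretchers)
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")  # t = 1.28, p = 0.33
```

A 10% difference and a p-value of 0.33, nowhere near the usual 0.05 cutoff. Not because stretching does nothing, but because two runners per group can’t distinguish signal from noise.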
All “not statistically significant” tells you is that you can’t infer anything from the study. But word the study carefully enough and you can have people believe the opposite is true.
Have you ever heard the claim “There’s no statistically significant difference between going to an elite Ivy League school and an equally good state school?” Perhaps from here, here or even here?
Well, from this paper (via a comment in an Overcoming Bias post):
In other words, the study says no such thing. It simply says that the study itself wasn’t sufficient to prove that Ivy League educations make you more money, because the data wasn’t good enough. Yet the media has twisted this into a positive assertion that state schools do indeed make you as much money as the Ivy Leagues.
I’m generously inclined to believe that most cases of this error I see are caused by incompetence, but it’s pretty trivial to see how it could be used for malice. Want the public to believe that Internet usage doesn’t cause social maladjustment? Just design a shitty study and claim “We found no statistically significant difference in social competence between heavy internet users, light internet users and non-users”. Bam, half the PR work has already been done for you.
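And “shitty study” mostly just means “small study”. Here’s a toy simulation (all numbers invented) where heavy users really are 5 points less socially competent on some assumed scale, yet a study with 5 people per group misses it almost every time:

```python
# Simulate many small "internet use" studies where a real 5-point gap exists.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, trials = 5, 10_000
detections = 0
for _ in range(trials):
    heavy_users = rng.normal(loc=50, scale=10, size=n)  # true mean 50
    non_users = rng.normal(loc=55, scale=10, size=n)    # true mean 55
    _, p = stats.ttest_ind(heavy_users, non_users)
    detections += p < 0.05
print(f"power: {detections / trials:.0%}")  # roughly 10%
```

Roughly 90% of the time, a study this small hands you the “no statistically significant difference” headline you wanted, even though the difference is real.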
Here’s another statistical gem I see all the time:
An analysis done at the University of Washington showed that there was zero correlation between race and financial attainment after controlling for IQ, education levels, socioeconomic status and gender.
Heartwarming, right? It means if we put blacks and whites in the same situation, they should earn the same amount of money. WRONG.
The key here is to see that we’re looking for financial attainment and controlling for socioeconomic status. Those two things mean the same damn thing. Basically, all this study told us was that being rich causes you to be rich.
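You can watch this happen in a toy regression (all numbers invented): group membership shifts income by 10 units, and “SES” is just income with a little noise added.

```python
# Regressing income on a group indicator, with and without "controlling for"
# an SES variable that is essentially income itself.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, n).astype(float)      # 0/1 group indicator
income = 50 + 10 * group + rng.normal(0, 5, n)   # true group effect: 10
ses = income + rng.normal(0, 0.5, n)             # "SES" = income + noise

ones = np.ones(n)
naive = np.linalg.lstsq(np.column_stack([ones, group]), income, rcond=None)[0]
ctrl = np.linalg.lstsq(np.column_stack([ones, group, ses]), income, rcond=None)[0]
print(f"group effect, naive:           {naive[1]:.1f}")  # ~10
print(f"group effect, controlling SES: {ctrl[1]:.1f}")   # ~0.1
```

“Zero correlation between group and income, after controlling for SES” is technically true here. The true effect of 10 didn’t disappear, it’s hiding inside the control variable.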
Most people view the “controlling for” section of statistical reporting as a sort of benign safeguard. Controlling for things is like… due diligence, right? The more the better… It’s easy to numb people into a hypnotic lull with a list of all the things you control for.
But controlling for factors means you get to hide the true causes of things under benign labels. That’s why I’m always so wary of studies that control for socioeconomic status or education levels, especially when they don’t have to. Sure, socioeconomic status might cause obesity, but what causes socioeconomic status?
When people do bother to talk about statistical manipulation, they usually focus on issues of statistical fact: aggressive pruning of outliers, shotgun hypothesis testing and overly loose regressions. But why bother sneaking a poorly designed study past peer review when you can just publish a factually accurate study that implies a conclusion completely at odds with the data? That way, you slip past the defenses of anyone who actually does know something about statistics.
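For reference, here’s how crude that first kind of con is. Shotgun hypothesis testing is just running enough tests on pure noise until something sticks:

```python
# Run 100 t-tests comparing two samples drawn from the SAME distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
false_positives = 0
for _ in range(100):
    a = rng.normal(size=30)
    b = rng.normal(size=30)   # no real difference exists
    _, p = stats.ttest_ind(a, b)
    false_positives += p < 0.05
print(false_positives)  # ~5 "significant" results, all spurious
```

About 5 in 100 comparisons cross p < 0.05 by luck alone; report only those and you’ve manufactured a finding. But any competent referee knows to look for exactly that, which is why the implication trick is so much better.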
Sometimes, I swear, the more statistically savvy a person thinks they are, the easier they are to manipulate. Give me a person who mindlessly parrots “Correlation does not imply causation” and I can make him believe any damn thing I want.