
That's exactly the opposite of what p values mean. Other replies addressed the publication bias problem, but even absent that, p < 0.05 usually implies a probability much higher than 5% that any single "significant" result is false:

http://www.statisticsdonewrong.com/p-value.html

The p value describes the probability that a nonexistent effect will falsely be called significant. You can't turn that around (without Bayes' theorem and a prior) to calculate the probability that the effect is nonexistent.
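To make that concrete with made-up but plausible numbers: suppose only 10% of the hypotheses a field tests are real effects, studies use alpha = 0.05, and have 80% power. Bayes' theorem then gives the fraction of "significant" results that are actually false positives -- far more than 5%:

```python
# Illustrative numbers (assumptions, not from any real field):
prior_true = 0.10     # fraction of tested hypotheses that are real effects
alpha = 0.05          # P(significant | no effect) -- the p-value threshold
power = 0.80          # P(significant | real effect)

# Bayes' theorem: P(no effect | significant result)
p_sig = power * prior_true + alpha * (1 - prior_true)
p_false_given_sig = alpha * (1 - prior_true) / p_sig
print(f"{p_false_given_sig:.1%}")  # 36.0% of 'significant' results are false
```

So even with a generous 80% power, over a third of significant findings would be false under these assumptions. Lower the prior or the power and it gets worse.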

To compound this, most studies are underpowered -- they don't collect enough data to detect the effect they're looking for. So a statistically insignificant result usually does not mean the effect does not exist.
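A quick simulation shows how bad this gets. Sketch with assumed numbers: a real effect of 0.3 standard deviations, only 20 subjects per group, and (for simplicity) a two-sample z-test with known variance:

```python
import random
import math

random.seed(0)

def significant(n, effect):
    """One simulated study: does an underpowered z-test reach p < 0.05?"""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(effect, 1) for _ in range(n)]
    z = (sum(b) / n - sum(a) / n) / math.sqrt(2 / n)
    return abs(z) > 1.96  # two-sided p < 0.05

trials = 2000
hits = sum(significant(20, 0.3) for _ in range(trials))
print(f"power ~ {hits / trials:.0%}")  # well under 50%: most real effects missed
```

The effect is real in every simulated study, yet the test detects it only a small fraction of the time -- which is why a nonsignificant result from a study like this says almost nothing about whether the effect exists.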



There is a great (cheesy) explanation of the statistical 'significance' of p-values: http://theconversation.com/the-problem-with-p-values-how-sig...



