That's exactly the opposite of what p values mean. Other replies addressed the publication bias problem, but even absent that, p < 0.05 usually implies a probability much higher than 5% that any single "significant" result is false:
The p value describes the probability that a nonexistent effect will falsely be called significant. You can't turn that around (without Bayes' theorem and a prior) to calculate the probability that the effect is nonexistent.
To compound this, most studies are underpowered -- they don't collect enough data to detect the effect they're looking for. So a statistically insignificant result usually does not mean the effect does not exist.
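To make the Bayes' theorem point concrete, here's a quick sketch with made-up but plausible numbers (the 10% prior and 80% power are assumptions for illustration, not facts about any particular field):

```python
def prob_false_discovery(prior_real, alpha, power):
    """P(effect is nonexistent | result called significant), via Bayes' theorem."""
    p_sig_and_null = alpha * (1 - prior_real)   # false positives among null effects
    p_sig_and_real = power * prior_real         # true positives among real effects
    return p_sig_and_null / (p_sig_and_null + p_sig_and_real)

# Assumed: 10% of tested effects are real, alpha = 0.05, power = 0.80
print(round(prob_false_discovery(0.10, 0.05, 0.80), 2))  # -> 0.36
```

So even with decent power and a 5% significance threshold, over a third of "significant" results would be false in this scenario -- and lower power or a rarer true effect makes it worse.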
http://www.statisticsdonewrong.com/p-value.html