Tuesday, July 15, 2014

Scientists’ Grasp of Confidence Intervals Doesn’t Inspire Confidence

Sometimes it’s hard to have confidence in science. So many results from published scientific studies turn out to be wrong. Part of the problem is that science has trouble quantifying just how confident in a result you should be. Confidence intervals are supposed to help with that. They’re like the margin of error in public opinion polls. If candidate A is ahead of candidate B by 2 percentage points, and the margin of error is 4 percentage points, then you know it’s not a good bet to put all your money on A. The difference between the two is not “statistically significant.” Traditionally, science has expressed statistical significance with P values, the P standing for the probability of getting a result at least as extreme as the one observed if chance alone were at work. P values have all sorts of problems... Consequently, many experts have advised using confidence intervals instead, and their use is becoming increasingly common. While that approach has some advantages, it is sadly the case that confidence intervals are also not what they are commonly represented to be.
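To see where a poll's margin of error comes from, here is a minimal sketch in Python using the standard normal approximation for a sample proportion. The sample size of 600 and the 51% support figure are hypothetical numbers chosen so the margin comes out near the 4-point figure in the example above; they are not from the post.

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Half-width of an approximate 95% confidence interval for a
    sample proportion, using the normal approximation:
    z * sqrt(p_hat * (1 - p_hat) / n)."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical poll: 51% support for candidate A among 600 respondents.
p_hat, n = 0.51, 600
moe = margin_of_error(p_hat, n)
low, high = p_hat - moe, p_hat + moe
print(f"51% +/- {moe:.1%}  ->  interval ({low:.1%}, {high:.1%})")
```

With these numbers the margin of error is about 4 percentage points, so a 2-point lead sits well inside the interval; that is the sense in which the difference is not statistically significant.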
