Statistical Errors in Medical Studies

Posted on March 14, 2010  Comments (4)

I have written before about statistics and the traps people often fall into when examining data (Statistics Insights for Scientists and Engineers, Data Can’t Lie – But People Can be Fooled, Correlation is Not Causation, Simpson’s Paradox). I have also posted about systemic reasons why medical studies present misleading results (Why Most Published Research Findings Are False, How to Deal with False Research Findings, Medical Study Integrity (or Lack Thereof), Surprising New Diabetes Data). This post collects some discussion on the topic from several blogs and studies.

HIV Vaccines, p values, and Proof by David Rind

if vaccine were no better than placebo we would expect to see a difference as large or larger than the one seen in this trial only 4 in 100 times. This is distinctly different from saying that there is a 96% chance that this result is correct, which is how many people wrongly interpret such a p value.

So, the modestly positive result found in the trial must be weighed against our prior belief that such a vaccine would fail. Had the vaccine been dramatically protective, giving us much stronger evidence of efficacy, our prior doubts would be more likely to give way in the face of high quality evidence of benefit.
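Rind’s point can be made concrete with Bayes’ theorem: the post-trial probability that the vaccine works depends on our prior belief, not just the p value. A minimal sketch of the arithmetic, where the prior (20%) and the trial’s statistical power (50%) are purely illustrative assumptions, not figures from the actual trial:

```python
# Illustrative numbers only: prior and power are assumptions, not trial data.
prior = 0.20   # assumed prior probability the vaccine is effective
power = 0.50   # assumed probability a trial detects a truly effective vaccine
alpha = 0.04   # chance of a result this extreme if the vaccine does nothing

# P(vaccine effective | significant result), by Bayes' theorem
posterior = (power * prior) / (power * prior + alpha * (1 - prior))
print(round(posterior, 2))  # ~0.76 -- far from the "96%" a naive reading suggests
```

Even with a p value of 0.04, a skeptical prior leaves the posterior probability well below 96%; that gap is exactly the misinterpretation Rind describes.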

While the actual analysis the investigators decided to make primary would be completely appropriate had it been specified up front, it now suffers under the concern of showing marginal significance after three bites at the statistical apple; these three bites have to adversely affect our belief in the importance of that p value. And, it’s not so obvious why they would have reported this result rather than excluding those 7 patients from the per protocol analysis and making that the primary analysis; there might have been yet a fourth analysis that could have been reported had it shown that all-important p value below 0.05.

How to Avoid Commonly Encountered Limitations of Published Clinical Trials by Sanjay Kaul, MD and George A. Diamond, MD

Trials often employ composite end points that, although they enable assessment of nonfatal events and improve trial efficiency and statistical precision, entail a number of shortcomings that can potentially undermine the scientific validity of the conclusions drawn from these trials. Finally, clinical trials often employ extensive subgroup analysis. However, lack of attention to proper methods can lead to chance findings that might misinform research and result in suboptimal practice.

Why Most Published Research Findings Are False by John P. A. Ioannidis

There is increasing concern that most current published research findings are false…

a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance.

A finding from a well-conducted, adequately powered randomized controlled trial starting with a 50% pre-study chance that the intervention is effective is eventually true about 85% of the time.

One additional complexity of medical studies is the interaction with each individual’s genetic makeup. Companies like Millennium Labs are attempting to provide more personalized drug use advice based on the patient’s genes.

We’re so good at medical studies that most of them are wrong by John Timmer

In the end, Young noted, by the time you reach 61 tests, there’s a 95 percent chance that you’ll get a significant result at random. And, let’s face it—researchers want to see a significant result, so there’s a strong, unintentional bias towards trying different tests until something pops out.

even the same factor can be accounted for using different mathematical means. The models also make decisions on how best to handle things like measuring exposures or health outcomes. The net result is that two models can be fed an identical dataset, and still produce a different answer.

Odds are, it’s wrong by Tom Siegfried

Ioannidis claimed to prove that more than half of published findings are false, but his analysis came under fire for statistical shortcomings of its own. “It may be true, but he didn’t prove it,” says biostatistician Steven Goodman of the Johns Hopkins University School of Public Health. On the other hand, says Goodman, the basic message stands. “There are more false claims made in the medical literature than anybody appreciates,” he says. “There’s no question about that.”

“Determining the best treatment for a particular patient is fundamentally different from determining which treatment is best on average,” physicians David Kent and Rodney Hayward wrote in American Scientist in 2007. “Reporting a single number gives the misleading impression that the treatment-effect is a property of the drug rather than of the interaction between the drug and the complex risk-benefit profile of a particular group of patients.”

Related: Bigger Impact: 15 to 18 mpg or 50 to 100 mpg? - Meaningful debates need clear information - Seeing Patterns Where None Exists - Fooled by Randomness - Poor Reporting and Unfounded Implications - Illusion of Explanatory Depth - Mistakes in Experimental Design and Interpretation

4 Responses to “Statistical Errors in Medical Studies”

  1. Health 2.0 News: Human Tamagotchis and Twitter Tummy Tone « ScienceRoll
    March 23rd, 2010 @ 7:43 pm

    […] Statistical Errors in Medical Studies (Curious Cat) […]

  2. Curious Cat Science and Engineering Blog » Evidence that Refined Carbohydrates Threaten the Heart
    April 28th, 2010 @ 8:47 pm

    […] The medical studies about what food to eat to remain healthy can be confusing but some details are not really in doubt. So while the exact dangers of processed carbohydrates, fat, excess calories and high fructose corn syrup may be in question there is no doubt we, in the USA, are not as healthy as we should be. And food is a significant part of the problem. Eat food, not too much, mostly plants and get enough exercise is good advice. […]

  3. Gravity and the Scientific Method » Curious Cat Science Blog
    April 18th, 2011 @ 8:05 am

    The scientific method (combined with our human involvement) doesn’t mean new ideas are accepted easily but it does mean new ideas compete on the basis of evidence not just the power of those that hold the beliefs…

  4. Majority of Clinical Trials Don’t Provide Meaningful Evidence » Curious Cat Science Blog
    May 3rd, 2012 @ 6:33 am

    “The analysis, published today in the Journal of the American Medical Association, found the majority of clinical trials is small, and there are significant differences among methodical approaches, including randomizing, blinding and the use of data monitoring committees…”
