Lying with statistics: when statistics can’t tell the truth, or why I’m interested in the statistics of drug safety (and an application to the thimerosal-autism controversy)


Drug safety is hard to study. There are so many things that can go wrong with the human body, and to statistically analyze every single possible thing that can go wrong is impossible. There are thousands of possible adverse events; a whole lot of laboratory measurements that have to be taken (so we can address, among other things, whether the drug is hurting the liver, heart, or kidneys); physical exam measurements; and vitals (blood pressure, temperature, respiration).

Even if you don't find an adverse event, you are still analyzing the thousands of possible ones. They all simply have a frequency of 0, which means that the upper 95% confidence limit on the rate of any single event is about 3/n (3 divided by the sample size of the treatment group of the study; the so-called rule of three). To adjust this for multiple comparisons (essentially, dividing the 5% by the number of comparisons, though fancier methods are available), we'd have to compute something very close to a 100% upper confidence limit for each adverse event, and limits at that confidence level are too wide to say anything useful unless you include an enormous number of people in the study!
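Here's a minimal Python sketch of that arithmetic (mine, purely illustrative; the trial size n and the number of adverse-event terms k are made-up values):

```python
# Illustrative only: the "rule of three" upper bound for an adverse event that
# was never observed in a trial arm, and what a crude Bonferroni adjustment
# across many adverse-event terms does to that bound.

def upper_bound_zero_events(n, alpha=0.05):
    """One-sided upper (1 - alpha) confidence bound on an event rate when
    0 events are observed among n subjects; solves (1 - p)**n = alpha."""
    return 1.0 - alpha ** (1.0 / n)

n = 300    # hypothetical subjects in the treatment arm
k = 2000   # hypothetical number of adverse-event terms being tabulated

print(upper_bound_zero_events(n))            # ~0.0099, i.e. about 3/n
print(upper_bound_zero_events(n, 0.05 / k))  # ~0.035 after Bonferroni: much wider
```

To pull the adjusted bound back down to the unadjusted 3/n level, the trial would have to be roughly ln(k/0.05)/3 times larger, a three- to four-fold increase for a couple thousand adverse-event terms, and the required sample size only grows the rarer the event rate you need to rule out.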

Clearly, closely adhering to the rules of statistics isn’t going to get anyone very far in drug safety analysis until we develop new methodology.

Fortunately, new methodologies are being developed to address these issues, such as Bayesian and graphical methods. However, they are still in the cooker and probably will not be in widespread use for some time. For now, we are stuck with thousands of lines of AE counts, laboratory measure averages, vitals averages, and, if we’re lucky, a few useful graphs for labs and vitals. (Admittedly, I think simple box plots, scatterplots, and line graphs should be used more.)
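To illustrate the kind of simple graph I have in mind, here is a minimal Python/matplotlib sketch of a box plot of one lab measurement by treatment arm; the data are simulated purely for the example and don't come from any real study:

```python
# Illustrative only: box plot of a simulated lab measurement (ALT) by arm.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
placebo = rng.lognormal(mean=3.2, sigma=0.30, size=200)  # simulated ALT, U/L
drug = rng.lognormal(mean=3.3, sigma=0.35, size=200)     # simulated ALT, U/L

plt.boxplot([placebo, drug])
plt.xticks([1, 2], ["Placebo", "Drug"])
plt.ylabel("ALT (U/L)")
plt.title("End-of-study ALT by treatment arm (simulated data)")
plt.show()
```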

When I taught first-year statistics many years ago, I tried to impress on my students that a hypothesis test failing to show a significant effect doesn't mean no effect is there. However, if a decision has to be made on the basis of the test, saying the effect isn't there is usually the more conservative option.

In drug safety, this argument doesn’t work. To make a claim that a drug is safe, we have to say that it does not cause more adverse events and does not cause unsafe laboratory or vitals findings. The more conservative statement is to say that a drug does cause an adverse event. However, this will essentially lead to the statement that the drug might be too unsafe to use. (How would you like to say that, while no incidence of torsade de pointes was observed, the clinical development program wasn’t robust enough to say that the drug doesn’t cause it?)

So the reality of the current situation, and the state of the art of drug safety analysis, is that we statisticians generate thousands of lines of results and then pass them off to one or more medical writers who try to make sense of it all. (And they usually do a good job, although the FDA has been known to require warnings on the label for events that occurred in only one animal in only one preclinical study, even when the event doesn't occur at all in the clinical studies.) We statisticians can do better, and we are starting to do better, but right now safety analysis is stuck in the 1960s.

Incidentally, this is why I don’t hold statements like the following in high regard:

I want to be as clear about this as I can. There is no controversy surrounding Thimerosal. There is scientific evidence and there is hysteria. The scientific evidence suggests that there is no link between thimerosal in vaccines and autism or any bad outcome whatsover!

By now you should know what my response is: if Dr. Flea is going to make such a strong assertion, I expect a tractor-trailer full of CDs of compressed PDFs of studies disproving any link between thimerosal in vaccines and “autism or any bad outcome whatsoever.” If you want to know the reason the thimerosal-autism story will not die, here it is: it's darn near impossible to collect enough scientific evidence to disprove the link, so anecdotal evidence is going to keep the questions rolling. Which I say is a good thing for the most part, despite the recent (possibly valid, possibly invalid) allegations against two groups of researchers investigating the harmful effects of vaccines.

At any rate, this is why my recent interest has turned toward the analysis of drug safety. Because it’s a hard problem.


11 Responses

  1. Random John,

    As a statistician, surely you cannot be serious that a tractor-trailer full of evidence is necessary to disprove a link between Thimerosal and certain specified neurodevelopmental outcomes!

    How much evidence do you need? I’ll provide you with the refs.

    best,

    Flea

  2. Welcome Dr. Flea!

    You ask how much evidence it would take to disprove a link between thimerosal and autism. Note that I'm not even claiming there is a link, but I am saying that it is very hard to disprove one. This isn't anything special about thimerosal or autism; it's the fact that drug safety (or, in this case, preservative safety) is simply a hard problem. (And, incidentally, though I apply it to the thimerosal controversy and hopefully enlighten someone along the way as to why arguments on both sides are confusing and inconclusive, the primary point of the entry is essentially to state that, from a statistical point of view, drug safety is a hard problem.)

    Let's say, for the sake of argument, we're going to conduct a randomized controlled clinical trial to show the safety of thimerosal. Say we want an 80% power, 5% alpha study that shows that the incidence of autism in the thimerosal group is no higher than the incidence in the non-thimerosal group. Also assume that the incidence of autism in the general population is 1/150 (and, if you consider that high, a higher assumed incidence actually results in a lower sample size). Finally, assume a 20% non-inferiority margin (i.e., show that the incidence of autism in the non-thimerosal group is at least 1/150 minus 20% of 1/150). Such a study requires over 46,000 people per group (a rough sketch of the arithmetic follows). And this is only part of the equation.
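    Here's that back-of-the-envelope calculation as a minimal Python sketch (my own, using the usual normal-approximation formula for comparing two proportions, not a formal sample-size program):

    ```python
    # Back-of-the-envelope non-inferiority sample size (illustrative only).
    from scipy.stats import norm

    p = 1 / 150          # assumed background incidence of autism
    margin = 0.20 * p    # 20% non-inferiority margin
    alpha, power = 0.05, 0.80

    z_alpha = norm.ppf(1 - alpha)   # one-sided
    z_beta = norm.ppf(power)

    n_per_group = (z_alpha + z_beta) ** 2 * 2 * p * (1 - p) / margin ** 2
    print(round(n_per_group))       # roughly 46,000 per group
    ```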

    Since we can only ethically do observational studies in this case (matched-control ones at best), we still have to observe a lot of people for weaker evidence. And this still does not address the hard questions that inevitably arise, such as subpopulation analysis, genetic profiles, and so forth, which clearly do have to be considered.

    Now, to be fair, proving that such a link exists (say, for the purposes of vaccine compensation) is hard, too. Not only do you have to show an association, you have to establish a clear path showing not only that thimerosal (or its metabolites, which complicates the picture) is not cleared, but that it acts to damage cells in such a way that autism is caused. The only other way I can think of is a challenge-dechallenge method, but that is simply ridiculous here. And again we have only observational data to rely on.

    However, proving something is safe remains a lot harder than proving something is unsafe. That’s why we have pharmacovigilance (post-marketing drug safety) guidelines that get stronger by the year. That’s one reason why the FDA has yanked marketing approvals for 10 compounds in the last decade or so. So yeah, I’m serious when I say you need a tractor-trailer full of evidence — not because I’m holding on to any hope of a link for whatever reason (I’m not), but because I know how hard the problem is.

    In the meantime, people like you have to negotiate these issues (especially in pediatrics, where drug companies are reluctant to test because of the different pharmacokinetics and pharmacodynamics, and so you have even less data on which to base important decisions). I don’t envy you.

  3. I don’t need a tractor-trailer full. I’d settle for a file drawer more than the evidence indicating a connection, if it didn’t include those very flawed epidemiological reports the IOM relied on. Of course, that’s a full file drawer more than you have now.

  4. John,

    46K patients? Easily done. In fact, DONE done:

    PEDIATRICS Vol. 114 No. 3 September 2004, pp. 584-591 (doi:10.1542/peds.2003-1177-L), retrospective cohort, n = 109 863

    PEDIATRICS Vol. 112 No. 5 November 2003, pp. 1039-1048, retrospective cohort, n = 124 170

    Pediatrics, Sep 2004; 114: 577 – 583; prospective cohort, n = 14 000

    best,

    Flea

  5. […] My earlier thesis is Clearly, closely adhering to the rules of statistics isn’t going to get anyone very far in drug safety analysis until we develop new methodology. The Vioxx controversy (accusations of the “head in the sand” approach aside) highlights this issue. […]

  6. Dr. Flea –

    I’ve only had a chance to briefly glance at these trials. I’ll be happy to post on them in more detail when I’ve had a chance to do them justice (at least for the ones that aren’t hiding behind subscription walls — I think one of them was).

  7. Fombonne’s study out of Canada looks quite solid. It confirms what the Danish studies documented, but in this case I don’t see any obvious confounds. Additionally, the California data is in my opinion very conclusive evidence that thimerosal is not the cause of the “autism epidemic”. I wrote about that recently here. So while I can’t tell you that thimerosal is 100% safe, it’s clear that it’s unrelated to autism.

  8. Joseph – I think you had a link that didn’t come through.

    For reference, the Fombonne study is here:

    http://pediatrics.aappublications.org/cgi/content/abstract/118/1/e139?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&fulltext=thimerosal&andorexactfulltext=and&searchid=1&FIRSTINDEX=0&sortspec=date&resourcetype=HWCIT

    The reference is Eric Fombonne, MD, Rita Zakarian, MEd, Andrew Bennett, PhD, CPsych, Linyan Meng, MSc, and Diane McLean-Heywood, MA. “Pervasive Developmental Disorders in Montreal, Quebec, Canada: Prevalence and Links With Immunizations,” Pediatrics, 118(1), pp. e139-e150, July 2006.

    I was able to download a PDF of the full text (rare these days, since I don't have a subscription to Pediatrics or anything else). A brief read of the abstract certainly seems to put a huge dent in the thimerosal-autism hypothesis. A BBC story on the study is here: http://news.bbc.co.uk/go/rss/-/2/hi/health/5149670.stm, though I currently have little faith in media reporting of healthcare.

    Out of curiosity, why are all the negative studies in this thread coming from Pediatrics? Is it coincidence, or the fact that Dr. Flea simply keeps up with the journal?

  10. I’m subscribed to Pediatrics as a member of the AAP. That’s part of the explanation.

    John, what if Pediatrics, or some other journal, published positive studies? Would the findings I showed you be invalidated somehow?

    Flea

    Considering the journal question: I typically place less weight on which journal published a study, especially if the study's exposition is transparent and open to evaluation. I then look at the study design characteristics.

    One thing I do appreciate about Pediatrics is that they seem to make more of their articles public. I hate having to rely on news reports and abstracts.

    Assuming there was a study that gave results contradictory to the ones you cited (and to the Fombonne study that just came out), I'd be looking for the differences among the studies as a starting point. It doesn't necessarily come down to one result being valid and the other not (though it might). The story that seems to be emerging is that the removal of thimerosal from vaccines has not contributed to a fall in the prevalence of autism. Whether that's because thimerosal had absolutely nothing to do with autism (maybe), or because it increased the risk of autism but many other factors came into play (maybe), or because it had a neuroprotective effect (no, I don't believe that either) is not an easily answered question (though I have to look at these studies much more closely, and at others, now that my interest has become something more than personal). These studies may be getting at that question with covariates; again, I'll have to look more closely when I have time. (I'll also have to see if they address the “sensitive populations” hypothesis; I know an Australian study did, and I want to find that one as well.)

    You seem to be baffled that I would have a high standard for establishing no link. This is an artifact of the statistics. You're looking at an event with a prevalence of 1:150 or so (from the latest I've heard), and studying something with that sort of prevalence isn't easy if you're trying to go by the rules of statistics. Combine this with the fact that showing equivalence requires a large sample size. Finding a real association is hard, but not nearly as hard as proving there is none. Establishing causation is the really hard work.

    So, going back to your statement, which I'm guessing is what attracted your attention to this blog entry in the first place: I don't hold it in high regard because it's a statement that requires a large amount of proof (“no link between thimerosal in vaccines and autism or any bad outcome whatsover”?!), proof that involves scrutinizing not just autism but the thousands of other bad outcomes that can happen. And that's a problem that currently doesn't have a good solution (which is the theme of this series of entries on studying drug safety).

