Drug safety is hard to study. There are so many things that can go wrong with the human body, and statistically analyzing every single one of them is impossible. There are thousands of possible adverse events, a whole lot of laboratory measurements that have to be taken (so we can address, among other things, whether the drug is hurting the liver, heart, and kidneys), physical exam measurements, and vitals (blood pressure, temperature, respiration).
Even if you don’t observe a single occurrence of an adverse event, you are still analyzing the thousands of possible ones. They all simply have a frequency of 0, which means that the upper 95% confidence limit on the rate of any single event is approximately 3/n (3 divided by the sample size of the treatment group — the so-called rule of three). To adjust for multiple comparisons (essentially, the simplest method divides the 5% significance level by the number of comparisons, though fancier methods are available), we’d have to compute each upper confidence limit at a confidence level very close to 100% — and the resulting limits stay uselessly wide unless you include millions of people in the study!
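To make the arithmetic concrete, here is a small sketch (the group size of 300 and the count of 1,000 candidate adverse events are hypothetical illustrations, not figures from any real trial). With zero events observed in n subjects, the exact one-sided upper limit solves (1 − p)^n = α, which the rule of three approximates; dividing α by the number of comparisons shows how the limit balloons.

```python
import math

def exact_upper_limit(n, alpha=0.05):
    """Exact one-sided upper confidence limit for an event rate
    when 0 events are observed among n subjects.
    Solves (1 - p)^n = alpha  =>  p = 1 - alpha**(1/n)."""
    return 1 - alpha ** (1 / n)

n = 300  # hypothetical treatment-group size

# Rule of three vs. the exact limit -- nearly identical:
print(f"Rule of three: {3 / n:.4f}")                 # 0.0100
print(f"Exact limit:   {exact_upper_limit(n):.4f}")  # 0.0099

# Bonferroni adjustment across a hypothetical 1,000 possible
# adverse events: each test now uses alpha/1000, i.e. a 99.995%
# confidence level, and the zero-event limit roughly triples.
alpha_adj = 0.05 / 1000
print(f"Adjusted limit: {exact_upper_limit(n, alpha_adj):.4f}")  # 0.0325
```

So even with 300 subjects and not a single case observed, the adjusted analysis can only rule out event rates above about 3% — which is why the text says you would need millions of subjects to say anything sharp about every possible adverse event at once.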
Clearly, closely adhering to the rules of statistics isn’t going to get anyone very far in drug safety analysis until we develop new methodology.
Fortunately, new methodologies are being developed to address these issues, such as Bayesian and graphical methods. However, they are still in the cooker and probably will not be in widespread use for some time. For now, we are stuck with thousands of lines of AE counts, laboratory measure averages, vitals averages, and, if we’re lucky, a few useful graphs for labs and vitals. (Admittedly, I think simple box plots, scatterplots, and line graphs should be used more.)
When I taught first-year statistics many years ago, I tried to impress on my students that a hypothesis test's failure to show a significant effect doesn't mean no effect is there. However, if a decision has to be made on the basis of the test, saying the effect isn't there is usually the more conservative option.
In drug safety, this argument doesn’t work. To make a claim that a drug is safe, we have to say that it does not cause more adverse events and does not cause unsafe laboratory or vitals findings. The more conservative statement is to say that a drug does cause an adverse event. However, this will essentially lead to the statement that the drug might be too unsafe to use. (How would you like to say that, while no incidence of torsade de pointes was observed, the clinical development program wasn’t robust enough to say that the drug doesn’t cause it?)
So the reality of the current situation, and the state of the art of drug safety analysis, is that we statisticians generate thousands of lines of results and then pass them off to one or more medical writers who try to make sense of it all. (And they usually do a good job, although the FDA has been known to require warnings on the label for events that occurred in only one animal in only one preclinical study, even when the event never occurred at all in the clinical studies.) We statisticians can do better, and we are starting to do better, but right now safety analysis is stuck in the 1960s.
Incidentally, this is why I don’t hold statements like the following in high regard:
I want to be as clear about this as I can. There is no controversy
surrounding Thimerosal. There is scientific evidence and there is
hysteria. The scientific evidence suggests that there is no link
between thimerosal in vaccines and autism or any bad outcome whatsoever!
By now you should know what my response is: if Dr. Flea is going to make such a strong assertion, I expect a tractor-trailer full of CDs full of compressed PDFs with studies disproving any link between thimerosal in vaccines and “autism or any bad outcome whatsoever.” If you want to know why the thimerosal-autism story will not die, here it is: it’s darn near impossible to collect enough scientific evidence to disprove the link, so anecdotal evidence is going to keep the questions rolling. That, I say, is a good thing for the most part, despite the recent (possibly valid, possibly invalid) allegations against two groups of researchers investigating the harmful effects of vaccines.
At any rate, this is why my recent interest has turned toward the analysis of drug safety. Because it’s a hard problem.