Statistical critique: where do we draw the line? (An application to drug safety analysis)

I just read an interesting entry (and thread) on Andrew Gelman’s statistical blog that runs along the lines of some questions I have been pondering lately. Specifically, these two paragraphs hit me (this is from an email to Gelman):

The whole spirit of your blog would have led, in my view, to a rejection of the early papers arguing that smoking causes cancer (because, your eloquent blog might have written around 1953 or whenever it was exactly, smoking is endogenous). That worries me. It would have led to many extra people dying.

I can tell that you are a highly experienced researcher and intellectually brilliant chap but the slightly negative tone of your blog has a danger — if I may have the temerity to say so. Your younger readers are constantly getting the subtle message: A POTENTIAL METHODOLOGICAL FLAW IN A PAPER MEANS ITS CONCLUSIONS ARE WRONG. Such a sentence is, as I am sure you would say, quite wrong. And one could then talk about type one and two errors, and I am sure you do in class.

So, let’s consider the drug safety problem in light of this. I’ve noted before that strictly following the rules of statistics in the analysis of drug safety will in too many cases lead to an understatement of the risks of taking a drug. However, we have to say something with regard to the safety of a drug, and, given the readiness of the lawyers to file lawsuits over a drug’s adverse events, it had better be correct. We do have to do the best we can.

On the other hand, let’s look at the studies of the autism-thimerosal connection. The studies both suggesting and denying such a connection all have their methodological flaws (which makes them all the more confusing), but some of them come to the right conclusion.

Ultimately, every study has its flaws. There is some factor not considered, something not controlled for, some confounding issue. Exactly when this invalidates the study, however, is not an easy issue.

Lying with statistics: when statistics can’t tell the truth, or why I’m interested in the statistics of drug safety (and an application to the thimerosal-autism controversy)

Drug safety is hard to study. There are so many things that can go wrong with the human body, and to statistically analyze every single possible thing that can go wrong is impossible. There are thousands of possible adverse events, a whole lot of laboratory measurements that have to be taken (so we can address, among other things, whether the drug is hurting the liver, heart, and kidneys), physical exam measurements, and vitals (blood pressure, temperature, and respiration).

Even if you don’t find an adverse event, you are still analyzing the thousands of possible ones. They all simply have a frequency of 0, which means that the upper 95% confidence limit on the rate of any single event is about 3/n (3 divided by the sample size of the treatment group, the so-called rule of three). To adjust for multiple comparisons (essentially, dividing the 5% by the number of comparisons, though fancier methods are available), we’d have to compute each adverse event’s upper confidence limit at a level very close to 100%, and those limits stay uselessly wide unless you include millions of people in the study!
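
To make the arithmetic concrete, here is a minimal Python sketch (the trial size n and number of comparisons m are made-up numbers, not from any particular study) of the exact binomial upper limit for zero observed events, with and without a Bonferroni adjustment:

```python
def upper_limit_zero_events(n, alpha=0.05):
    """Exact one-sided upper (1 - alpha) confidence limit on an event rate
    when 0 events are seen in n subjects: 1 - alpha**(1/n). For alpha = 0.05
    this is roughly 3/n, because -ln(0.05) is about 3 (the rule of three)."""
    return 1.0 - alpha ** (1.0 / n)

n = 3000   # hypothetical treatment-group size
m = 2000   # hypothetical number of adverse-event comparisons

print(upper_limit_zero_events(n))            # ~0.0010, the rule of three: 3/n
print(upper_limit_zero_events(n, 0.05 / m))  # ~0.0035, Bonferroni-adjusted
print(1 - 0.05 / m)                          # 0.999975: the "almost 100%" level
```

Even with zero events everywhere, the adjusted limits come out several times wider, and the per-event confidence level needed is the "almost 100%" figure above.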

Clearly, closely adhering to the rules of statistics isn’t going to get anyone very far in drug safety analysis until we develop new methodology.

Fortunately, new methodologies are being developed to address these issues, such as Bayesian and graphical methods. However, they are still in the cooker and probably will not be in widespread use for some time. For now, we are stuck with thousands of lines of AE counts, laboratory measure averages, vitals averages, and, if we’re lucky, a few useful graphs for labs and vitals. (Admittedly, I think simple box plots, scatterplots, and line graphs should be used more.)
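
As a sketch of the kind of simple graph I mean, here is a side-by-side box plot of a liver enzyme by treatment arm, in Python with matplotlib and numpy; the ALT values are fabricated purely for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

# Fabricated ALT lab values for two arms, purely for illustration.
rng = np.random.default_rng(0)
placebo = rng.lognormal(mean=3.0, sigma=0.30, size=200)  # hypothetical ALT (U/L)
drug = rng.lognormal(mean=3.1, sigma=0.35, size=200)

plt.boxplot([placebo, drug])          # one box per arm, positions 1 and 2
plt.xticks([1, 2], ["Placebo", "Drug"])
plt.ylabel("ALT (U/L)")
plt.title("Liver enzyme by treatment arm")
plt.show()
```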

When I taught first-year statistics many years ago, I tried to impress on my students that a hypothesis test failing to show a significant effect doesn’t mean no effect is there. However, it’s usually the more conservative option to say the effect isn’t there, if a decision has to be made on the basis of the test.
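
A quick simulation, with made-up numbers rather than any particular trial, shows how easy the "no effect" conclusion is to reach: even when a drug truly doubles a rare adverse-event rate, a modest trial usually fails to reject.

```python
import random

random.seed(1)
n = 500                          # subjects per arm (hypothetical)
p_placebo, p_drug = 0.01, 0.02   # the drug truly doubles the event rate
trials, reject = 2000, 0
for _ in range(trials):
    x = sum(random.random() < p_placebo for _ in range(n))  # placebo events
    y = sum(random.random() < p_drug for _ in range(n))     # drug events
    p_pool = (x + y) / (2 * n)
    se = (2 * p_pool * (1 - p_pool) / n) ** 0.5
    z = ((y - x) / n) / se if se > 0 else 0.0
    reject += z > 1.645          # one-sided 5% two-proportion z-test
print(reject / trials)           # roughly 0.35-0.40: most trials "find" no effect
```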

In drug safety, this argument doesn’t work. To make a claim that a drug is safe, we have to say that it does not cause more adverse events and does not cause unsafe laboratory or vitals findings. The more conservative statement is to say that a drug does cause an adverse event. However, this will essentially lead to the statement that the drug might be too unsafe to use. (How would you like to say that, while no incidence of torsade de pointes was observed, the clinical development program wasn’t robust enough to say that the drug doesn’t cause it?)

So the reality of the current situation, and the state of the art of drug safety analysis, is that we statisticians generate thousands of lines of results and then pass them off to one or more medical writers who try to make sense of it all. (And they usually do a good job, although the FDA has been known to require warnings on the label for events that occurred in only one animal in only one preclinical study, even when the event doesn’t occur at all in the clinical studies.) We statisticians can do better, and we are starting to do better, but right now safety analysis is stuck in the 1960s.

Incidentally, this is why I don’t hold statements like the following in high regard:

I want to be as clear about this as I can. There is no controversy surrounding Thimerosal. There is scientific evidence and there is hysteria. The scientific evidence suggests that there is no link between thimerosal in vaccines and autism or any bad outcome whatsoever!

By now you should know what my response is: if Dr. Flea is going to make such a strong assertion, I expect a tractor-trailer full of CDs of compressed PDFs of studies disproving any link between thimerosal in vaccines and “autism or any bad outcome whatsoever.” If you want to know why the thimerosal-autism story will not die, here it is: it’s darn near impossible to collect enough scientific evidence to disprove the link, so anecdotal evidence is going to keep the questions rolling. I say that’s a good thing for the most part, despite the recent (possibly valid, possibly invalid) allegations against two groups of researchers investigating the harmful effects of vaccines.

At any rate, this is why my recent interest has turned toward the analysis of drug safety. Because it’s a hard problem.

While anti-vaccine researchers are being charged with ethical lapses, the mercury issue marches on

Dr. Mercola links to a public service announcement (PSA) about the presence of mercury in vaccines. The PSA contains one tidbit that I haven’t heard before: that the EPA suggests the amount of mercury still present in vaccines is safe only if you weigh over 500 pounds. Is there a source for this information?

Update: So, I’ve dug a little deeper. Here’s what I’ve found:

  • The EPA’s webpage on human exposure to mercury is here.
  • I found the following statement in mercury’s tox profile:

EPA and FDA have set a limit of 2 parts inorganic mercury per billion (ppb) parts of water in drinking water. EPA is in the process of revising the Water Quality Criteria for mercury. EPA currently recommends that the level of inorganic mercury in rivers, lakes, and streams be no more than 144 parts mercury per trillion (ppt) parts of water to protect human health (1 ppt is a thousand times less than 1 part per billion, or ppb). EPA has determined that a daily exposure (for an adult of average weight) to inorganic mercury in drinking water at a level up to 2 ppb is not likely to cause any significant adverse health effects. FDA has set a maximum permissible level of 1 part of methylmercury in a million parts (ppm) of seafood products sold through interstate commerce (1 ppm is a thousand times more than 1 ppb). FDA may seize shipments of fish and shellfish containing more than 1 ppm of methylmercury, and may seize treated seed grain containing more than 1 ppm of mercury.

  • Johns Hopkins University’s Bloomberg School of Public Health maintains a vaccine safety site, which includes a table of thimerosal concentrations.
  • The FDA maintains its own thimerosal page. Of note, according to a 1999 review, “…, depending on the vaccine formulations used and the weight of the infant, some infants could have been exposed to cumulative levels of mercury during the first six months of life that exceeded EPA recommended guidelines for safe intake of methylmercury.” (The Hg-containing metabolite of thimerosal is ethylmercury.)

If you do the math on the 0.1 µg/kg/day reference dose set forth by the EPA and compare it to the tables on the FDA and Vaccinesafety.edu pages, at least for the pediatric vaccines, you don’t get 500 pounds. The worst cases I found were the pre-9/28/2001 Fluvirin® and the pre-12/23/2004 Fluzone® flu vaccines, both of which contained 25 µg of mercury (from thimerosal) in a 0.5 mL dose before newer versions were approved. That works out to a little over 550 pounds (25 µg ÷ 0.1 µg/kg = 250 kg ≈ 551 lb), which may very well be the source of the 500-pound message; however, those vaccines have since been replaced with versions containing less than 1 µg per 0.5 mL dose, or with thimerosal-free versions.
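
For the record, here is that arithmetic spelled out as a Python sketch (it follows the PSA’s apparent logic of comparing a single dose against one day’s reference dose):

```python
# Body weight at which one dose equals one day's EPA reference dose (RfD).
dose_ug = 25.0        # µg of mercury in one pre-reformulation 0.5 mL flu dose
rfd_ug_per_kg = 0.1   # EPA reference dose, µg/kg/day

weight_kg = dose_ug / rfd_ug_per_kg   # 250 kg
weight_lb = weight_kg * 2.20462       # about 551 lb
print(weight_kg, weight_lb)
```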

Mark and David Geier’s IRB: If there is a story here, I’m disappointed

Activists asserting a connection between vaccines (or components of vaccines) and autism have had a hard time of late. First, Dr. Wakefield, proponent of the theory connecting the MMR(Measles, Mumps, Rubella) vaccine to both autism and irritable bowels, was formally charged with professional misconduct. Now, Kathleen Seidel of Neurodiversity has dug up some disturbing information on Dr. Mark and David Geier. It has to do with the creation of an IRB(Institutional Review Board) that oversaw the protocol that resulted in the recent manuscript A Clinical and Laboratory Evaluation of Methionine Cycle-Transsulfuration and Androgen Pathway Markers in Children with Autistic Disorders. This paper dovetails with their Lupron™ (i.e., chemical castration) strategy for treating children with autism.

Repeat readers of this blog know that I am agnostic on the thimerosal-autism connection hypothesis. I even have my doubts about the safety of the MMR(Measles, Mumps, Rubella) vaccine. The complexity of the mind is such that we simply don’t understand how these things work, and even running tests for mercury in the blood isn’t easy. And the vigor with which people on both sides of this controversy argue seems to leave little room for real understanding.

More on Ayurveda

So, not long after I read Orac’s hit piece on alternative medicine, about a very old JAMA article on mercury in Ayurvedic herbs manufactured in India and sold in the US, I read this entry on how the government of India is testing Ayurveda and trying to see how it fits with the world of modern medicine. Part of this process, of course, is establishing Good Manufacturing Practices designed to ensure that what you manufacture (say, herbs suitable for Ayurveda) is what you say you manufacture (as opposed to, say, mercury-laden herbs or bone powder). But there’s more to this story.

Statistical commentary on the Geiers’ latest paper, Part IV

In “Part I”:http://www.randomjohn.info/wordpress/2006/03/03/statistical-commentary-on-the-geiers-latest-paper-part-i/, “Part II”:http://www.randomjohn.info/wordpress/2006/03/03/statistical-commentary-on-the-geiers-latest-paper-part-ii/, and “Part III”:http://www.randomjohn.info/wordpress/2006/03/03/statistical-commentary-on-the-geiers-latest-paper-part-iii/ of this series, I discussed the statistical methodologies in the recent paper by Mark and David Geier, who extracted data from the VAERS(Vaccine Adverse Event Reporting System) and the CDDS(California Department of Developmental Services) and tried to show that efforts to remove the compound thimerosal from vaccines have resulted in a decrease in new cases of autism (and other neurodevelopmental disorders). In Part I, I concluded that their statistical methodology was invalid and unable to support their conclusions. In Part II, I suggested that they could make their point more soundly by employing better, time-series-related methodologies. In Part III, I briefly examined their CDDS(California Department of Developmental Services) data and concluded that the methodology was invalid and that correct methodology did not back up their claims. In this final part, I examine data quality issues and wrap the series up by examining a few other criticisms of their work.
