More autism news

Recently J&J’s Risperdal, a drug previously approved to treat schizophrenia, was approved to treat irritability symptoms associated with autism. A group has taken issue with the approval, claiming that the risks (including the risk of tardive dyskinesia) outweigh the benefits.

TV and autism — let the debate begin

Slate has an article on how TV might cause autism. It’s good food for thought, but it should be read along with the comments.

Who knows whether TV has a role in autism? It probably does, to a small degree, just as thimerosal probably does, to a small degree (and probably in a relatively small subpopulation that’s especially sensitive). Autism is a complex disease that expresses itself uniquely in each individual, making it very challenging to study. That it seems to have a large genetic component doesn’t really make it any easier.

What I take away from this article is a reminder that babies’ brains are developing so much in the first few years that you need to consider carefully anything you do that might influence that development.

Do you really want to have your toddler watching Barney?

Statistical critique: where do we draw the line? (An application to drug safety analysis)

I just read an interesting entry (and thread) on Andrew Gelman’s statistical blog that goes along the lines of some questions I have been pondering lately. Specifically, these two paragraphs hit me (this is from an email to Gelman):

The whole spirit of your blog would have led, in my view, to a rejection of the early papers arguing that smoking causes cancer (because, your eloquent blog might have written around 1953 or whenever it was exactly, smoking is endogenous). That worries me. It would have led to many extra people dying.

I can tell that you are a highly experienced researcher and intellectually brilliant chap but the slightly negative tone of your blog has a danger — if I may have the temerity to say so. Your younger readers are constantly getting the subtle message: A POTENTIAL METHODOLOGICAL FLAW IN A PAPER MEANS ITS CONCLUSIONS ARE WRONG. Such a sentence is, as I am sure you would say, quite wrong. And one could then talk about type one and two errors, and I am sure you do in class.

So, let’s consider the drug safety problem in light of this. I’ve noted before that strictly following the rules of statistics in the analysis of drug safety will in too many cases lead to an understatement of the risks of taking a drug. However, we have to say something about the safety of a drug, and, given the readiness of lawyers to file lawsuits over a drug’s adverse events, it had better be correct. We do have to do the best we can.

On the other hand, consider the studies of the autism-thimerosal connection. Those suggesting such a connection and those denying it all have their methodological flaws (which makes them all the more confusing), but some of them come to the right conclusion.

Ultimately, every study has its flaws. There is some factor not considered, something not controlled for, some confounding issue. Exactly when this invalidates a study, however, is not an easy question.


Lying with statistics: when statistics can’t tell the truth, or why I’m interested in the statistics of drug safety (and an application to the thimerosal-autism controversy)

Drug safety is hard to study. There are so many things that can go wrong with the human body that statistically analyzing every single possible thing is impossible. There are thousands of possible adverse events, a whole lot of laboratory measurements that have to be taken (so we can address, among other things, whether the drug is hurting the liver, heart, and kidneys), physical exam measurements, and vital signs (blood pressure, temperature, respiration).

Even if you don’t find a single case of an adverse event, you are still analyzing the thousands of possible ones. They all simply have an observed frequency of 0, which means that the upper 95% confidence limit on the rate of any one event is about 3/n (3 divided by the sample size of the treatment group, the so-called rule of three). To adjust for multiple comparisons (essentially, dividing the 5% by the number of comparisons, though fancier methods are available), we would have to compute an upper confidence limit at a level very close to 100% for each adverse event, and the resulting bounds would be too wide to be useful unless the study included millions of people.
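
A rough sketch of that arithmetic in Python follows. The treatment-group size and the number of adverse-event types are made-up numbers for illustration, not figures from any particular trial:

```python
from math import log

def upper_bound_zero_events(n, alpha=0.05):
    """Exact one-sided upper confidence bound on an event rate when
    0 events are observed among n subjects: solve (1 - p)^n = alpha."""
    return 1.0 - alpha ** (1.0 / n)

n = 300          # hypothetical treatment-group size
n_events = 1000  # hypothetical number of adverse-event types examined

# Unadjusted 95% bound, which matches the rule-of-three value 3/n
print(upper_bound_zero_events(n))                   # ~0.0099 (3/n = 0.0100)
print(-log(0.05) / n)                               # rule-of-three approximation

# Bonferroni-style adjustment: split the 5% across all comparisons,
# i.e., a confidence level of 99.995% for each individual event
print(upper_bound_zero_events(n, 0.05 / n_events))  # ~0.032
```

Even in this small example the adjusted bound on each event rate is more than three times the unadjusted one, and the gap widens as more endpoints are examined.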

Clearly, closely adhering to the rules of statistics isn’t going to get anyone very far in drug safety analysis until we develop new methodology.

Fortunately, new methodologies are being developed to address these issues, such as Bayesian and graphical methods. However, they are still in the cooker and probably will not be in widespread use for some time. For now, we are stuck with thousands of lines of AE counts, laboratory measure averages, vitals averages, and, if we’re lucky, a few useful graphs for labs and vitals. (Admittedly, I think simple box plots, scatterplots, and line graphs should be used more.)

When I taught first-year statistics many years ago, I tried to impress on my students that just because a hypothesis test fails to show a significant effect doesn’t mean no effect is there. However, if a decision has to be made on the basis of the test, saying the effect isn’t there is usually the more conservative option.

In drug safety, this argument doesn’t work. To claim that a drug is safe, we have to say that it does not cause more adverse events and does not cause unsafe laboratory or vital-sign findings. The more conservative statement is that the drug does cause an adverse event; however, that essentially amounts to saying the drug might be too unsafe to use. (How would you like to say that, while no incidence of torsade de pointes was observed, the clinical development program wasn’t robust enough to conclude that the drug doesn’t cause it?)
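
A small back-of-the-envelope calculation shows how weak “no cases observed” is as evidence of safety. The trial size and event rates below are hypothetical, chosen only to illustrate the point:

```python
# Chance that a trial of n subjects observes at least one case of an
# adverse event with a given true rate (all numbers hypothetical).
def prob_at_least_one(n, rate):
    return 1.0 - (1.0 - rate) ** n

n = 300  # hypothetical trial size
for rate in (1 / 100, 1 / 1_000, 1 / 10_000):
    print(f"rate {rate:.4f}: P(at least one case) = {prob_at_least_one(n, rate):.2f}")

# rate 0.0100: P(at least one case) = 0.95  (a 1% event is almost sure to show up)
# rate 0.0010: P(at least one case) = 0.26  (a 1-in-1,000 event is usually missed)
# rate 0.0001: P(at least one case) = 0.03  (a 1-in-10,000 event is almost never seen)
```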

So the reality of the current situation, and the state of the art of drug safety analysis, is that we statisticians generate thousands of lines of results and then pass them off to one or more medical writers who try to make sense of it all. (And they usually do a good job, although the FDA has been known to require warnings on the label for events that have occurred in only one animal in only one preclinical study, even when they don’t occur at all in the clinical studies.) We statisticians can do better, and we are starting to do better, but right now we are still in the 1960s when it comes to safety analysis.

Incidentally, this is why I don’t hold statements like the following in high regard:

I want to be as clear about this as I can. There is no controversy surrounding Thimerosal. There is scientific evidence and there is hysteria. The scientific evidence suggests that there is no link between thimerosal in vaccines and autism or any bad outcome whatsoever!

By now you should know what my response is: if Dr. Flea is going to make such a strong assertion, I expect a tractor-trailer full of CDs full of compressed PDFs of studies disproving any link between thimerosal in vaccines and “autism or any bad outcome whatsoever.” If you want to know why the thimerosal-autism story will not die, here it is: it’s darn near impossible to collect enough scientific evidence to disprove the link, so anecdotal evidence is going to keep the questions rolling. That, I say, is a good thing for the most part, despite the recent (possibly valid, possibly invalid) allegations against two groups of researchers investigating the harmful effects of vaccines.

At any rate, this is why my recent interest has turned toward the analysis of drug safety. Because it’s a hard problem.

While anti-vaccine researchers are being charged with ethical lapses, the mercury issue marches on

Dr. Mercola links to a public service announcement (PSA) about the presence of mercury in vaccines. The PSA contains one tidbit that I hadn’t heard before: that the EPA suggests the amount of mercury still present in vaccines is safe only if you weigh over 500 pounds. Is there a source for this information?

Update: So, I’ve dug a little deeper. Here’s what I’ve found:

  • The EPA’s webpage on human exposure to mercury is here.
  • I found the following statement in mercury’s tox profile:

EPA and FDA have set a limit of 2 parts inorganic mercury per billion (ppb) parts of water in drinking water. EPA is in the process of revising the Water Quality Criteria for mercury. EPA currently recommends that the level of inorganic mercury in rivers, lakes, and streams be no more than 144 parts mercury per trillion (ppt) parts of water to protect human health (1 ppt is a thousand times less than 1 part per billion, or ppb). EPA has determined that a daily exposure (for an adult of average weight) to inorganic mercury in drinking water at a level up to 2 ppb is not likely to cause any significant adverse health effects. FDA has set a maximum permissible level of 1 part of methylmercury in a million parts (ppm) of seafood products sold through interstate commerce (1 ppm is a thousand times more than 1 ppb). FDA may seize shipments of fish and shellfish containing more than 1 ppm of methylmercury, and may seize treated seed grain containing more than 1 ppm of mercury.

  • Johns Hopkins University (Bloomberg School of Public Health) maintains a vaccine safety site, which includes a table of thimerosal concentrations.
  • The FDA maintains its own thimerosal page. Of note, according to a 1999 review, “…, depending on the vaccine formulations used and the weight of the infant, some infants could have been exposed to cumulative levels of mercury during the first six months of life that exceeded EPA recommended guidelines for safe intake of methylmercury.” (The Hg-containing metabolite of thimerosal is ethylmercury.)

If you do the math on the 0.1 µg/kg/day reference dose set forth by the EPA and compare it to the tables on the FDA and Vaccinesafety.edu pages, at least for the pediatric vaccines, you don’t get 500 pounds. The worst case I found was the Fluvirin® formulation used before 9/28/2001 and the Fluzone® flu vaccine used before 12/23/2004, both of which contained 25 µg of mercury (as thimerosal) per 0.5 mL dose before newer versions were approved. This does work out to a little over 550 pounds (25 µg ÷ 0.1 µg/kg/day = 250 kg ≈ 551 lbs), and it may very well be the source of the 500-pound message; however, those vaccines have since been replaced with versions containing less than 1 µg per 0.5 mL dose, or with thimerosal-free versions.
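
Here is that back-of-the-envelope arithmetic in Python, using the figures quoted above (the EPA reference dose of 0.1 µg of mercury per kilogram of body weight per day, and the 25 µg of mercury per dose in the older flu vaccines); the inputs come from the tables linked above and are not independently verified:

```python
# Body weight at which a single 25 µg mercury dose stays within the
# EPA reference dose of 0.1 µg/kg/day (for that one day).
RFD_UG_PER_KG_PER_DAY = 0.1   # EPA reference dose for methylmercury
MERCURY_PER_DOSE_UG = 25.0    # older Fluvirin/Fluzone formulations
KG_PER_LB = 0.45359237

weight_kg = MERCURY_PER_DOSE_UG / RFD_UG_PER_KG_PER_DAY  # 250 kg
weight_lb = weight_kg / KG_PER_LB                        # ~551 lbs

print(f"{weight_kg:.0f} kg = {weight_lb:.0f} lbs")       # 250 kg = 551 lbs
```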

Mark and David Geier’s IRB: If there is a story here, I’m disappointed

Activists asserting a connection between vaccines (or components of vaccines) and autism have had a hard time of late. First, Dr. Wakefield, proponent of the theory connecting the MMR(Measles, Mumps, Rubella) vaccine to both autism and irritable bowels, was formally charged with professional misconduct. Now, Kathleen Seidel of Neurodiversity has dug up some disturbing information on Dr. Mark and David Geier. It has to do with the creation of an IRB(Institutional Review Board) that oversaw the protocol that resulted in the recent manuscript A Clinical and Laboratory Evaluation of Methionine Cycle-Transsulfuration and Androgen Pathway Markers in Children with Autistic Disorders. This paper dovetails with their Lupron™ (i.e., chemical castration) strategy for treating children with autism.

Repeat readers of this blog know that I am agnostic on the thimerosal-autism connection hypothesis. I even have my doubts about the safety of the MMR(Measles, Mumps, Rubella) vaccine. The complexity of the mind is such that we simply don’t understand how these things work, and even running tests for mercury in the blood isn’t easy. And the vigor with which people on both sides of this controversy argue seems to leave little room for real understanding.


Author of MMR and autism link paper charged with misconduct — a tale of two opinions

By now it’s no secret that Dr. Andrew Wakefield, a major impetus behind the original 1998 paper in The Lancet (now retracted) that reported a link between the MMR(Measles, Mumps, Rubella) vaccine and autism, has been charged with misconduct related to the research.

Everybody’s favorite skeptic writes about the topic here (and a more obnoxious take here), and Dr. Joseph Mercola’s equally obnoxious (but of opposite opinion) take is here.

This tells me that opinions are nearly independent of the facts. But that’s beside the point. Here are a couple of questions I have:

  • What does this mean for the MMR(Measles, Mumps, Rubella) vaccine and autism link? My answer is that the question exists independently of Dr. Wakefield’s conduct: either the link is there, or it isn’t. Truth be told, I’d be done with the hypothesis entirely except that some support for it has come out recently. I haven’t been able to evaluate that recent support yet.
  • If the link exists, does it vindicate Dr. Wakefield? If the charges brought against him are true, then Dr. Wakefield’s conduct was still unprofessional regardless of whether the link turns out to be real. The charges state that Dr. Wakefield stood to profit directly from the results and that he conducted the research improperly. Tobacco science is tobacco science, whether the results are correct or not. If Dr. Wakefield wanted to further his case and retain credibility, he should have used proper research techniques, obtained proper consent from ethics boards, and obtained proper, objective funding for the work. (I’ll admit the possibility that the charges against Dr. Wakefield are not true, but time will bear that out.)

I had a similar complaint (albeit not one serious enough to call misconduct) against David and Dr. Mark Geier: their methods simply did not match their research hypothesis. While I’m open-minded enough about the hypotheses (MMR and thimerosal) to ask my doctor tough questions on vaccination day and to consider alternative vaccination schedules, when you are doing science, you should use the methods of science properly. Only then do you get the credibility of having used the scientific method to prove your point.