Zinc – “Clinically proven”

Whether stated in an ad for conventional or alternative medicine, I typically take claims of “clinically proven” with a grain of salt. That’s because the statistical methodology used to “prove” these claims basically says “if we assume these claims aren’t true, then the results we have seen in studies would have been too bizarre.” This is even though advertising language is regulated by the DSHEA(Dietary Supplement Health and Education Act), the Food, Drug and Cosmetic Act, the Code of Federal Regulations Title 21 Part 101, and so forth. So, when I saw the words “clinically proven to cut your cold nearly in half” on the Cold-Eeze® product manufactured by the “Quigley Corporation”:http://www.quigleyco.com, I naturally got a bit curious.

Cold-Eeze is the homeopathic remedy Zincum Gluconicum along with inactive ingredients. The form I saw was a lozenge, and the dilution was 2X (i.e. a factor of 10^2^=100). The back of the box makes the following claim:

bq. Two clinical studies have shown: Cold-Eeze proprietary formula reduces the duration and severity of colds by 42% or 3 to 4 days.

bq. The independent double blind studies were conducted at the Cleveland Clinic and Dartmouth College and published in peer-reviewed journals.

The two articles are as follows:
* Mossad, _et al._ ??Annals of Internal Medicine??. *125*:2, July 15, 1996. (“PubMed”:http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=pubmed&cmd=Retrieve&dopt=AbstractPlus&list_uids=8678384&query_hl=1&itool=pubmed_docsum | “Full Text – free”:http://www.annals.org/cgi/content/full/125/2/81)
* Godfrey, _et al._ ??Journal of International Medical Research??. *20*:3, June 1992. (“PubMed”:http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=pubmed&cmd=Retrieve&dopt=AbstractPlus&list_uids=1397668&query_hl=3&itool=pubmed_docsum)

I don’t know much about the ??JIMR(Journal of International Medical Research)??, but the ??AIM(Annals of Internal Medicine)?? is certainly one of the top-tier publications. (This means little in my book, but hey, we’re talking about one of the topics on which, as I understand it, James Randi has staked his $1 million prize.)

So, the Mossad article in ??AIM(Annals of Internal Medicine)?? does show a well-designed and well-controlled study of zincum gluconicum in which the duration of symptoms was reduced from a median of 7.6 days in the placebo group to a median of 4.4 days in the zincum gluconicum group. The full text is available, and the study makes it look like this formulation does reduce the duration of symptoms. Do read the discussion section of the article to see the limitations of the study. No mention was made of the method of preparation of the active ingredient (i.e. dilution), and the discussion of mechanism was phrased in terms of clinical pharmacology rather than homeopathy. This may reflect a bias in the journal or the authors, or Quigley may have produced non-homeopathic zinc lozenges for the study. It’s also worth noting that the company states on the box that the active ingredient is a homeopathic cold remedy, but I didn’t see a mention of the Homeopathic Pharmacopoeia of the US.

The Godfrey article is not available in full text online, but from the abstract it looks like zinc took 1.2 or 4.9 days off the duration of symptoms, depending on when therapy was started. The formulation and treatment schedule were not discussed in the abstract.

So, it does look like the Cold-Eeze product performed pretty well in those two studies, if in fact it was a similar formulation. However, what about other studies? Some seem to be more negative on zinc:
* Eby, GA and Halcomb, WW. ??Altern Ther Health Med.?? 2006 Jan-Feb;12(1):34-8. “We found no reason to recommend intranasal zinc gluconate or zinc orotate lozenges in treating common colds.” They measured the number of patients free of symptoms after 7 days: 10/16 (63%) in the zinc group compared with 9/17 (53%) in the placebo group. I actually find their conclusions bizarre in light of their sample size and measure. If they had wanted to detect a 20% difference between placebo and zinc, the study would have had only about 20% power (see the sketch after this list). It’s pretty awful to run an underpowered study and then claim no difference because you can’t reject the null hypothesis. Heck, if we were allowed to do that, I could make it look like penicillin was ineffective. So I’d take the numbers from that study as a bit of information, but ignore the conclusions.
* Wintergerst, ES, _et al._ ??Ann Nutr Metab.?? 2006;50(2):85-94. Epub 2005 Dec 21. This is a review of zinc and vitamin C, considered basically as nutrients. The article does note that adequate amounts of zinc and vitamin C seem to shorten the duration of cold symptoms.
* Arroll, B. ??Respir Med.?? 2005 Dec;99(12):1477-84. This Cochrane database review notes that zinc does seem to have some efficacy, and might be useful.
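For the curious, here’s a minimal sketch of where that 20% power figure comes from, using Python’s statsmodels. The arm sizes and placebo rate are taken from the Eby and Halcomb numbers above; the package choice and the assumption of a two-sided 0.05 test are mine.

bc.. # Back-of-the-envelope check of the power claim above: with 16-17
# patients per arm, what is the power to detect a 20-percentage-point
# difference in the proportion symptom-free at 7 days?
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p_placebo = 9 / 17         # 53% symptom-free in the placebo arm
p_zinc = p_placebo + 0.20  # the difference we would like to detect

# Cohen's h, the standardized effect size for comparing two proportions
h = proportion_effectsize(p_zinc, p_placebo)

power = NormalIndPower().power(effect_size=h, nobs1=16, alpha=0.05,
                               ratio=17 / 16, alternative="two-sided")
print(f"Power: {power:.0%}")  # roughly 20%

p. With power like that, failing to reject the null hypothesis tells you almost nothing: the study was never in a position to see the difference in the first place.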

There are many others. A “PubMed search”:http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?CMD=search&DB=pubmed for “zinc treatment cold” (no quotes) returns 192 articles. Sorting through these studies and accounting for differences in formulation, dosing schedule, actual dose, delivery, and other factors that greatly affect drug efficacy is dizzying and daunting at best. It’s also important to remember that zinc lozenges do carry the potential for adverse events, such as bad taste and nausea.

Also not discussed in these articles is the issue of homeopathic dilution of zinc compounds. Evidence there is going to be hard to find, both because of journal bias and because I still haven’t seen evidence that zinc is in the HPUS(Homeopathic Pharmacopoeia of the United States).

Finally, the long-term consequences of suppressing common cold symptoms have not been discussed. Zinc has no effect on virus shedding, so presumably it doesn’t help the body dump the cause of the cold any faster. Whether reducing the severity of symptoms is, in the long run, useful has not been answered.

For me, the jury is out on zinc as a cold remedy. It seems to do something. Exactly what, I want to understand a little bit better.


Other trends that affect clinical research

Lilly is “outsourcing”:http://pharmagossip.blogspot.com/2006/11/lilly-say-ta-ta-to-jobs-in-west.html jobs to India. This is nothing new; pharma companies often outsource their clinical trial operations and analysis. What is interesting about this move (besides the fact that it comes on the heels of a similar announcement by Novartis) is one of the reasons given:

bq. “The goal of our relationship with TCS has several dimensions beyond reducing cost and risk, including gaining access to a global talent pool, increasing flexibility and scalability of our resources, and maintaining a global workflow that is operational 24 hours a day,” said Dr. Steve Ruberg, Group Director for Global Medical Information Sciences at Eli Lilly, in a statement.

Not sure what Ruberg means by global workflow. It’s not like you can ask for a statistical analysis, have it performed overnight (the way some Indian companies now read X-rays), and have it sitting in your inbox when you arrive in the morning. At least, not if you want some degree of confidence that your data are correct and your analysis doesn’t have any bugs.

However, access to global markets is a powerful motivator. Going with an Indian CRO gives Lilly immediate regulatory expertise in the region. Look for India and China to be hotspots for clinical research and development in the coming years.


The evil twin brother of Number Needed to Treat

Some weeks ago I posted an entry on the NNT(Number Needed to Treat), which is essentially the expected (average) number of patients who would have to be given a treatment (surgery, pharmaceutical, or device) at the labeled dose/frequency for one of them to receive the labeled benefit.

When you are talking about adverse event risk, the corresponding number is the NNH(Number Needed to Harm): the expected (average) number of patients who would have to take a treatment at the labeled dose/frequency for one of them to experience the noted adverse effect. You want the NNH to be large (unlike the NNT, where smaller is better).
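Both quantities fall out of simple arithmetic on event rates: NNT is the reciprocal of the absolute risk reduction, NNH the reciprocal of the absolute risk increase. A minimal sketch in Python (all the rates below are invented for illustration):

bc.. # NNT = 1 / absolute risk reduction; NNH = 1 / absolute risk increase.
# Every rate below is hypothetical, purely for illustration.

def nnt(control_rate: float, treated_rate: float) -> float:
    """Number needed to treat: 1 / (absolute risk reduction)."""
    return 1.0 / (control_rate - treated_rate)

def nnh(treated_ae_rate: float, control_ae_rate: float) -> float:
    """Number needed to harm: 1 / (absolute risk increase)."""
    return 1.0 / (treated_ae_rate - control_ae_rate)

# Bad outcome in 10% of untreated patients vs. 6% of treated patients:
print(f"NNT = {nnt(0.10, 0.06):.0f}")  # 25: treat 25 to help one
# Adverse event in 4% of treated patients vs. 1% of untreated patients:
print(f"NNH = {nnh(0.04, 0.01):.0f}")  # 33: one extra harm per ~33 treated

p. A treatment looks attractive when its NNT is much smaller than its NNH: here you help one extra patient per 25 treated while harming one extra patient per 33 or so.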

See “here”:http://www.jr2.ox.ac.uk/bandolier/booth/glossary/NNH.html for more info.

(h/t “Pharmagossip”:http://pharmagossip.blogspot.com)


The New Statistics?

A group has “claimed to find a better way”:http://www.slaterfund.com/slaterfund/content_template.asp?file=newsdetail.asp&newsID=17 of doing statistics in clinical trials. It is based on a “pure likelihood”:http://64.233.161.104/search?q=cache:gmTfSWoOwSQJ:igitur-archive.library.uu.nl/dissertations/2004-0301-095707/c7.pdf+%22clinical+trials%22+%22pure+likelihood%22&hl=en&gl=us&ct=clnk&cd=4 approach, which, at first pass, seems to weigh the strength of the evidence itself rather than just whether a p-value falls below 0.05. I’ll have to investigate this a little more closely, but I’m a little skeptical at this point. I’ll have to see how they address such thorny issues as multiple comparisons.
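As I read it (and this is only my first-pass understanding, not necessarily the group’s actual method), the evidence measure in a pure likelihood approach is a likelihood ratio: how much more probable the observed data are under one hypothesis than under another. A toy sketch with binomial data:

bc.. # Toy "pure likelihood" evidence measure: the likelihood ratio for two
# hypothesized response rates.  The data and hypotheses are invented.
from scipy.stats import binom

successes, n = 18, 25  # hypothetical trial: 18 responders out of 25

lik_h1 = binom.pmf(successes, n, 0.7)  # likelihood under p = 0.7
lik_h0 = binom.pmf(successes, n, 0.5)  # likelihood under p = 0.5

print(f"Likelihood ratio (H1 vs H0): {lik_h1 / lik_h0:.1f}")  # about 12

p. In the likelihood school, ratios of 8 and 32 are the benchmarks usually quoted for “fairly strong” and “strong” evidence; note that no 0.05 threshold appears anywhere in the calculation.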

Aside from that, we’ll have to see whether someone can run a business model based on this version of statistics. Bayesian statistics has been around longer, and I don’t see too many consultants, save the “Berrys”:http://berryconsultants.com/, able to base a business model on it. (At least, I haven’t seen many other Bayesian clinical trial consultants with a big standing.) This will change, of course, and the new methodology might catch on as well. I think it’ll take a long time, though.

The price of truth with statistics: Dr. Gottlieb’s statement at the conference on adaptive design

A while back I tried to express the opinion that, as good as statistics is at describing populations, it’s bad at predicting individual outcomes. Looks like the deputy director of medical policy at the FDA agrees:

bq. Another problem with the empirical approach is that it yields statistical information about how large populations with the same or similar conditions are likely to respond to a treatment. But doctors don’t treat populations, they treat individual patients. Doctors need information about the characteristics that predict which patients are more likely to respond well, or suffer certain side effects. The empirical approach doesn’t tell doctors how to personalize their care to their individual patients.

Is there a way out? Maybe:

bq. There are potentially better alternatives, by enabling more trials to be adapted based on knowledge about gene and protein markers or patient characteristics that can help predict whether patients will respond well to a new medicine.

I’m happy to try something other than the brute-force large trial for a treatment. I think that an approach that looks at individual differences and embraces them, rather than averaging them out as nuisance effects, is not only going to further drug development by decreasing risk and development time, but will also untangle the scientific knots that come with studying alternative therapies.

Of course, Dr. Gottlieb was specifically discussing adaptive clinical trials, where some characteristic of the trial may be altered (in a controlled way) while patients are still being recruited and studied. The statistics of this kind of design have been in development ever since the U.S. government was sweating over what to do when the field’s first primary developer, Abraham Wald, could not get a security clearance due to his immigration status. (Sequential analysis was first developed to quality-control bombs, so it was apparently classified for some time.) In the last decade, the field matured to the point where clinical trials could be run this way, but now there are the logistical issues associated with such a trial. I find it wonderful that such technology has been embraced by the FDA, and find their efforts encouraging:

bq. To encourage the use of these newer trial methodologies, FDA leadership, including Drs. Doug Throckmorton, Bob Temple, Shirley Murphy, ShaAvhree Buckman, Bob O’Neill, Bob Powell, and many others inside FDA’s drug center, are working on a series of guidance documents – up to five in all – that will help articulate the pathway for developing adaptive approaches to clinical trials.

bq. The guidance documents we are developing include one to help guide sponsors on how to look at multiple endpoints in the same trial. This guidance document is currently being drafted and we hope to be able to discuss that work as soon as January. Another guidance document that we are also working on now deals with enrichment designs, designs that can help increase the power of a trial to detect a treatment effect, potentially with fewer subjects.

At first glance, this may not seem like a big deal, but it is. It’s a sign that the FDA understands the technical drawbacks of the way we do applied research and is chomping at the bit to do something about it. With the mounting list of PR headaches and potentially fatal disasters that have come recently in the drug industry, we need this kind of regulatory leadership. Thanks, FDA!
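As a footnote to the Wald story: his sequential probability ratio test is simple enough to sketch in a few lines. Here is a toy version for binary outcomes; the response rates, error targets, and simulated data are all my own choices, not anything from an FDA guidance.

bc.. # Wald's SPRT: update a log-likelihood ratio after each patient and stop
# as soon as it crosses an upper or lower boundary.
import math
import random

p0, p1 = 0.3, 0.5         # response rates under H0 and H1 (toy values)
alpha, beta = 0.05, 0.20  # targeted type I and type II error rates

upper = math.log((1 - beta) / alpha)  # cross above this: accept H1
lower = math.log(beta / (1 - alpha))  # cross below this: accept H0

random.seed(1)
llr, n = 0.0, 0
while lower < llr < upper:
    n += 1
    response = random.random() < 0.5  # simulate a patient (true rate 50%)
    llr += math.log(p1 / p0) if response else math.log((1 - p1) / (1 - p0))

print(f"Stopped after {n} patients:",
      "accept H1" if llr >= upper else "accept H0")

p. The appeal for clinical trials is plain: instead of committing to one large fixed sample size up front, the trial stops as soon as the accumulating evidence is decisive, which is exactly the spirit behind the adaptive designs discussed above.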

When statistics can’t tell the truth: a follow-up application to the Vioxx controversy

I’ve avoided posting on the Vioxx controversy for a long time, but I would be remiss if I discussed drug safety without discussing the hot-button issue of the day that has brought drug safety to the forefront.

My earlier thesis is:

bq. Clearly, closely adhering to the rules of statistics isn’t going to get anyone very far in drug safety analysis until we develop new methodology.

The Vioxx controversy (accusations of the “head in the sand” approach aside) highlights this issue.

!https://i1.wp.com/photos1.blogger.com/blogger/4561/1608/1600/nytimes.gif(Modified Kaplan-Meier graphs of heart attack risk from two Vioxx studies, via the New York Times)!

These graphs appeared in a recent New York Times piece<footnote>Yeah, Friday, I’m ripping you off. So, to assuage my heavy burden of guilt, I’ll tell people to visit regularly. Not that you need any traffic from this humble little blog.</footnote>. They are modified Kaplan-Meier graphs, which are designed to show the risk of an event (here, a heart attack) over time and to compare the difference in risk between two groups. Notice that the placebo group almost categorically has a smaller risk over time than the Vioxx group. However, note also the error bars, which indicate 95% confidence intervals in risk at a single point in time. The error bars do not separate until 36 months in either graph.

However, reading that as “no difference until 36 months” is not a conservative statement. That we don’t have the resolution to discern differences in safety doesn’t mean we can conclude they aren’t there. In fact, in this case, we might be better served to conclude that the risk gets significantly higher at 18 months in the first graph, and at three or four months (with a huge jump at 18 months) in the second. This may not be borne out by the strict statistical evidence as shown in the graphs, but, if we are to err on the side of caution, we can’t wait for the point where the error bars separate.

There is even more to this story from the statistical perspective. These two graphs are from two different studies, and two grossly different conclusions were drawn (i.e. from the first, that the risk from Vioxx increases after 18 months, and from the second, that the risk from Vioxx increases after 4 months). Repeatability is an important principle of science, but, as it turns out, studies are notoriously hard to repeat. This is why the FDA usually requires two separate confirmatory trials for a marketing application to be approved. In this case, we had two relatively large studies with wildly different conclusions about an important safety characteristic of a major blockbuster drug. Clearly, there’s more to statistical analysis than getting a p-value.
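To make the Kaplan-Meier machinery concrete, here’s a minimal sketch using the lifelines package. The data are simulated and have nothing to do with the actual Vioxx trials; the arm labels and event rates are invented.

bc.. # Kaplan-Meier curves with pointwise 95% confidence intervals, in the
# spirit of the Times graphs (which showed cumulative risk, i.e. one
# minus the survival curve).  All data below are simulated.
import numpy as np
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(0)
n = 1000
ax = None
for label, mean_months in [("placebo-like arm", 600), ("drug-like arm", 350)]:
    t = np.minimum(rng.exponential(scale=mean_months, size=n), 36.0)
    kmf = KaplanMeierFitter()
    kmf.fit(t, event_observed=(t < 36.0), label=label)  # censor at 36 months
    ax = kmf.plot_survival_function(ci_show=True, ax=ax)

ax.set_xlabel("Months on study")
ax.set_ylabel("Event-free proportion")
plt.show()

p. Note that those confidence bands are pointwise, which is precisely the subtlety at issue here: eyeballing where two pointwise bands separate is a crude way to decide when a risk difference becomes real.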

A serious problem here is where the rubber meets the road in the courtroom. We biostatisticians understand the problems with trying to prove a drug safe (as do the few who will listen to us when we explain multiple comparisons), and anyone who has spent any amount of time analyzing pharmaceuticals can tell you that no drug is safe. And yet our standards are understandably high. When the FDA reviews a drug, they look at risk and benefit. The public, and indeed juries and judges, are trying to do the work of FDA statisticians, medical reviewers, toxicologists, and regulatory and legal experts, and they have to weigh a considerable amount of information that’s really hard to digest (or, if the plaintiff’s lawyers are good, just weigh the “unsafe and they knew it” info higher than the defense’s).
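On the multiple comparisons point: a safety analysis looks at dozens of adverse-event categories at once, and pure noise will clear the 0.05 bar somewhere. A quick illustration (the endpoint counts are arbitrary):

bc.. # With k independent tests of pure noise at alpha = 0.05, the chance of
# at least one spuriously "significant" finding is 1 - 0.95**k.
for k in (1, 10, 50, 100):
    print(f"{k:>3} endpoints -> P(>=1 false positive) = {1 - 0.95**k:.0%}")
# prints 5%, 40%, 92%, 99%

p. This is one reason “we found a signal in the safety data” and “the drug is unsafe” are not the same statement, and why a jury armed with a single p-value is in a tough spot.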


Return on investment for clinical trials

A paper in ??The Lancet?? (and reported here) noted that the ROI(return on investment) for 28 clinical trials done by the NINDS(National Institute of Neurological Disorders and Stroke) resulted in a calculated societal benefit of $15 billion. That’s billion with a B. And if you don’t like that measure of societal benefit, try an estimated 470,000 years of life. Compare this to a $335 million (with an M) budget – roughly a 45-fold return.

Now, this is a news report on a medical article. The issues of selection bias (perhaps NINDS(National Institute of Neurological Disorders and Stroke) has the highest ROI(return on investment) on the planet), the methods of calculating estimated years of life saved, and so forth have not really been covered in any detail. I’ll also admit the possibility that ROI(return on investment) is a lot higher in NINDS research than in private research, where I work.

However, with the pharmaceutical industry (rightly, wrongly, or otherwise) taking a beating in recent years, it’s very important to remember why we do what we do. Yes, we do need more transparency in how we conduct private medical research. Yes, we need to keep ethics at the forefront. Yes, we need to market and prescribe drugs appropriately after they come to market. There have been several high-profile instances in the recent past where these have come into serious question. But the thousands or perhaps millions of people involved in clinical research are still here, working hard to save lives.
