Slate has a good article on consumer critical thinking about statistics

Slate has a good “article”: on how to think about whether you should take a drug. In the confusing world of “relative risk vs. absolute risk”:, it’s hard to know a drug’s actual effect.

Enter the NNT(Number Needed to Treat). The idea behind this number is the _expected_ number of people that you would have to treat so that _one_ person would realize the benefit of the treatment. For example, if the NNT is 3, then you would expect one out of every three people to benefit from the treatment.

Let’s take the Pravachol (a statin, like Lipitor) example from the article. In a 1995 study in the ??NEJM(New England Journal of Medicine)??, researchers reported a 31% reduction in the risk of heart attack in men who took one Pravachol every day for five years. 7.5% of the placebo group experienced a heart attack vs. 5.3% of the Pravachol group: a 31% relative reduction in risk, but only a 2.2% absolute reduction. The NNT (see more “here”: is 1/2.2% = 45.5. So you would expect to have to give more than 45 men Pravachol once a day for five years to prevent one heart attack. Turned around, we expect that more than 44 of them would see no benefit (they either would not have had a heart attack anyway, or would have one despite the drug).
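To make the arithmetic concrete, here is a minimal Python sketch of the calculation (the function and variable names are my own, not from the article; the figures are the study numbers quoted above):

```python
def nnt(risk_control, risk_treated):
    """Number needed to treat = 1 / absolute risk reduction."""
    arr = risk_control - risk_treated
    if arr <= 0:
        raise ValueError("treatment shows no absolute benefit")
    return 1.0 / arr

risk_placebo = 0.075  # 7.5% of the placebo group had a heart attack
risk_drug = 0.053     # 5.3% of the Pravachol group

# Relative risk reduction: the headline number drug ads like to quote
rrr = (risk_placebo - risk_drug) / risk_placebo
print(f"Relative risk reduction: {rrr:.0%}")
print(f"Absolute risk reduction: {risk_placebo - risk_drug:.1%}")
print(f"NNT: {nnt(risk_placebo, risk_drug):.1f}")
```

The same 2.2-point absolute difference can be advertised as a roughly 30% relative reduction, which is exactly why the NNT is the more honest consumer number.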

I’ll leave all commentary aside about whether drug companies want you to think that way. The data coming from premarketing approval has to be made public (as a certain company just found out), and anyone with a calculator and absolute risk in hand can calculate an NNT.

Slate has a few interesting NNTs:

|_. Drug |_. Benefit |_. NNT |
|cortisone|painful shoulder|3|
|amoxicillin|shorten fever for ear infection|20|
|Proscar – 4 yr|avoid surgery for enlarged prostate|18|
|aspirin|avoid heart attack|208|

Think about it. Think about how much you spend each year on some of these drugs, and think about what the chance is they help.

(h/t “insider”:)


Lying with statistics in the news

How did I miss this one? You can find many examples of lying with statistics at, which seems to be a non-profit associated with George Mason University (the Tar Heel in me says boo-hiss). Given they are a non-profit associated with a university (even if they are a small operation), they have much greater resources dedicated to debunking bad statistics in the media than I do. Of course, my scope is much narrower as well.

Update: via Gelman’s post on “Using numbers to persuade?”, I found this as well.

I find these sites valuable, though some of their arguments I can’t agree with. For example, in “one of their articles”:, I found the following:

* _Thus the EPA saw only “suggestive evidence of carcinogenicity.” Seed also noted that “Studies have not shown any effects directly associated with PFOA exposure.”_ Again, this isn’t a conservative statement, and any drug that went to the FDA with “suggestive evidence of carcinogenicity” would get much more thorough scrutiny. For something that isn’t a drug, and serves more as a convenience, shouldn’t we apply the same scrutiny?
* _In other words, the real news in this story is that the EPA and the chemical companies have decided to take an extremely risk averse position on PFOA because of its presence in the environment and blood, but not because there is any evidence as yet to suggest that there might be a genuine risk to humans._ When it comes to preventable disease, who says risk-averse is a bad position? If, out of everybody exposed to PFOA through non-stick cookware, microwave popcorn, or otherwise, the risk applies to only three people in the United States, is our risk-averse position unjustified?
* And in “this article”: Trevor Butterworth makes the following comment: _One case of deformity from one person (among thousands) who worked with PFOA is an association that is scientifically meaningless, especially when there isn’t a single health study that has ever shown any such association. This was tabloid journalism at its worst._ This was based on a CBS news story about a woman who worked at a plant with higher-than-average exposure to PFOA and who happened to have a birth defect. While not proof of association (and no one with a stats or science degree would make such a claim on the basis of one person), it is worrisome and certainly worthy of further investigation. DuPont certainly thought so as well.

The EPA(Environmental Protection Agency) has information “here”:

So yes, I do find that the site makes a lot of inane pseudoskeptical arguments such as the ones found above. (And it makes a lot of good ones as well.) However, it provides a valuable service: counterbalancing a lot of inane misrepresentation and confounding of statistical arguments found in the media.

Bad statistics and science at MDS

The FDA sent a “warning letter”: to a company called MDS Pharma late last month (posted to the agency’s website yesterday) essentially saying that MDS Pharma lied with statistics. (Please note that I have not seen or reviewed the company’s response.) Apparently, the agency found the following issues:

* Studies weren’t appropriately auditable.
* The company failed to investigate outliers and other anomalous results.
* Some outliers and other anomalous results were deemed not to matter, but sufficient reasoning was not given.
* They failed to account for differences in some test-retest situations of aberrant results.
* The company used inclusion and exclusion criteria that biased their results.
* The company inappropriately documented their review of studies.
* Calibration points for their standard controls were included and excluded in a biased way that followed neither the company’s own procedures nor, apparently, standard quality control procedures. (There is a Society for Quality Control for a reason.)
* Some measurement methods they used did not give repeatable results (repeatable means that the results should be very similar under very similar conditions). They took a long time to discontinue the method, but failed to inform the right people that the results coming from the method were unreliable.
* All of the above problems were widespread, over many studies of many products over several years.

And these are _nonclinical_ bioequivalence studies, not even clinical trials. (Bioequivalence studies are designed to confirm that two different formulations of the same active drug reach the sites of action in similar concentrations after they enter the body; they are the “proof in the pudding” of generic drug applications.) In clinical trials, any of these infractions could be very serious. According to an “article”: in ??The Globe and Mail??, this company performs bioequivalence studies for generic drug makers. This sort of behavior will have these consequences:

* You will not be able to determine whether drugs are bioequivalent if you don’t appropriately follow up anomalous results. This endangers the regulatory submissions of the generic drug makers who are the company’s clients.
* “We conclude that you failed to systematically investigate contamination and anomalous results.” An inability to appropriately detect and handle contamination has obvious serious effects in a clinical trial or post-marketing setting, and in a premarketing nonclinical setting it can skew results so that inappropriate or dangerous dosages are given to human subjects.
* “demonstrate that your retrospective review is capable of discriminating between valid and invalid study data.” Need I go further?

Furthermore, the company left out whole studies when communicating with the agency.

Now, what does this mean for MDS Pharma? I’ll let a securities analyst speak:

bq. UBS Securities Canada analyst Jeff Elliott said in a report that he was “concerned about the timing of a resolution of the FDA issues at MDS Pharma Services and a potential recovery of this unit.”

The markets also speak:

bq. Shares of MDS fell 64 cents or 3 per cent to $19.86 on the Toronto Stock Exchange yesterday.

Shall I issue my own “sell” recommendation (boring disclosure: I don’t own stock in pharma or life sciences companies except perhaps through mutual funds)?

My comment on this is that it doesn’t take a Ph.D., or even familiarity with standard quality control procedures, to see the serious nature of these problems. I hope, for the sake of its clients, that MDS can salvage some of the studies it has done in the last few years, and that no one has suffered because of these problems.


Significance, statistical and otherwise

In chasing down the perfect p-value (p<0.05), we can sometimes forget the overall objective of performing a clinical trial. You can have the best-designed study in the world with all the statistical issues carefully considered, and the trial can succeed, but if your trial result isn’t _important_, or _clinically significant_, then the statistical significance means _nothing_.
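To illustrate how the two notions of significance come apart, here is a small Python sketch (the numbers are hypothetical, not from any trial discussed here) showing that a clinically trivial half-point difference in response rates becomes statistically significant once the sample is large enough:

```python
import math

def two_prop_z(p1, p2, n1, n2):
    """Two-proportion z-test with a pooled standard error.

    Returns the z statistic and the two-sided p-value
    (computed from the normal CDF via math.erf)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# A half-point difference in response rate, hardly clinically meaningful...
z, p = two_prop_z(0.105, 0.100, 50_000, 50_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # ...yet p < 0.05 at this sample size
```

With enough patients, almost any nonzero difference will clear the p < 0.05 bar, which is exactly why a regulator can accept the statistics and still reject the drug.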

Genta, Inc. unfortunately got bit by this fact as the FDA’s ODAC(Oncologic Drugs Advisory Committee) “voted against their Genasense”: product on the basis that their statistically significant difference wasn’t large enough to be clinically significant. (Apparently their drug also increases the toxicity of chemotherapy.)

Of course the FDA is not bound to the opinion of the ODAC(Oncologic Drugs Advisory Committee), but this certainly isn’t good news for Genta. It should be enough to give pause to anyone developing a drug whose effect isn’t huge.

Gelman posts his chapter on ‘Lying with statistics’

Andrew Gelman, a prominent Bayesian statistician, has posted a chapter, “Lying with Statistics”:, from his book.

A good read, with good examples.

Statistical critique: where do we draw the line? (An application to drug safety analysis)

I just read an interesting entry (and thread) on Andrew Gelman’s statistics blog that goes along the lines of some questions I have been pondering lately. Specifically, these two paragraphs hit me (this is from an email to Gelman):

bq. The whole spirit of your blog would have led, in my view, to a rejection of the early papers arguing that smoking causes cancer (because, your eloquent blog might have written around 1953 or whenever it was exactly, smoking is endogenous). That worries me. It would have led to many extra people dying.

bq. I can tell that you are a highly experienced researcher and intellectually brilliant chap but the slightly negative tone of your blog has a danger — if I may have the temerity to say so. Your younger readers are constantly getting the subtle message: A POTENTIAL METHODOLOGICAL FLAW IN A PAPER MEANS ITS CONCLUSIONS ARE WRONG. Such a sentence is, as I am sure you would say, quite wrong. And one could then talk about type one and two errors, and I am sure you do in class.

So, let’s consider the drug safety problem in light of this. I’ve noted before that strictly following the rules of statistics in analysis of drug safety will in too many cases lead to an understatement of the risks of taking a drug. However, we have to say something with regard to the safety of a drug, and, given the readiness of the lawyers to file lawsuits for the adverse events of a drug, it had better be correct. We do have to do the best we can.

On the other hand, look at the studies of the autism-thimerosal connection. Studies in both camps, those suggesting such a connection and those denying it, have methodological flaws (which makes them all the more confusing), but some of them come to the right conclusion.

Ultimately, every study has its flaws. There is some factor not considered, something not controlled for, some confounding issue. Exactly when this invalidates the study, however, is not an easy issue.


Lying with statistics: pretty pictures

Statistics show that 88% of people now get their statistics education from! What, you don’t believe me? Here’s the proof! With my new partner, I will strive to bring you the most impressive statistics ever!

(h/t Insider aka FRIDAY!)

PS. Don’t read the fine print. Never EVER read the fine print.
