Lying with statistics


A bunch of statisticians (Bayesians, no less!) over at Columbia University have written an "article":http://www.stat.columbia.edu/~cook/movabletype/mt-tb.cgi/36 about how to lie with statistics in clinical trials. I'm not going to refute any of their points, because these things happen a lot. However, I think there are a few points to consider:

# A lot of people are involved in each clinical trial, including the FDA and the ethics committees at each site where the trial occurs. That oversight is weakened somewhat when a trial is used as the source of information for a paper published post-marketing (which invites publication bias, where only the successful trials get published). These papers are important in that they can be distributed to doctors even if they are about "off-label" uses of the compound. However, even if you tip the odds in your favor through dosing (your drug dosed to show its good side, the comparator dosed to show its bad side), you'll have to do some "creative writing" to downplay your intentional flaws.
# If you lie with dogs, you wake up with fleas. Pick a narrow or highly targeted outcome, and that is all you get to claim, at least in a drug development program. You're stuck with it unless you bite the bullet and study the harder indication. (Some exceptions are made in cases of unmet medical need, serious illness, or bioterrorism.) Now, there is some strategy to this. If you have a new molecular entity, you might tune your outcome to get it approved in a narrow population (perhaps even one small enough to qualify for orphan drug status, which brings a shorter development cycle and faster approval) and then do post-marketing studies to widen the labeled uses. This kind of thing happens, and it does introduce bias, but it ain't free.
# "Creative writing," as it were, occurs both with and without statistics. The problem with statistics is that people tend to believe 'em. I have an idea: after Algebra I, or perhaps even as a segment of it, teach a basic statistics class. Bite the bullet and teach the hard concepts of null and alternative hypotheses. Just get the basic ideas across and instill the habit of thinking critically even in the face of fancy-sounding statistics. See an estimate? Ask for a range or a measure of accuracy. See a p-value? Ask what the hypotheses were. (A small worked example of this habit follows the list.)
# Does everybody in the pharma industry engage in unethical behavior, slaving away to find ways to make the public take snake oil so they can line their pockets? Of course not. Bias is present in every study, and often it favors the tested drug. To counter that, drug companies are, um, "encouraged" to be conservative in their risk minimization plans (yes, this has to be addressed for every study in a clinical development program), especially in the traditional sore spots such as missing data, dropout analysis, and safety reporting. Treatment codes are masked from everybody possible during the study (when the design allows it). So there are practices in place to mitigate the effects of unconscious bias.
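
To make that last habit concrete, here's a minimal sketch in Python (the data are invented and the numbers purely illustrative, not from any real trial): it states the null and alternative hypotheses up front, runs a two-sample t-test, and reports a confidence interval alongside the point estimate rather than a bare p-value.

bc.. import numpy as np
from scipy import stats

# Invented data standing in for two trial arms (purely illustrative).
rng = np.random.default_rng(0)
drug = rng.normal(loc=5.2, scale=2.0, size=40)
placebo = rng.normal(loc=4.5, scale=2.0, size=40)

# State the hypotheses up front:
#   H0: mean(drug) - mean(placebo) == 0
#   H1: mean(drug) - mean(placebo) != 0  (two-sided)
t_stat, p_value = stats.ttest_ind(drug, placebo, equal_var=True)

# Don't stop at the point estimate: attach a 95% confidence interval
# (pooled-variance formula, matching the equal-variance test above).
diff = drug.mean() - placebo.mean()
n1, n2 = len(drug), len(placebo)
pooled_var = ((n1 - 1) * drug.var(ddof=1) + (n2 - 1) * placebo.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(pooled_var * (1 / n1 + 1 / n2))
t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)

print(f"estimate: {diff:.2f}, 95% CI: ({diff - t_crit * se:.2f}, {diff + t_crit * se:.2f})")
print(f"t = {t_stat:.2f}, p = {p_value:.3f} (two-sided, H0: equal means)")

p. The point isn't the mechanics; it's that a reader who reflexively asks "where's the interval?" and "what was the null?" is much harder to fool.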

I guess the bottom line is that judgment about the validity of a study should wait until all the information has been taken in and issues such as those raised in ??How to Lie with Statistics?? and its successors have been addressed.

*Update:* As noted in the comments, the article cites and links to the Carlat report.
