The price of truth with statistics: Dr. Gottlieb’s statement at the conference on adaptive design

A while back I tried to express the opinion that, as good as statistics is at describing populations, it is poor at predicting individual outcomes. Looks like the deputy director of medical policy at the FDA agrees:

Another problem with the empirical approach is that it yields statistical information about how large populations with the same or similar conditions are likely to respond to a treatment. But doctors don’t treat populations, they treat individual patients. Doctors need information about the characteristics that predict which patients are more likely to respond well, or suffer certain side effects. The empirical approach doesn’t tell doctors how to personalize their care to their individual patients.

Is there a way out? Maybe:

There are potentially better alternatives, by enabling more trials to be adapted based on knowledge about gene and protein markers or patient characteristics that can help predict whether patients will respond well to a new medicine.

I’m happy to try something other than the brute-force large trial for a treatment. I think that an approach that looks at individual differences and embraces them, rather than averaging them out as nuisance effects, is not only going to further drug development by decreasing risk and development time, but will also untangle the scientific knots that come with studying alternative therapies.

Of course, Dr. Gottlieb was specifically discussing adaptive clinical trials, in which some characteristic of the trial may be altered (in a controlled way) while patients are still being recruited and studied. The statistics of this kind of design have been in development ever since the U.S. government was sweating over what to do when its primary developer, Abraham Wald, could not get a security clearance due to his immigration status. (Sequential analysis was first developed to quality-control bombs, so it was apparently classified for some time; a toy sketch of Wald’s test appears after the quotes below.) In the last decade the field has matured to the point where clinical trials can be run this way, though the logistical issues of actually conducting such a trial remain. I find it wonderful that the FDA has embraced this technology, and its efforts are encouraging:

To encourage the use of these newer trial methodologies, FDA leadership, including Drs. Doug Throckmorton, Bob Temple, Shirley Murphy, ShaAvhree Buckman, Bob O’Neill, Bob Powell, and many others inside FDA’s drug center, are working on a series of guidance documents – up to five in all – that will help articulate the pathway for developing adaptive approaches to clinical trials.

The guidance documents we are developing include one to help guide sponsors on how to look at multiple endpoints in the same trial. This guidance document is currently being drafted and we hope to be able to discuss that work as soon as January. Another guidance document that we are also working on now deals with enrichment designs, designs that can help increase the power of a trial to detect a treatment effect, potentially with fewer subjects.
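To make the sequential idea concrete, here is a minimal sketch of Wald’s sequential probability ratio test (SPRT) for a binary outcome, such as whether a patient responds to treatment. Everything in it (the response rates p0 and p1, the error rates, and the sprt helper itself) is my own illustrative assumption, not anything from Dr. Gottlieb’s remarks. The test accumulates a log-likelihood ratio after each patient and stops as soon as it crosses one of Wald’s two boundaries.

```python
# A toy sketch of Wald's sequential probability ratio test (SPRT) for a
# Bernoulli response rate. All rates below are illustrative assumptions.
import math
import random

def sprt(observations, p0=0.3, p1=0.5, alpha=0.05, beta=0.2):
    """Test H0: p = p0 against H1: p = p1, stopping as early as possible.

    Returns ("H0" | "H1" | "undecided", number of observations used).
    """
    upper = math.log((1 - beta) / alpha)   # cross this: accept H1
    lower = math.log(beta / (1 - alpha))   # cross this: accept H0
    llr, n = 0.0, 0
    for n, responded in enumerate(observations, start=1):
        # Update the log-likelihood ratio with each patient's outcome.
        if responded:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", n

# Simulate patients whose true response rate matches the alternative.
random.seed(1)
patients = [random.random() < 0.5 for _ in range(500)]
print(sprt(patients))  # typically decides after a few dozen patients
```

The appeal is that the trial stops as soon as the evidence is decisive, often with far fewer subjects than a fixed-sample design would commit to up front.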
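The enrichment designs mentioned in the quote can be illustrated just as briefly. In the Monte Carlo sketch below, every number is made up for illustration: I assume a hypothetical biomarker carried by 40% of patients, and that only marker-positive patients benefit from the drug. Enrolling only marker-positives lets a trial of the same size see the full treatment effect rather than a diluted one, which is where the power gain comes from.

```python
# A made-up Monte Carlo illustration of an enrichment design. Assumes,
# hypothetically, that only biomarker-positive patients benefit.
import math
import random

def significant(n_per_arm, p_control, p_treated):
    """Simulate one two-arm trial; return True if a two-sided
    two-proportion z-test (normal approximation) rejects at the 5% level."""
    x_c = sum(random.random() < p_control for _ in range(n_per_arm))
    x_t = sum(random.random() < p_treated for _ in range(n_per_arm))
    pooled = (x_c + x_t) / (2 * n_per_arm)
    se = math.sqrt(2 * pooled * (1 - pooled) / n_per_arm)
    if se == 0:
        return False
    z = (x_t - x_c) / n_per_arm / se  # difference in observed rates / SE
    return abs(z) > 1.96

random.seed(2)
N, SIMS = 100, 2000
prevalence = 0.4            # assumed fraction of marker-positive patients
p_ctrl, p_pos = 0.30, 0.55  # response rates: control vs. treated positives

# All-comers trial: the treated arm is a mixture, so the effect is diluted.
p_mix = prevalence * p_pos + (1 - prevalence) * p_ctrl
power_all = sum(significant(N, p_ctrl, p_mix) for _ in range(SIMS)) / SIMS

# Enriched trial: only marker-positives enroll, so the full effect shows.
power_enriched = sum(significant(N, p_ctrl, p_pos) for _ in range(SIMS)) / SIMS

print(f"all-comers power: {power_all:.2f}")
print(f"enriched power:   {power_enriched:.2f}")
```

With these invented numbers, the all-comers trial catches the diluted ten-point effect only a minority of the time, while the enriched trial catches the full twenty-five-point effect almost every time, using the same 100 patients per arm.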

At first glance, this may not seem like a big deal, but it is. It’s a sign that the FDA understands the technical drawbacks of the way we do applied research and is chomping at the bit to do something about it. With the mounting list of PR headaches and potentially fatal disasters the drug industry has faced recently, we need this kind of regulatory leadership. Thanks, FDA!
