Lying with Statistics V: The very rare, and not found, adverse event


First, let me put in a plug for “Steve’s Attempt at Teaching Statistics”:http://www.childrens-mercy.org/stats (STaTS). Steve Simon, Ph.D., is a research biostatistician at Children’s Mercy Hospital who often fields questions about statistics. On more than one occasion I’ve run across his site while looking up my own questions about statistics.

On with the entry. Note that this case is usually not “lying with statistics” but really a misunderstanding of statistics.

One of the things we do in statistics is estimate. We might estimate the mean height of a group of people, the impact of air pollution on peak flow in asthmatics (a lung function measure), or the frequency of an adverse event associated with a drug. When we estimate, we know our point estimate is almost certainly not exactly right, so we go a step further, construct an interval, and say, “we are confident that the true state of nature (e.g. true height, true impact, true frequency of the adverse event) lies in this interval.” (Thus the name _confidence interval_.) Of course, if we constructed an interval in which we were _absolutely certain_ the true state of nature fell, the interval would be useless because it would be far too wide (theoretically, infinitely wide). So we compromise a little, make the interval narrower, and say “we are 95% confident that the true state of nature falls in this interval.” We construct these intervals according to the rules of probability, or perhaps convenient approximations. (You can also construct 90% and 80% confidence intervals, or whatever level you need for your project.)
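To make the “convenient approximation” concrete, here is a minimal sketch in Python of the usual normal-approximation (“Wald”) 95% interval for a frequency; the counts are hypothetical and purely for illustration.

```python
import math

# Hypothetical trial: 12 patients out of 400 experience the adverse event.
events = 12
n = 400

p_hat = events / n                        # point estimate of the frequency
se = math.sqrt(p_hat * (1 - p_hat) / n)   # approximate standard error
z = 1.96                                  # 97.5th percentile of the standard normal

lower = p_hat - z * se
upper = p_hat + z * se
print(f"estimate {p_hat:.3f}, approximate 95% CI ({lower:.3f}, {upper:.3f})")
```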

So, what happens when we construct a 95% confidence interval on the frequency of a rare adverse event? In any given clinical trial you may not observe the event at all, and the conventional way of constructing the confidence interval then says that a 0% incidence is a certainty: the estimated frequency is 0, the approximate standard error is also 0, and the interval collapses to 0% to 0%. Yet a rare adverse event may still occur outside the context of the clinical trial, which tells us that 0% must be wrong.
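Reusing the sketch above with zero observed events shows the collapse directly (again, the trial size is hypothetical):

```python
import math

n = 1000                                     # hypothetical trial size
events = 0                                   # the adverse event was never observed

p_hat = events / n                           # estimate is exactly 0
se = math.sqrt(p_hat * (1 - p_hat) / n)      # standard error is also exactly 0
print(p_hat - 1.96 * se, p_hat + 1.96 * se)  # prints: 0.0 0.0 -- "impossible"
```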

The problem is that, in the case of the unobserved event, the conventional way of constructing the confidence interval is simply wrong, as Dr. Simon tells us. In fact, to construct an easy approximate 95% confidence interval for the frequency of an unobserved event, you take out your simple four-function calculator and divide 3 by the sample size (the so-called “Rule of 3”). For example, if you tested 1000 patients with your drug and didn’t observe any strokes (but suspect there’s a chance your drug could cause one), the 95% confidence interval for the stroke rate runs from 0 to 0.003 (3/1000), i.e. from 0% to 0.3%.
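A quick check of where the 3 comes from: with zero events in n patients, the exact 95% upper limit is the rate p that solves (1 − p)^n = 0.05, i.e. p = 1 − 0.05^(1/n) ≈ −ln(0.05)/n, and −ln(0.05) is about 3.0. A short sketch using the 1000-patient example above:

```python
import math

n = 1000                              # patients tested, zero strokes observed

rule_of_three = 3 / n                 # the easy calculator version
exact_upper = 1 - 0.05 ** (1 / n)     # solves (1 - p)**n = 0.05 exactly
first_order = -math.log(0.05) / n     # -ln(0.05) is about 2.996

print(f"rule of 3:    {rule_of_three:.5f}")   # 0.00300
print(f"exact:        {exact_upper:.5f}")     # 0.00299
print(f"-ln(0.05)/n:  {first_order:.5f}")     # 0.00300
```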


2 Responses

  1. […] In the setup of a rare occurrence (say, of an adverse event), the standard way of computing approximate confidence intervals doesn’t work very well. In the extreme case of a possible, but unobserved, occurrence, the standard computation gives a confidence interval of 0 to 0, i.e. the occurrence is considered impossible. The apparent paradox is resolved by realizing that the standard way of computing approximate confidence intervals for a rare event is incorrect, and far off the mark, in this setting. For the unobserved occurrence, a better confidence interval for the rate of occurrence is between 0 and 3/n, where n is the sample size of the study. This is called the “Rule of 3.” (More info at the link.) […]

  2. […] Even if you don’t find an adverse event, you are still analyzing the thousands of possible ones. They all simply have an observed frequency of 0, which means that the upper 95% confidence limit for any single event is 3/n (3 divided by the sample size of the treatment group of the study). To adjust this for multiple comparisons (essentially, we divide the 5% by the number of comparisons, though fancier methods are available), we’d have to compute each upper confidence limit at a level very close to 100%, which pushes the limit itself well above 3/n unless you include millions of people in the study! (A rough numeric sketch follows below.) […]
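
As a rough numeric sketch of the adjustment described in the second comment (a Bonferroni-style correction; the number of comparisons and the trial size below are hypothetical), the zero-count upper limit at the adjusted confidence level is 1 − (0.05/k)^(1/n), or roughly −ln(0.05/k)/n, instead of 3/n:

```python
import math

n = 1000                 # patients in the treatment group (hypothetical)
k = 2000                 # number of adverse-event comparisons (hypothetical)

alpha = 0.05 / k                    # Bonferroni-adjusted per-event alpha
confidence = 1 - alpha              # per-event confidence level, ~99.9975%
upper = 1 - alpha ** (1 / n)        # exact upper limit for a zero-count event
approx = -math.log(alpha) / n       # generalized "rule of 3" style approximation

print(f"per-event confidence level: {confidence:.4%}")
print(f"adjusted upper limit: {upper:.4f} (vs unadjusted 3/n = {3 / n:.4f})")
print(f"approximation -ln(alpha)/n: {approx:.4f}")
```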

