Prove it!


An “annoying conversation”:http://www.randomjohn.info/wordpress/2005/12/30/as-i-drink-a-glass-of-water/#comments I’ve had with a commentator about whether “homeopathy works” got me thinking about our biases when we conduct science (i.e., carry out the scientific process on some conjecture or hypothesis). When statistical reasoning is involved, there are generally two competing hypotheses: the null, which states that the conjecture is false and the _status quo_ is true, and the alternative, which states that the conjecture is true and the _status quo_ is false. In an ideal situation, we construct our study so that the probability of the study succeeding is very small if our conjecture is really false (this is the famous “alpha,” usually set at 5%, though other values are used; the FDA usually requires 2.5%). Furthermore, we often calculate a study size (e.g., the number of patients in a clinical study) so that the probability of the study failing is modest (10% or 20%) if our conjecture is really true. If the study is successful, we say that we “reject the null hypothesis,” and the study backs up the alternative.

In clinical studies, further protections are taken against detecting spurious effects. Two popular ones are a placebo control, in which one group takes an inactive compound, and blinding (masking), which disguises the identity of the treatments given.
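To make the alpha/power/sample-size recipe concrete, here is a minimal sketch of the standard normal-approximation sample-size calculation for a two-arm comparison of means. It is my own generic illustration: the function name, the half-standard-deviation effect size, and the default numbers are arbitrary choices, not anything from the studies discussed in this post.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(alpha=0.05, power=0.80, effect=0.5, sd=1.0):
    """Per-arm sample size for a two-arm comparison of means,
    from the usual normal-approximation formula:
        n = 2 * ((z_{1-alpha/2} + z_{power}) * sd / effect)^2
    `effect` is the smallest true difference we care to detect."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided test at level alpha
    z_power = z.inv_cdf(power)          # power = 1 - beta
    return ceil(2 * ((z_alpha + z_power) * sd / effect) ** 2)

# 5% alpha, 80% power, half-SD effect: 63 subjects per arm.
print(n_per_arm())
# Demanding 90% power (only a 10% chance of missing a real effect):
print(n_per_arm(power=0.90))  # 85 per arm
```

The jump from 63 to 85 subjects per arm shows what it costs to push the probability of missing a real effect from 20% down to 10%.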

What I want to focus on is the bias toward saying a theory is false (e.g., saying a new treatment doesn’t work). When a study is successful, we control the circumstances so that we are wrong only 1 time in 20; that is, “random chance” accounts for the positive result only 1 time in 20. When a study is unsuccessful, if we control the power at all, it is so that we are wrong 1 time in 5 or 1 time in 10. So an unsuccessful study is usually weaker evidence for a negative result than a successful study is for a positive result.

In addition, it is possible to design equivalence studies (rare) and non-inferiority/non-superiority studies. These typically require much more data to be successful than standard superiority studies such as those described above.

What I’m getting at here is what I told my introductory statistics students in our section on hypothesis testing: when a study is unsuccessful, we have not “proven” a negative result. The results are simply inconclusive. In the same vein, we don’t “prove” a positive result through one successful study, though such a study, when successfully repeated, is strong evidence.
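A toy simulation (entirely my own, with made-up numbers) makes the point: hand a genuinely effective treatment to an underpowered study, and the study comes up “negative” a large fraction of the time.

```python
import random
from statistics import NormalDist, mean

random.seed(1)
Z_CRIT = NormalDist().inv_cdf(0.975)  # 5% two-sided critical value

def study_succeeds(effect, n=30):
    """One two-arm study: n subjects per arm, unit-SD outcomes,
    z-test on the difference in means; 'success' means the
    treatment arm shows a significant advantage."""
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    treated = [random.gauss(effect, 1.0) for _ in range(n)]
    z = (mean(treated) - mean(control)) / (2.0 / n) ** 0.5
    return z > Z_CRIT

# A real half-SD effect studied with only 30 subjects per arm has
# roughly 50% power, so about half of such studies are "negative"
# even though the effect genuinely exists.
failures = sum(not study_succeeds(0.5) for _ in range(2000)) / 2000
print(f"negative studies despite a real effect: {failures:.0%}")
```

Roughly half of these simulated studies fail even though the effect is real, which is exactly why a failed study by itself is inconclusive rather than a disproof.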

We have a third option here: “unproven.” This means our theory (or compound, or course of treatment) is in limbo: it has neither been shown to “work” nor to “not work.” Perhaps the commentator is not comfortable with the third option, given that he’s trying to make me say that I believe “homeopathy works.” (In reality, it probably works wonderfully for some conditions and not for others. Or perhaps wonderfully for some populations with certain conditions. Or it may be purely a placebo effect, as “Dr. Edzard Ernst”:http://www.randomjohn.info/wordpress/2005/12/30/as-i-drink-a-glass-of-water/ says it is. Nothing I’ve seen has strongly convinced me in any direction.)

However, the bias goes even deeper. I’ve seen many occasions where pharmaceuticals have been rescued from the jaws of negative studies. Perhaps this occurs all the time in the study of alternative medicine, but it occurs in pharmaceutical development as well. I don’t think this practice is dishonest, because further studies will either confirm or deny the sanguine outlook on the compound.

But you get a negative study on echinacea (“??New England Journal of Medicine??, 2005;353:341–8”:http://content.nejm.org/cgi/content/short/353/4/341), and you have scathing editorials and “blog entries by pseudonymous opponents of alternative medicine”:http://oracknows.blogspot.com/2005/07/another-one-bites-dust.html about how previous studies on the subject are wrong and this one is right. You have calls to halt scientific study of alternative medicine and veiled calls to disband organizations such as NCCAM. This despite “some”:http://bastyrcenter.org/content/view/798/ “serious”:http://altmedicine.about.com/od/herbsupplementguide/a/echinacea.htm “criticisms”:http://content.nejm.org/cgi/content/extract/353/18/1971 of the study (it takes more than double-blinding and placebo/active controls to make a well-designed study).

Or, you have one meta-analysis of homeopathy and an accompanying editorial declaring “The End of Homeopathy” (??The Lancet??, August 2005).

I was taught that an honest scientific scrutiny involved impartiality and curiosity. In these cases, we have hostility and a desire to stop learning. I find these attitudes unfortunate and counterproductive.


10 Responses

  1. You’re not doing a very good job of presenting statistical theory, especially for someone who purportedly teaches the subject and is looking for a ‘balanced’ presentation.

    If you have done 10 studies on dirt as an acne cure, and they all showed that eating dirt cured acne, the ‘status quo’ is that dirt cures acne. Though of course, the null hypothesis for a test will still be ‘dirt doesn’t cure acne’. The status quo is irrelevant.

    You can then do a SINGLE study which fails to find any effect of eating dirt.

    That single study, depending on relative size and experimental design, can replicate the other studies, and it can easily REVERSE THE CONCLUSION that ‘dirt works’.

    You should know that. That is basic statistics.

    And as such, your general protest about the echinacea study is illogical. With 437 subjects it’s entirely possible that it overrode the prior studies — not through dishonesty, but through statistics.

    If your students made a faulty conclusion like that, you’d probably (hopefully?) call them on it.

    And again (are you really a statistics professor?) you seem to be improperly attacking a meta-analysis. A properly done meta-analysis can be an extraordinarily powerful tool. Changing position a bit: I’ll certainly admit that some researchers stretch ‘relevance’ a little far. But was this one of those cases? You don’t say.

    My general posts about the merits of proof vs. disproof are in the other thread.

    But in any case this brings me back to my original questions, though slightly modified to include your latest post. Let’s go for honesty here on your part, as you seem to talk a lot about honesty. Even better, let’s go for CLARITY. These are all yes/no questions, so you shouldn’t have to type much:

    Do you believe that all compounds and/or hypotheses should be assumed to be INeffective and only adopted once proven?
    (yes / no)

    Do you believe that all compounds and/or hypotheses should be assumed to be effective and only discarded once DISproven?
    (yes / no)

    Do you believe any aspects of homeopathy should be assumed to be effective and only discarded once DISproven?
    (yes / no)

  2. Seriously, Erik, for calling me out on a poor presentation of statistics, you’ve made some serious errors of your own here.

    Either you’ve left out a few steps in your strange dirt/acne example, or you are carrying out an unheard-of kind of statistical/scientific investigation. (Is that what you mean by “basic statistics”?) If 10 studies are negative and one positive, I’d want to know why, rather than simply “reverse the conclusion.” And even if this is a well-controlled (not sure how you’d blind dirt, but assume you can) study with alpha of 5% and power of 90% or whatever and others were open-label, don’t you think you’d need to repeat the study to verify? Don’t you think you’d need to do at least a little homework to make sure that the efficacy shown in the other 10 studies is due to spurious/placebo/other effect unrelated to dirt? Or are you simply going to knee-jerk accept your latest study?

    Of course, in the course of clinical development, there may be non-scientific reasons to simply yank the compound out of development, though this is usually done on the basis of several nonclinical as well as clinical studies. But if we’re going for furthering knowledge (and have deep pockets), I’d compare the previous studies to the current one and run the current one again. Or, if the current one had flaws, I’d try to correct them and run a new study. Simply throwing up our hands and saying “Yep! The statistics told me so!” at least here is a case of selection bias.

    The reality is that the echinacea and homeopathy series of studies are much more complex. You have at least three different species of echinacea, different indications, different dosages, and so forth. In the case of homeopathy, you have different types of practices of homeopathy (e.g. reaching for the arnica vs. going through the process of homeopathy), different indications, different remedies for the same indication. These complicating issues are not touched by your acne/dirt example.

    bq. And as such, your general protest about the echinacea study is illogical.

    Statistically, the echinacea study was well-designed, with a control group, blinding, randomization, and a challenge. The criticisms directed at the study had to do with the non-statistical components, such as the dosage and preparations of echinacea. Some people have criticized the challenge method, and I’m not prepared to offer any original comment on that simply because it is beyond my scope of knowledge. I offered no “general protest” of the study.

    However, using this study as a springboard to complain about alternative medicine, call for an end to NCCAM, and call for an end to studying alternative medicine altogether shows a biased hostility. This is one study, well-designed but still having flaws, on one compound, and yet the editorial makes far-reaching claims.

    bq. With 437 subjects it’s entirely possible that it overrode the prior studies—not through dishonesty, but through statistics.

    Given the number of studies on echinacea, I find that doubtful. Given that the criticisms of the study were non-statistical, no amount of care with the statistics is going to make this study override anything unless those criticisms are adequately addressed. Finally, saying that a study with valid criticisms overrides anything is ridiculous. It may call into question some beliefs, call the _status quo_ of knowledge into question, and force people to think twice, but it doesn’t prove anything or put “the last nail in the coffin” of anything (as our famous pseudonymous blogger would have it).

    bq. you seem to be improperly attacking a metaanalysis. A properly done metaanalysis can be an extraordinarily powerful tool.

    Please detail how I attack a meta-analysis. Here are my words again, so you don’t have to scroll up or use Control-F this time:

    bq. Or, you have one meta-analysis of homeopathy and an accompanying editorial declaring “The End of Homeopathy”.

    While I do bring up others’ criticisms of the meta-analysis elsewhere, I don’t attack it here. What I do attack is the sensationalistically titled article “The End of Homeopathy.”

    You do not seem to acknowledge the bias of these over-reaching editorials. In fact, you keep putting forth the incorrect idea that I’m simply attacking the studies. I am noting others’ criticisms of the studies, but that’s not the point here. Fortunately, the ??NEJM?? published a criticism of the study (and I say “fortunately” because the study had valid criticisms; publishing an invalid criticism just for the sake of publishing a criticism is bad practice).

    bq. Changing a position I’ll certainly admit that some reasearchers stretch ‘relevance’ a little far. But was this one of those cases? You don’t say.

    I have said. Many times, in many ways. I point out two examples above. (Let me be fair to the authors of the studies: I don’t remember whether the authors of the editorials are the same as or a subset of the authors of the studies, or if they are people who simply read the results of the studies.)

  3. First, how many times am I going to have to ask simple yes/no questions before you deign to answer them? I have been upfront and have answered your issues with my posts, even the mildly offensive ones. Why are you avoiding the questions? I’ll post them again for you, just in case you keep missing them.

    Second, can you stop the facade that you’re “just repeating what is said by others” yadda yadda…. Obviously, selectively choosing who to quote, or what to quote, is establishing a viewpoint. Every time I make a comment “you seem to be attacking ___” and you respond “No, no, I’m just quoting OTHER PEOPLE’S attacks on _____, I haven’t said anything!” it gets more ridiculous. Especially in light of your refusal to take a clearly defined personal stand (see above paragraph).

    Nonetheless, I’ll even go so far as to answer your statistics question: 1) Perform 5 dose/response studies on compound X with 5 mice for each study. 2) Conclude based on the results that X has a negative effect on heart rate. 3) Perform a wide dose/response study of X with 1000 mice. Conclude that X has no effect on heart rate. 4) Now perform a meta-analysis to see whether, in light of ALL the studies, there is any basis for believing X has an effect on heart rate. 5) Conclude that the answer is no. Can you repeat it? Sure. Might you be tempted to disbelieve the last study, if you REALLY wanted X to be effective? Sure. That’s why we use statistics.
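    [Editor’s sketch: steps 4–5 above can be illustrated with a standard fixed-effect (inverse-variance) meta-analysis. The estimates and standard errors below are hypothetical stand-ins for the mouse studies, chosen only to show how the weighting plays out.]

```python
from math import sqrt

def pool(estimates, std_errors):
    """Fixed-effect (inverse-variance) meta-analysis: each study
    is weighted by 1/SE^2, so large, precise studies dominate
    small, noisy ones."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    pooled_se = 1.0 / sqrt(sum(weights))
    return pooled, pooled_se

# Five tiny studies (5 mice each) all suggesting a large drop in
# heart rate, plus one big study (1000 mice) finding essentially nothing.
estimates  = [-0.8, -0.8, -0.8, -0.8, -0.8, 0.02]
std_errors = [0.5, 0.5, 0.5, 0.5, 0.5, 0.05]

pooled, se = pool(estimates, std_errors)
z = pooled / se
# The big study carries ~95% of the total weight, so the pooled
# estimate sits near zero and z lands well inside +/-1.96.
print(f"pooled estimate {pooled:+.3f}, z = {z:+.2f}")
```

    The 1000-mouse study dominates the pooled estimate, and the overall z-statistic is nowhere near significance, matching the conclusion in step 5.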

    That’s not so hard, is it?

    Now, as you put it, of course you wouldn’t want to “knee-jerk accept” your latest study.

    But of course you wouldn’t want to “knee-jerk accept” your EARLIER studies either. Like this, for example, in response to the possibility the echinacea study was accurate “Given the number of studies on echinacea, I find that doubtful”.

    We all know (at least those of us who do or have done studies know, and I HOPE statistics people know) that the NUMBER of studies isn’t that relevant. It’s also sample size, quality, etc. Knee-jerk reaction? Can we hear from the audience?

    That’s why we have STATISTICS. Numbers, on their own, are not biased. When you say “Simply throwing up our hands and saying ‘Yep! The statistics told me so!’ at least here is a case of selection bias,” you are exhibiting a common misconception. YOU cannot tell which numbers are good or bad; the numbers simply “are”. Otherwise, why run the study at all? And hopefully you don’t expect that YOU can decide when “at least here” applies; when it’s biased and when not.

    As for meta-analysis, your comment came in the context of a point regarding bias on the part of scientists. You open with a comment about bias and close with a call for “honesty” again. In the middle of that is your meta-analysis comment. Go reread your post again. If you want to claim that you’re not saying anything or making a point, well, sigh… I’m starting to believe you.

    But back to the interesting stuff. I really do think we could have a more fruitful conversation if you would try really hard to answer these. Just use the first three words of your next lengthy response. Then we can discuss something relevant. Honestly.

    Do you believe that all compounds and/or hypotheses should be assumed to be INeffective and only adopted once proven?
    (yes / no)

    Do you believe that all compounds and/or hypotheses should be assumed to be effective and only discarded once DISproven?
    (yes / no)

    Do you believe any aspects of homeopathy should be assumed to be effective and only discarded once DISproven?
    (yes / no)

  4. Erik, there are three commonalities in all your posts:

    1. The tendency to disagree with me. Sometimes you call my understanding of statistics into question and even my practice of science. Is this because I happen to be a little more open-minded than you on the subject of alternative medicine?

    2. You ask the same loaded questions over and over again. Why don’t I answer them, you wonder? Because they’re loaded. By answering them, I have to accept the assumptions on which they’re based (e.g. compounds/homeopathy/your favorite compound/your least favorite compound has to be assumed to be ANYTHING before this is proven/disproven). I don’t accept those assumptions. However, I’m beginning to believe that you are completely unable to see those assumptions, which means that you are never going to be satisfied with my answer.

    But, since you seem to take time out of your day, I’ll answer all three of your questions again: I don’t assume anything. That’s why we run studies. That’s why we do estimation and hypothesis testing. Maybe, just maybe, later on, as in Phase III studies or other types of confirmatory studies, the people with an investment in the compound start to assume something: they get the marketing application rolling when the positive results roll in, or console themselves down at the corner pub when the negative results roll in.

    3. Once again, you are twisting my words, which, placed in the context of questioning my knowledge of biostatistics, really makes you look bad.

    For example, here are my words:

    bq. With 437 subjects it’s entirely possible that it overrode the prior studies—not through dishonesty, but through statistics.

    bq. Given the number of studies on echinacea, I find that doubtful.

    vs. your characterization

    bq. Like this, for example, in response to the possibility the echinacea study was accurate: “Given the number of studies on echinacea, I find that doubtful.”

    And this whopper:

    bq. As for meta-analysis, your comment came in the context of a point regarding bias on the part of scientists. You open with a comment about bias and close with a call for “honesty” again. In the middle of that is your meta-analysis comment. Go reread your post again. If you want to claim that you’re not saying anything or making a point, well, sigh… I’m starting to believe you.

    It was in the context of a point regarding bias on the part of the EDITORIALIST. Big difference, one that either seems to be lost on you or is simply discarded.

    In case you haven’t noticed, I haven’t dismissed any studies here on statistical grounds. I haven’t commented on inability to do statistics, or whether studies were well-designed. (Except I do say that the echinacea study reported in the July 21 edition of ??NEJM?? seemed well-designed from a statistical point of view.) The point of this blog entry is about the interpretation of statistics and the editorializing that occurs after the tables are run and the report is written and published. I don’t need your commentary on why we do statistics, what they mean, or what happens when studies show contradictory results.

    bq. But of course you wouldn’t want to “knee-jerk accept” your EARLIER studies either.

    You’re the one who’s bringing that up out of the blue. I state something entirely different:

    bq. If 10 studies are negative and one positive, I’d want to know why, rather than simply “reverse the conclusion.” And even if this is a well-controlled (not sure how you’d blind dirt, but assume you can) study with alpha of 5% and power of 90% or whatever and others were open-label, don’t you think you’d need to repeat the study to verify? Don’t you think you’d need to do at least a little homework to make sure that the efficacy shown in the other 10 studies is due to spurious/placebo/other effect unrelated to dirt? Or are you simply going to knee-jerk accept your latest study?

    Inquiry is the opposite of knee-jerk acceptance.

  5. 1) Thank you for (sort of) answering the questions. As to whether or not they are “loaded” I’d have to disagree: they are reasonably simple to answer and are inoffensive. They are “loaded” only because you may not like the way a blunt response looks on paper. Politicians hate yes/no answers for the same reasons. But generally, scientists love them.

    Watch, here are MY answers:

    Do you believe that all compounds and/or hypotheses should be assumed to be INeffective and only adopted once proven?
    yes

    Do you believe that all compounds and/or hypotheses should be assumed to be effective and only discarded once DISproven?
    no

    Do you believe any aspects of homeopathy should be assumed to be effective and only discarded once DISproven?
    no

    Quick. Honest. To the point. Loaded? Maybe to you. The reason I have a tendency to disagree with you is that, well, I think you’re being disingenuous or making points which are incorrect. So I disagree. Ain’t the Internet grand?

    I also disagree because I don’t want people to be fooled into taking an ineffective treatment for a problem.

    2)
    You next say:
    “By answering them, I have to accept the assumptions on which they’re based (e.g. compounds/homeopathy/your favorite compound/your least favorite compound has to be assumed to be ANYTHING before this is proven/disproven). I don’t accept those assumptions.”

    Wow. It must be a real pain in the ass to design a study to prove or test assumptions when you have no assumptions at all. Null hypothesis and all that.

    Humor aside, I’m curious to know whether you think your worldview of assumptions vs. proof meshes with that of most scientists. My personal experience has always been that science tends to operate on an “assumed no action” or “assumed doesn’t work” principle, which we then try to disprove in searching for an effect. Try as I might, I can’t think of a single study or experiment which I know well, and none in which I have been a participant, which tested the reverse for any novel compound. Negative studies (which assume X works from the outset) are far more rare, at least in my experience, and tend to be limited to situations where they challenge a large body of already existing high quality work in support of a claim.

    As a result, the idea that there are no assumptions which should be made, IMO, calls the scientific method into question. Maybe you could explain how your views fit in, and how the ‘no assumptions’ concept fits into deciding WHAT gets tested, and HOW it gets tested?

    3)
    “However, I’m beginning to believe that you are completely unable to see those assumptions, which means that you are never going to be satisfied with my answer.”
    Actually, I see the assumptions just fine. Unless you’re talking about different ones from me.

    4)
    “It [the meta-analysis comment] was in the context of a point regarding bias on the part of the EDITORIALIST. Big difference, one that either seems to be lost on you or is simply discarded.”
    You are being ridiculous again, or perhaps trying to distance yourself from something.

    Let me give you an example:
    an editorial headline says “Recent studies on water show homeopathy works!”

    and in response you post “this editorial is unbiased!” and I post “this editorial is clearly biased!”

    Only a fool, or someone trying to hide their words, would fail to interpret our comments as imparting different viewpoints.

    5)
    “Inquiry is the opposite of knee-jerk acceptance.”
    I’d say “considered rejection” is the opposite, but I get your point. Nonetheless, you seem to have failed to get mine:

    When you decide that a study is “different enough” to require extra-special attention, and maybe even more careful examination of results, you are engaging in selection bias. Or, to be precise, confirmation bias. You might call it “avoiding knee-jerk acceptance”, though you would be wrong.

    Why? Because by subjecting a study to extra-careful scrutiny, you are raising the probability of finding a problem. That’s all well and good… BUT if the _reason_ you decided to scrutinize it was that it was in disagreement with OTHER studies, and you haven’t scrutinized those OTHER studies equally, that’s biased. In essence, you have “knee-jerk accepted” the OTHER studies by subjecting only the contradictory study to scrutiny.

    For echinacea, for example, you don’t seem to have posted on whether the other studies were run properly. Whether they were blind, or double blind. How many subjects were in each. What the credentials and research experience were of the people running them. You only seem to know there was one largish study, published in a decent journal, that has received protests regarding the challenge method.

    Yet even WITHOUT that data, you’re willing to assert that it’s “doubtful” the results of the recent study act to change our overall conclusion regarding echinacea’s effectiveness.

    That is selection bias. That is a “knee-jerk reaction”. This is the type of thing on which we disagree.

    “I don’t need your commentary on why we do statistics, what they mean, or what happens when studies show contradictory results.”

    Apparently you do; see above. You will note, I hope, that my posts don’t claim the recent study is right. I merely claim the recent study may be right; that it is “entirely possible” it is correct.

    You know, the entire concept of a blind or double-blind study was designed to AVOID just the sort of thing you’re talking about here. It seemed that too many people were, when finding results which seemed to disagree with what they believed, unconsciously changing the results to match. An interesting question which can’t be answered now by either of us is whether the protests regarding the echinacea challenge would have been (or were) identical had people believed the study SUPPORTED echinacea. As you seem to be claiming significant bias on the part of many people who oppose homeopathy, would you suggest it’s equal on the pro- side?

  6. Erik,

    If it is anathema to you that I can simply hold acceptance of a hypothesis in abeyance (or any of my beliefs, for that matter) while correctly applying the Neyman-Pearson hypothesis testing framework, we’ll have to simply agree to disagree, your apparent need to lecture notwithstanding.

    bq. Loaded? Maybe to you. The reason I have a tendency to disagree with you is that, well, I think you’re being disingenuous or making points which are incorrect. So I disagree.

    You think I’m dishonest? Thanks at least for admitting that. If you don’t like my explanation of why your questions were loaded, that’s your prerogative. Ain’t the Internet grand?

    bq. Wow. It must be a real pain in the ass to design a study to prove or test assumptions when you have no assumptions at all. Null hypothesis and all that.

    That’s why the Neyman-Pearson framework of hypothesis testing was developed. That’s why we list our assumptions — so studies can be run and assumptions revisited. That’s why we run diagnostics when the analysis is done.

    On your fourth point. Consider the following words of the editorialist:

    bq. NCCAM, if it is to justify its existence, must consider halting its search for active remedies through clinical trials of treatments of low plausibility.

    Presumably the editorialist believes that echinacea has “low plausibility,” and, given that previous studies are “unclear”:http://www.mayoclinic.com/health/echinacea/NS_patient-echinacea, at least as far as the scientists and clinicians at Mayo are concerned, he might be excused. Another review of studies is “here”:http://www.herbs.org/current/echinmix.html.

    So, my “doubtful” comment is based on the fact that I “doubt” that one study “overrides” (your word) the whole Echinacea scientific literature. So if you want evidence of former studies, there they are. I am not “without data.” Next time, you can simply ask for data without the lecturing and accusation of dishonesty.

    bq. When you decide that a study is “different enough” to require extra-special attention, and maybe even more careful examination of results, you are engaging in selection bias. Or, to be precise, confirmation bias. You might call it “avoiding knee-jerk acceptance”, though you would be wrong.

    bq. Why? Because by subjecting a study to extra-careful scrutiny, you are raising the probability of finding a problem. That’s all well and good… BUT if the reason you decided to scrutinize it was that it was in disagreement with OTHER studies, and you haven’t scrutinized those OTHER studies equally, that’s biased. In essence, you have “knee-jerk accepted” the OTHER studies by subjecting only the contradictory study to scrutiny.

    I subjected this study to “extra-special scrutiny,” as you put it, because of the editorial based on it and the claims that Echinacea “bites the dust.” Selection bias? Yes, but not my selection.

    bq. You will note, I hope, that my posts don’t claim the recent study is right. I merely claim the recent study may be right; that it is “entirely possible” it is correct.

    Hey, it may be right. We agree. The issues raised by critics of the study may not affect the results one way or the other. Who knows, unless we investigate further, which the author of the editorial seems disinclined to do.

    bq. As you seem to be claiming significant bias on the part of many people who oppose homeopathy,

    I’ve claimed bias on the part of the people who take the results of a study and claim “The End of Homeopathy” and “Another one bites the dust.” If you notice, I don’t criticize the statistical methods used, or their application. I point out non-statistical criticisms (apparently you think I “attack”) of the studies, because, as you mention, all studies have criticisms and assumptions that may be wrong. (As a side note, the dose criticism seems to be rather serious. I wonder if anyone has done a dose-ranging study on echinacea. Most studies I’ve seen have tried to address the question of “does it work.”)

    Contrary to the picture you painted, I did not go off blithely looking for a study that I didn’t agree with, and then subject it to microscopic treatment looking for flaws. I found the study through the rather fantastic editorials that claimed the study as a basis. What I found was that the study did not support the editorials and that the dose used in the study was called into question. Again, selection bias, but not my selection. Confirmation bias, but not my confirmation. (Again, I couldn’t tell you whether echinacea “works.”)

    So, now I ask you, and it’s a yes/no question that doesn’t require you to accept my values. For comparison, the editorials I attack come down in favor of ending scientific inquiry on the basis of the accompanying study.

    bq. Should homeopathy be investigated further, or should investigation of homeopathy end now?

    We can amend that:

    Should echinacea as a cold prophylaxis or remedy be investigated further, or should such investigation end now?

    Those are the core issues I’ve tried to address here, and, unfortunately, I’ve let you carry the discussion into areas of whether we accept hypotheses before running studies, which I consider only marginally relevant to these core issues. (Unless you believe that investigation should end because you personally accepted the hypothesis of no effect, and I am going to give you more credit than that.)

  7. “Should echinacea as a cold prophylaxis or remedy be investigated further, or should such investigation end now?”

    That’s certainly a fair question. Problematically, I don’t know the exact details of the contrasting studies well enough to come to a conclusion. I don’t believe (though I may be wrong) that the prior studies were extraordinarily rigorous, though I do not have firsthand knowledge of that. My scanning of the echinacea literature some time ago did not stick in my mind, sadly, though I remember being unconvinced. And it’s certainly possible there was a challenge issue in the recent study.

    So as I don’t know, really, either way, where does that leave me? I’d have to say:

    -Yes, it CAN be investigated further (“should” is trickier, but I doubt you were intending to argue that aspect)
    -Until it’s investigated further, it’s not conclusive whether it works (you, or others, may disagree, of course).
    -Until it’s conclusively shown to work, I think scientists are morally obligated to avoid implying that it works, or suggesting that it works.

  8. Hey, it looks like I pretty much agree with you on this last comment, down to the evidence of echinacea’s efficacy being inconclusive.

  9. […] I’ve believed for a while that self-described “skeptics” are not really skeptical. See also doubt, especially the noun form: 1 a : uncertainty of belief or opinion that often interferes with decision-making b : a deliberate suspension of judgment. Unsurprisingly, I’ve found some similar opinions on the ’net: * * I’ve ranted before about the logical errors made in this brand of skepticism, which I now call pseudoskepticism, especially in the area of alternative medicine. And let’s check some of the views on these sites I linked to … (On James Randi’s comments about Penta water) Randi could not be more wrong. Water is not simply “water- burned hydrogen, no more no less”. It is a highly anomalous substance, and its fundamental properties are still the subject of basic research. And the following: The existence of scientific evidence for water clusters does of course not imply that “Penta” and similar products have any merit, but it does caution against outright dismissal of these kinds of product. Randi’s sweeping negative statements betray a lack of knowledge on the subject and qualify him as a blundering pseudo-scientist. His petty, adolescent criticism of a simple typographic inaccuracy on the “Hydrate for Life” web site and his use of ridicule (he asserts that “Penta” is “magically-prepared” and works “miraculously” while the manufacturer simply states that the process is “proprietary”) support that impression. And yet, Randi rhetorically assumes an air of scientific authority, even infallibility. Exactly. The opposite of (pseudo)skepticism is not credulity, but an open mind. I find this particular section interesting because it makes the Avogadro’s number argument against homeopathy not so watertight. Again, who knows how the science will play out, but claiming that homeopathy’s effectiveness is physically impossible because of the Avogadro’s number argument is to assume scientific closure on something that is not scientifically closed.
(And whether homeopathy’s mechanism of action can be explained from this line of reasoning has yet to be explored, or at least discovered by this blogger.) […]

  10. […] Contrast this with a statement made by an editorial accompanying a June 2005 negative report on echinacea, where the author said that studies should be restricted to “biologically plausible” candidates and falls just short of calling NCCAM an illegitimate organization. Well-known high priest of skepticism Orac makes his own dig. Granted, I wish that NCCAM would do smaller dose-ranging studies before launching into larger confirmatory trials, to avoid dosing people with unsafe or ineffective doses of a natural product, but I think that the above quote shows exactly why we need such an organization. We don’t know, at least in a solid, repeatable sense, what these herbs are doing and how they interact with other interventions. (It’s known that echinacea does interact with some cancer interventions.) […]

Comments are closed.
