The Tragedy of Lou Lasagna

April 9, 2013 | 12 Comments

Comments

  1. Apart from later views of his position, the comment by Lasagna on due process is a fascinating one, especially when taken to its logical conclusion. In the US, Due Process has a very specific meaning related fundamentally to the Constitution. If we take the criteria as outlined by the US Supreme Court and interpret them with respect to the release of drugs onto the market, I suggest we find something that ought to resemble the following:
    Procedural due process:
    An unbiased tribunal. (FDA)
    Notice of the proposed action and the grounds asserted for it. (Approval of a drug.)
    Opportunity to present reasons why the proposed action should not be taken. (Reasons for not approving a drug based on valid evidence.)
    The right to present evidence, including the right to call witnesses. (The right to inspection of raw data.)
    The right to know opposing evidence. (As above.)
    The right to cross-examine adverse witnesses. (The right to question the researchers and others responsible.)
    A decision based exclusively on the evidence presented. (Assuming validity after all of the above conditions are met.)
    Opportunity to be represented by counsel. (Examination conducted by unbiased experts.)
    Requirement that the tribunal prepare a record of the evidence presented. (Publication of all results.)
    Requirement that the tribunal prepare written findings of fact and reasons for its decision. (As above.)
    There is no equivalent to US Due Process in the UK, the Magna Carta notwithstanding, but in Canada the Charter of Rights and Freedoms is roughly equivalent. The legal and ethical question then is this: if the above criteria are not met, are patients prescribed these drugs not being put in danger, with a threat to life and health? Is this not a constitutional matter for the courts, and if not, why not? Class action, anyone?

  2. It’s amazing to think that the placebo effect didn’t enter medicine until the 1950s. I’d always thought it began with the “Hawthorne Effect,” discovered in Chicago around 1930 by Elton Mayo from the Harvard Business School, who was just trying to make Western Electric a little richer. Mayo’s original goal was to study the effect of physical variables such as lighting changes on the productivity of assembly-line workers at Western Electric’s massive Hawthorne Works. He recruited a test group of young immigrant women in their teens and early twenties. To his befuddlement, however, he couldn’t determine which lighting condition was best. Everything worked – no matter which way he varied the group’s production environment, they out-performed their peers back on the line. The experiment itself was more powerful than the variable officially being studied.

    The “Hawthorne Effect” is often reduced to the tendency of people to change their behavior simply as a result of being watched. But as Mayo recognized, the experiment meant more than that to the young women at the Hawthorne Works. It brought them the camaraderie of working in a close-knit group, the novelty of actually being listened to by their superiors, and a little something new each day. The productivity increase generated by these “human factors” outstripped an earlier study of bonus pay for extra production in a shop staffed by more experienced workers.

    Where Medicine Avenue might have seen the Hawthorne Effect as just an obstacle – a noise drowning out the main signal – Madison Avenue (and Wall Street) was fascinated by it. Selling the sizzle, not the steak, had always been a virtue to them. And if you can get people to do “steak” work for “hamburger” pay plus a little sizzle, that’s a triumph. The study of “human factors” in business was off to the races, and has been used both for good and for ill ever since. I wonder if drug researchers ever took note of the Hawthorne Effect, especially those studying psychotropic medications – and did it ever spark anyone’s interest in studying those “human factors” in their own right?

    There’s a great archive on the Hawthorne Effect from Harvard here:
    http://www.library.hbs.edu/hc/hawthorne/

  3. I have long been puzzled by the seemingly very late development of the randomized controlled trial. Why was it a novelty in the middle of the Twentieth Century? The statistics of probability were no longer new. The term “placebo” had long been in use, so the placebo effect must have been well known for a long time.

    I wonder if the key factor was the invention of the electrically powered adding and multiplying machine, which drastically reduced the time-consuming labor of statistical analysis.

    Any observations from historically minded readers would be welcomed.

    • Edward – it’s not about placebo or statistics. The key point is skepticism. An RCT is really about rejecting a claim. A willingness to jettison hope. As in the Women’s Health Initiative HRT study. We still haven’t got there even now – most RCTs are used to fuel therapeutic bandwagons rather than stop them in their tracks.

  4. Dr. Healy,
    Thank you for your recent talk at Queen’s University on this topic, which I enjoyed.

    My question relates to your assertion that regulatory efforts centred on RCTs have done the public a disservice.

    Past experience suggests that patients suffering from desperate medical conditions are vulnerable to minimizing the harms and exaggerating the benefits of treatment. Autologous bone marrow transplants for solid tumours, the Zamboni procedure for MS, acetylcholinesterase inhibitors for Alzheimer’s, etc. were fervently embraced by patients, and all proved to be of little to no efficacy. Population-based RCTs continue to provide evidence that a therapy can be expected to be effective in a wide enough range of the population to warrant authorization to enter the medical market. Medical therapy rests on a balance of risk and benefit; to advocate providing investigational drugs without RCT clearance to patients, on the assertion that they may vary sufficiently from the “average” patient, is irresponsible and dangerous.

    RCTs were never designed to detect rare adverse events, nor the effects of drugs outside their strict trial environment, though they do give some idea of both. The former is the role of post-marketing pharmacosurveillance studies. Relating thalidomide’s positive RCT results to the Kefauver-Harris amendment’s failure is frankly wrong.

    The Kefauver-Harris amendment protects the public by requiring demonstration of efficacy and by having “dispassionate” physicians, blinded to treatment allocation, prescribe the drugs in their investigational phase. It serves the public interest. Senator Kefauver originally wanted comparative effectiveness as a condition for drug approval, along with patent limitation. Rather than dismantle his efforts, should we not be strengthening them through open access to trial data, demanding active placebos in RCTs, and reducing physicians’ conflicts of interest?

    • Yan

      Agree completely that patients and doctors are biased toward benefit and toward missing harms. But RCTs as used to regulate treatments don’t temper our bias toward benefit and the risk of missing harms – they do the opposite. As with cholinesterase inhibitors and antidepressants, they give the impression there is effectiveness when at best there is an effect and likely increased mortality, and because of the focus on primary outcome measures they enable companies to say that no harms have in fact been demonstrated in these trials that are supposedly the definitive word on what the drug does.

      I agree with you that patents are a key issue, and that access to the data from company RCTs is important but not crucial (see Marilyn’s Curse next week). I don’t agree that conflict of interest is, don’t think that active placebos help, and believe comparative effectiveness cannot be established in RCTs except when there are hard outcomes like mortality. But beyond that I think RCTs are broken. See Marilyn’s Curse when posted.

      David

      • Thank you, Dr. Healy, for your comments. I assume by “RCTs as used to regulate treatments”, you mean use of surrogate end points, and I agree with you completely on that point.

        However, if you are alluding to confounding by indication, whereby harms are attributed to a therapy because of the condition it is trying to prevent (e.g., the bias that concludes “people on aspirin for MI prevention die earlier than the population not on aspirin, so aspirin must be causing them to die,” forgetting that people taking aspirin for MI prevention take it because they have higher CVD risk to begin with), we have already devised a solution to this in the form of randomization, which distributes baseline risks equally between the intervention and control arms.
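
        Purely to make that concrete – a minimal simulation sketch in Python, using invented risk figures rather than any real aspirin data, contrasting an observational comparison (confounded by indication) with a randomized one:

            import random

            random.seed(0)

            def yearly_death_risk(high_risk, on_aspirin):
                # Invented figures: high-CVD-risk patients die far more often, and
                # aspirin is assumed to give everyone a modest 20% relative benefit.
                base = 0.06 if high_risk else 0.01
                return base * (0.8 if on_aspirin else 1.0)

            def death_rates(n, assign):
                deaths = {"aspirin": 0, "control": 0}
                counts = {"aspirin": 0, "control": 0}
                for _ in range(n):
                    high_risk = random.random() < 0.5   # half the cohort is high risk
                    arm = assign(high_risk)
                    counts[arm] += 1
                    if random.random() < yearly_death_risk(high_risk, arm == "aspirin"):
                        deaths[arm] += 1
                return {arm: deaths[arm] / counts[arm] for arm in deaths}

            # Observational: aspirin is mostly given to high-risk patients
            # (confounding by indication), so aspirin users appear MORE likely to die.
            observational = death_rates(
                100_000,
                lambda high_risk: "aspirin"
                if random.random() < (0.9 if high_risk else 0.1)
                else "control",
            )

            # Randomized: the coin flip ignores baseline risk, so the arms are
            # comparable and the assumed benefit of aspirin shows through.
            randomized = death_rates(
                100_000,
                lambda high_risk: "aspirin" if random.random() < 0.5 else "control",
            )

            print("Observational death rates:", observational)
            print("Randomized death rates:   ", randomized)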

        You mention on another blog post that “[f]or instance, where superficially similar problems can be caused by both drug and illness, such as suicidality on antidepressants, RCTs may perversely show that drugs that clearly cause a problem seemingly don’t cause it.” This conundrum can easily be prevented by matching the baseline risk factors of the placebo and active arms of the study, as discussed above. For example, if an SSRI achieves lower suicide-related mortality than placebo at the end of a trial, then there is no explanation other than that its suicide-preventing effects (efficacy) outweighed its ability to cause suicide (side effect), within the characteristics/risk profile of the patients in whom it was tested.
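
        Put as back-of-the-envelope arithmetic (a hypothetical sketch – the rates below are invented purely to illustrate the “net effect within the tested risk profile” point):

            # Invented per-patient probabilities over the course of a trial.
            n_per_arm = 5_000

            p_suicide_untreated = 0.010  # baseline risk in the enrolled population
            p_prevented_by_drug = 0.006  # risk removed by treating the illness (efficacy)
            p_induced_by_drug = 0.002    # risk added by the drug itself (adverse effect)

            p_suicide_treated = p_suicide_untreated - p_prevented_by_drug + p_induced_by_drug

            print("Expected placebo-arm suicides:", round(n_per_arm * p_suicide_untreated))  # 50
            print("Expected drug-arm suicides:   ", round(n_per_arm * p_suicide_treated))    # 30

            # The trial records only the net difference (50 vs 30 here): efficacy
            # outweighing the adverse effect in this enrolled risk profile, with the
            # two components never observed separately.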

        In light of what I have discussed, I would appreciate an elaboration of your statement from your LRB review: “RCTs tell us almost nothing about cause and effect. They discover nothing. They likely block the discovery of many treatments.” Of course, RCTs are not the be-all and end-all; the Bradford Hill criteria tell us that there need to be other accompanying factors (e.g., biological plausibility, dose-response) to prove causality. But RCTs are part of that answer. There is distortion of RCTs to suit pharma needs (e.g., surrogate end points, trials using selective patient groups, etc.); we can change that. But let there be no confusion: RCTs as a methodology remain robust. Let’s not throw the baby out with the bathwater.

        Looking forward to your post next week.

        • Yan

          Thanks for this. It will be clearer in Marilyn’s Curse. This is not confounding by indication. Many people hearing the issue for the first time think it is confounding by indication when in fact it’s not. There are two issues here – one is the use of the word cause. I agree some antidepressant trials can show a preservation of lives and some a loss of lives, but this has only an indirect implication for whether the drug causes suicide or not. The other aspect is the illness and the drug, where both the dose of the illness and the dose of the relevant action of the drug can vary, and in varying can lead to different outcomes. These are not issues that can be sorted by randomization. This will hopefully be laid out in more detail in MC.

          David

    • Ah, yes, but some supporters like policies that are supported by evidence ;-) A point there, though: if you encourage people to look at the evidential support for policies, you get the impression they tend to comment on that. The folic acid supplementation issue, for example, is not a bad example. Peter Dearden has raised .

    • Michael – thanks for this – mistake on my part re Lenz.

      Re Frances Kelsey, the usual story is that she single-handedly saved the US from thalidomide, for which she received the President’s Award for Distinguished Federal Civilian Service from JFK. But in fact Morton Mintz’s piece of July 15, 1962 made it pretty well impossible to licence thalidomide. This was, ironically, headlined “‘Heroine’ of FDA Keeps Bad Drug Off Market.” In this he pretty well created the narrative that gave Kelsey the credit.

  5. Appreciate this article. Would you have the references for:
    1. Lasagna’s double-blind study of thalidomide

    2. Lasagna’s intervention with the FDA on behalf of Richardson-Merrell

    3. The Lancet review of “The Doctors’ Dilemma”

    4. Harry Beecher’s review of “The Doctors’ Dilemma”

    Many thanks!
