This is the second of three posts laying out the philosophical basis for RxISK.org, which will be live in the next few weeks. The others are Cri de Coeur & The Unbearable Lightness of Being.
In Cri de Coeur, I outlined a scenario in which a treatment that causes suicide, when put into good trials without any manipulation of the data, any statistical artifice, or any ghostwriting, might give rise to a relative risk of suicide of less than 1.0.
This poses a real problem for anyone who thinks RCTs provide evidence of cause and effect in general or that RCTs are the way to investigate adverse effects. How could a drug that does one thing in real life do exactly the opposite in an RCT?
Given what we now know about antidepressants and suicide, we can construct studies to make suicide on antidepressants appear or disappear. We could produce almost any relative risk between 0.1 and 10.0 (see Heads we win, tails you lose, Psychotic doubt, Cri de coeur).
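The arithmetic behind this is easy to see with made-up numbers. As a toy sketch (the figures below are invented purely for illustration, not drawn from any actual trial): suppose the illness itself drives suicidal acts, and the drug damps that illness-driven pathway while directly causing some acts of its own.

```python
# Toy illustration with invented numbers: a drug that directly causes
# suicidal acts can still show a relative risk below 1.0 when the
# illness also drives the event and the drug damps that pathway.

n = 1000                    # patients per arm
illness_risk = 0.05         # illness-driven risk of a suicidal act

placebo_events = illness_risk * n                # 50 events on placebo

drug_relief = 0.5           # assume the drug halves the illness-driven risk
drug_induced_risk = 0.015   # but directly causes acts in 1.5% of patients

drug_events = illness_risk * drug_relief * n + drug_induced_risk * n  # 25 + 15

rr = (drug_events / n) / (placebo_events / n)
print(f"relative risk = {rr:.2f}")
```

With these invented numbers the drug is the proximate cause of 15 suicidal acts, yet the trial reports a relative risk of 0.8 – exactly the kind of result that would be read as protective.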
Companies know exactly how to use RCTs to hide risks without any fraud at all. The surprise is that they got caught out in the case of antidepressants and suicide. There may be other risks they have worked out how to conceal for ever.
We can map out the dynamics in the case of antidepressants and suicide because this problem is now well understood. Comparable scenarios can be constructed for some arrhythmias on some anti-arrhythmics, or respiratory problems following beta agonists given for asthma, and of course everyone believes certain vaccines can cause brain damage but that controlled studies would show a lower incidence of brain damage in the vaccinated group.
In principle companies can deliberately use RCTs to hide problems in every case in which both an illness and its treatment give rise to at least superficially similar problems. Where the problems are not understood the way the antidepressant and suicide issues are, RCTs risk accidentally becoming a means to hide rather than reveal the problem.
If the adverse event is not well understood, RCT results are impossible to interpret with confidence, despite any number of confidence intervals. We should only say that these are the data that emerged from this particular assay. We should also say that for adverse events it may be a serious mistake to give RCT data primacy over other data.
With adverse events that stem from both an illness and its treatment, the question is what weight to put on observations from controlled trials that have not been designed to investigate the issue versus good observations from clinicians staring the problem in the face, who have an opportunity to investigate the link by means of challenge, dechallenge and rechallenge (CDR) relationships, along with evidence of dose-responsiveness, and reversal by antidote.
We might discount a report from one doctor describing a patient who develops an adverse event on treatment, where the doctor links the drug to the problem because of CDR, dose-response and other relationships. But if a thousand doctors make the link (and even more so if each knows there are 999 other reports) the field will believe the outcome.
Where do we cross the credibility threshold for believing clinical reports like these? Antidepressants and suicide offer a good test case because so few people naturally believe this could happen. It is now clear that the original set of six cases from Teicher, Glod and Cole was spot on the mark. The later addition of reports from five further centers provided powerful corroboration, regardless of what the RCT data might have shown. FDA and everyone else should have gone with these reports as evidence of cause and effect.
But it’s not just FDA who have dug themselves a hole on this one (see Cri de Coeur).
I wrote a version of this post for the Lancet 12 years ago. Before I got the reviews back I had feedback that the Lancet would “buckle” and the article would not be accepted for “political” reasons. (I put the article and its reviews on Healyprozac.com a decade ago).
The reviews of my paper were in fact longer than the paper itself. The clearest point at which a reviewer lost their cool was when the statistical reviewer was faced with the suggestion that a relative risk greater than 0.5 for a problem in a trial that could stem from both an illness and its treatment might be grounds for concern. He went into orbit, branding this as “completely bizarre” and lacking in “any statistical sense”.
He went on to say that it would be completely unethical to run a trial designed to look at the issue of suicide – or by implication any other hazard. In fact FDA and Lilly had designed just such a trial but dropped it when the public relations heat cooled down.
It is this attitude that delivers Evidence Based Medicine straight into a drug company’s pocket.
Using exactly the same thinking, GlaxoSmithKline’s Ian Hudson argued that even though in scores of cases company employees had categorized a suicide or other problem as caused by their drug, because there was no statistically significant RCT data showing an increased relative risk on the drug these judgments were meaningless (Psychotic doubt).
In 2006, when GlaxoSmithKline’s data for suicidal acts became statistically significant, did these prior judgments of causality in individual cases magically change from wrong to right?
Now is a time for those supporters of Evidence Based Medicine who spend their time bravely challenging the charlatans of complementary medicine to step up to the plate and sort out this critical problem within orthodox medicine.
They need to say that a relative risk of 1.0 or less can be consistent with a drug causing a problem, or else explain why this is wrong.
The acknowledgement needs to be as specific as this. There are lots of generic statements to this effect in books like Rothman's Modern Epidemiology, but generic statements cut little ice with the lawyers working for pharmaceutical companies or with doctors in general.
Why would anyone with good intentions fail to step up to the plate?
Well here’s the dilemma. Saying that RCT data from a drug that causes a problem might show a relative risk less than 1.0 concedes that RCTs are not some sacrament that purifies but are rather an assay system and that the results may have little meaning outside the assay. It also entirely undercuts FDA’s current position (see Cri de coeur).
Conceding this point concedes that the results of an RCT may be deeply misleading for variables other than the primary outcome measure (and even on this score may mislead).
There is a way forward even for secondary outcome measures that embraces RCTs – it would call for all RCTs conducted in healthy volunteers to be registered with the data made fully available (see Zoloft Study: Mystery in Leeds).
Anyone who accepts the overall argument here about the role of RCTs in determining adverse events but remains silent becomes to some extent party to drug-induced deaths in people who do not deserve to die – to suicides, violence and inappropriate incarcerations on psychotropic drugs. This happens because that silence is being used every week of the year by drug companies to deny plaintiffs justice and every day of the year by doctors to deny patients recognition.
There is a second linked problem here. The relative risk of a suicidal act on an SSRI compared to placebo is in fact 2.0 or thereabouts. If this doesn’t mean that SSRIs cause suicide, what does it mean?
The only thing it can mean is that when it comes to suicidal acts the risk of harm in these studies exceeds the likelihood of a benefit. Regulators, doctors and others are reluctant to warn about the risks of antidepressants on the basis that the adverse publicity will mean that some who might benefit from treatment will be deterred from seeking treatment (See Pills and the man).
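To make the arithmetic concrete, here is a hedged sketch with invented counts (these are illustrative figures, not GlaxoSmithKline's actual data): a relative risk of 2.0 simply means the event rate in the drug arm is twice that in the placebo arm, and the standard confidence interval on the log scale shows how readily a doubling of this size reaches statistical significance.

```python
import math

# Invented illustrative counts, not actual trial data:
# 60 suicidal acts in 4000 patients on drug, 30 in 4000 on placebo.
a, n1 = 60, 4000   # drug arm: events, patients
c, n2 = 30, 4000   # placebo arm: events, patients

rr = (a / n1) / (c / n2)   # relative risk: (60/4000) / (30/4000) = 2.0

# Conventional 95% confidence interval for a relative risk,
# computed on the log scale
se = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)

print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

With counts of this size the lower bound sits above 1.0 – the "statistically significant" threshold the regulators waited for. With smaller trials the same relative risk of 2.0 would produce an interval straddling 1.0 and be dismissed, which is exactly the Hudson-style argument at work.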
Regulators and others “feel” that somehow there are more people benefitting from antidepressants than being harmed. This is not an evidence based position.
Before stepping up to the plate, anyone faced with this dilemma can ask two questions, the first of which is how did we get it so badly wrong?
As Daniel Kahneman and others have shown over the years, the simple repetition of mantras like "RCTs are the gold standard", "case reports are just anecdotes" and "once is never" produces a sense of familiarity that induces agreement. The Golden Rule of propaganda is that once is never – repetition is all.
It takes a critical effort to pull ourselves out of the hypnosis, to wake up, to stop being good Germans. Academics who cannot recognize propaganda are like salt that has lost its flavor. Proper science should be the antithesis of propaganda, with a Golden Rule of its own: when you hear the words "the Gold Standard"… wake up!
The second question is if RCTs are not the Gold Standard for determining whether a drug causes an adverse event or not, is there an evidence-based alternative?
See The Unbearable Lightness of Being.
Copyright © Data Based Medicine Americas Ltd.
Bravo! The mantras "RCT is the gold standard", "Case reports are just anecdotes" & "Once is never" are like slogans – a good one can stop progress for 50 years! I'd like to see a 4th article, or the Epilogue, titled: "A conclusion just marks the place where one has decided to STOP thinking".
I think you have been kind enough in suggesting a rationale for the ease with which pharmaceutical companies have misled doctors. I happen to believe that it is mostly due to a gross lack of understanding of, and experience in, the practice of the "scientific method" – or "medical model" – which produces MDs who are capable of calling a toxic side effect from a drug a symptom of severe mental illness that the drug has "unmasked".
You use analogies from baseball, like "step up to the plate", which are visually stimulating. I think football (as it is played in America) offers another equally motivating visual. The FDA and doctors alike should have been throwing "Red Flags" on PHARMA's playing field 20 years ago. A Red Flag means: play stops; a penalty is announced along with a punishment – like "15 yards and loss of down". An early instituted behavioral modification program would have firmly established the rules of play – and inspired PHARMA to think more about patients if they were dreaming of big profits.
One more thing. When RCTs rely a great deal on subjective data taken by an interviewer using a standardized scale for assessing, say, depression (the Beck scale, for instance), how many factors of personal bias that could influence the result are taken into consideration? I am thinking about the main aspect of truth telling, just because that seems to come up throughout the analysis and publication of the data on adverse effects. It could account for any seemingly positive result as well. There are three opportunities for lying: the subject in the RCT, the interviewer who rates the targeted symptom, and the data analyzer. The other truth tampering occurs in places we have already examined.
Time to spread the word to academic medical centers- EVERYWHERE-
Good to see you here, Katie.
You’re right — Where are the umpires?
I’ve lost faith in the umpires. At every major academic medical center, the department of psychiatry is part of the medical school. Any first-year resident should be able to employ rudimentary scientific analysis to establish a link between a new variable and the emergence of a new symptom. If the new variable is a drug, the automatic response is to withdraw the drug. If the new symptom happens to be a documented adverse effect of the drug, withdrawing the drug would be sufficient treatment of the symptom. The unmitigated gall of any academically affiliated MD in beginning a course of new drugs and calling this treatment of a disorder – which amounts to calling an adverse drug effect a disorder – should warrant the attention of any professor in that medical school who values the reputation of his/her institution. This practice was publicized in medical journals and raved about in the elite circles of psychiatrists at HMS – and not a single professor of medicine at HMS bothered to scrutinize this bizarre means of creating a new disorder. There has been no oversight of psychiatry within medicine, even amidst the mockery that pervaded the early years of biomedical psychiatry. You’d think this would have been a top priority, as psychiatrists back in the 70s were notoriously compromised in their medical knowledge.
I can’t contain my outrage enough to appropriately engage the ‘umpires’ on these matters, but I am inspired to engage those still in training and newly graduated from their psychiatric fellowships at Harvard-affiliated hospitals. Dr. Healy’s work is of paramount importance for this endeavor. A rather good fit at the most crucial moment, I’d say. The upcoming generation of psychiatrists is of the age to use the eleventh-hour atmosphere pervading psychiatry today as a springboard for their professional development.
The task at hand is devising strategies for slipping Dr. Healy’s work past the umpires and under the noses of the ruling class of full professors. I have no doubt that the critical thinking skills and the passion to revitalize the ethical, honorable aspects of the medical profession are ripe in this youthful group. They need a mentor to the same extent that Dr. Healy needs successors.
I was pondering the extremely common practice of prescribing an antidepressant and, when adverse effects demonstrating excessive stimulation emerge — sleep disturbances, akathisia, jitters, etc. — a benzo is added, masking the adverse effects of the antidepressant.
These people then go on to years of misery on the drug combination — which never seems to work very well — and then have difficulty tapering off both drugs, the iatrogenic perturbation of normal functioning having done its work below the surface, not to mention inadvertent addiction to the benzo.
Certainly many PCPs are involved, but psychiatrists don’t seem to know any better. Clinicians are using the antidepressant-benzo combination widely as a panacea to keep people on antidepressants when, if the world worked in a reasonable way, they should be discontinued due to obvious adverse effects.
I admit at the outset to having a very strong bias against using “tests” of various kinds, for reasons too numerous to describe here. The reference above to the Beck (not favourable, I infer) reminded me of my particular objections to this “Inventory”.
The standardization sample was inadequate and potentially misleading. The average age of the outpatients in the sample was 37.20 years; however, the range was from 13 to 86 years. Caucasians made up ninety-one percent of the sample, while African-Americans and Asian-Americans made up only four and one percent, respectively, and were the only minority groups included at all. Together, they comprise only five percent of the total sample. Containing only 500 individuals, the standardization sample is too small. There is no information regarding socioeconomic status or residential location (urban, suburban, rural) compared to US census data.

A brief scan of the reported means and standard deviations raises some concern about the relation between a client’s scores on the BDI and clinical severity estimates. For example, the manual reports that clients clinically diagnosed as severely depressed obtained a mean of 32.96 (SD = 12.0) on the BDI. The manual, however, states the cutoff for the severely depressed range is 29-63. These data really only lead the clinician to conclude that higher scores on the BDI indicate that a significant level of depressive symptoms is being reported.
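Taking the manual's quoted figures at face value, a quick back-of-envelope check (assuming, purely for illustration, that severely depressed clients' scores are roughly normally distributed) shows how poorly the 29-point cutoff separates the categories:

```python
from statistics import NormalDist

# Figures as quoted from the BDI manual: severely depressed clients
# scored a mean of 32.96 (SD = 12.0); the "severe" range starts at 29.
severe = NormalDist(mu=32.96, sigma=12.0)

# Fraction of clinically severe clients scoring BELOW the severe cutoff,
# under the illustrative normality assumption above
below_cutoff = severe.cdf(29)
print(f"{below_cutoff:.0%} of severe clients score below the severe cutoff")
```

On that assumption, over a third of clinically severe patients would score below the "severe" cutoff – the variation swamps the category boundaries.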
A crystal ball would be much less expensive.
Dr. Healy, what about tying in the adverse events databases of countries other than the US?
I believe the Netherlands has one, and the UK has its Yellow Card system http://yellowcard.mhra.gov.uk/. I know this would be a substantial undertaking but the more data points the better; this would be a way to accumulate them fairly quickly.
I was looking at the FDA results and found them kind of sparse, which is no surprise because doctors don’t take reporting adverse events seriously and patients don’t know it’s there.
RxISK, when launched in the next few days, will take reports globally. We hope to recruit teams of patients and doctors on a global basis to report.
I looked at the Yellow Card report on duloxetine and couldn’t make heads or tails of it. It seems those reports are keyed to body system (“Cardiac disorders”) rather than symptom (tachycardia), which makes them hard to interpret.
There were 52 “Fatal ADR reports,” though.
I fear this is going to sound pedantic, but it is important to distinguish between an adverse drug reaction (ADR) – any undesirable effect of a drug beyond its anticipated therapeutic effects occurring during clinical use – and an adverse drug event (ADE) – an untoward occurrence after exposure to a drug that is not necessarily caused by the drug. An example would be a medication error: wrong dosage, wrong drug, etc. A review of the literature shows that the terms are often confused or used as synonyms.
The reasons that physicians don’t report adverse reactions include, but are not limited to, the following:
1. The perception that it would appear that he/she has made a treatment error.
2. Failure to believe the patient unless the adverse event is something like a rash that can a) be identified and b) be written off as an idiosyncratic response.
3. The time involved in filling in forms, the bane of any physician’s life.
4. Belief that it is the manifestation of a separate disorder.
Many adverse events are not reported because they mainly involve giving the wrong drug or the wrong dosage, which may result from confusion of drug names or an accident in writing the order by the physician, e.g. 0.5 mg instead of 0.05 mg, or misreading by the nurse. Name confusions are common, e.g. Risperdal vs ropinirole, and, of course, there is the notoriously poor handwriting to deal with – of which, I must confess, I am so guilty that it is not unknown for me to be unable to read what I have written myself.
Fear of malpractice litigation governs many decisions especially in locales that are particularly litigious.
In Canada, manufacturers of medical devices and drugs must report adverse reactions under the Food and Drug Act.
Consumers may fill in a form (that is postage paid) and mail it to the requisite authority.
Of course, none of the above carries any guarantee of appropriate action.