Coincidence a fine thing

Coincidence can be a fine thing. No sooner had I finished The tricks that drug companies do live after them, asking for examples of maneuvers to add to a generally available repository of tricks, than up pops Robert Gibbons’ paper, Suicidal Thoughts and Behavior With Antidepressant Treatment, with not one but two maneuvers and reminders of others.

Dangerous liaisons

First off, the reminders. Gibbons has previously produced data showing apparent rises in juvenile suicide rates alongside falls in antidepressant prescription rates. This was an ecological fallacy maneuver taken to an extreme.

What is the ecological fallacy?

Well, there was a general fall in suicide rates in the 1990s and 2000s in line with rising antidepressant prescription rates; but claiming there is a link ignores the rising suicide rates that went hand in hand with rising prescription rates for antidepressants in the 1960s and 1970s when these were more likely to be given to suicidal patients and should — if antidepressants help — have made a difference.
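The trap is worth spelling out with numbers. Below is a minimal sketch using invented figures, nothing to do with real suicide or prescription data, of how an association in aggregated data can run opposite to the relationship inside every group:

```python
# Toy illustration (invented numbers) of why aggregate correlations can
# mislead: the pooled correlation across groups can have the opposite
# sign of the relationship that holds within every group.

def corr(xs, ys):
    """Pearson correlation for two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# Within each hypothetical group, x and y rise together (correlation +1)...
group_a = [(1, 10), (2, 11), (3, 12)]
group_b = [(7, 1), (8, 2), (9, 3)]

# ...but pooling the two groups produces a strongly negative correlation.
xs, ys = zip(*(group_a + group_b))
print(corr(xs, ys))  # negative, roughly -0.9
```

Reading a causal story off national prescription and suicide rate curves risks exactly this kind of reversal: the individual-level relationship may be absent, or run the other way.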

But as I’ve argued elsewhere, these data from the 1960s and 1970s should not be taken as evidence that antidepressants cause suicide.


Why? There are too many intervening steps between prescriptions and recorded suicides, and autopsy rates are one of them. Autopsy rates rose as suicide rates rose in the 1960s and 1970s, fell in the 1980s as suicide rates fell (before SSRIs were launched), and continued to fall in the 1990s and 2000s. If autopsies aren’t done, many suicides (and homicides) are missed. The link between suicide and autopsy rates is much tighter than the link between antidepressant prescriptions and suicide rates. (See Reseland, Le Noury, Aldred, & Healy, Psychotherapy & Psychosomatics, 2008, and a better paper by Kapusta et al, Arch Gen Psychiatry, 2011, doi:10.1001/archgenpsychiatry.2011.66.)

Cherry picking

Despite this, after the 2004 warnings on antidepressants Dr Gibbons found one country where he claimed a minor blip in pediatric suicide figures in one year (not replicated the following year) showed the warnings were leading to suicides by putting people off treatment (Gibbons RD et al, Am J Psychiatry, 2007, 164, 1356-1363). And he is quoted in the LA Times of 7 February 2012 (Study questions antidepressant link to suicide in kids) as saying, “The impact of the ‘black box’ warning… was to reduce antidepressant prescriptions to kids — which was correlated with an increase in suicide rates in subsequent years.”


He says this, even though his 2007 paper was later described in the BMJ as “astonishing,” “misleading,” and “reckless,” and one of its researchers, Ron Herings, later said the study’s findings are “not right” and that it “doesn’t follow from the data, it is not true and serves just to scare people. It is hard to admit this, as I am one of the authors of the article and I attached my name to it …”


Hard to beat this, you might think, but the latest Gibbons’ study manages to do it.

Republish a discredited approach

In the latest paper, Gibbons uses a drop in scores on item 3, the suicide item, of the Ham-D rating scale in patients on antidepressants that he implies is so substantial a benefit it outweighs any conceivable harms — none of which were detected. On top of this he throws in a lot of largely irrelevant language about statistical modeling and purports to be analyzing a bigger database than the database on which warnings are based.


The first trick is that this study, shorn of a lot of irrelevant statistical modeling, had essentially been published in the BMJ 21 years earlier, in September 1991, by Beasley et al. FDA used this publication to justify not putting warnings on Prozac in 1991. This was the approach that Lilly took to hide the excess of suicidal acts on Prozac outlined in Drug companies use studies the way a drunk uses a lamppost and Psychotic doubt.

But here’s the rub:

“Item 3 of the Hamilton scale for Depression ratings which provide the data for the analysis is an insensitive measure of suicidality: a rating does not entail the asking of any standard questions and the anchor points for scoring aren’t well defined. Furthermore in interviews characteristic of clinical trials, clinicians noting an improvement in depression will tend by virtue of a halo effect, and the counterintuitive nature of the emergence of suicidality in such circumstances to rate scores down.”

This quote from a 26 October 1991 letter in the BMJ, in response to the Beasley meta-analysis of Prozac and suicide, is polite academic speak.

What it really says is this:

  • Some of the doctors running clinical trials and doing these ratings are pretty incompetent and just in it for the money.
  • Others are competent but rushed and have their junior doctors or research assistants do the ratings.
  • Others are rushed and fill in the rating scale hours (or maybe days) later based on general impressions and, following a cursory question to which a seriously suicidal patient might well have responded ‘I’m fine doctor’, score the suicide item down.

What I didn’t know then was that in some cases the patients in these studies may not have existed.

Non-existent patients

Non-existent patients are particularly useful in that their rating scales always go the right way and remain obediently hand in hand with their suicide scores: They can’t die, unlike real patients whose ratings might go the right way but who engage in suicidal acts.

Eyes wide shut

To spot patients getting worse you have to be looking for the problem, and these raters weren’t; some may have suspected they would be dropped by Lilly or Wyeth if they showed any signs of spotting a problem. But it’s harder to ignore suicidal acts, and it was this that ultimately gave rise to FDA’s discomfiture in 2004.


David Graham of FDA made this argument from early on, but it took Tom Laughren (FDA’s point man on the issue) a suspiciously long time to concede in public that “looking at items from the rating scales… turned out not to be very helpful…. [This method] did not detect a signal in these trials… and was not particularly productive.” (February 2, 2004 PDAC, pages 342-343)

How bad can it get?

Well, very bad. At the 2004 hearings, John March, a lead investigator on the TADS study that Gibbons and colleagues say they have incorporated in this paper, defended the use of Prozac for children. This use is defensible but, for reasons of profound self-interest, doctors should not be party to a defense that involves hiding problems.


Göran Hogberg from Stockholm recently tracked down the full (unpublished) data on suicidal acts in the TADS study, which are laid out in Table 1.

Table 1

Treatment             N     Children with a suicidal event     %
Fluoxetine alone      126   27                                 22
CBT and fluoxetine    107   9                                  8
All fluoxetine        227   36                                 16
CBT alone             103   5                                  5
Placebo               103   3                                  3
All non fluoxetine    206   8                                  4
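For readers who want to check the arithmetic, the crude rates and risk ratio implied by Table 1 can be recomputed directly. This is plain division on the published counts, not an adjusted estimate of any kind:

```python
# Crude event rates and risk ratio from Table 1 (TADS suicidal events).
# Simple arithmetic on the published counts; no adjustment whatsoever.

table = {
    "All fluoxetine":     {"n": 227, "events": 36},
    "All non fluoxetine": {"n": 206, "events": 8},
}

rates = {arm: row["events"] / row["n"] for arm, row in table.items()}
for arm, rate in rates.items():
    print(f"{arm}: {rate:.1%}")  # ~15.9% vs ~3.9%

risk_ratio = rates["All fluoxetine"] / rates["All non fluoxetine"]
print(f"Crude risk ratio: {risk_ratio:.1f}")  # roughly a 4-fold excess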

There were two other pediatric suicide trials of Prozac used by Gibbons and colleagues in this paper. These employed maneuvers 7 and 8 from the less commonly listed strategies table in my post The tricks that drug companies do live after them. That is, patients responding to placebo were dropped from one study (7), and patients doing poorly on Prozac were dropped from another (8).

Despite these steps, there was still an excess of suicidal acts on Prozac. Dr Gibbons and colleagues apparently “found no evidence that fluoxetine increased the risk of suicidal thoughts or behavior in youths.”

They can’t have been looking very hard.

Or else the wording is critical – you drown one signal (suicidal acts) in the background noise of a more general signal (suicidal ideation).
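How the drowning works is just arithmetic. With made-up numbers, assumed purely for illustration and not taken from any trial: if suicidal acts are rare but in excess on drug, while suicidal ideation is common and balanced between arms, a combined "suicidality" endpoint buries the excess:

```python
# Invented illustration of signal dilution (all numbers are assumptions,
# not data from any trial): a rare endpoint with a real excess is swamped
# when pooled with a common, balanced endpoint.

n = 1000  # patients per arm (assumed)

acts_drug, acts_placebo = 16, 4             # rare: 4-fold excess on drug
ideation_drug, ideation_placebo = 200, 200  # common: no difference

rr_acts = (acts_drug / n) / (acts_placebo / n)
rr_pooled = ((acts_drug + ideation_drug) / n) / (
    (acts_placebo + ideation_placebo) / n)

print(rr_acts)              # 4.0: a clear signal in acts alone
print(round(rr_pooled, 2))  # 1.06: almost invisible in the pooled endpoint
```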


Because FDA had licensed Prozac for depression before the 2004 suicide controversy blew up, they became party to a myth that somehow Prozac was ok where other antidepressants given to children weren’t. Prozac in fact shows no more efficacy than other antidepressants for children, has just as bad a suicidality profile, and carries a similar range of other harms, such as sexual dysfunction and inhibited growth.


This is not an argument for not using Prozac. As I also mentioned in the 1991 letter to the BMJ: “The significance of the emergence of suicidality… is that it can be anticipated and forestalled by warning patients.” I once thought that an appeal to patient safety would get doctors on board – but apparently not.

Professional suicide

The final sentence on Gibbons’ study in the LA Times is, “I hope that the warnings will not prevent depressed children and adults from getting treatment for depression… The greatest cause of suicide is untreated or undiagnosed depression. It’s very important that this condition be recognized and appropriately treated and not discarded because doctors are afraid to be sued.”

This is a recipe for professional suicide.

If the drugs are wonderfully effective and come with no problems, it would be much less expensive to have non-medical prescribers bring the benefits/salvation that drugs can bring. This is perhaps not the kind of thing Dr Gibbons can be expected to be sensitive to — as he is not a doctor.


Gibbons ends his article by setting up my next set of posts beautifully when he notes that there are limitations to RCTs when it comes to studying the risk of suicide on antidepressants. There are indeed – but not the ones he suggests.  I will explore some limitations over the next 2 weeks under the heading of Spin & Data.



Comments

  1. A few additional points: patients who are suicidal before treatment are generally excluded from antidepressant drug trials, so emerging suicidality in an RCT might be somewhat more likely to indicate a drug effect than a reflection of the underlying pathology. What that means for the data, one can only guess. Second, the Ham-D and the Beck are in my opinion worthless measures for a study, because they don’t address the level of persistence and pervasiveness of the depressive symptoms. Without follow-up questions, there is no way to know if the patient really understands what the question is trying to get at. Not to mention that with contract research organizations there are financial incentives for both the doctor and the patient to exaggerate initial symptomatology, meaning you’re getting a garbage in, garbage out phenomenon.

  2. Significant i and ii by 1Boring Old Man provide another great analysis of the problems with Dr Gibbons’ methods and conclusions. This is significant all right; the research of people like Dr Gibbons is how doctors all over America remain complacent about misleading parents about the safety and value of certain drugs. I wish I had access to these insights in time to save my son, who committed suicide because of a drug he would have been much better off without. I had questioned his doctor and was told things that I now know were completely wrong. The challenge is how to get parents tuned in to these messages before disaster strikes.

