
Dr. David Healy

Psychiatrist. Psychopharmacologist. Scientist. Author.


The Miracle of Artificial Intelligence

August 10, 2025

The recent FDA Panel on SSRIs and Pregnancy attracted a scathing Media Response.  See Unsafe Safety Systems for 25 responses including the American Psychiatric Association and Britain’s College of Psychiatrists.

For the most part the responses were remarkably similar, displayed complete amnesia for previously accepted facts, and downplayed any risks with phrases like ‘earlier, smaller studies hinted at a slight increase in risks, but bigger, more recent and better conducted studies have cast doubt on this, so that nothing is firmly established – whereas the fact that suicide is the leading cause of maternal death is established’. There is no hint this might be caused by the SSRI the woman is taking.

As Adam Urato, one of the FDA panelists, later put it:

This @wbur segment What is the risk of taking antidepressants during pregnancy? running for 6 minutes doesn’t clearly note ANY risks and even walk back Paxil concerns.

This is an example of what I mean when I say that the public is not being accurately informed about the risks of SSRIs in pregnancy.

Those risks are miscarriage, birth defects, preterm birth, low birthweight, preeclampsia, postpartum hemorrhage, & poor neonatal adaptation. The SSRIs also alter fetal brain development, with evidence showing long-term effects on the children including speech/language difficulties, depression, & other neurobehavioral issues.

Artificial Intelligence

The responses were so similar that I began to wonder if they’d all accessed Chat GPT, Co-Pilot or Grok and essentially let them write the piece. One report explicitly says it did this.

Besides the similarities in points made, there was a scripted, A.I. flavor to the writing – something that Meghan O’Rourke picked out as a feature of A.I. in a recent New York Times article.  (Thanks to Grace Jackson for sending this gift link).

It was hard to avoid my ‘A.I. is driving all this’ impression for another reason.  I ended up in a conversation with four very sensible professionals in their early 30s, who are fans of A.I. – as almost everyone I know, of all ages, is, and as Meghan was for a while.

They’d had reason to ‘do their own research’ on the wisdom of getting maternal vaccines in pregnancy, including RSV vaccines given maternally, or nirsevimab – Beyfortus – the monoclonal antibody given just after birth (a drug, not a vaccine, though it is often ‘sold’ to us as a vaccine).

My sensible professionals’ oral responses uncannily echoed the written media coverage of SSRIs in pregnancy, downplaying any risks with phrases like ‘earlier, smaller studies hinted at a slight increase in risks, but bigger, more recent and better conducted studies – along with approval by regulators and widespread uptake – have cast doubt on this, so that nothing is firmly established – whereas the fact that RSV is a dangerous virus and leads to hospitalizations with potentially fatal outcomes is firmly established’.

American Chat GPTs don’t make anyone aware that the RSV vaccine and Beyfortus drug trials have shown increased rates of pre-eclampsia in the mother, along with preterm births, significant jaundice and neonatal deaths in the infants – all in order to manage a chesty condition that up till now almost never causes deaths in infants in developed countries unless they already have serious additional health problems.

No-one is likely to get any hint that suppressing RSV might lead to MPV infections – which likely killed Pope Francis, who had lots of doctors and widgets at his beck and call. See Who Knew Antibodies could have Antibodies.

Nor are any hints ever likely to come from A.I. that the big players in the field – in this case pharma – have a host of outfits working for them. Outfits like Sense about Science trot out experts to face the media and rubbish anything that attempts to outline possible risks linked to treatments. Outfits like Chemrisk infiltrate pharma partners – in this case the APA, ACOG or the Society of Maternal and Fetal Medicine – and get statements out that make it look like these organizations (stakeholders in pharma’s distribution channel) all support the fact that antidepressants or vaccines save lives.

I was left feeling that trying to stop young professionals today taking SSRIs or RSV vaccines in pregnancy would be like Canute trying to stop an approaching Tsunami.  A.I. is going to lock the door on any possibility of change and make anyone who might demur sound like an anti-vaxxer.

The response of these impressive people could have been scripted by A.I. as Shawn Johnson and Andrew East’s responses about Beyfortus seem to have been.

Shawn:

When they can’t breathe, it’s the scariest thing a parent will ever go through

We definitely did a lot of research as parents having a baby about Beyfortus – whether we should get it or not.  Beyfortus helps prevent a serious lung disease caused by RSV for babies under 1

I talked about the common side effects with my baby’s doctor, my Ob-Gyn, my pediatrician, every on-call pediatrician – they all highly encouraged it

Andrew

Even with food or the sleep-schedule, are you going to do sleep training or not, everyone has a lot of opinions but for us, the people we look up to, and the research we did, it [Beyfortus] was such an easy decision.

Chat GPT etc. list the risks of Beyfortus as sore arms, with a hint that other things could happen and, if they do, consult a doctor.  A doctor who is not going to have a clue as to what might be happening to you – see Religion and Science.

A Time of Miracles

Back in the 1990s and 2000s we got the internet which caused a buzz quite like today’s A.I. buzz.  It was slower to develop as this really was the start of a new era that had to lead through social media before we got to the point we are at now.

People began to do their own research and formed groups like Paxil Progress or followed Charles Medawar’s Social Audit. It looked for all the world as though the internet was going to democratize medicine.  It didn’t.

Just as it was new for patients, so too, it was new for companies, who took a little bit more time to find their footing on this new ground. But find their footing they did. When we searched the web for information on the drug we were on, we landed on company sites first, which sounded very patient friendly, even encouraging us to report any side effects to FDA.

These sites peddled the mantra of RCTs being Gold Standard Science. Not a hint that company studies are designed to hide the problems – a Gold Standard Way to Hide Adverse Events. No hint that neither you nor your doctor, nor the publishing journal, nor regulators have access to the data from the studies or can establish the patients even existed. A possible hint at the bottom in small print that the study was ghostwritten – disappearing fast now as A.I. moves in and is putting ghostwriters out of business.

Even I briefly figured that A.I. might be a good thing – it might let people coming to see me or other doctors do their own research and ask more penetrating questions and get decent information. As Adam Urato just 3 weeks ago said – ‘Information is Key’.

Rather than telling people to “do your own research” it’s beginning to look like it might be much better to tell them not to do their research.  ‘Information is not Key’.

Religion or Science

It’s terribly difficult to know what to make of the supposed miraculous transformation of water into wine at the Wedding Feast at Cana, some 2,000 years ago.  This rabbit-out-of-a-hat trick stands almost completely at odds with everything Christ said.

Two thousand years later, in 1979, an important medical case unfolded. Raphael Osheroff, a successful nephrologist, became depressed and was persuaded by his doctor and others to get admitted to Chestnut Lodge Hospital in Maryland. Over several months he deteriorated badly, lost several stone in weight, and became unkempt and disheveled, with feet that were bleeding from agitated pacing. His family rescued him and brought him to Silver Hill Hospital.

Chestnut Lodge were true believers in Psychoanalysis, for whom antidepressants might be a sticking plaster but not a cure. The Lodge figured a true cure might take years.

After a few weeks in Silver Hill, where a much more pragmatic psychoanalytically trained doctor started Osheroff on amitriptyline, he was clearly recovering.

Osheroff took a legal action against Chestnut Lodge. This set up a celebrated but misleading debate. The debate lost sight of Osheroff and centered on what, if any, true science lay behind psychiatry. The leading lights of the biological psychiatry then emerging in the US were pitched against leading psychoanalysts. On the biomedical side, Gerald Klerman framed the debate in terms of Evidence Based Medicine and the right of patients to have treatments proven by science to work.

Antidepressants had been proven to work using gold-standard scientific methods. On the other side, psychoanalysis had not been proven to work and didn’t look like it could ever be proven to work using scientific methods.

Klerman’s argument was the first iteration of the later Pfizer pandemic slogans – Follow the Science, along with Science will Win.  This is as much a religious position as the Freudian position was, making biological psychiatry along with psychoanalysis the disease that each claims to be able to treat.

(With thanks to Eric Caplan for all his work on the Osheroff Case).

Following received truths is Religion not Science.

Science starts when the experiment to test a received truth starts. Science starts after you start a drug or a therapy. It aims at achieving a consensus between a doctor or doctors and you and your family or friends on the observables that appear after the experiment starts. Achieving consensus might in the process require a consensus testing of any interim views we have.  When a beta-blocker causes a heart rate to increase rather than slow, stopping it might be a better test than doubling the dose.

This should have been easy to do in Raphael Osheroff’s case – a doctor talking to doctors – but the Chestnut Lodge ideologues systematically stripped away Ray’s identity as a doctor.  Pharma – ably assisted by Evidence Based Medicine believers – do essentially the same thing for business rather than ideological reasons – by relegating any views we or our doctors have to the level of anecdotes or misinformation.

Science is about achieving a consensus between us on an event in front of us. It is always context dependent.  It is not about applying or forcing acceptance of context free truths.  ‘Information is not Key’.

The heart of clinical science lies closer to something else Adam Urato said:

“I’ve been taking care of pregnant women in my hometown for the past 20 years.  I take care of my patients as if they were my neighbors, because they are my neighbors…  A big part of care is giving patients the proper information about risks and benefits of treatment and then supporting their choices”.

It’s right that it’s her choice, but not right if this is seen in terms of an autonomy that makes her liable for whatever goes wrong. It’s more about shared consequences and continuity of care.

A.I. and the internet are fundamentally unscientific.  They strip away context – they strip away our voice and our doctor’s voice also. Our doctors are increasingly becoming Follow The Science Badge wearing Puppets.

A.I. is a tool that we can access, just as nuclear energy, guns, and medicines are tools.  The Magic, or the Miracles lie in us and our use of these tools – not in the tools.

To adapt a phrase from Mahatma Gandhi:

How can those who think they possess absolute truth be fraternal – or scientific?

The great risk is the Miracle of A.I. will turn the very best Wine into Water.



Comments

  1. David T Healy says

    August 11, 2025 at 2:31 am

    Commenting on my own post is not a great look but just in case the message is not obvious – this is the Shipwreck of the Singular message

    The post is about care but a care that is completely science based rather than just warm feelings. It ain’t easy for a doctor to really see and hear the person in front of us especially when something we have done has caused a problem. This is science – remaining alert when a new experiment is happening rather than being in autopilot mode – it’s also caring.

    We need to reclaim the meaning of doing science, being scientific and link Caring to that. Science is a passionate thing not dispassionate.

    A second point is that some of us have been finding it harder and harder to get articles covering adverse events – the bread and butter of RxISK – published. Articles increasingly get rejected without review

    Is it possible – or is this just me being paranoid? It makes perfect business sense to have A.I. screen all articles when they come in, to reduce the workload for busy editors (or, you could say, to support busy editors). A minor tweak or ambiguous wording in the filter could lead to all such articles being rejected – for instance, by putting a premium on Gold Standard RCTs rather than what would be viewed as essentially Case Reports. Such a filter would also make perfect business sense.

    D

  2. annie says

    August 11, 2025 at 3:46 am

    Historically, Editors may not be up for the Challenge, but the way AI invades everyone’s space, they may be perceived as dinosaurs if they don’t embrace what everyone else is using to save time and energy and to make it a fairer way to publish. That is, if they want a fairer way to publish.

    Does AI interfere with Science?

    AI doesn’t so much interfere with science as it reshapes it—sometimes radically, sometimes subtly. It’s a tool, a collaborator, and occasionally a disruptor. Let’s break it down:

    How AI Enhances Science

    Data Analysis at Scale: AI can sift through mountains of data—genomic sequences, astronomical observations, climate models—at speeds no human could match.
    Pattern Recognition: It spots correlations and anomalies that might elude traditional methods, leading to new hypotheses or discoveries.
    Simulation & Modeling: AI helps simulate complex systems like protein folding, neural networks, or planetary dynamics with astonishing accuracy.
    Automation of Routine Tasks: From lab robotics to literature reviews, AI frees up scientists to focus on creative and strategic thinking.

    Where AI Might Interfere or Raise Concerns

    Bias in Algorithms: If trained on flawed or incomplete data, AI can reinforce existing biases—especially in fields like medicine or social sciences.
    Opacity of Decision-Making: Some AI models (especially deep learning ones) are “black boxes,” making it hard to understand why they reach certain conclusions.
    Overreliance: There’s a risk of scientists leaning too heavily on AI-generated results without sufficient scrutiny or validation.
    Ethical Dilemmas: In areas like gene editing or surveillance, AI’s capabilities can outpace the ethical frameworks meant to govern them.

    Real-World Impact

    In drug discovery, AI has accelerated the identification of promising compounds.
    In climate science, it’s helping refine predictions and model interventions.
    In physics, AI is used to analyze data from particle accelerators and even assist in the search for dark matter.

    So, rather than interfering, AI is more like a powerful microscope—it doesn’t change the laws of science, but it changes how we see and interact with them. The key is ensuring that human judgment, ethics, and curiosity remain at the helm.

    Would you say AI is helping or hindering your own view of progress?

    A little dissection

    Bias in Algorithms: If trained on flawed or incomplete data, AI can reinforce existing biases—especially in fields like medicine or social sciences.

    Automation of Routine Tasks: From lab robotics to literature reviews, AI frees up scientists to focus on creative and strategic thinking.

    The key is ensuring that human judgment, ethics, and curiosity remain at the helm.

    GSK, in particular, has waded through many Scandals, in China, with Drugs, with Paroxetine Birth Defects, and so on.

    And so human judgment, ethics and ‘curiosity’ let themselves down, as did ‘creative and strategic thinking’ but if they want to be nice and not blame the corporates for the relentless screw-ups then they can do that (and inserting a bit of humour) ‘they remain at the helm’

    ‘AI can reinforce existing biases’

    That is quite the admission relevant to the ‘miracle’

    • David Healy says

      August 11, 2025 at 4:08 am

      As clinical science has been framed in this post, A.I. at present is far more likely to harm our chances of getting good clinical care or justice or even a fair trial after an event than not harm it.

      This is true for a few reasons. There are now good RxISK related examples of articles saying helpful things about an SSRI preventing certain visual problems sailing into print but articles covering adverse events being serially rejected without review. This speaks to a filter and will compromise the ability of a doctor to help us if we present with something not in the respectable medical literature.

      An A.I. filter linked to the BMJ, NEJM, JAMA, Nature and Science risks killing and injuring us. As an editor, I’ve been made aware of the various filters being put in place to make my job less onerous.

      Second A.I. facilitates marketing or what might be better called propaganda. Peter Selley just emailed over a video worth watching. https://www.youtube.com/watch?v=BnpKnTlWZaI

      This would have any woman racing out to get vaccinated up to the hilt in the morning. It comes endorsed by the Society of Maternal and Fetal Medicine, supported by a grant from Pfizer. SMFM are a bunch of innocents who don’t even begin to understand how much they have been penetrated and played by companies. Their oh-so-professional video is supported by an educational grant from Pfizer – which likely leaves them feeling that their ethics are shaping the conversation – are at the helm – not Pfizer’s. The naivety is breathtaking.

      D

      • Peter Selley says

        August 11, 2025 at 4:44 am

        Straying slightly from the AI theme, from this side of The Pond it is incredible to see the degree of Pharma infiltration into august bodies like the Society for Maternal-Fetal Medicine.

        https://www.youtube.com/watch?v=aZ6lo1ZWKFQ

        “Many patients have no side effects at all” from Pfizer’s Abrysvo RSV vaccine when pregnant.

        • annie says

          August 22, 2025 at 11:25 am

          Well, Peter, no doubt you will be keeping tabs on this

          Robert W Malone, MD @RWMaloneMD

          BREAKING: ACIP launches sweeping Covid-19 vaccine review under Retsef Levi Billions of Covid vaccine doses were rolled out on assurances they were safe and effective. Now, ACIP’s new Covid immunisation working group is asking the questions regulators never did.
          https://malone.news/cp/171560600

          What The ACIP Wasn’t Shown

          ‘…of a recommendation that Merck’s new RSV monoclonal antibody, Clesrovimab, be administered to all healthy newborns born to mothers not recently vaccinated for RSV.’

          ‘To recap, the eventual ACIP recommendation on this topic (as outlined above) was approved by a vote of 5 to 2.’

          https://www.malone.news/p/what-the-acip-wasnt-shown?

          Robert Malone was one of the five.
          Retsef Levi was one of the two.

          For all of us who watched the new CDC panel meeting about RSV, it’s a hard stretch to understand how this got through.

  3. Harriet Vogt says

    August 12, 2025 at 10:30 am

    I read this and thought – AI is HOMOGENISING our thoughts. Feeling marginally pleased with this insight, I looked it up. Others are there well before me – have already coined the term and are researching what AI does to our thinking. All pretty obvious stuff – but disturbing when you take it out of the lab and into our lives.

    ‘SAT prompts are designed to be broad enough to elicit a multiplicity of responses, but the use of A.I. had a HOMOGENIZING effect. “The output was very, very similar for all of these different people, coming in on different days, talking about high-level personal, societal topics, and it was skewed in some specific directions,” Kosmyna said. For the question about what makes us “truly happy,” the L.L.M. users were much more likely than the other groups to use phrases related to career and personal success. In response to a question about philanthropy (“Should people who are more fortunate than others have more of a moral obligation to help those who are less fortunate?”), the ChatGPT group uniformly argued in favor, whereas essays from the other groups included critiques of philanthropy. With the L.L.M. “you have no divergent opinions being generated,” Kosmyna said. She continued, “Average everything everywhere all at once—that’s kind of what we’re looking at here.”

    A.I. is a technology of averages: large language models are trained to spot patterns across vast tracts of data; the answers they produce tend toward consensus, both in the quality of the writing, which is often riddled with clichés and banalities, and in the calibre of the ideas.’
    https://www.newyorker.com/culture/infinite-scroll/ai-is-homogenizing-our-thoughts

    It ties in directly with something you were saying the other day about AI enabling a suffocating weight of evidence.

    ‘This will add to the weight of evidence. And the Groks of this world will dig out their prior work and keep citing it until it becomes almost impossible to view anything else as being sensible’.

    Ofc the problem is industry owns the volume of information, weight of ‘evidence’, that smothers risks – exactly the battle Adam is fighting now on X.

  4. David T Healy says

    August 13, 2025 at 1:10 am

    Here is a Chat GPT, Co-Pilot, Grok answer to – Will A.I. create a Tsunami?

    Maybe. But it is just another platform/media: Google search, Twitter, Facebook, TikTok, Youtube, etc.

    But pharmaceutical companies used to employ ghostwriters and paper mills, then pay prestigious journals to publish, and pay media outlets and journalists who learned creative writing to produce their propaganda. These types of services may be disappearing, replaced by AI.

    As Google search could be both an enemy and a supporter for patients, AI may be the same. AI is also a tool, and many tricks (biases) are hidden. The users have to be clever.

    • Harriet Vogt says

      August 14, 2025 at 6:32 am

      Quite an *intelligent answer to my volume of pharma misinformation question. A tad naïve re omniscience of regulatory authorities – but that’s AI for you.

      ‘How will AI affect the volume of misinformation disseminated by pharmaceutical companies claiming their drugs are safer than they really are in practice?’

      AI’s impact on the volume of misinformation from pharmaceutical companies claiming drugs are safer than they are could cut both ways, depending on how it’s wielded. Here’s a breakdown of the dynamics, grounded in what’s observable:

      Increasing Misinformation Potential

      Sophisticated Messaging: AI can generate highly convincing marketing content, from polished ads to tailored social media campaigns, making exaggerated safety claims harder to spot. Large language models can craft narratives that obscure risks or cherry-pick data, potentially flooding platforms with misleading info.

      Data Manipulation: AI tools can analyze vast datasets to find statistical loopholes or present skewed interpretations of clinical trial results, giving a veneer of credibility to unsafe drugs. For instance, companies could use AI to highlight outlier studies or suppress adverse event patterns.

      Targeted Disinformation: AI-driven algorithms can micro-target vulnerable demographics with personalized ads, amplifying false safety claims to those least likely to question them. This could increase the spread of misinformation on platforms like X, where rapid dissemination is already a challenge.

      Counteracting Misinformation

      Enhanced Detection: AI can also be a weapon against misinformation. Regulatory bodies and independent researchers can use AI to scour public data—X posts, company websites, or published studies—for red flags like inconsistent safety claims or unreported side effects. Machine learning models can flag anomalies in adverse event reports or trial data faster than humans.

      Fact-Checking at Scale: AI-powered fact-checking tools can cross-reference drug safety claims against verified sources like FDA databases or peer-reviewed journals, exposing discrepancies in real-time. This could deter companies from making bold, false claims if they know scrutiny is immediate.

      Public Empowerment: AI chatbots or apps could educate consumers by summarizing complex drug safety data in plain language, helping patients question overstated claims. For example, tools like me could analyze a company’s X posts or press releases and highlight potential exaggerations based on available evidence.

      Net Effect

      The volume of misinformation depends on the balance of these forces. If pharmaceutical companies leverage AI faster than regulators or watchdogs, misinformation could spike, especially in less-regulated digital spaces. However, if public health agencies, NGOs, or platforms like X deploy AI to enforce transparency—say, by flagging dubious claims or amplifying whistleblower reports—the volume could shrink. Data from 2024 suggests regulators are already using AI to monitor adverse drug reactions (e.g., FDA’s Sentinel Initiative), but companies have deeper pockets and more incentive to exploit AI for profit. Without specific stats on AI adoption rates by pharma vs. regulators, it’s hard to predict precisely. My guess? Short-term uptick in slick misinformation as companies outpace oversight, but long-term reduction as AI-driven accountability catches up—assuming regulators and platforms prioritize it. Want me to dig into recent X posts or web reports for more context on this?

  5. annie says

    August 13, 2025 at 5:59 am

    Panorama exposed Paroxetine (Seroxat) – did it have an impact?

    Yes, the BBC Panorama investigations into Paroxetine (marketed as Seroxat in the UK) had a significant impact—both publicly and within the medical community.

    What Panorama Revealed

    The programme The Secrets of Seroxat, aired in October 2002, highlighted troubling issues:

    Severe withdrawal symptoms experienced by users trying to stop the drug

    Increased risk of suicidal thoughts and self-harm, especially in younger patients

    Lack of adequate warnings about these side effects from the manufacturer, GlaxoSmithKline

    It featured personal stories, expert interviews (including Dr. David Healy), and internal documents that raised serious concerns about the drug’s safety profile.

    Measurable Impact

    A study from the University of Bristol found that:

    Reports of adverse reactions to Seroxat rose by 61% after the three Panorama episodes aired in 2002, 2003, and 2004

    In contrast, regulatory announcements during the same period only led to a 5% increase in adverse reaction reports

    Prescriptions for Seroxat began to decline gradually, especially after the FDA’s warnings were publicized in the UK

    Recognition

    The programme won a Mental Health Media Award in 2003 for its investigative work, underscoring its influence on public awareness and policy discussions.

    So yes—Panorama didn’t just stir debate; it shifted the landscape of how antidepressants like Seroxat were perceived, prescribed, and regulated.

    Want to explore how this shaped pharmaceutical transparency or mental health advocacy?

    • David Healy says

      August 14, 2025 at 10:24 am

      Annie

      This example is unfortunately now deeply misleading. Post-Panorama, the BBC introduced a False Balance policy – if RCTs hadn’t shown a problem, there was no evidence other than anecdotes that it existed, and the BBC and BMJ etc. might be sued if they suggested anything to the contrary.

      Pretty well all the gains Panorama might have made have now been rolled back, and things are much more dangerous now than they were then.

      D

  6. Patrick D Hahn says

    August 14, 2025 at 8:01 am

    Osheroff knew going in that they didn’t prescribe psych drugs at Chestnut Lodge. It’s like going into a Chinese restaurant and suing them because they won’t serve you a Big Mac.

    He had financial and other resources beyond the wildest dreams of probably most mental patients. And he was there voluntarily. He could have walked out of there any time he wanted.

    Imagine all the people incarcerated in these places against their will and forced to take drugs they didn’t want to take, who never collected a dime in damages. What would they have thought of Dr. Osheroff’s plight?

    • David Healy says

      August 14, 2025 at 10:32 am

      The countless patients who don’t have a dime are in the same plight as Osheroff was – up against a religious belief system that incarcerates you informally if not formally – that views your relatives as dangerous if they quibble with your treatment and point out you are much worse than you were when you came in.

      Osheroff didn’t complain about psychotherapy per se – he complained about the abuse of being walked, systematically undermined, diagnosed as depressed only but not treated for this – treated instead for a narcissistic personality disorder which was not diagnosed in the record – and not told they intended to keep him there for 3 or more years, and likely in treatment for 7 years afterwards, while his personality was destroyed and later rebuilt. He was the victim of a narcissistic treatment system when a lot of other factors made him more vulnerable than he might otherwise have been. This is a situation lots of folk find themselves in.

      D

      • Harriet Vogt says

        August 18, 2025 at 9:04 pm

        Of course Ray Osheroff’s treatment by Chestnut Lodge was criminal. Like being abducted by a religious sect and tortured in the name of personality transformation.

        But there was one interesting detail about the case, posted by Daniel Owens, a smart guy on X, the other night, that I’ve never heard. He was quoting from a 1989 Washington Post article – ‘A Horrible Place – a Wonderful Place’.

        ‘No matter how badly he wanted to, Ray Osheroff couldn’t stop pacing. For months he had spent nearly every waking moment – sometimes 16 hours – trudging an estimated 18 miles a day up and down the hall of a locked ward in a mental hospital. The soles of his feet blistered, ulcerated and turned black.’
        https://rayosheroff.org/wp-content/themes/ray-osheroff-theme/documents/Media/07%20_A%20Horrible%20Place,%20A%20Wonderful%20Place,_%20Washington%20Post_October%208,%201989.pdf

        In Peter Kramer’s ‘Ordinarily Well’, which I have studiously avoided reading, he does comment:
        ‘Shortly before he turned forty, Osheroff became depressed and was treated with psychotherapy and tricyclic antidepressants, like imipramine. He improved, but only moderately, so he lowered his own medication dose, then deteriorated’.

        The psychiatrist at Silver Hill – once his parents had sprung him from Chestnut Lodge – diagnosed ‘agitated depression’. But is it possible that RO was actually suffering from antidepressant withdrawal induced akathisia? Or even catatonia?

        • David Healy says

          August 19, 2025 at 2:55 am

          This is an interesting possibility but unlikely to apply in the Osheroff case. In the 1950s, before we had any antidepressants, and before everyone was made depressed after the launch of the SSRIs, the kind of depression that led to hospitalization was termed melancholia. A major feature of melancholia was an agitation that featured non-stop hand-wringing and agitated pacing. Osheroff was viewed as typical of this: all the experts on his side thought so, and the medical notes from Chestnut Lodge also endorsed the diagnosis. Melancholia like this only happened to people in their middle years, as he was.

          This was still typical of the rare cases of depression that ended up in hospital in the 1980s in the UK that I got to see and help treat – there was no hint that prior withdrawal from antidepressants had played a part.

          We now have treatment resistant depression, caused by SSRIs and related antidepressants, which affects young people. This is a very severe problem, increasingly leading to euthanasia as the answer. But it is a very different looking condition – even when there is akathisia involved – and it does not respond to being given amitriptyline the way Osheroff did.

          But the heart of the Osheroff case was something else. Chestnut Lodge were treating him for a narcissistic personality disorder – even though this was not written down in the medical record. They envisaged keeping him in hospital for 3 years and in treatment much longer. This is where the abuse lay and it holds true whether you think he had melancholia as I do or whether you think some kind of withdrawal from prior antidepressants he had been very briefly exposed to underpinned his condition.

          To be fair the idea of antidepressant withdrawal almost couldn’t come into the picture at that point because while it was recognized it could happen it was generally viewed as linked to withdrawing from the anticholinergic effects of tricyclic antidepressants – cholinergic rebound – which was generally thought to last 48-72 hours – not steadily getting worse over a 7 month period. The legal actions in Osheroff took place in the mid-1980s – a decade before Eli Lilly started marketing antidepressant withdrawal and linking it to paroxetine’s effect on the serotonin system.

          D

  7. tim says

    August 14, 2025 at 11:31 am

    A-Level Results day here today. Justifiable delight for many who are overjoyed to be joining their chosen Universities in a few weeks’ time.

    How many will struggle adapting to this new academic world?

    How many will go in good faith to Student Health, be told they are depressed, and then be given SSRIs/SNRIs/ADs without Fair, Full and Informed Consent?

    AKATHISIA, disinhibition, emotional blunting, sexual dysfunction and potential PSSD and/or disabling withdrawal syndromes are a disadvantageous start to a University Career.

    They won’t receive a dime either, if they are Lost, Misdiagnosed, Kidnapped/Incarcerated/Prescription-Drug-Wrecked, and iatrogenically rendered unable to graduate.

    Would any student take these drugs if they were to be afforded appropriate consent?

    Do they still receive a ‘Fresher’s Bag’ full of valuable information, as well as a vast collection of brochures for Leisure Societies they might decide to join?

    If so, surely it would be invaluable to include:

    “A Student’s Guide to Antidepressant Adverse Drug Reactions” in their Fresher’s Bag?

    • David Healy says

      August 17, 2025 at 3:23 am

      Tim

      No one is going to provide this – who has the qualifications to go against A.I., drug labels, regulators etc.? The BBC and most media outlets operate on a False Balance basis – if a problem has not been shown to happen in RCTs, it doesn’t exist, and we will have blood on our hands if we give hazards like the ones you mention the same credibility as the established benefits.

      Informed consent and continuity of care are no longer the right options. We now need Continuity of Consequences – a doctor who is with us when things go wrong, who believes our assessment that the drug is causing a problem and intervenes to minimize the harm by stopping it and working with us to report the problem and link it to the medicine as something this medicine has the capacity to cause in some people.

      We are in a new world where the mantras that you and I and most people linked to RxISK held – informed consent and continuity of care – no longer apply.

      We are not faced with Big Pharma so much as Big Propaganda (what is usually called marketing). America has invented very, very few of the useful drugs we have – across all of medicine. What it has been supremely effective at is Big Propaganda. As A.I. stands at the moment, it risks being the latest and most comprehensive manifestation of Big Propaganda.

      David

      • tim says

        August 19, 2025 at 5:50 am

        Agreed. We do indeed need “Continuity of Consequences”. What an invaluable health asset to have “a doctor who is with us when things go wrong, who believes our assessment that the drug is causing a problem and intervenes to minimise harm…”

        It would need a paradigm shift in undergraduate and post-graduate medical training for us to be cared for by doctors like this.

  8. Peter Grace says

    August 26, 2025 at 6:54 am

    The biggest issue I’ve noticed with AI is that it’s extraordinarily sycophantic. It just sort of nods along with whatever you feed it.

    It’s fabulous for parsing longwinded textbooks or journal articles, but it remembers everything you tell it (despite its protests to the contrary), so feed it too much of one thing and the next thing you know it will put that slant on every response. It’s like your very own custom echo chamber.

    Since the internet is largely sycophantic towards psychiatric drugs, the default AI response will be sycophantic towards them too.

    That and the bloody em dashes. I used to love em dashes, and it’s ruined them for good. The other day I told it firmly not to use em dashes and, to spite me, it replaced them all with semicolons.

