Your trust in AI just got a reality check that would make a Victorian asylum doctor wince. A 60-year-old man recently landed in psychiatric care after developing bromism, a form of bromide poisoning that peaked during the horse-and-buggy era, reportedly after following ChatGPT's confident but catastrophically wrong dietary advice. The case, published this week in Annals of Internal Medicine, reveals how AI's medical hallucinations can resurrect diseases doctors thought they'd never see again.
The patient, motivated by concerns about sodium chloride's health effects, asked ChatGPT for alternatives to table salt. According to media reports that attempted to replicate the interaction, the AI suggested replacing sodium chloride with sodium bromide as a halide substitute, without any explicit warning about the compound's toxicity. For approximately three months, he followed this guidance religiously, unknowingly poisoning himself with an industrial chemical.
When AI Confidence Meets Chemical Reality
The transformation from health-seeker to psychiatric patient happened gradually, then catastrophically.
The results read like a medical horror story spanning centuries. The man developed paranoia, hallucinations, skin lesions, and coordination problems—classic signs of bromide intoxication that 19th-century physicians knew well. He became convinced his neighbor was poisoning him, attempted to flee the emergency room, and was placed on an involuntary psychiatric hold before doctors recognized the telltale signs of bromism.
Key bromism symptoms include:
- Psychosis and auditory/visual hallucinations
- Acneiform skin lesions and cherry angiomas
- Ataxia (loss of coordination)
- Paranoia and other neuropsychiatric disturbances
- Excessive thirst and fatigue
Bromism dominated psychiatric hospitals from the 1880s through the 1930s, reportedly accounting for up to 10% of all admissions. Bromide salts were the Xanax of their day—widely prescribed sedatives that accumulated in the body with devastating effects due to their 9-12 day elimination half-life. The condition virtually disappeared after regulations restricted bromide use in the 1970s and 80s, making this case a medical time capsule.
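To make that accumulation concrete, a quick back-of-the-envelope calculation helps. For a drug taken on a regular schedule and cleared by simple first-order elimination, the steady-state level is roughly 1 / (1 - e^(-k*tau)) times a single dose, where k = ln(2) / half-life and tau is the dosing interval. The Python sketch below assumes a once-daily dose and a 10-day half-life purely for illustration (the dose size and schedule are not from the case report); it shows why a bromide sedative taken every day can pile up to roughly fifteen times a single dose, while a drug cleared in hours barely accumulates.

```python
import math

def accumulation_ratio(half_life_days: float, dosing_interval_days: float) -> float:
    """Steady-state accumulation ratio for repeated dosing,
    assuming simple first-order (exponential) elimination."""
    k = math.log(2) / half_life_days  # elimination rate constant
    return 1.0 / (1.0 - math.exp(-k * dosing_interval_days))

# Illustrative assumption: one dose per day, 10-day half-life
# (bromide's elimination half-life is commonly cited as about 9-12 days).
bromide_like = accumulation_ratio(half_life_days=10, dosing_interval_days=1)
print(f"10-day half-life, daily dosing: ~{bromide_like:.1f}x a single dose at steady state")

# For comparison, a drug with a 6-hour half-life barely accumulates:
short_lived = accumulation_ratio(half_life_days=0.25, dosing_interval_days=1)
print(f"6-hour half-life, daily dosing: ~{short_lived:.2f}x a single dose at steady state")
```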
The AI Oracle Problem
Here’s where the story gets even more troubling for anyone who’s ever asked ChatGPT for health advice.
The case authors couldn’t access the patient’s original chat logs, so the exact AI responses remain unverified. However, when journalists attempted to replicate similar prompts, ChatGPT-3.5 did suggest bromide as a halide “replacement” for chloride, confirming the AI’s willingness to float industrial chemicals as dietary substitutes, even if the precise wording the patient saw is unconfirmed.
This isn’t just another “AI makes mistake” story. It’s a wake-up call about how you might be outsourcing medical decisions to systems that can hallucinate treatments as easily as they generate poetry. Like a digital Magic 8-Ball dispensing medical advice, AI offers confident responses to questions it has no business answering—especially when your brain chemistry hangs in the balance.
The case authors now urge doctors to specifically ask patients about AI-derived health advice during consultations, recognizing that chatbots have become informal medical consultants for millions of users seeking quick answers to complex health questions.
Your smartphone contains more computing power than NASA used to reach the moon, but it can’t distinguish between table salt and neurotoxic chemicals. Maybe it’s time to remember that confidence isn’t the same as competence—and that some Victorian-era problems are better left in the past.