AI Diet Gone Wrong: ChatGPT Advice Leads to Life-Threatening Poisoning

In a rare and troubling incident from the United States, a man developed life-threatening bromide poisoning—known medically as bromism—after following dietary advice reportedly given by ChatGPT. The case, believed to be the first of its kind linked to AI-generated health misinformation, has been documented by doctors at the University of Washington in the journal Annals of Internal Medicine: Clinical Cases and reported by Gizmodo.
According to the report, the man consumed sodium bromide daily for three months under the mistaken belief that it was a safe replacement for table salt (sodium chloride). This misinformation, he claimed, came from ChatGPT, which failed to provide any warnings about the dangers of bromide consumption.
Bromide compounds were historically used in treatments for anxiety and insomnia, but due to their severe side effects and toxicity, they were banned for human use decades ago. Today, bromide is mostly found in veterinary medicine and certain industrial applications, making human exposure exceptionally rare.
The man’s symptoms began subtly but escalated quickly. He first visited an emergency room complaining that his neighbor was trying to poison him. Although his initial vital signs appeared largely normal, he displayed paranoia, intense thirst coupled with a refusal to drink the water offered to him, and vivid hallucinations. His condition deteriorated rapidly, culminating in a full-blown psychotic episode that prompted doctors to place him under an involuntary psychiatric hold.
As his treatment progressed, intravenous fluids and antipsychotic medications helped stabilize him. Once coherent, the man revealed that he had asked ChatGPT to suggest a healthier alternative to regular salt. The AI allegedly recommended bromide, omitting any mention of its toxicity.
Though the medical team could not retrieve the original chat records, they repeated the query using ChatGPT and found that the AI again suggested bromide without any health warnings. This raised significant concerns about the AI’s ability to provide safe and context-aware health information.
The patient made a full recovery after three weeks of hospitalization and was found to be in good health at a follow-up visit.
Doctors have now issued a strong warning against relying on AI tools for medical advice, stressing that while platforms like ChatGPT can make scientific information more accessible, they lack the clinical understanding and safety checks necessary to prevent serious harm. This case highlights the urgent need for clearer regulations and caution when using AI for health-related queries.