ChatGPT advice "influenced" man into psychosis, medical journal claims
Earlier this year, an uplifting story detailed how a mother turned to ChatGPT and discovered that her son was suffering from a rare neurological disorder, after more than a dozen doctors had failed to identify the real problem. Thanks to the AI chatbot, the family was able to access the required treatment and save a life.
Not every case of ChatGPT medical evaluation leads to a miraculous outcome. A new report claims that misleading medical advice doled out by ChatGPT left a person with a rare condition called bromide intoxication, or bromism, which causes neuropsychiatric issues such as psychosis and hallucinations.
Trust ChatGPT to give you a disease from a century ago.
A report published in the Annals of Internal Medicine describes the case of a man who landed in hospital with bromism after seeking health advice from ChatGPT. The case is particularly striking because the 60-year-old arrived convinced that his neighbour was secretly poisoning him.
The whole episode began when the man came across reports detailing the negative health effects of sodium chloride (aka common table salt). After consulting ChatGPT, he replaced table salt with sodium bromide, which eventually led to bromide toxicity.

"He was noted to be very thirsty but paranoid about water he was offered," says the report, adding that the patient distilled his own water and placed multiple restrictions on what he consumed. His condition soon worsened after he was admitted to hospital and evaluations were conducted.
"In the first 24 hours of admission, he expressed increasing paranoia and auditory and visual hallucinations, which, after attempting to escape, resulted in an involuntary psychiatric hold for grave disability," adds the report.
Don't forget the friendly human doctor
The latest case of ChatGPT landing a person in a pickle is astounding above all for the sheer rarity of the condition involved. "Bromism, the chronic intoxication with bromide is rare and has been almost forgotten," says one research paper.
The use of bromine-based salts dates back to the 19th century, when they were prescribed to treat mental and neurological ailments, particularly epilepsy. In the 20th century, bromism (or bromide toxicity) was a fairly well-known problem, and bromide salts were also widely taken as a sleep aid.
Over time, it became clear that consuming bromide salts causes nervous system issues such as delusions, lack of muscle coordination, and fatigue, while severe cases are marked by psychosis, tremors, or even coma. In 1975, the US government restricted the use of bromides in over-the-counter medicines.

The medical team that handled the case could not access the individual's ChatGPT conversations, but it obtained similarly misleading answers in its own tests. OpenAI, on the other hand, thinks that AI bots are the future of healthcare.
"When we asked ChatGPT 3.5 what chloride can be replaced with, we also produced a response that included bromide. Though the reply stated that context matters, it did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do," the team reported.
Yes, there are certainly cases where ChatGPT has helped people with health issues, but good outcomes can only be expected when the AI is given detailed context and comprehensive information. Even then, experts suggest exercising extreme caution.
"The ability of ChatGPT (GPT-4.5 and GPT-4) to detect the correct diagnosis was very weak for rare disorders," says a research paper published in the journal Genes, adding that a ChatGPT consultation can't be taken as a replacement for proper evaluation by a doctor.
LiveScience contacted OpenAI about the issue and received the following response: "You should not rely on Output from our Services as a sole source of truth or factual information, or as a substitute for professional advice." The company also highlighted that "OpenAI's safety teams aim to reduce the risk of using the company's services and to train the products to prompt users to seek professional advice."
Indeed, one of the big promises with the launch of GPT-5, the company's latest ChatGPT model, was fewer moments of inaccuracy or hallucination and a greater focus on delivering "safe completions", where users are guided away from potentially harmful answers. As OpenAI puts it: "[This] teaches the model to give the most helpful answer where possible, while still maintaining safety boundaries."
The biggest hurdle, of course, is that an AI assistant can't reliably investigate the clinical features of a patient. Only when AI is deployed in a medical setting by certified health professionals can its results be trusted.