Man Hospitalized After Following ChatGPT Diet Advice, Sparking Safety Concerns
New York, August 10, 2025 – A 60-year-old New York man was hospitalized after strictly following a diet plan suggested by ChatGPT, raising fresh questions about the safety of AI-generated health advice. According to a case report published in Annals of Internal Medicine: Clinical Cases, a journal of the American College of Physicians, the man drastically cut sodium chloride from his diet on the chatbot’s recommendation, ultimately driving his blood sodium to dangerously low levels, a condition known as hyponatremia.
Family members revealed that he had relied entirely on the AI-generated plan without consulting a doctor. For roughly three months, he all but eliminated table salt from his diet, triggering severe health complications.
The Times of India reported that the man asked ChatGPT how to completely remove sodium chloride (table salt) from his diet. The AI suggested replacing it with sodium bromide—an additive once used in early 20th-century medicines but now considered toxic in high doses. Acting on the advice, he purchased sodium bromide online and used it for cooking over three months.
Despite having no prior mental or physical health issues, he began experiencing confusion, paranoia, and excessive thirst, and eventually refused to drink water over fears of contamination. Hospital tests revealed bromide poisoning, known as bromism, a condition once common when bromides were prescribed for anxiety and insomnia but now extremely rare. He also displayed neurological symptoms, acne-like skin eruptions, and red blotches consistent with the diagnosis.
Doctors focused treatment on rehydration and restoring electrolyte balance. Over a three-week hospital stay, his sodium and chloride levels gradually returned to normal, and he was discharged in stable condition.
The case report’s authors warned that AI tools can produce scientific inaccuracies, omit critical safety details, and inadvertently spread misinformation. “It’s important to recognize that ChatGPT and other AI systems can generate errors that may have serious health consequences,” the report cautioned.
OpenAI, ChatGPT’s developer, explicitly states in its terms of service that outputs should not be treated as factual truth or as a substitute for professional advice, and that the service is not intended for diagnosing or treating medical conditions.
Health experts stress that while AI can be useful for general information, it should never replace consultation with qualified professionals. As AI adoption grows, so does the responsibility to ensure that its advice is accurate, safe, and clearly understood by users.