
Man Hospitalized with Psychosis After AI Health Advice

For as long as people have been talking to one another, there has been bad health advice. We frequently use this blog as a platform to say, “Don’t take health advice from Instagram/TikTok/Twitter.” People can say whatever they want on the internet without any proof, so you need to do your research. That’s why we look into so many health trends: to explain the science and the facts behind viral moments and help you understand what’s really going on.

Now something even wilder is happening: people are getting health advice from chatbots. The companies behind these tools prioritize engagement over safety and accuracy. Chatbots are often agreeable and offer little pushback when a person says something wrong, and the information they give can simply be incorrect.

A 60-year-old man in Seattle arrived at the hospital believing his neighbor was poisoning him. He had developed hallucinations and paranoia. After looking into it, doctors learned that he had bromide toxicity. He had wanted to improve his health by removing chloride from his diet, and ChatGPT told him he could substitute bromide. He bought some and began consuming it regularly. No doctor would recommend this. Once the bromide was flushed from his system and the toxicity treated, his psychosis subsided, but it never should have happened. No one should be taking bromide, and only a chatbot would tell someone to.

For three months, this man had been trying to improve his health, doing what he thought was right by taking bromide. But bromism, while common in the early 20th century, is rare today; bromide-containing medications have largely been off the market since the 1980s.

“In the first 24 hours of admission, he expressed increasing paranoia and auditory and visual hallucinations, which, after attempting to escape, resulted in an involuntary psychiatric hold for grave disability,” his doctors explained.

While he had started out perfectly healthy, just looking for ways to get healthier, he ended up in the hospital. He needed three weeks of treatment to be mentally well again. It may sound dramatic, but AI health advice stole three weeks of this man’s life, spent in the hospital unnecessarily. If a doctor gave a patient advice that bad, that flippantly, we would be outraged. Yet everyone keeps praising the wonders of ChatGPT. It might be fun to talk to, but please take this as a sign that nothing it says should be trusted at face value, and that its advice can potentially be deadly.

Chatbots often “hallucinate” answers, meaning they simply make things up. When experts tested it, they found that ChatGPT generates “false clinical details that pose risks when used without safeguards.” It cobbles together pieces of real doctors’ clinical notes into entirely new, fictional ones. So, please, ask a medical professional for advice.

Banner image: Airam Dato-on via Pexels
