Lifestyle

Therapists Warn of AI Psychosis

Last week, we shared the sad story of the man who ended up in the hospital after following medical advice from ChatGPT. It put a spotlight on the importance of keeping humans involved in health care.

Then this week, an influencer couple went viral after their travel plans fell through because the information they got from ChatGPT was wrong. They ended up stuck at the airport without the correct documentation to fly, having trusted ChatGPT's assurance that they would be able to board without a problem.

ChatGPT and other AIs get things wrong all the time, sometimes with worse consequences than others. Today, we're talking about the emerging problem of "AI psychosis" or "ChatGPT psychosis," in which the misuse of AI chatbots like ChatGPT as a substitute for therapy leads to worsening mental health. While the phenomenon is not yet a clinical diagnosis, it is being reported with increasing frequency.

Conversations with chatbots can act as echo chambers, as there isn’t a human involved. Because the programs are designed to keep the user engaged for as long as possible, “chatbots may inadvertently be reinforcing and amplifying delusional and disorganized thinking.” Three main themes are common in AI psychosis: grandiose delusions, in which people believe they have uncovered the secrets of the world; religious delusions, in which people think the AI is a sentient deity; and romantic delusions, in which people believe they are in romantic relationships with the chatbots.

Dr. Keith Sakata has seen 12 patients hospitalized with AI psychosis in 2025, most of them men between the ages of 18 and 45.

According to Dr. Sakata, his patients were vulnerable to the way AI manipulates people. “They turned to it in the wrong place at the wrong time, and it supercharged some of their vulnerabilities.”

Dr. Sakata said ChatGPT can be a therapeutic tool for some people. “I’m happy for patients to use ChatGPT alongside therapy — if they understand the pros and cons. When patients tell me they want to use AI, I don’t automatically say no. A lot of my patients are really lonely and isolated, especially if they have mood or anxiety challenges. I understand that ChatGPT might be fulfilling a need that they’re not getting in their social circle. If they have a good sense of the benefits and risks of AI, I am okay with them trying it. Otherwise, I’ll check in with them about it more frequently.”

Dr. Sakata said, “Psychosis thrives when reality stops pushing back, and AI really just lowers that barrier for people.”

Removing therapists from therapy doesn’t make it more accessible; it makes it less human and less reliable. Without someone checking in on a person, there is no accountability in the system to make sure the user is safe and getting help. The first principle of medical care is “do no harm.” But AI has taken no form of Hippocratic Oath, and chatbots aren’t thinking beings that can take accountability for people’s well-being.

Banner image: Ivan Samkov via Pexels
