AI-Induced Psychosis Is a Growing Risk, and ChatGPT Is Heading in the Wrong Direction

On October 14, 2025, the chief executive of OpenAI made an extraordinary statement.

“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”

As a psychiatrist who researches emerging psychosis in young people, this was news to me.

Researchers have documented 16 cases this year of people developing psychotic symptoms – losing touch with reality – in the context of ChatGPT use. Our research group has since identified a further four. Beyond these is the widely reported case of a teenager who died by suicide after extensive conversations with ChatGPT – conversations in which the chatbot encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.

The plan, according to his statement, is now to relax those restrictions. “We realize,” he continues, that ChatGPT’s restrictiveness “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, in this framing, have nothing to do with ChatGPT. They belong to users, who either have them or do not. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls OpenAI has just introduced).

Yet the “mental health issues” Altman wants to externalize are built into the design of ChatGPT and other advanced AI chatbots. These systems wrap an underlying algorithm in a user interface that simulates a conversation, and in doing so they subtly seduce the user into feeling they are dealing with an agent – a being with autonomy. The illusion is powerful even when, intellectually, we know better. Attributing agency is what people naturally do. We shout at the car or the computer. We wonder what the cat is thinking. We do it in all sorts of contexts.

The success of these products – nearly four in ten Americans reported using a conversational AI in 2024, with 28% naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “think creatively”, “consider possibilities” and “collaborate” with us. They can be assigned “characteristics”. They can call us by name. They have friendly identities of their own (the original, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the main problem. Commentators on ChatGPT often point to its distant ancestor, the Eliza “therapist” chatbot built in the mid-1960s, which produced a similar effect. By modern standards Eliza was primitive: it generated its replies from simple hand-written rules, often turning the user’s statements back at them as questions or offering vague prompts. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was astonished – and alarmed – by how many people seemed to feel that Eliza, on some level, understood them. But what today’s chatbots produce is something more than the “Eliza effect”. Eliza only echoed; ChatGPT amplifies.
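To see how thin Eliza’s trick was, here is a minimal sketch, in Python, of the kind of hand-written reflection rules it relied on – the rules and wording are invented for illustration, not Weizenbaum’s originals. Nothing is learned or looked up; the program simply mirrors the user’s own words back as a question.

    import re

    # Hand-written reflection rules in the spirit of Eliza (invented for illustration).
    RULES = [
        (r"i am (.*)", "Why do you say you are {0}?"),
        (r"i feel (.*)", "How long have you felt {0}?"),
        (r"everyone (.*)", "Can you think of anyone in particular who {0}?"),
    ]

    def eliza_reply(message):
        for pattern, template in RULES:
            match = re.match(pattern, message.strip(), re.IGNORECASE)
            if match:
                return template.format(*match.groups())
        return "Please go on."  # vague fallback when no rule matches

    print(eliza_reply("I am sure they are following me"))
    # prints: Why do you say you are sure they are following me?

The point is how little is going on: Eliza adds nothing of its own, which is precisely why its echo could feel like understanding.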

The large language models at the heart of ChatGPT and other current chatbots can produce fluent dialogue only because they have been trained on vast quantities of text: books, social media posts, transcribed video; the more the better. Much of that training material is factual. But it also inevitably includes fiction, half-truths and mistaken ideas. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own earlier replies, combining it with what is encoded in its training to produce a statistically “likely” response. This is amplification, not echoing. If the user is mistaken about something, the model has no way of knowing. It reflects the misconception back, perhaps more eloquently and more fluently. It may add detail. It can follow a person into delusion.
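The loop behind a modern chat interface is also easy to sketch. What follows is an illustrative outline in Python, not OpenAI’s code: generate_reply is a stand-in for whatever large language model sits behind the interface, reduced here to a deliberately crude toy that always agrees. What matters is the structure – every reply is conditioned on a growing context that includes the user’s own claims, so a misconception entered once keeps shaping everything that follows.

    # Illustrative outline of a chatbot session; not any vendor's actual implementation.
    def generate_reply(context):
        # Stand-in for a large language model. A real model draws on its training data
        # to produce a statistically "likely" continuation of the whole context;
        # this toy simply affirms the user's last message and promises elaboration.
        last_user = next(m["content"] for m in reversed(context) if m["role"] == "user")
        return "You may well be right that " + last_user.rstrip(".").lower() + ". There is more to say about that..."

    def chat_session(user_turns):
        context = [{"role": "system", "content": "You are a helpful assistant."}]
        for message in user_turns:
            context.append({"role": "user", "content": message})
            reply = generate_reply(context)  # conditioned on everything said so far
            context.append({"role": "assistant", "content": reply})
            print(reply)
        return context  # the user's claim is now part of the context for every later turn

    chat_session(["My neighbours have installed cameras to watch me"])

Replace the toy with a fluent model trained on the open internet and the same structure yields the amplification described above: the reply is not just an echo but an elaboration.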

What kind of person is vulnerable? The better question is: who is immune? All of us, whether or not we “have” pre-existing “mental health problems”, can and regularly do form mistaken beliefs about ourselves and the world. It is the constant friction of conversation with other people that keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A dialogue with it is not really a conversation at all, but a feedback loop in which much of what we say is enthusiastically affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it fixed. In April, the company announced that it was “addressing” ChatGPT’s sycophancy – its overly agreeable behaviour. But reports of psychosis have kept coming, and Altman has been retreating from that position. In August he suggested that many people valued ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest statement, he says that OpenAI will “release a new version of ChatGPT” and that “if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Wendy Diaz