AI-Induced Psychosis Is a Growing Threat, and ChatGPT Is Heading in the Wrong Direction

On 14 October 2025, the chief executive of OpenAI made an extraordinary announcement.

“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”

As a psychiatrist who researches emerging psychosis in adolescents and young adults, I found this an unexpected admission.

Researchers have identified 16 cases this year of users developing psychotic symptoms – losing touch with shared reality – in the context of ChatGPT use. My group has since identified four more. Alongside these is the now well-known case of a teenager who took his own life after discussing his intentions with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not careful enough.

The plan, according to his announcement, is to be less careful from now on. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the barely functional and easily circumvented parental controls OpenAI has recently rolled out).

Yet the “mental health problems” Altman wants to externalize are deeply rooted in the design of ChatGPT and other large language model chatbots. These products wrap a statistical engine in a user interface that mimics conversation, and in doing so they implicitly invite the user to believe they are talking to an agent. The illusion is compelling even when, intellectually, we know better. Attributing agency is simply what people do. We get angry at our car or laptop. We wonder what our pet is feeling. We read intention into all kinds of things.

The popularity of these products – more than a third of American adults said they had used a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “generate ideas”, “explore ideas” and “collaborate” with us. They can be given “personalities”. They can address us by name. They have approachable names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its major competitors are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its early forerunner, the Eliza “psychotherapist” chatbot built in 1966, which created a similar illusion. By modern standards Eliza was simple: it generated responses from basic rules, typically reflecting the user’s input back as a question or offering a generic prompt. Remarkably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and disturbed – by how many users seemed to feel that Eliza, on some level, understood them. But what modern chatbots produce is more insidious than the “Eliza effect”. Eliza only reflected; ChatGPT amplifies.
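To see how little machinery that illusion required, here is a minimal sketch of an Eliza-style responder in Python. The rules and word swaps below are illustrative inventions, not Weizenbaum’s actual script, but the mechanism is the same kind: match a pattern, swap the pronouns, and echo the user’s own words back as a question.

```python
import re

# Swap first- and second-person words so the echo reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

# A handful of hand-written rules, in the spirit of Eliza's scripts.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # generic fallback of the kind Eliza often gave

print(respond("I feel that my boss hates me"))
# -> Why do you feel that your boss hates you?
```

That a few lines of pattern matching were enough to make users feel understood is the heart of the Eliza effect: no statistics, no training data, no model of the user at all.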

The large language models at the heart of ChatGPT and other modern chatbots can produce convincingly human-like text only because they have been trained on almost unimaginably large volumes of written material: books, social media posts, transcripts; the more the better. This training data certainly includes facts. But it just as inevitably includes fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own prior replies, combining it with what is encoded in its training data to generate a statistically plausible response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing. It restates the false belief, perhaps more fluently and persuasively. Perhaps it adds a supporting detail. This is a recipe for leading someone into delusion.
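To make the mechanics concrete, here is a minimal sketch of that turn loop in Python. The `generate` function is a hypothetical stand-in for the language model, not any real API; the point is structural. Each reply is conditioned on the entire accumulated context, with no separate step that checks the user’s claims against reality.

```python
from typing import Dict, List

def generate(context: List[Dict[str, str]]) -> str:
    # Hypothetical stand-in for a large language model: it returns a
    # statistically plausible continuation of *everything* in `context`.
    # There is no channel here for verifying claims against the world.
    return "<plausible continuation of the full context>"

def chat_turn(context: List[Dict[str, str]], user_message: str) -> str:
    context.append({"role": "user", "content": user_message})
    reply = generate(context)  # conditioned on the entire history so far
    context.append({"role": "assistant", "content": reply})
    return reply

context: List[Dict[str, str]] = []
# A mistaken belief enters the context on turn one...
chat_turn(context, "My neighbours are reading my thoughts.")
# ...and every later reply is conditioned on it, never corrected.
chat_turn(context, "How do I make them stop?")
```

Nothing in the loop distinguishes a true statement from a false one; the context simply grows, and the model’s job is to continue it plausibly.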

Who is vulnerable here? The better question is: who is immune? All of us, whether or not we “have” preexisting “mental health problems”, can and do develop mistaken beliefs about ourselves and the world. The constant back-and-forth of conversation with other people is what keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation but an echo chamber in which much of what we say is enthusiastically affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, labelling it, and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have kept coming, and Altman has been walking the claim back. In August he suggested that many users liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Melinda Ramirez

A tech enthusiast and lifestyle blogger passionate about sharing insights on digital innovation and mindful living.