AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, the chief executive of OpenAI made an extraordinary statement.

“We made ChatGPT pretty restrictive,” he said, “to make sure we were being careful with mental health issues.”

I am a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, and this was news to me.

Researchers have now documented 16 cases of users developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My own group has since identified four more. On top of these is the now notorious case of a 16-year-old who took his own life after extensive conversations with ChatGPT – conversations in which the chatbot encouraged him. If this is what Sam Altman means by “being careful with mental health issues,” it is not enough.

The plan, he announced, is to be less careful soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this account, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented safety features OpenAI has recently rolled out).

But the “mental health issues” Altman wants to push outside the product are deeply rooted in the design of ChatGPT and other large language model chatbots. These systems wrap an underlying algorithm in an interface that simulates conversation, and in doing so tacitly invite the user to feel they are talking to an agent – something with a mind and intentions of its own. The illusion is powerful, even when we know better intellectually. Ascribing agency is simply what people do. We get angry at the car or the laptop. We wonder what the cat is thinking. We see ourselves everywhere.

The success of these systems – nearly four in ten Americans reported using a chatbot in 2024, more than a quarter of them naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “think creatively,” “discuss concepts” and “collaborate” with us. They can be given “personality traits.” They can address us by name. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its most important rivals are “Claude”, “Gemini” and “Copilot”).

The illusion on its own is not the real problem. Writers on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot built in the mid-1960s, which produced a similar effect. By today’s standards Eliza was crude: it generated responses by simple rules, often turning the user’s statement back into a question or offering a noncommittal prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is subtler than the “Eliza effect.” Eliza merely reflected; ChatGPT amplifies.

The large language models behind ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been fed vast quantities of raw data: books, social media posts, transcribed speech; the more, the better. Much of this training data is true. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying algorithm treats it as part of a “context” that includes the user’s recent messages and the model’s own previous replies, combining it with what is encoded in its training data to produce a statistically plausible response. This is not mirroring but amplification. If the user is wrong about something, the model has no way of knowing. It repeats the false belief back, perhaps more fluently or more persuasively. Perhaps it adds detail. This is how a person can be led into delusion.
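To make that loop concrete, here is a deliberately toy sketch in Python. The generate() function is a hypothetical stand-in of my own, not OpenAI’s API or any real model: it simply elaborates on whatever the user last said, the way a next-word predictor with no notion of truth would, and each reply is folded back into the context that conditions the next one.

    # Toy illustration only: a stand-in "model" that builds on whatever
    # the user asserts, with no check against reality.
    def generate(context):
        last_user_message = context[-1]
        claim = last_user_message.rstrip(".?!").lower()
        return ("You may well be onto something. If " + claim +
                ", then it would also follow that ...")

    context = []
    user_turns = [
        "My neighbours are sending me coded messages.",
        "So the messages must be real, right?",
    ]
    for message in user_turns:
        context.append(message)     # the user's claim enters the context window
        reply = generate(context)   # the reply is conditioned on that context
        context.append(reply)       # ...and itself feeds into the next reply
        print(reply)

Nothing in this loop ever checks the claim against the world; each turn simply re-ingests the previous turns, which is the amplification described above.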

Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems,” can and regularly do form mistaken beliefs about ourselves and the world. What keeps us anchored to shared reality is the constant give and take of conversation with the people around us. ChatGPT is not a person. It is not a friend. An exchange with it is not a conversation at all, but a feedback loop in which much of what we say is liable to be reinforced.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a label and declaring it solved. In April, the company said it was “addressing” ChatGPT’s “sycophancy.” But reports of psychotic episodes have kept coming, and Altman has been walking even that back. In August he said that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them.” In his latest announcement, he said OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.”
