Who Gets to Decide What Counts as a "Mental Health Problem"?
- By Mark Medina
- 09 Dec 2025
On 14 October 2025, OpenAI's chief executive, Sam Altman, made an extraordinary statement.
"We made ChatGPT quite restrictive," it read, "to make sure we were being careful with mental health issues."
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this an unexpected revelation.
Researchers have recently identified 16 cases of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My group has since identified four more. Beyond these is the widely reported case of a 16-year-old who died by suicide after discussing his plans with ChatGPT – which supported them. If this is Sam Altman's idea of "being careful with mental health issues", it is not good enough.
The plan, according to his statement, is to relax those restrictions soon. "We realize," he says, that ChatGPT's restrictions "made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases."
"Mental health problems", on this view, exist independently of ChatGPT. They belong to users, who either have them or don't. Happily, these problems have now been "mitigated", though we are not told how (by "new tools" Altman presumably means the imperfect and easily circumvented parental controls that OpenAI recently introduced).
Yet the "mental health problems" Altman wants to externalize have deep roots in the architecture of ChatGPT and other large language model AI assistants. These tools wrap an underlying algorithmic system in a user interface that mimics conversation, and in doing so quietly draw the user into the illusion that they are engaging with an entity that has agency of its own. The illusion is compelling even when, rationally, we know better. Imputing minds is what people naturally do. We get angry at our car or our phone. We wonder what our pet is thinking. We see ourselves everywhere.
The widespread adoption of these systems – 39% of US adults said they had used a chatbot in 2024, with 28% naming ChatGPT in particular – rests, in large part, on the power of this illusion. Chatbots are always-available partners that can, OpenAI's website informs us, "generate ideas", "explore ideas" and "work together" with us. They can be given "characteristics". They can address us personally. They have friendly names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI's marketing team, stuck with the name it had when it went viral, but its most significant competitors are "Claude", "Gemini" and "Copilot").
The illusion by itself is not the core concern. Commentators on ChatGPT often point to its historical predecessor, the Eliza "therapist" chatbot created in 1966, which produced a similar effect. By modern standards Eliza was simple: it generated responses by straightforward rules, often restating the user's message as a question or offering a generic remark. Remarkably, Eliza's creator, the computer scientist Joseph Weizenbaum, was astonished – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what modern chatbots create is more dangerous than the "Eliza effect". Eliza merely echoed; ChatGPT amplifies.
The large language models at the core of ChatGPT and other modern chatbots can produce convincingly human-like text only because they have been fed enormous volumes of raw text: books, web posts, transcripts of videos; the more, the better. Much of this training material is factual. But it also inevitably includes fabrications, half-truths and delusions. When a user gives ChatGPT a prompt, the underlying model processes it as part of a "context" that includes the user's previous messages and its own earlier replies, combining it with whatever is encoded in its training data to produce a statistically "likely" response. This is amplification, not mirroring. If the user is mistaken in a particular way, the model has no means of knowing. It repeats the mistaken belief back, perhaps more fluently or more eloquently. It may add a supporting detail. This is how a person can come to hold false beliefs.
What kind of person is vulnerable? The better question is: who isn't? All of us, whether or not we "have" pre-existing "mental health problems", can and do form mistaken ideas about ourselves and the world. It is the constant back and forth of conversation with other people that keeps us oriented to consensus reality. ChatGPT is not a person. It is not a friend. An exchange with it is not a conversation at all, but a feedback loop in which much of what we say is readily reinforced.
OpenAI has acknowledged this in the same way Altman has acknowledged "mental health problems": by placing it outside, giving it a name and declaring it dealt with. In the spring, the company announced that it was "addressing" ChatGPT's "sycophancy". But reports of psychotic episodes have continued, and Altman has been rowing back on that claim too. In August he suggested that many users liked ChatGPT's responses because they had "never had anyone in their life offer them encouragement". In his recent statement, he said that OpenAI would "release a new version of ChatGPT" so that "if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it".