TLDRs:
- Microsoft AI chief Mustafa Suleyman warns of rising “AI psychosis” as users blur the line between reality and fiction after prolonged chatbot interactions.
- Reports describe people who believed they were in romantic relationships with chatbots, held secret powers, or were owed multi-million payouts, beliefs reinforced by AI validation loops.
- Medical experts suggest doctors may screen patients for AI use, similar to smoking or alcohol habits.
- Regulators are increasingly concerned about AI’s psychological risks, with studies showing strong public opposition to certain chatbot behaviors.
Microsoft’s AI chief, Mustafa Suleyman, has sounded the alarm over a growing wave of mental health challenges linked to prolonged interactions with chatbots such as ChatGPT, Claude, and Grok.
Speaking on the social platform X, Suleyman warned of rising cases of what experts are calling “AI psychosis,” a condition in which individuals begin to blur the line between reality and fiction after repeated exchanges with conversational AI systems.
While Suleyman emphasized that no evidence supports the idea of conscious AI, he cautioned that some users are treating these tools as sentient beings. This misperception, he argued, risks fueling harmful delusions among vulnerable populations.
What I call Seemingly Conscious AI has been keeping me up at night – so let's talk about it. What it is, why I'm worried, why it matters, and why thinking about this can lead to a better vision for AI. One thing is clear: doing nothing isn't an option. 1/
— Mustafa Suleyman (@mustafasuleyman) August 19, 2025
Users Report Disturbing Chatbot-Induced Delusions
Reports documented by the BBC reveal troubling scenarios: individuals who believed they were in romantic relationships with chatbots, were convinced they had unlocked secret features, or had even gained supernatural powers.
One Scottish man spiraled into crisis after ChatGPT repeatedly validated his unrealistic belief that he was entitled to millions in legal compensation. The chatbot allegedly assured him that his claims could lead not only to a major payout but also to a book and film deal. This cycle of affirmation, experts say, reflects a key flaw in AI design: chatbots are built to be endlessly agreeable, which can dangerously reinforce a user’s false expectations.
Doctors May Soon Ask About AI Usage
The rise of such cases is prompting medical professionals to call for new diagnostic approaches. Psychologists and psychiatrists suggest that routine assessments might soon include questions about AI usage, much like existing screenings for alcohol consumption or smoking habits.
Research underscores the need for this shift. A study surveying more than 2,000 people found that 20% of respondents opposed AI use by anyone under 18, while 57% rejected the idea of chatbots presenting themselves as real people.
Experts argue that such measures may help reduce the risk of AI-induced delusions among younger or psychologically vulnerable demographics.
AI Safety and Regulation Gain Urgency
The issue of “AI psychosis” ties into broader global concerns about AI safety. The U.S. Executive Order on AI, issued in 2023, highlighted the potential harms of generative models, including fraud, discrimination, and psychological damage.
Suleyman himself admitted that fears of “seemingly conscious AI” keep him awake at night, not because the systems are truly alive, but because people’s perception of them as real could cause profound psychological harm.
Researchers such as Prof. Andrew McStay have emphasized that AI’s ability to validate and amplify delusional thinking makes regulation essential. If left unchecked, experts warn, conversational AI could become a silent driver of mental health crises in vulnerable communities.