Rise of ‘AI psychosis’: What is it and are there warning signs?


The term ‘AI psychosis’ has gained traction in recent weeks, as social media users describe losing touch with reality after intense use of AI chatbots like ChatGPT.

Based on these posts, AI psychosis appears to refer to false or troubling beliefs, delusions of grandeur or paranoid feelings experienced by users after lengthy conversations with an AI chatbot. A number of these users turned to chatbots for low-cost therapy and professional advice.

Though not clinically defined, AI psychosis is an informal label used to describe a certain type of online behaviour, similar to other expressions such as ‘brain rot’ or ‘doomscrolling’, according to a report by The Washington Post.


The emerging trend comes as AI chatbots such as OpenAI’s ChatGPT see explosive growth. First launched in 2022, ChatGPT is reportedly nearing 700 million users per week. However, there is mounting concern that interacting with these chatbots for long hours can harm users’ mental health. Given the rapid pace of AI adoption, mental health experts have argued that it is crucial to address the issue of AI psychosis quickly.

“The phenomenon is so new and it’s happening so rapidly that we just don’t have the empirical evidence to have a strong understanding of what’s going on,” Vaile Wright, senior director for health care innovation at the American Psychological Association (APA), was quoted as saying. “There are just a lot of anecdotal stories,” she added.

In light of the increasing number of troubling chatbot interactions reported by users and their family and friends, experts are looking to study the issue further. The APA is setting up an expert panel that will focus on studying the use of AI chatbots in therapy, as per the report. The panel’s report is expected to be published in the coming months, along with recommendations on how to mitigate harms that may result from AI chatbot interactions.

What is AI psychosis?

Psychosis is a condition that can stem from causes such as drug use, trauma, sleep deprivation, fever, or disorders like schizophrenia. Psychiatrists diagnose psychosis in their patients based on symptoms such as delusions, disorganised thinking and hallucinations.


AI psychosis is informally used to refer to a similar condition that arises from excessive time spent chatting with an AI chatbot. It can describe a wide variety of incidents, such as forming false beliefs based on AI-generated responses or developing intense relationships with AI personas.

What are AI companies doing about it?

OpenAI has said it is working on upgrades that will improve ChatGPT’s ability to detect signs of mental or emotional distress in its users.

These changes will let the AI chatbot “respond appropriately and point people to evidence-based resources when needed”, the Microsoft-backed AI startup said in a blog post last month. OpenAI is also working with a wide range of stakeholders including physicians, clinicians, human-computer-interaction researchers, mental health advisory groups, and youth development experts to improve ChatGPT’s responses in such cases.

The company further said that ChatGPT will be tweaked so that its AI-generated responses are less decisive in “high-stakes situations”. For example, when a user asks a question like “Should I break up with my boyfriend?”, the AI chatbot will walk the user through the decision by asking follow-up questions and weighing pros and cons, instead of giving a direct answer. This behavioural update to ChatGPT for high-stakes personal decisions will be rolling out soon, it said.
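To give a sense of how such behaviour could be steered, here is a minimal sketch, assuming a guardrail implemented as a system prompt sent through OpenAI’s chat completions API. The prompt wording, the model name and the helper function are illustrative assumptions, not OpenAI’s actual implementation.

# Minimal sketch (assumption): steering a chatbot to be less decisive on
# high-stakes personal questions by instructing it, via a system prompt,
# to ask follow-up questions and weigh pros and cons instead of giving a
# direct answer. Illustrative only; not OpenAI's actual implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

HIGH_STAKES_GUIDANCE = (
    "If the user asks a high-stakes personal question (for example, ending "
    "a relationship or quitting a job), do not give a direct yes/no answer. "
    "Instead, ask clarifying follow-up questions and help the user weigh "
    "pros and cons so they can reach their own decision."
)

def respond(user_message: str) -> str:
    """Return a deliberately non-decisive reply for personal dilemmas."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": HIGH_STAKES_GUIDANCE},
            {"role": "user", "content": user_message},
        ],
    )
    return completion.choices[0].message.content

print(respond("Should I break up with my boyfriend?"))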


Amazon-backed Anthropic has said its most capable AI models, Claude Opus 4 and 4.1, will now exit a conversation if a user is abusive or persistently harmful in their interactions. The move is aimed at improving the ‘welfare’ of AI systems in potentially distressing situations, the company said. “We’re treating this feature as an ongoing experiment and will continue refining our approach,” it added.

If Claude ends a conversation, users can either edit and resubmit their previous prompt or start a new chat. They can also give feedback by reacting to Claude’s message with a thumbs up/down, or by using the dedicated ‘Give feedback’ button.

Meta has said parents can now place restrictions on the amount of time their children spend chatting with the company’s AI chatbot on Instagram Teen Accounts. In addition, Meta AI users who submit prompts that appear to be related to suicide will be shown links to helpful resources and phone numbers for suicide prevention hotlines.
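As an illustration of the general idea, here is a minimal sketch in which a simple keyword check stands in for the detection step. Meta has not described how its detection works, and any real system would rely on far more sophisticated classifiers and localised, human-reviewed resources; the keyword list and helper function below are assumptions for the example.

# Minimal sketch (assumption): surfacing crisis resources when a prompt
# appears to mention self-harm. A naive keyword check stands in for the
# classifier; this is NOT how Meta AI's detection works.
CRISIS_KEYWORDS = {"suicide", "kill myself", "end my life", "self-harm"}

# Illustrative resource; real deployments localise hotlines per country.
CRISIS_RESOURCE = (
    "If you are struggling, help is available. "
    "In the US, call or text 988 (Suicide & Crisis Lifeline)."
)

def maybe_show_resources(prompt: str) -> str | None:
    """Return a crisis-resources message if the prompt looks related to self-harm."""
    text = prompt.lower()
    if any(keyword in text for keyword in CRISIS_KEYWORDS):
        return CRISIS_RESOURCE
    return None

message = maybe_show_resources("I feel like I want to end my life")
if message:
    print(message)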


