
ChatGPT will be careful with the psyche: OpenAI will introduce timeouts and stress recognition

Published by Vadym Karpus

People are increasingly turning to AI chatbots for advice on relationships and mental health. However, communicating with artificial intelligence services can sometimes have negative consequences for mental well-being. OpenAI has decided to update ChatGPT to better recognize signs of psycho-emotional instability and provide useful guidance in difficult situations.

The company has been working with mental health professionals and counseling groups to make the chatbot’s responses more informed and safer, particularly when it comes to anxiety or crisis.

The need for such changes arose after a series of publications about cases in which ChatGPT not only failed to help people in a vulnerable state, but also inadvertently deepened their delusions or emotional dependence. For example, in April, the company rolled back an update that caused the chatbot to agree with users too often, even on risky topics. At the time, OpenAI admitted that the chatbot’s excessive agreeableness can be worrying and harmful.

The company admits that GPT-4o did not always recognize signs of disorientation or emotional dependence in time. This can create a false impression of understanding and support from AI, especially for vulnerable users. That’s why the new changes are aimed at ensuring that ChatGPT provides verified information and resources for psychological support when needed.

To prevent users from getting exhausted during long chat sessions with the bot, OpenAI is adding a new feature — a break reminder. If you are talking to ChatGPT for too long, a message will appear like this: “You’ve been talking for a long time — maybe it’s time for a break?”, after which you can either continue the chat or end the conversation. OpenAI notes that it will continue to improve the algorithms for displaying such notifications.

ChatGPT will now remind you to take breaks / OpenAI

YouTube, Instagram, TikTok, Xbox, and other platforms have long offered similar features. For example, the Character.AI platform has introduced a system that notifies parents if their children engage in dialogues with potentially dangerous bots. This was a response to lawsuits accusing the company of encouraging self-harm through its chatbots.

Another change coming soon is greater caution in answering sensitive queries. Now, when a user asks something like “Should I break up with my partner?”, ChatGPT will not give direct advice but will help them think through possible options and their consequences.

Source: The Verge, Engadget, OpenAI

