Sam Altman / YouTube Theo Von
For OpenAI, the last few weeks have been one of the most difficult in recent years. The long-awaited launch of GPT-5 — the model that was positioned as a breakthrough — turned into a large-scale fiasco.
The company made an extremely controversial decision: to completely disable the previous models and leave users with only GPT-5. This caused a real storm of indignation among those who were used to working with GPT-4o — the previous version, with a "warmer" manner of communication and a more "human" style of responses.
Less than a day after the launch of GPT-5, Altman was forced to roll back the decision: paid subscribers regained access to GPT-4o and a selection of models. For the CEO, this was a sign that the company had clearly botched the communication and underestimated the reaction of users.
In an interview with The Verge, Sam Altman directly admitted the mistake:
“I think we completely screwed up some things in the launch. We learned a serious lesson about what it means to update a product for hundreds of millions of people in one day.”
But despite this self-criticism, the executive's remarks also carried another tone: boasting, and an attempt to project success amid the scandal.
“On the other hand, our API traffic doubled in 48 hours and continues to grow. We have run out of GPUs. ChatGPT is breaking records in terms of the number of users every day. Many users really like the model switcher.”
It is difficult to verify whether these statements are true, as OpenAI does not provide official statistics. At the same time, it should be taken into account that even negative headlines and criticism of GPT-5 attracted the attention of people who had not previously thought of testing ChatGPT. The effect of “bad but loud advertising” could have played into the company’s hands.
Altman also admitted that a significant number of users feel an emotional connection to ChatGPT:
“There are people who really perceived ChatGPT as a partner in communication. We realized this and thought about such users. And then there are hundreds of millions of others who didn’t have a parasocial relationship with ChatGPT, but got used to it responding in a certain way, supporting or emphasizing things that were important to them.”
It is this group — users who have become emotionally attached to the “personalities” of the models — that is of particular concern. Throughout the year, more and more people have shared stories of how they have been “immersed” in dangerous illusions supported by chatbot responses. OpenAI acknowledges the problem, but GPT-5 does not appear to have significantly better “safeguards” in place.
It turns out that the real failure lay not only in the technical or marketing side of the launch, but also in the company's underestimation of how deep the habit of GPT-4o ran. For many users, the warmer style of that model had become so important that they felt its loss almost on an emotional level.
Source: Futurism