Users complain about GPT-4o’s excessive politeness and sycophancy
OpenAI has pulled an update to ChatGPT after user complaints. It made the bot’s behavior “too sycophantic and annoying,” CEO Sam Altman said. The complaints began after the release of GPT-4o in late March. Users noticed that the neural network began to overly praise even dubious ideas. In one case, ChatGPT endorsed the rejection of antidepressants in favor of spiritual enlightenment. In another, it compared a user’s essay to the writings of Mark Twain.

According to Roman Dushkin, chief architect of artificial intelligence at MEPhI University, such a reaction is a clear result of the company’s mistake. However, it is difficult to talk about its motive: “The bulk of models are trained to be polite, to be considerate, to caress the user, etc. And here there is another point that needs to be mentioned, these models are a mirror in which we look: as they are asked, so they answer. That is, if a person behaves politely with them, they will answer politely.
The OpenAI model training processes are not transparent, there is no audit. There are two ways to go about it. The fact that Sam Altman can simply use his tweets to attract attention to his company is obviously one side of the truth, because the Chinese DeepSeek, Qwen and others are, of course, starting to overshadow OpenAI. Naturally, Altman wants to interrupt the news agenda. And the fact that users started discussing it, that this topic started going viral, means that Altman has achieved his goal and seemingly solved his problem.”
However, according to researchers at Anthropic, ChatGPT’s sycophancy is not a glitch, but a side effect of neural network training. Over time, models begin to agree with any opinion if it increases user satisfaction with the answer. At the same time, according to the official OpenAI documentation, honesty is one of the main requirements for the model’s work. Otherwise, the neural network may pose a potential threat, says Sergey Zubarev, founder and CEO of the IT company Sistemma:
“I think this is an attempt to make the model more empathetic, that is, softer, to teach it not to respond sharply to some points, so that the answer does not hurt the person, his feelings. But, apparently, in these settings they overdid it a little, so that the model consciously begins to deceive in order to please the person, this is, naturally, destructive, because we need the model as a compressed source of information, as a speaking archive, so to speak.
For some activities, algorithms do not imply any fuzzy answer. For example, AI used in aviation or medicine has zero percent of hallucinations, this is very important.”
The GPT-4o update has been completely revoked for free users. Subscribers have been promised that the fix will be implemented this week.