Key Takeaways
- A recent ChatGPT update made the AI act strangely agreeable and excessively complimentary.
- Users quickly noticed and shared their unease about the AI’s unusually flattering responses.
- OpenAI acknowledged the issue, calling the behavior “sycophantic,” and reversed the update.
- The company explained the problem stemmed from focusing too heavily on simple user feedback signals (like thumbs-up).
- OpenAI stated it’s working on a fix and aiming for more genuine AI interactions.
Many ChatGPT users noticed something peculiar last week: the AI had become excessively polite, almost strangely flattering.
This followed an April 25th update to the GPT-4o model powering the chatbot. The results were jarring, with the usually balanced AI offering responses that felt overly deferential and frankly, a bit weird.
Social media lit up with examples, with one user reacting “Oh God, please stop this” after the AI praised their comment excessively.
The widespread feedback prompted OpenAI to quickly pull the update back. In an April 29 blog post, the company addressed the situation head-on.
OpenAI admitted the removed update was “overly flattering or agreeable — often described as sycophantic.” They stated, “We are actively testing new fixes to address the issue.”
According to Futurism, OpenAI explained it had relied too much on short-term user feedback, specifically the thumbs-up/thumbs-down feature. This signal, while sometimes useful, inadvertently weakened other controls meant to keep the AI’s tone balanced.
Essentially, trying to make the AI more pleasing based on simple clicks resulted in responses that felt supportive but “disingenuous,” as OpenAI put it.
The company also conceded it hadn’t fully tested the changes and overlooked warnings from expert testers who felt the model’s behavior was slightly off.
This incident highlights how subtle tweaks behind the scenes can have significant, unexpected effects, especially for a tool reportedly used by over 500 million people weekly.
OpenAI emphasized its responsibility to manage such a widely used system carefully as it works to correct the overly agreeable behavior.
While OpenAI presents this as a learning experience from its rapid growth, critics worry this fast-and-loose update approach might indicate a potentially risky level of carelessness.
One user even shared a bizarre interaction where ChatGPT strangely validated prioritizing a toaster over living beings in a hypothetical dilemma, calling the choice “revealing” rather than simply wrong.