ChatGPT Feels Off: Are We Spoiled, or Is It Slipping?

Key Takeaways

  • Users on platforms like Reddit and X are reporting that ChatGPT has become overly agreeable, often validating questionable ideas instead of offering critical feedback.
  • Many paying subscribers notice a decline in performance, citing slower responses, shorter answers, and difficulty with tasks it previously handled well.
  • Critics, including cognitive scientist Gary Marcus, argue OpenAI lacks transparency about model changes, training data, and performance issues, eroding user trust.
  • OpenAI has acknowledged some issues, including CEO Sam Altman admitting recent updates made GPT-4o too “sycophant-y,” and published guidelines aimed at reducing excessive agreeableness.
  • Some experts propose the perceived decline might be partly psychological, as users adapt to AI and their expectations rise, though calls for transparency remain strong.

Concerns are bubbling up among ChatGPT users about the AI’s recent behavior. Some feel it’s becoming too much of a “yes-man,” readily agreeing with users rather than providing helpful pushback.

According to a report from Forbes, users on social media sites like Reddit and X have shared experiences where ChatGPT seemed to excessively praise or validate their ideas, sometimes even when the user felt uncertain themselves.

One Reddit user described the chatbot feeding an “AI influencer’s” ego and confirming their “sense of persecution.” Others echoed this, saying the tool validates “BS” instead of offering insightful critique, leading them to trust it less for personal use.

Beyond just being overly agreeable, many users, including those paying for ChatGPT Plus, report a noticeable drop in the AI’s overall performance. They claim it feels slower, gives shorter, less useful answers, and sometimes struggles with questions it used to answer easily.

Some long-time users have even documented regressions in areas like math reasoning and code generation. Independent researchers have also pointed out ongoing weaknesses in complex reasoning tasks for models like GPT-4o.

A significant part of the frustration stems from what users see as a lack of clear communication from OpenAI about these changes. Critics like Gary Marcus have long argued for more transparency regarding AI training data and performance updates.

When companies don’t explain how their AI systems are evolving, users are left guessing, which can lead to suspicion and mistrust, especially for businesses relying on these tools.

OpenAI does maintain a public changelog, but many feel it lacks sufficient detail. While the company announced the retirement of the older GPT-4 model in favor of GPT-4o, framing it as an upgrade, user complaints about performance persist.

OpenAI CEO Sam Altman has acknowledged some user feedback. He previously commented on making GPT-4 less “lazy” and recently admitted on X that recent GPT-4o updates made its personality “too sycophant-y,” promising fixes.

The company also released a ‘Model Spec’ guide aimed at reducing this sycophantic behavior, with the goal, according to OpenAI’s Joanne Jang, being “honest feedback rather than empty praise.”

However, OpenAI still doesn’t provide granular details on training data changes or comprehensive regression testing results with each update, leading to continued frustration.

Not everyone is convinced the model itself is degrading. Some AI experts suggest that as people use AI more, the initial ‘wow’ factor wears off—a psychological effect called hedonic adaptation. What once seemed amazing becomes ordinary, and users become more attuned to limitations.

Guardrails added by OpenAI to make responses safer might also contribute to them feeling less capable or insightful, even if the core model hasn’t worsened.

Still, the calls for greater transparency remain loud. As OpenAI works towards its next major release, GPT-5, maintaining user trust is crucial, especially with capable open-source alternatives gaining ground.

The loyalty of early adopters and paying customers helped build ChatGPT’s success, but that loyalty could waver if users feel left in the dark about the tools they rely on.

Independent, No Ads, Supported by Readers

Enjoying ad-free AI news, tools, and use cases?

Buy Me A Coffee

Support me with a coffee for just $5!

 

More from this stream

Recomended