AI’s Getting Smarter, And So Are Its Fabrications

Key Takeaways

  • Advanced AI models are increasingly making things up, a problem known as “hallucination.”
  • Surprisingly, the more powerful these AI systems become, the more prone they seem to be to these errors.
  • AI companies are struggling to pinpoint why this is happening, raising questions about the technology’s reliability.
  • Some experts suggest these fabrications might be an unavoidable aspect of current AI.

Artificial intelligence models often invent information, a problem the industry politely calls “hallucinations.” This isn’t just a minor hiccup; it’s a growing concern, especially with the latest AI designed to “think” through problems.

The issue, as Futurism explains, is that as AI models grow more powerful, they are becoming more prone to making things up, not less. The New York Times has also reported on this troubling trend.

As more people rely on AI chatbots like ChatGPT for a wide array of tasks, the risk of encountering and spreading dubious claims grows. This can lead to anything from simple embarrassment to more serious missteps.

What’s particularly alarming is that AI companies are struggling to determine precisely why chatbots are generating more errors than before. This highlights a perplexing truth: even the creators of AI don’t fully understand how their own technology actually works.

This worrying development challenges the common assumption within the AI industry that models will naturally become more reliable and accurate as they scale up in size and capability.

The stakes couldn’t be higher. Companies continue to pour tens of billions of dollars into building the infrastructure for even larger and more powerful “reasoning” models, making this rise in errors a critical issue.

Some experts are beginning to think that these hallucinations might be an inherent part of the technology itself. If true, this would make the problem practically impossible to overcome completely.

“Despite our best efforts, they will always hallucinate,” Amr Awadallah, CEO of AI startup Vectara, told The New York Times, a point highlighted by Futurism. “That will never go away.”

The problem is so widespread that entire companies are now dedicated to helping businesses navigate and mitigate these AI-generated fabrications.

“Not dealing with these errors properly basically eliminates the value of AI systems,” Pratik Verma, cofounder of Okahu, a consulting firm, commented to The New York Times.

Recent developments at OpenAI underscore this concern. Its latest reasoning models, o3 and o4-mini, released in April 2025, were found to hallucinate more frequently than earlier versions. The o4-mini model, for example, reportedly fabricated information 48 percent of the time on PersonQA, one of the company's internal accuracy benchmarks.

This isn’t an isolated issue. As Futurism notes, competing AI models from Google and DeepSeek are suffering from similar setbacks, suggesting an industry-wide challenge.

Experts have also cautioned that as AI models grow ever larger, the advantages each new model offers over its predecessor could significantly diminish. With companies running low on fresh training data, some are turning to synthetic, or AI-generated, data. This approach could have disastrous consequences for model accuracy.

In short, despite considerable efforts, AI hallucinations are more prevalent than ever. At the moment, the technology doesn’t appear to be moving in a more truthful direction.
