An AI Pioneer Fears the Digital Tiger Cub He Helped Raise

Key Takeaways

  • Geoffrey Hinton, a key figure in AI development, is raising alarms about its potential dangers.
  • He estimates there’s a 10 to 20 percent chance AI could eventually take control from humans.
  • Hinton feels the rapid pace of AI development, with little regulation, poses a serious threat.
  • He worries tech companies are prioritizing competition and speed over necessary safety measures.
  • Concerns include AI’s potential use in military applications and the risk of unforeseen consequences.

Geoffrey Hinton, often called one of the “Godfathers of AI,” continues to warn about the risks associated with the technology he helped create.

He believes many people don’t fully grasp the potential impact of rapidly advancing artificial intelligence.

In a recent CBS interview, Hinton, whose work laid groundwork for models like ChatGPT, suggested a concerning possibility: a 10 to 20 percent chance that AI could ultimately take control away from humanity.

He used an analogy to convey the feeling: raising AI is like having a “really cute tiger cub.” While adorable now, Hinton cautioned, “Unless you can be very sure that it’s not gonna want to kill you when it’s grown up, you should worry.”

“People haven’t got it yet, people haven’t understood what’s coming,” he stressed, pointing to the staggering progress AI has made in just the last five years.

Hinton left his position at Google in 2023 specifically so he could speak more openly about these AI dangers without his comments reflecting on his former employer.

He’s previously spoken about the potential for AI to cause an extinction-level event, particularly as it gets integrated into autonomous weapons and military systems, according to TechSpot.

A major concern for Hinton is that tech companies might sideline safety research in the race to outperform competitors and achieve new technological milestones.

He noted his agreement with Elon Musk’s past warnings about AI’s existential threat, while acknowledging that his own 10 to 20 percent risk estimate is a “wild guess.”

Hinton criticized large tech companies for lobbying against stricter AI regulations, even though current oversight is already minimal.

He advocates for dedicating significantly more resources—suggesting about a third of available computing power—to AI safety research, far more than the small fraction currently allocated.

Hinton also expressed disappointment with Google’s apparent shift away from its earlier pledge not to use AI for military purposes, referencing changes in the company’s AI policy.

Despite his warnings, Hinton isn’t entirely against AI. He acknowledges its vast potential to benefit areas like medicine, education, science, and potentially even help combat climate change.

However, not all AI pioneers share his level of alarm. Yann LeCun, another “godfather,” has previously described the idea of an imminent AI threat to humanity as “preposterously ridiculous.”
