AI Pioneer Calculates the Odds of Losing Control

Key Takeaways

  • Geoffrey Hinton, a key figure in AI development, expresses serious concerns about the technology’s rapid advancement and potential risks.
  • He believes AI surpassing human intelligence could happen within 10 years, sooner than previously thought.
  • Hinton estimates a 10-20% chance that advanced AI could eventually take control from humanity.
  • The development of autonomous AI “agents” capable of performing tasks independently adds to the potential danger.
  • Global competition among companies and nations makes it difficult to prevent the development of superintelligence.
  • He is disappointed in tech companies, including his former employer Google, for exploring military AI applications.

Geoffrey Hinton, a scientist often called the “godfather of AI,” shared some stark warnings about the technology he helped create.

In a recent interview with CBS News, Hinton revealed he’s “kind of glad” to be 77, suggesting he might not live to see AI’s potentially dangerous outcomes.

He cautioned that artificial intelligence is progressing much faster than many experts predicted. Once AI becomes significantly smarter than humans, Hinton worries we might lose control.

Hinton, who received the 2024 Nobel Prize in Physics for his work on neural networks, explained that superintelligent systems could easily manipulate people. He compared developing AI to raising a tiger cub: cute now, but potentially deadly later.

While stressing the uncertainty, Hinton estimated a “10 to 20% chance” that AI systems could one day seize control. A major concern is the rise of AI agents, which can act independently and make things “scarier than they were before,” according to Hinton.

His timeline for potentially uncontrollable AI has also shortened. He previously thought it might take five to 20 years; now he believes superintelligence capable of outperforming humans in all areas could arrive “in 10 years or less.”

According to Business Insider, Hinton feels that fierce competition between tech giants and countries makes it “very, very unlikely” that humanity will collectively decide against building superintelligence.

The challenge, he noted, is to design AI in such a way that it never wants to take control.

Hinton also expressed disappointment in major tech companies, including Google, where he worked for over a decade. He specifically mentioned Google’s reversal on avoiding military uses of AI as a point of concern.

He resigned from Google in 2023, stating he left to speak openly about the risks associated with unchecked AI development. Hinton is now a professor emeritus at the University of Toronto.

