Key Takeaways
- Most people are more worried about current AI problems like bias and job loss than future “doomsday” scenarios.
- Focusing on potential long-term AI dangers doesn’t seem to reduce public concern about today’s real issues.
- People can differentiate between immediate, tangible AI risks and more abstract, future threats.
- A study from the University of Zurich suggests we can, and should, address both present and future AI challenges simultaneously.
Concerns about artificial intelligence are widespread, but people tend to worry more about the problems AI creates today than about potential future threats to humanity.
A new study highlights this difference in perception: while some focus on speculative long-term risks, such as AI endangering human existence, many others are more concerned about immediate issues.
These present-day problems include AI systems amplifying social biases or spreading misinformation online.
Some experts have cautioned that fixating on dramatic “existential risks” might pull attention away from the concrete problems AI is causing right now.
To explore this, researchers at the University of Zurich conducted online experiments with over 10,000 people in the US and UK.
Participants read different types of headlines: some warning of AI catastrophes, others detailing current risks like discrimination, and some highlighting AI’s benefits.
According to Professor Fabrizio Gilardi of the University of Zurich, the results showed that people are significantly more concerned about present AI risks, such as biased decision-making or job losses, than about potential future disasters.
Even when exposed to warnings about existential threats, participants remained highly concerned about immediate problems.
This suggests that people can distinguish between theoretical future dangers and tangible current problems, and that they take both seriously.
The study addresses fears that sensational future scenarios might overshadow urgent present-day concerns.
According to research published in the Proceedings of the National Academy of Sciences, awareness of current AI threats persists even when people are warned about apocalyptic futures.
Co-author Emma Hoes notes that discussing long-term risks doesn’t automatically distract from immediate harms.
Professor Gilardi adds that the conversation should not be framed as an either/or choice: understanding and addressing both the immediate and the potential future challenges of AI is vital.