Key Takeaways
- Pope Leo XIV has identified artificial intelligence as a critical issue facing humanity, emphasizing its impact on dignity and labor.
- This stance extends the Vatican’s earlier warnings that AI lacks human values and requires careful, human-centric regulation.
- MIT physicist Max Tegmark has calculated a 90% probability that advanced AI could pose an existential threat to humanity.
- Tegmark is urging AI developers to conduct rigorous risk assessments, akin to those performed before the first atomic bomb test, to quantify the probability of losing control.
- Despite some political skepticism, there is a renewed international effort to collaborate on AI safety research and mitigation strategies.
In his first formal address as the newly elected pontiff, Pope Leo XIV highlighted artificial intelligence as one of the most significant challenges confronting humanity today.
He stated that the church aims to apply its social teaching to navigate this new “industrial revolution,” especially as AI developments pose fresh challenges to human dignity, justice, and work. This approach, he noted, builds on the legacy of Pope Leo XIII’s 1891 encyclical on workers’ rights and the moral dimensions of capitalism.
These remarks echo concerns previously voiced by the late Pope Francis. In his 2024 annual peace message, Pope Francis warned that AI, which lacks human values such as compassion and morality, is too perilous to develop without strict oversight. He called for an international treaty to regulate AI, insisting that technology must always serve humanity, especially in high-stakes domains such as weaponry and governance, according to reports referenced by ZeroHedge.
This growing concern within religious and ethical spheres is mirrored by a similar sense of urgency in the scientific community.
Max Tegmark, a physicist and AI researcher at MIT, has drawn a sobering parallel between the current race to develop artificial superintelligence (ASI) and the dawn of the atomic age. In a new paper, Tegmark introduced the idea of a “Compton constant”—a way to estimate the probability of ASI escaping human control. The term honors physicist Arthur Compton, who in the 1940s famously estimated the odds that the first atomic bomb test would ignite Earth’s atmosphere.
“The companies building super-intelligence need to also calculate the Compton constant, the probability that we will lose control over it,” Tegmark told The Guardian. “It’s not enough to say ‘we feel good about it’. They have to calculate the percentage.”
Tegmark’s own calculations suggest a startling 90% probability that a highly advanced AI could pose an existential threat. His paper urges AI companies to undertake risk assessments as rigorous as those conducted before the first atomic bomb test, when the odds of a global catastrophe were judged to be vanishingly small before proceeding.
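To illustrate the arithmetic behind this kind of estimate, here is a minimal sketch in Python. It assumes each ASI project carries an independent probability of escaping control; the function name, the per-project figures, and the independence assumption are hypothetical illustrations for this article, not values or methods from Tegmark’s paper.

```python
# Hypothetical illustration of aggregating a "Compton constant"-style
# estimate. Neither the numbers nor the independence assumption come
# from Tegmark's paper; they exist only to show the arithmetic.

def overall_loss_of_control(per_project_probabilities):
    """Probability that at least one project loses control of ASI,
    assuming each project's outcome is independent of the others."""
    p_all_safe = 1.0
    for p in per_project_probabilities:
        p_all_safe *= (1.0 - p)  # chance that this project stays safe
    return 1.0 - p_all_safe

# Example: five projects, each assumed to carry a 1% chance of losing
# control, yield roughly a 4.9% overall probability.
estimates = [0.01] * 5
print(f"{overall_loss_of_control(estimates):.3f}")  # prints 0.049
```

The point of such a sketch is the one Tegmark makes in the quote above: even small per-project risks compound across a race with many competitors, which is why he insists developers calculate the percentage rather than rely on feeling good about it.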
As a co-founder of the Future of Life Institute, Tegmark argues that quantifying such risks can help build the “political will” for global safety regulations. He also contributed to the Singapore Consensus on Global AI Safety Research Priorities, which outlines key research areas: measuring AI’s real-world impact, defining intended AI behavior, and ensuring consistent control over AI systems.
This renewed commitment to managing AI risks follows a period Tegmark described as a setback, particularly after the AI Safety Summit in Paris. At that event, U.S. Vice President JD Vance dismissed some safety concerns. Nevertheless, Tegmark observed a positive shift, stating, “It really feels the gloom from Paris has gone and international collaboration has come roaring back.”