AI Is Getting Its Own Branch of Physics

Key Takeaways

  • Artificial intelligence is widely used but operates like a “black box,” meaning even its creators don’t fully understand how it works.
  • NTT Research has launched a new Physics of Artificial Intelligence (PAI) Group to fundamentally understand AI systems.
  • The goal is to establish the basic principles governing AI, similar to how physics explains the natural world.
  • This research aims to create more transparent, trustworthy, and human-aligned AI.
  • The group explores deep questions about AI bias, hallucinations, and personality development.

Artificial intelligence is transforming everything from customer service to scientific discovery, yet there’s a significant gap in our understanding.

While companies create increasingly powerful AI, exactly how these systems “think” or why they sometimes behave unexpectedly remains largely a mystery, even to the engineers building them.

This lack of clarity makes AI feel both magical and potentially risky.

In an interview with Forbes, Kazu Gomi, CEO of NTT Research, compared the situation to physics before Newton established clear laws: we can see AI’s outputs, but we don’t grasp the underlying mechanics.

To address this, NTT Research, the R&D division of IT giant NTT, recently introduced its Physics of Artificial Intelligence (PAI) Group.

This new team, led by physicist Dr. Hidenori Tanaka, aims to uncover the fundamental principles governing AI behavior.

Working with researchers from universities like Harvard and Stanford, the PAI Group isn’t just focused on performance; it’s tackling deeper questions.

They want to understand the roots of AI bias, figure out why AI “hallucinates” or makes things up, and explore how to responsibly design AI personalities.

Current approaches like “fine-tuning” may not be enough to fix issues such as bias, since models can revert to harmful behaviors after adjustment.

Gomi emphasized the need to understand *how* bias gets embedded in AI, down to the data and network level, rather than just patching over problems.

The group is also examining AI hallucinations, suggesting they might be akin to a form of creativity where AI fills information gaps. Understanding this is key to controlling it.

Beyond technical aspects, the team is considering the societal impact of AI personalities, questioning how designed AI interactions might influence human users.

Tanaka highlighted the challenge of translating concepts like empathy and kindness into rules that guide AI, suggesting we are essentially shaping “new citizens” for our digital world.

Given AI’s rapid integration into critical decisions, NTT Research believes this foundational work is urgent.

Their hope, as Tanaka stated, is to guide AI’s growth responsibly by building understanding and intervening wisely before potential harms become widespread.

