Google’s Gemini for Kids: On By Default, Ready or Not

Key Takeaways

  • Google plans to launch its Gemini AI chatbot for Australian children under 13 in the coming months.
  • Experts are sounding alarms about potential risks, including misinformation and manipulation of young users.
  • The AI tool will be available via Google’s Family Link app, reportedly turned on by default, raising concerns.
  • There are growing calls for the government to consider age restrictions for AI chatbots, similar to social media.

Google is set to introduce its Gemini AI chatbot to Australian children under 13 within months, the ABC has revealed. The tech giant is already rolling out the program in the US, with a global launch to follow.

The announcement has quickly sparked calls for the government to consider banning AI chatbots for children. Professor Toby Walsh, a leading AI expert at the University of New South Wales, said society should have been more cautious with social media, and urged leaders to seriously consider age limits for this new technology.

The chatbot is expected to be automatically available to children through Google’s Family Link app. Parents will have an option to switch it off, but the default “on” status has caused unease. “It’s unusual to me that this would be turned on by default,” said Professor Lisa Given from RMIT University, an expert on technology’s social impact.

She highlighted that this approach relies on parents or children having the know-how to navigate controls and disable the feature. Professor Given warned that it might often only be turned off after a problem occurs, at which point “it’s too late.”

While other AI tools like OpenAI’s ChatGPT are accessible online, they are generally not intended for users under 13. Google’s Gemini, however, is one of the few major AI tools explicitly targeting this younger age group.

Professor Walsh believes tech companies might be “tone deaf” to parental concerns due to strong financial incentives. He suggested they are focused on “how to onboard the next generation of users.”

Multiple experts expressed serious alarm, warning that AI chatbots like Gemini could confuse, misinform, and even manipulate children. “Systems that are enabled by AI can certainly hallucinate or make up information,” Professor Given noted, emphasizing the need for sophisticated skills to discern truth.

A significant concern shared by all experts consulted by the ABC is that younger users may struggle to understand that chatbots are not human. Professor Given explained these systems "attempt to replicate or mirror how people engage," adding that even adults can be deceived. She cited research on the AI companion app Replika, where adults formed deep, personal connections with the system.

Google itself acknowledged some risks in an email to US parents, warning that Gemini can make mistakes and advising them to help children think critically about their AI interactions. John Livingstone, director of digital policy for UNICEF Australia, interpreted this as Google admitting “it isn’t entirely safe.” He added, “If a tech platform is acknowledging that there may be risks in their own products then yeah, we should sit up and take notice.”

Google plans to include default protections to filter inappropriate content for younger users, but experts remain skeptical. “It’s very hard to deliver on that promise,” Professor Walsh commented, noting that safeguards are often quickly circumvented. Professor Given echoed this, fearing children might still access unsuitable content.

Currently, Australia lacks specific AI safeguards, although the government has been developing them for over two years. A proposed “digital duty of care,” which would compel tech companies to build safe products, has yet to become law.

Professor Given sees this situation as “an excellent example of why Australia needs a digital duty of care legislation.” She stressed the need for tech companies to manage content appropriately for everyone, regardless of age.

Mr. Livingstone acknowledged AI’s immense potential benefits for children, particularly in education, but underscored the “serious risks” involved. He stated that AI is rapidly changing childhood and Australia must address it seriously.

Professor Walsh concluded that the issue goes beyond just better filters. “It’s asking fundamental questions about should this be age limited,” he said. “We should be having a serious conversation as a society about that.”
