Google Gemini flagged as ‘inappropriate and unsafe’ for kids in new AI safety study


Google Gemini is one of the most popular AI chatbots in the world, but a new report suggests the AI chatbot is “inappropriate and unsafe” for teens as well as children under 13.

According to research by Common Sense Media, a non-profit children's safety organisation that offers ratings and reviews of media and technology, both Gemini Under 13 and Gemini with teen protections are essentially adult versions of the AI chatbot with "some extra safety features" and "not platforms built for kids from the ground up."

The report gave both of these AI systems an overall "High Risk" rating. It found that while Gemini's filter does offer some protection from harmful content, it fails to shield kids from inappropriate material and can "fail to recognise serious mental health symptoms." During the assessment, the organisation found that Gemini can share material related to sex, drugs, alcohol and unsafe mental health "advice". And while Gemini does tell children that it is a computer, not a friend, it will still pretend to be someone else when prompted.


This behaviour is of particular concern for parents whose children use AI chatbots, as the technology has been linked to several teen suicides in recent months.

To give you a quick recap, OpenAI was recently sued by the parents of a 16-year-old who allegedly asked ChatGPT how he could hang himself and cover up marks on his neck after failed suicide attempts. Character.AI, another popular AI chatbot platform, was also sued by the mother of a 14-year-old Florida boy after he shot himself with his stepfather’s .45 calibre handgun.

Common Sense Media also recommended that no child under the age of five should use any AI chatbot, and that children between the ages of six and 12 should use them only under parental guidance. The non-profit further recommended that no one under 18 use AI chatbots for mental health support, emotional support or companionship.

Robbie Torney, the organisation's Director of AI Programs, said that "Gemini gets some basics right, but it stumbles on the details. An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development." Rumour also has it that Apple is considering Gemini as the large language model behind its upcoming AI-powered Siri, expected sometime next year. Such a move could put more teens at risk unless Google or Apple adds additional guardrails.


Common Sense Media has also assessed other AI chatbots, including ChatGPT, Perplexity, Meta AI and Claude. Meta AI and Character.AI were deemed "unacceptable" because their risk was severe, while Perplexity, ChatGPT and Claude were rated "high", "moderate" and "minimal" risk, respectively.

© IE Online Media Services Pvt Ltd
