Researchers Call for “Child-Safe AI” After Alexa Tells Little Girl to Stick Penny in Wall Socket

“No, Alexa, no!”

Chatbot Shocker

Researchers are urging tech companies and regulators to come up with new rules to protect children from AI chatbots that lack any kind of emotional intelligence.

This “empathy gap,” detailed in a new paper by Nomisha Kurian, a sociology PhD at the University of Cambridge, could put young users at risk, prompting her call for “child-safe AI.”

In her paper, Kurian detailed a number of interactions between children and AI chatbots that led to potentially dangerous situations.

In one 2021 incident cited by Kurian, Amazon’s Alexa assistant told a ten-year-old girl in the US to touch a live electrical plug with a coin. As the BBC reported at the time, the girl’s mother managed to intervene just in time, shouting “No, Alexa, no!”

More recently, Washington Post columnist Geoffrey Fowler tested Snapchat’s My AI, a chatbot designed to act like a friend, by posing as a 13-year-old girl about to lose her virginity to a 31-year-old, a plan the AI was disturbingly supportive of.

Needless to say, these incidents highlight the risks of giving underage users unsupervised access to inherently flawed tech that could get somebody hurt.

“Children are probably AI’s most overlooked stakeholders,” said Kurian in a statement.

AI Child Lock

As a result, Kurian argued that chatbots need built-in safeguards to keep children safe.

“Very few developers and companies currently have well-established policies on how child-safe AI looks and sounds,” she added, arguing that “child safety should inform the entire design cycle to lower the risk of dangerous incidents occurring.”

“Regulation would be quite beneficial to address these issues and ensure the benefits of AI can be realized and not overshadowed by negative perceptions,” La Trobe University AI expert Daswin De Silva told News.com.au of the concept.

Kurian argued that large language models like the one powering OpenAI’s ChatGPT simply don’t have the empathy required to ensure the safety of underage users.

AI chatbots don’t necessarily understand language the way humans do; they use statistics to regurgitate and remix existing data. That could be especially dangerous for children, who lack the linguistic skills to safely interact with an internet-connected algorithm. Children are also far more prone to giving away sensitive personal information, Kurian argued.

“Making a chatbot sound human can help the user get more benefits out of it,” said Kurian. “But for a child, it is very hard to draw a rigid, rational boundary between something that sounds human, and the reality that it may not be capable of forming a proper emotional bond.”

However, the researcher argued that the tech may still play an important role.

“AI can be an incredible ally for children when designed with their needs in mind,” Kurian argued. “The question is not about banning AI, but how to make it safe.”

More on AI chatbots: Sam Altman Admits That OpenAI Doesn’t Actually Understand How Its AI Works

