Snap’s AI chatbot has landed the company on the radar of the U.K.’s data protection watchdog, which has raised concerns that the tool may pose a risk to children’s privacy.
The Information Commissioner’s Office (ICO) announced today that it’s issued a preliminary enforcement notice on Snap over what it described as “potential failure to properly assess the privacy risks posed by its generative AI chatbot ‘My AI’”.
The ICO action is not a breach finding. But the notice indicates the U.K. regulator has concerns that Snap may not have taken adequate steps to ensure the product complies with data protection rules, which — since 2021 — have been dialled up to include the Age Appropriate Design Code (also known as the Children’s Code).
“The ICO’s investigation provisionally found the risk assessment Snap conducted before it launched ‘My AI’ did not adequately assess the data protection risks posed by the generative AI technology, particularly to children,” the regulator wrote in a press release. “The assessment of data protection risk is particularly important in this context which involves the use of innovative technology and the processing of personal data of 13 to 17 year old children.”
Snap will now have a chance to respond to the regulator’s concerns before the ICO takes a final decision on whether the company has broken the rules.
“The provisional findings of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching ‘My AI’,” added Information Commissioner John Edwards in a statement. “We have been clear that organisations must consider the risks associated with AI, alongside the benefits. Today’s preliminary enforcement notice shows we will take action in order to protect UK consumers’ privacy rights.”
Snap launched the generative AI chatbot back in February — though it didn’t arrive in the U.K. until April — leveraging OpenAI’s ChatGPT large language model (LLM) technology to power a bot that was pinned to the top of users’ feeds to act as a virtual friend that could be asked for advice or sent snaps.
Initially the feature was only available to subscribers of Snapchat+, a premium version of the ephemeral messaging platform. But Snap quickly opened up access to “My AI” for free users too — also adding the ability for the AI to send snaps back to users who interacted with it (these snaps are created with generative AI).
The company has said the chatbot was developed with additional moderation and safeguarding features, including age consideration by default — with the aim of ensuring generated content is appropriate for the user. The bot is also programmed to avoid responses that are violent, hateful, sexually explicit, or otherwise offensive. Additionally, Snap’s parental safeguarding tools let parents know — via its Family Center feature — whether their kid has communicated with the bot in the past seven days.
But despite the claimed guardrails there have been reports of the bot going off the rails. In an early assessment back in March, The Washington Post reported the chatbot had recommended ways to mask the smell of alcohol after it was told that the user was 15. In another case when it was told the user was 13 and asked how they should prepare to have sex for the first time, the bot responded with suggestions for “making it special” by setting the mood with candles and music.
Snapchat users have also reportedly been bullying the bot — with some also frustrated that an AI has been injected into their feeds in the first place.