Snap’s AI chatbot has brought the company to the attention of the UK’s data protection watchdog, raising concerns that the tool could put children’s privacy at risk.
The Information Commissioner’s Office (ICO) today announced it has issued a preliminary enforcement notice on Snap, warning that the company “may have failed to adequately assess the privacy risks posed by the generative AI chatbot ‘My AI.’”
The ICO’s action is not a finding of a breach. But the notice indicates that the UK regulator has concerns that Snap may not have taken adequate steps to ensure the product complies with data protection rules, which since 2021 have included the Children’s Code, a set of design standards for services likely to be accessed by minors.
“The ICO’s investigation provisionally found that the risk assessment Snap conducted before launching ‘My AI’ did not adequately assess the data protection risks posed by the generative AI technology, particularly to children,” the regulator wrote in a press release. “The assessment of data protection risk is particularly important in this context, which involves the use of innovative technology and the processing of personal data of children aged 13 to 17.”
Snap will now have an opportunity to respond to the regulator’s concerns before the ICO makes a final decision on whether the company has broken the rules.
“The provisional findings of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching My AI,” Information Commissioner John Edwards said in a statement. “We have been clear that organisations must consider the risks associated with AI, alongside the benefits.”
Snap announced its generative AI chatbot in February, though the feature didn’t arrive in the UK until April. Built on OpenAI’s ChatGPT large language model technology, the bot is pinned to the top of users’ feeds and acts as a virtual friend that users can ask for advice or send snaps to.
Initially, the feature was only available to subscribers of Snapchat+, the premium version of the ephemeral messaging platform. However, Snap soon opened up access to “My AI” to free users as well, and also added the ability for the AI to send snaps back to users who interact with it (these snaps are created with generative AI).
The company has said the chatbot was developed with additional moderation and safeguarding features, including taking users’ age into account by default, with the aim of ensuring that generated content is appropriate for them. The bot is programmed to avoid violent, hateful, sexually explicit, or otherwise offensive responses. Additionally, Snap’s parental controls let parents know whether their child has communicated with the bot in the past seven days, via its Family Center feature.
However, despite the claimed guardrails, there have been reports of the bot going off the rails. In an early assessment in March, The Washington Post reported that the chatbot had recommended ways to mask the smell of alcohol after being told the user was 15 years old. In another case, when told the user was 13 and asked how they should prepare to have sex for the first time, the bot responded with suggestions for “making it special” by setting the mood with candles and music.
There have also been reports of Snapchat users bullying the bot, along with complaints about the AI being injected into their feeds in the first place.
Asked for comment on the ICO notice, a Snap spokesperson told TechCrunch:
We are closely reviewing the ICO’s provisional decision. Like the ICO, we are committed to protecting user privacy. In line with our standard approach to product development, My AI went through a robust legal and privacy review process before being made available to the public. We will continue to work constructively with the ICO to ensure they are satisfied with our risk assessment procedures.
This is not the first time AI chatbots have attracted the attention of European privacy regulators. In February, Italy’s Garante ordered Replika, the San Francisco-based maker of a virtual friendship service, to stop processing local users’ data, citing concerns about risks to minors.
The following month, the Italian authority issued a similar cease-and-desist order against OpenAI’s ChatGPT tool. That block was lifted in April, but only after OpenAI added more detailed privacy disclosures and some new user controls, including the ability for users to request that their data not be used to train its AI and to ask for their data to be deleted.
The regional launch of Google’s Bard chatbot was also delayed after concerns were raised by its main privacy regulator in the region, the Irish Data Protection Commission. It subsequently launched in the EU in July, after adding more disclosures and controls. Meanwhile, a regulatory taskforce set up within the European Data Protection Board remains focused on assessing how to enforce the bloc’s General Data Protection Regulation (GDPR) on generative AI chatbots, including ChatGPT and Bard.
Poland’s data protection authority also confirmed last month that it was investigating complaints against ChatGPT.