The UK’s Information Commissioner’s Office (ICO) has raised concerns about Snapchat’s artificial intelligence (AI) chatbot, “My AI,” and its potential impact on the privacy of children and other users. The ICO’s provisional findings, released on Friday, suggest that Snap, the US company behind Snapchat, may have failed to adequately assess the privacy risks associated with “My AI” before its launch in April.
The ICO is currently investigating how “My AI” processes the personal data of Snapchat’s roughly 21 million UK users, including children aged 13 to 17. While these provisional findings do not establish a breach of UK data protection law, they signal serious concerns about Snap’s approach to privacy safeguards.
Snapchat’s AI faces scrutiny over privacy risks to children
Information Commissioner John Edwards expressed his apprehension, stating, “The provisional findings of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching ‘My AI’.”
If Snap fails to address the ICO’s concerns satisfactorily, the regulator may take enforcement actions against the company. These actions could range from fines to the outright banning of “My AI” in the UK, a decision that would impact a significant portion of Snapchat’s user base.
Snap responded to the ICO’s notice by stating that it is reviewing the findings and remains committed to user privacy. A Snap spokesperson mentioned, “My AI went through a robust legal and privacy review process before being made publicly available. We will continue to work constructively with the ICO to ensure they’re comfortable with our risk assessment procedures.”
“My AI” is powered by OpenAI’s ChatGPT, one of the most prominent examples of generative AI. Policymakers worldwide are grappling with how to regulate such AI systems due to concerns about privacy and safety. This investigation by the ICO sheds light on the challenges associated with deploying AI on platforms used by younger demographics.
Underage users on social media
Social media platforms, including Snapchat, typically require users to be 13 years of age or older to access their services. However, these platforms have struggled to enforce age restrictions effectively. In August, Reuters reported that the ICO was gathering information to determine whether Snapchat was taking adequate measures to remove underage users from its platform.
Snapchat’s AI chatbot, “My AI,” is under scrutiny from the UK’s Information Commissioner’s Office over privacy risks to children and other users. While the investigation’s provisional findings raise serious concerns, Snap has reaffirmed its commitment to user privacy and is reviewing the ICO’s notice. The broader questions of how to regulate generative AI and how to keep underage users off social media platforms remain significant challenges for policymakers and technology companies alike.