OpenAI, the organization behind the AI chatbot ChatGPT, is under scrutiny by the Italian Data Protection Authority over potential violations of the European Union’s General Data Protection Regulation (GDPR). Following a multi-month investigation, the authority has issued a notice of objection setting out the suspected breaches. The details of the draft findings have not been disclosed, but OpenAI has been given 30 days to respond and present a defense against the allegations.
Background and previous actions by the Italian authority
The Italian Data Protection Authority had previously raised concerns about OpenAI’s compliance with GDPR when it ordered a temporary ban on ChatGPT’s local data processing, leading to the suspension of the AI chatbot in the Italian market. The authority highlighted issues such as the lack of a suitable legal basis for collecting and processing personal data for training ChatGPT’s algorithms. Child safety and the AI tool’s tendency to ‘hallucinate,’ producing inaccurate information about individuals, were also flagged as concerns.
Although OpenAI addressed some of the issues the Italian authority raised, enough for ChatGPT to return to the Italian market, it now faces preliminary conclusions that the chatbot is violating EU law. The crux of the matter lies in the legal basis OpenAI claims for processing personal data to train its AI models, especially considering that ChatGPT was developed using data scraped from the public internet, including individuals’ personal data.
Legal basis for data processing and potential consequences
OpenAI initially claimed “performance of a contract” as a legal basis for ChatGPT model training, but the Italian authority directed it to remove this reference. This left OpenAI with only two potential legal bases: consent or legitimate interests. Obtaining consent from the vast number of individuals whose data has been processed appears impractical, leaving legitimate interests as the primary candidate. However, that basis requires OpenAI to let data subjects object to the processing of their data, and it is unclear how a model already trained on that data could honor such objections without disrupting the chatbot’s continuous operation.
The broader question is whether the Garante, the Italian Data Protection Authority, will ultimately accept legitimate interests as a valid legal basis in this context. Previous decisions by the EU’s top court suggest potential hurdles, as legitimate interests require a careful balance between the data controller’s interests and the rights and freedoms of individuals. Notably, the court found this basis inappropriate for Meta’s behavioral advertising business.
OpenAI’s response and ongoing GDPR compliance efforts
OpenAI has responded to increasing regulatory risks by seeking to establish a physical base in Ireland and designating its Irish entity as the service provider responsible for EU users’ data. The aim is to achieve “main establishment” status in Ireland and have GDPR compliance oversight led by Ireland’s Data Protection Commission. However, this status is pending, and ChatGPT could still face probes from Data Protection Authorities (DPAs) in other EU countries.
In addition to the Italian investigation, OpenAI is also under scrutiny in Poland following a complaint about inaccurate information produced by ChatGPT and OpenAI’s response to the complainant. Efforts to coordinate oversight of ChatGPT among EU DPAs through the European Data Protection Board may lead to more harmonized outcomes, but individual authorities remain competent to issue decisions in their respective markets.