AI Hallucinations Brought Another Legal Trouble for OpenAI

A privacy organization, Noyb, has filed a complaint against OpenAI with the Austrian Data Protection Authority (DPA), alleging that its product ChatGPT violates several EU data protection laws. The organization said that ChatGPT shares incorrect information about people, while the EU’s General Data Protection Regulation (GDPR) requires that information about individuals be accurate and that they be given full access to the information held about them.

OpenAI faces GDPR charges

Noyb was founded by the well-known lawyer and activist Max Schrems. It claims that ChatGPT shared an incorrect birth date for a public figure, and that when he asked OpenAI for permission to access and delete data related to him, his request was denied.


Noyb says that under the EU’s GDPR, any information about an individual must be accurate, and the individual must have access both to the data and to information about its source. According to Noyb, however, OpenAI says it is unable to correct information in its ChatGPT model. The company also cannot say where the information came from, and does not even know what data ChatGPT stores about individuals.

Noyb claims that OpenAI is aware of the problem but appears not to care about it, as the company’s position on the issue is that:

“Factual accuracy in large language models remains an area of active research.”

Noyb noted that inaccurate output may be tolerable when a student uses ChatGPT for homework, but argued that it is clearly unacceptable when it concerns individual people, since EU law requires personal data to be accurate.

Hallucinations make chatbots non-compliant with EU regulations

Noyb mentioned that AI models are prone to hallucinations, generating information that is in fact false. It questioned OpenAI’s technical procedure for producing information, citing OpenAI’s own description that ChatGPT generates

“responses to user requests by predicting the next most likely words that might appear in response to each prompt.”


Noyb argues that this means that even though the company has extensive data sets available for training its model, it still cannot guarantee that the answers provided to users are factually correct.
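The mechanism Noyb points to can be illustrated with a toy next-word predictor. This is a minimal sketch using a made-up bigram table, not OpenAI’s actual architecture: the point is simply that such a model emits whatever continuation is statistically most likely in its training data, with no notion of factual accuracy.

```python
# Toy bigram "language model": for each word, counts of the words observed
# to follow it. Purely illustrative data -- not how ChatGPT works internally,
# but it demonstrates the same principle of predicting the next likely word.
BIGRAM_COUNTS = {
    "max": {"schrems": 5, "planck": 1},
    "schrems": {"was": 3, "founded": 2},
    "was": {"born": 4, "not": 1},
}

def most_likely_next(word: str) -> str:
    """Pick the statistically most frequent follower of `word`."""
    followers = BIGRAM_COUNTS.get(word.lower(), {})
    if not followers:
        return "<unknown>"
    # The model chooses whatever continuation is most probable in its
    # training data -- whether or not that continuation is factually true.
    return max(followers, key=followers.get)

def generate(prompt: str, steps: int = 3) -> list[str]:
    """Greedily extend the prompt one most-likely word at a time."""
    words = prompt.lower().split()
    for _ in range(steps):
        words.append(most_likely_next(words[-1]))
    return words

print(generate("Max"))  # → ['max', 'schrems', 'was', 'born']
```

Nothing in this loop checks the output against reality, which is the gap between "most likely words" and the GDPR’s accuracy requirement.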

Noyb’s data protection lawyer, Maartje de Gaaf, said,

“Making up false information is quite problematic in itself. But when it comes to false information about individuals, there can be serious consequences.”


He also said that any technology has to follow the law and cannot simply play around with it. In his view, if a tool cannot produce correct results about individuals, it cannot be used for that purpose. He added that companies are not yet technically able to build chatbots that comply with EU law on this subject.

Generative AI tools are under strict scrutiny from European privacy regulators; back in 2023, the Italian DPA temporarily restricted ChatGPT over data protection concerns. It is not yet clear what the outcome of the new complaint will be, but according to Noyb, OpenAI does not even pretend that it will comply with EU law.
