In a legal clash that could reshape the landscape of artificial intelligence and its integration with news dissemination, the New York Times Company has sued OpenAI and Microsoft, the makers of ChatGPT and Bing Chat. The Times alleges not only widespread copyright infringement but also the propagation of inaccurate information falsely attributed to the esteemed news organization.
At the heart of the suit, The New York Times accuses OpenAI and Microsoft of reproducing its content verbatim and of generating false narratives under its name. The complaint also sheds light on a broader risk of large language models (LLMs): the phenomenon of AI “hallucinations.”
The legal battle unfolds
The New York Times accuses both OpenAI and Microsoft of training their chatbots on its copyright-protected material without authorization. The Times points to instances where prompts to ChatGPT produced verbatim excerpts from paywalled articles, raising concerns about unauthorized reproduction of subscriber-only content.
The complaint emphasizes the prevalence of AI “hallucinations,” citing an alarming example in which Bing Chat “completely fabricated” a paragraph from a Times article, including quotes attributed to Steve Forbes’s daughter, Moira Forbes, that appeared nowhere in the original article or anywhere else on the internet. In another instance, Bing Chat generated a list of 15 heart-healthy foods supposedly drawn from a Times article, even though 12 of them were not mentioned in the original piece.
The heart of The Times’ grievance lies in the potential harm caused by misinformation disseminated by these AI models. It argues that users seeking information from a search engine should receive accurate links to the original articles, not unauthorized copies or fabricated content falsely bearing its name.
Market impact and ethical dilemmas
As legal experts analyze the case, some reports suggest The Times may have a strong claim of market harm: the complaint outlines instances where reproduction of paywalled content could diminish the newspaper’s subscriber base, since readers who can get the Times’s reporting from an LLM have less reason to subscribe. The irony is that this dynamic could ultimately backfire on the AI companies themselves.
Noah Feldman, a noted Bloomberg columnist, weighs in, suggesting that taking business away from The New York Times could backfire on OpenAI and Microsoft. He argues that these AI giants need reliable news organizations to exist if they are to provide trustworthy information as part of their services, and that AI companies therefore have both a rational and an economic obligation to pay for the information they use.
OpenAI’s response and ongoing dialogue with the New York Times
OpenAI, caught off guard by The Times’ legal action, has expressed surprise and disappointment. The company emphasizes its commitment to respecting content owners’ rights and notes that it had been in ongoing talks with The New York Times. In November, OpenAI announced Copyright Shield, a program offering to cover customers’ legal costs incurred from copyright lawsuits, signaling a proactive approach to such concerns.
As the legal battle unfolds, questions linger about the future of AI’s integration with news dissemination and the ethical responsibilities of tech giants like OpenAI and Microsoft. Can these companies balance innovation with respect for the intellectual property of news organizations? The New York Times has thrown down the gauntlet, seeking billions in damages, and the outcome of this clash could shape the evolving relationship between traditional journalism, cutting-edge artificial intelligence, and the delicate dance of information dissemination in the digital age.