In a recent update to its commercial Terms of Service, Anthropic, a generative artificial intelligence (AI) startup led by former OpenAI researchers, has explicitly pledged not to use client data to train its large language models (LLMs). Effective January 2024, the updated terms assure customers that the company does not anticipate obtaining any rights to customer content and that commercial users retain ownership of all outputs generated through Anthropic’s AI models.
Copyright protection for users
Anthropic’s commitment extends beyond data privacy: the company also promises to support users in copyright disputes. The updated terms explicitly state that customers will be protected from copyright infringement claims arising from the authorized use of Anthropic’s services or outputs. The move aligns with a broader industry trend; in the latter half of 2023, major players such as OpenAI, Microsoft, and Google pledged similar support for customers facing copyright-related legal challenges.
Financial backing for legal challenges
As part of its commitment to legal protection, Anthropic goes a step further by promising to cover the costs of approved settlements or judgments resulting from its AI’s infringements. This financial backing aims to provide customers with increased protection and peace of mind as they engage with Anthropic’s AI models, particularly through the Claude API and Bedrock, Amazon’s generative AI development suite.
Recent legal challenges and industry response
Anthropic’s proactive stance on legal issues comes in the wake of a lawsuit filed by Universal Music Group against the company in October 2023. The lawsuit alleges copyright infringement related to Anthropic’s use of “vast amounts of copyrighted works, including the lyrics to myriad musical compositions.” The commitment to financially support users facing similar challenges reflects the company’s effort to mitigate the legal risks associated with its AI technologies.
At the same time, the legal landscape surrounding AI faces another challenge: author Julian Sancton is suing OpenAI and Microsoft, claiming they used his nonfiction work without authorization to train AI models such as ChatGPT. While these legal battles highlight the growing intersection of AI and intellectual property rights, companies like Anthropic are taking proactive measures to address concerns and protect their user base.
Anthropic’s updated commercial terms of service showcase the company’s commitment to prioritizing data privacy and legal protection for its users. The pledge not to use client data for training purposes and the promise to support users in copyright disputes align with industry efforts to establish ethical AI practices. As the legal landscape surrounding AI continues to evolve, companies like Anthropic are taking proactive steps to navigate challenges, ensuring responsible and user-centric AI development.