On July 13, 2023, The Washington Post published a leaked Federal Trade Commission (FTC) Civil Investigative Demand (CID) directed at OpenAI, LLC. The CID seeks to determine whether OpenAI is complying with FTC standards for privacy, data security, and advertising in relation to its AI products, including ChatGPT and DALL-E. The leaked CID offers a rare, early view into the FTC’s enforcement priorities for the nascent generative AI industry, shedding light on advertising practices, privacy concerns, safety considerations, and data security.
FTC’s Stringent Scrutiny on AI Product Advertising Practices
The leaked CID highlights the FTC’s focus on advertising practices within the AI industry. Specifically, the FTC seeks to understand how OpenAI advertises its Large Language Model (LLM) products, including the information conveyed about the capabilities, accuracy, and reliability of AI outputs. This indicates that AI product advertising is a priority for the FTC, and that it is determined to ensure companies accurately represent their AI products to consumers. The CID emphasizes the importance of truthful advertising in the generative AI industry, particularly to prevent the creation of “dark patterns” that can manipulate users into unintended actions.
FTC’s Rigorous Examination of Privacy Concerns in the Generative AI Industry
The leaked CID examines several key aspects of privacy in the generative AI industry. One significant area of interest for the FTC is the source of the training data sets OpenAI used to develop its products. The CID asks whether OpenAI obtained the data through data scraping, third-party purchases, or publicly available websites. Additionally, the FTC questions the steps OpenAI took to remove personal information from the training data sets, highlighting its concern about data scraping and the secondary use of publicly available personal information for AI training.
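The kind of personal-information removal the CID asks about can be illustrated with a minimal redaction pass over raw training text. This is a purely hypothetical sketch: the patterns, labels, and `redact_pii` function are assumptions for illustration, not OpenAI’s actual pipeline.

```python
import re

# Hypothetical illustration of scrubbing personal information from training
# text -- pattern set and placeholder labels are assumptions, not OpenAI's process.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace matched personal identifiers with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "Contact Jane at jane.doe@example.com or 555-867-5309."
print(redact_pii(sample))  # → Contact Jane at [EMAIL] or [PHONE].
```

Real-world scrubbing is far harder than this sketch suggests (names, addresses, and contextual identifiers resist simple regexes), which is precisely why the FTC’s questions about the adequacy of such steps have teeth.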
The FTC’s focus on personal data in training sets may rest on the concept of “inconsistent secondary use,” which prohibits using consumers’ personal information for purposes inconsistent with those for which it was originally disclosed. This approach could have significant implications for LLM training sets and may increase demand for data sets free of personal information and content subject to intellectual property protections.
The CID also covers user data controls, including how OpenAI handles user requests to opt out of data collection, retention, use, or transfer, or to have their data deleted. The FTC aims to determine whether OpenAI honors consumers’ privacy choices as offered and described in its policies, focusing on potential failures to honor user requests.
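The user controls at issue can be sketched as a toy request handler. The record fields and method names below are hypothetical illustrations of opt-out and deletion handling, not OpenAI’s implementation.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of honoring user privacy choices -- field and method
# names are assumptions for illustration only.
@dataclass
class UserPrivacyRecord:
    user_id: str
    opted_out_of_training: bool = False
    data: dict = field(default_factory=dict)

class PrivacyRequestHandler:
    def __init__(self) -> None:
        self.records: dict[str, UserPrivacyRecord] = {}

    def opt_out(self, user_id: str) -> None:
        """Honor an opt-out request: exclude this user's data from training."""
        self.records.setdefault(user_id, UserPrivacyRecord(user_id)).opted_out_of_training = True

    def delete(self, user_id: str) -> bool:
        """Honor a deletion request; return True if data existed and was removed."""
        return self.records.pop(user_id, None) is not None

    def training_eligible(self, user_id: str) -> bool:
        """True only if the user exists and has not opted out."""
        rec = self.records.get(user_id)
        return rec is not None and not rec.opted_out_of_training
```

The FTC’s concern is the gap this sketch glosses over: a company may expose such controls in its policies yet fail to propagate an opt-out or deletion through every downstream copy of the data.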
The CID also seeks to assess the accuracy and completeness of OpenAI’s privacy policy. The FTC inquires about the personal information collected, its sources and types, storage duration, disclosure recipients, and purposes of use. Any discrepancies in OpenAI’s privacy policy could expose the company to FTC enforcement actions.
FTC’s Inquisitive Stance on the Safety Landscape of Leading LLMs
The leaked CID showcases the FTC’s interest in the safety landscape of leading LLMs in the market. OpenAI is required to provide information on any complaints or reports related to safety challenges caused by its LLMs. This includes risks associated with hallucinations, harmful content, biased representations, disinformation, cybersecurity vulnerabilities, economic impacts, and user overreliance on potentially inaccurate information.
While this line of questioning provides the FTC with valuable insights into AI safety, it also serves as a basis for potential enforcement actions if the information provided contradicts OpenAI’s public statements or reveals harmful aspects of the LLMs, potentially falling under the FTC Act’s definition of “unfair” practices.
Investigation of OpenAI’s Data Security Measures
The leaked CID raises questions about data security both within OpenAI and with respect to its LLMs when they are made available through third-party APIs or plugins. The FTC specifically addresses the March 2023 security incident in which a bug exposed user chat history and payment-related information. OpenAI is required to disclose the number of affected users, the types of exposed information, and the company’s response to the incident.
The CID explores the company’s policies and procedures for assessing risks to user data when integrating APIs or plugins, oversight of third-party API users, restrictions on third parties’ use of user data, and measures to ensure compliance with OpenAI’s data security policies. The FTC emphasizes the importance of due diligence and contractual provisions to prevent misuse of AI technology by partners and holds companies responsible for inadequate security protections.
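One technical complement to the contractual provisions the CID asks about is deny-by-default scoping of third-party access to user data. The client types, scope names, and policy table below are assumptions for illustration, not OpenAI’s actual controls.

```python
# Hypothetical sketch of restricting third parties' use of user data --
# client types and scope names are assumptions, not OpenAI's actual policy.
ALLOWED_SCOPES = {
    "api_partner": {"read_prompts"},                  # bare API integrations
    "plugin": {"read_prompts", "read_user_profile"},  # vetted plugins get more
}

def authorize(client_type: str, requested_scope: str) -> bool:
    """Grant a data-access scope only if policy allows it for this client type.

    Unknown client types fall through to an empty scope set, so access
    is denied by default rather than granted by omission.
    """
    return requested_scope in ALLOWED_SCOPES.get(client_type, set())
```

A check like this enforces in code what the due-diligence and contractual provisions promise on paper, which is the pairing the FTC appears to expect.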
The leaked FTC CID to OpenAI provides a unique opportunity to glimpse into the FTC’s enforcement policy development concerning the rapidly growing generative AI industry. As policymakers attempt to establish guardrails for AI technologies, expect increased activity in this area as the FTC and other regulatory bodies strive to address advertising, privacy, safety, and data security concerns in the realm of AI enforcement.