Tech Companies Crafting Disclosure Policies for AI in Political Ads as Congress Considers Regulation

Campaigns and advertisers increasingly use artificial intelligence (AI) in political ads, raising concerns about the potential spread of misinformation. Tech giants Meta and Google have announced disclosure policies addressing the use of generative AI in political advertising as lawmakers in Congress weigh regulatory measures.

Disclosure requirements by tech companies

Google’s policy: In September, Google unveiled a policy requiring election advertisers to “prominently disclose” when their ads contain synthetic content that has been digitally generated or altered, including with AI. The policy aims to address concerns about the use of AI in political ads.


Meta’s policy: Meta, the parent company of Facebook and Instagram, has introduced a similar policy. Starting in the new year, political advertisers must disclose the use of AI when their ads feature a “photorealistic image or video, or realistic-sounding audio” that has been digitally manipulated in potentially deceptive ways. This includes depicting real individuals saying or doing things they did not, or depicting events that did not occur.

Criticisms and advocacy

Consumer advocacy group’s perspective: Robert Weissman, president of the consumer advocacy group Public Citizen, commended the tech companies’ efforts but said they are “not enough” and should not substitute for government action. He raised concerns that these policies do not cover all outlets and fail to address deceptive AI in organic posts that are not political ads.

Senator Schumer’s call for government action: Senate Majority Leader Chuck Schumer echoed the need for government intervention, emphasizing that self-imposed guardrails and voluntary commitments by tech companies may not suffice. He expressed concerns about outlier companies that could undermine industry standards and stressed the importance of regulating deceptive AI beyond political ads.

Congress’ regulatory efforts

Introduced bills: Several bills have been introduced in Congress to address the use of AI in political ads. One bill, co-sponsored by Senators Amy Klobuchar, Josh Hawley, Chris Coons, and Susan Collins, would ban the use of deceptive AI-generated audio, images, or video in political ads intended to influence federal elections or fundraising.

Disclaimers and watermarks: Another proposed measure, introduced by Senators Klobuchar, Cory Booker, Michael Bennet, and Representative Yvette Clarke, would require disclaimers on political ads that use AI-generated images or video. However, discussions during the AI Insight Forum raised concerns about issues such as excessive warning labels and hindrances to beneficial AI applications like closed captions and translations.

Enforcement challenges

Tech oversight project’s perspective: Kyle Morse, deputy executive director of the Tech Oversight Project, criticized the tech companies’ policies as “voluntary systems” lacking robust enforcement mechanisms. He argued that these policies are more like press releases and questioned their effectiveness in practice.

Meta and Google enforcement: Meta has stated that ads without proper disclosures will be rejected and that accounts with repeated nondisclosures may face penalties, though it did not provide specifics. Google, for its part, said it would not approve ads violating the policy and might suspend advertisers with repeated violations, but it did not outline exact thresholds for suspension.

Weissman’s priority: Robert Weissman emphasized that the primary concern should be establishing clear rules against misleading AI, with enforcement questions as a secondary consideration. He highlighted the importance of states taking action alongside platform initiatives.

Federal Election Commission’s role

FEC considering rule clarification: The Federal Election Commission (FEC) is considering whether to clarify a rule, in response to a petition from Public Citizen, that would address the use of AI in political campaigns and advertising.

Expert opinion: Jessica Furst Johnson, a partner at Holtzman Vogel and general counsel to the Republican Governors Association, noted that the policies adopted by Meta and Google may serve as a middle ground in the absence of federal guidelines and legislation. She acknowledged the challenges of implementing a complete prohibition and emphasized the need for sensible solutions given the uncertain timeline for congressional action.

As the 2024 election approaches, the use of AI in political advertising remains a topic of concern and debate. While tech companies have taken steps to introduce disclosure policies, lawmakers in Congress are actively considering regulatory measures. The key challenge lies in balancing transparency and the responsible use of AI in the political landscape while ensuring effective enforcement mechanisms. As discussions continue, the role of the Federal Election Commission and the possibility of federal guidelines remain pivotal in shaping the future of AI in political ads.
