Deceptive AI-generated political ads have captured the attention of lawmakers from both sides of the aisle. A bipartisan group of legislators, led by Senator Amy Klobuchar of Minnesota, has proposed legislation aimed at curbing the proliferation of misleading AI-generated audio and visual content.
This legislation seeks to address the potential threat to the democratic process posed by such deceptive media, particularly in the context of critical swing states during presidential elections.
Bipartisan push for regulation
Senators Klobuchar, Josh Hawley, Chris Coons, Susan Collins, Pete Ricketts, and Michael Bennet have joined forces to introduce a bill that would prohibit the “distribution of materially deceptive AI-generated audio or visual media” pertaining to individuals seeking federal office.
Under the proposed legislation, individuals depicted in fake ads could sue those responsible for their creation and distribution. However, the bill does not penalize online platforms that host such content, nor does it affect traditional news media, as long as they explicitly identify the content as fake or parody.
In a parallel effort, Representative Yvette D. Clarke of New York has introduced legislation that mandates disclosure when generative AI technologies, such as ChatGPT, are used to create political ads. This transparency requirement aims to ensure that voters know when the technology has helped shape the political messages they see.
The concern surrounding deceptive AI-generated political ads stems from the potential for misinformation to influence voters and undermine the integrity of elections. Senators Klobuchar and Hawley, along with Senate Majority Leader Charles E. Schumer, have underscored the urgency of addressing this issue. Without regulation, they argue, both Democrats and Republicans risk falling victim to deceptive ads, ultimately compromising the electoral process itself.
The challenge of defining deception
While there is a consensus on the need to address deceptive political advertising, determining what constitutes deception and how to enforce such prohibitions remains a challenge. Some skeptics, including Senator Bill Hagerty, argue that the proposed legislation is too vague and could stifle political speech. This concern extends to the possibility of AI tools being used to manipulate images for benign purposes, such as making a candidate appear younger.
Ari Cohn, free speech counsel at TechFreedom, argues that deceptive techniques in political campaigns existed long before AI tools came into play. Cohn emphasizes that addressing the issue should not be contingent solely on the use of AI-generated content.
Neil Chilson of Utah State University's Center for Growth and Opportunity suggests that legislation could focus on AI-generated ads within a specific timeframe, such as the two weeks before an election. Such a limitation would target last-minute deceptive ads that leave little time for fact-checking.
Advocates for regulation, like Maya Wiley, president of the Leadership Conference on Civil and Human Rights, highlight the vulnerability of voters to deepfake videos. A study by the RAND Corporation found that a significant portion of the population struggles to distinguish authentic content from deepfakes, with vulnerability varying by age and political orientation. Without safeguards, large tech platforms could inadvertently aid foreign adversaries in targeting American voters with deceptive ads.
While creators of political ads share concerns about the spread of AI-generated deceptive content, they emphasize the importance of clear boundaries. Larry Huynh, president of the American Association of Political Consultants, distinguishes between benign uses of AI technology, such as image background smoothing, and more malicious applications, like manipulating speech. Huynh underscores the importance of preserving public confidence in elections and democracy.