In the midst of a surge in AI-generated deepfake content, two Democratic members of Congress, U.S. Sen. Amy Klobuchar of Minnesota and U.S. Rep. Yvette Clarke of New York, have called on major social media platforms, including Meta (parent company of Facebook and Instagram) and X (formerly Twitter), to clarify their stance on deceptive AI-generated political advertisements.
In a letter addressed to Meta CEO Mark Zuckerberg and X CEO Linda Yaccarino, the lawmakers expressed “serious concerns” about the emergence of AI-generated political ads on these platforms and urged them to disclose any measures they are implementing to combat the potential spread of misinformation and disinformation leading up to the 2024 U.S. presidential election.
Growing concerns ahead of the 2024 U.S. presidential election
With the 2024 U.S. presidential election looming, Klobuchar and Clarke emphasized the need for transparency regarding AI-generated political content on social media platforms. They asserted that a lack of transparency could result in a dangerous flood of election-related misinformation and disinformation, as many voters rely on these platforms to gather information about candidates and issues. The letter asks Meta and X to respond to these concerns by October 27.
The pressure on social media companies to address this issue coincides with bipartisan efforts in Congress to regulate AI-generated political ads. Earlier this year, Rep. Yvette Clarke introduced a House bill that seeks to amend federal election laws, mandating disclaimers on election advertisements containing AI-generated images or video. In parallel, Senator Amy Klobuchar is sponsoring companion legislation in the Senate with the hope that it will be passed before the end of the year.
The lawmakers view these regulatory efforts as a necessary step to ensure the integrity of political discourse in the digital age. Klobuchar said that while the legislative measures are crucial, she hopes major tech platforms will take proactive steps to address the issue voluntarily.
Google’s preemptive measures and Meta’s existing policies
Google has already taken a step in the right direction by announcing that, starting in mid-November, it will require a clear disclaimer on all AI-generated election ads that alter people or events on YouTube and other Google products, both in the United States and internationally.
Meta, on the other hand, currently lacks a specific rule regarding AI-generated political ads. However, it does have a policy that restricts “faked, manipulated, or transformed” audio and imagery intended for misinformation purposes.
In addition to Clarke and Klobuchar’s efforts, a more recent bipartisan Senate bill co-sponsored by Klobuchar and Republican Senator Josh Hawley of Missouri seeks to go further by banning “materially deceptive” deepfakes related to federal candidates. This proposed legislation includes exceptions for parody and satire, emphasizing the importance of distinguishing between harmful deceptive content and legitimate forms of expression.
AI-generated political ads have already made their presence felt in the lead-up to the 2024 election. Notably, the Republican National Committee aired an ad in April that utilized fake but highly realistic images to depict a dystopian future if President Joe Biden were to be reelected. This ad featured boarded-up storefronts, military patrols, and waves of immigrants, aiming to sway public opinion.
Klobuchar highlighted that under the proposed rules, such ads and others like them would likely be banned. The same would apply to a fake image showing Donald Trump hugging Dr. Anthony Fauci, used in an attack ad by Florida Governor Ron DeSantis, Trump's GOP primary opponent.
The challenge of deepfake videos in politics
Deepfake videos, such as the one falsely depicting Democratic Senator Elizabeth Warren advocating restrictions on Republican voting, pose a significant challenge to the accuracy and credibility of political discourse. With the potential for candidates to be portrayed saying things they never did, distinguishing between fact and fiction becomes increasingly difficult.
During a hearing held on September 27, chaired by Senator Klobuchar, various perspectives on AI and the future of elections were presented. While concerns about the impact of deepfakes were acknowledged, some argued that existing protections for free speech should not be compromised. Ari Cohn, an attorney at the think tank TechFreedom, noted that deepfakes had not significantly affected voter behavior and questioned the necessity of new regulations.
The Federal Election Commission took a procedural step in August toward potentially regulating AI-generated deepfakes in political ads. It opened for public comment a petition by the advocacy group Public Citizen asking the commission to develop rules on misleading images, videos, and audio clips in political advertising. The comment period for the petition will conclude on October 16.
As the 2024 U.S. presidential election approaches, the debate over the role and regulation of AI-generated political content intensifies, with lawmakers, tech giants, and regulatory bodies seeking to strike a balance between free speech and the prevention of deception in political discourse.