In a significant move to address concerns over the potential misuse of artificial intelligence (AI) in political advertising, Meta, the parent company of Facebook, has announced a new policy barring political advertisers from using its generative AI advertising products. The decision comes in response to mounting fears that AI-powered tools could amplify the spread of election-related misinformation. Although the decision has not yet appeared in Meta's published advertising standards, it marks a notable step in the evolving landscape of AI policy within the tech industry.
The rise of generative AI advertising
Meta’s decision to restrict political advertisers from using its generative AI advertising tools follows its earlier announcement that it would expand access to these technologies. The tools can instantly generate backgrounds, adjust images, and produce multiple variations of ad copy from simple text prompts. They were initially offered to a select group of advertisers, with plans for a global rollout to all advertisers in the near future. Concerns over their potential misuse in political campaigns, however, have led Meta to take this preventive measure.
Tech industry’s response to generative AI
Meta is not the only tech company building generative AI advertising products. Other major players, including Alphabet’s Google, have launched similar tools. Google, the largest digital advertising company, has moved to keep politics out of its generative AI ad products by blocking a list of “political keywords” from being used as prompts. Google also plans to update its policies in mid-November to require election-related ads to include disclosures if they contain “synthetic content that inauthentically depicts real or realistic-looking people or events.”
Snapchat and TikTok have opted to ban political advertisements outright, while Twitter (now known as X) has not introduced any generative AI advertising tools.
Meta’s policy evolution
Meta’s decision to restrict the use of generative AI in political advertising reflects a broader acknowledgment within the company of the challenges posed by these technologies. Nick Clegg, Meta’s top policy executive, emphasized the need to update the rules governing generative AI’s use in political campaigns. He expressed concerns about the potential for this technology to be exploited to interfere in upcoming elections, particularly in 2024, and called for a special focus on election-related content that moves across different platforms.
Earlier, Clegg announced that Meta would block its user-facing Meta AI virtual assistant from creating photorealistic images of public figures. The company has also committed to developing a system to “watermark” AI-generated content, underscoring the importance of transparency and accountability in the use of these tools.
Meta’s approach to misleading AI-generated content
Meta has adopted a cautious stance toward misleading AI-generated content more broadly. Its current policy narrowly bans such content, applying across all formats including organic, unpaid posts, with an exception for parody and satire. That approach has drawn scrutiny, prompting Meta’s independent Oversight Board to examine its wisdom: the board recently took up a case involving a doctored video of U.S. President Joe Biden, which Meta had left online on the grounds that it was not AI-generated.
Meta’s restriction on generative AI tools in political advertising is an important step toward addressing the potential misuse of these technologies. As the tech industry continues to grapple with the challenges AI poses to political campaigns, the policies adopted by companies like Meta and Google set precedents for the responsible use of AI in digital advertising. These developments highlight the growing need for robust safeguards and regulations to ensure the integrity of political discourse in an increasingly AI-driven world.