In response to growing concerns about the spread of misinformation through artificial intelligence (AI)-generated content, the European Commission has put forth draft guidelines to safeguard upcoming elections in the region. The proposed measures target tech platforms such as TikTok and Facebook, requiring them to detect and curb the dissemination of AI-generated content that could manipulate voter behavior or distort electoral processes.
Proposed guidelines for election security
The European Commission has initiated a public consultation on the draft election security guidelines, which specifically target very large online platforms (VLOPs) and very large online search engines (VLOSEs). These guidelines, if implemented, seek to address the threat posed by generative AI and deepfakes to the democratic integrity of European elections.
The draft guidelines outline various measures to mitigate election-related risks associated with generative AI content, including actions platforms should take before and after electoral events, such as providing clear guidance to users during the European Parliament elections. Platforms are also urged to alert users to potential inaccuracies in AI-generated content and to direct them towards authoritative information sources.
Transparency and accountability
One key recommendation is for platforms to indicate the sources of information used to generate AI content, enabling users to verify its reliability. This move towards transparency aims to empower users to discern authentic information from misleading or synthetic content. The guidelines also emphasize the implementation of safeguards by tech giants to prevent the generation and dissemination of misleading content with the potential to influence user behavior.
The proposed guidelines draw on existing initiatives such as the recently approved AI Act and the non-binding AI Pact, which serve as a foundation for the suggested “best practices” for mitigating the risks of AI-generated misinformation. The effort reflects the European Union’s broader commitment to regulating advanced AI systems, particularly in light of the widespread use of generative AI tools such as OpenAI’s ChatGPT.
While the European Commission has not specified a timeline for enforcing these guidelines under the Digital Services Act, Meta, the parent company of Facebook and Instagram, has already announced plans to introduce its own guidelines for AI-generated content. Meta intends to label AI-generated content on its platforms, giving users transparency by identifying such content through metadata or intentional watermarking.
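To make the metadata-based labeling idea concrete, the sketch below shows how a platform might stamp a generated image with a simple provenance tag and read it back. This is only a minimal illustration, not Meta’s actual scheme: the field names (ai_generated, generator) and the choice of PNG text chunks are assumptions for demonstration purposes.

```python
# Minimal sketch of metadata-based AI-content labeling (illustrative only;
# not Meta's published scheme). Uses Pillow's PNG text chunks; the
# "ai_generated" and "generator" field names are hypothetical.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def label_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Re-save an image with a provenance tag embedded in its PNG metadata."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # hypothetical label field
    meta.add_text("generator", generator)   # e.g. the model that produced the image
    img.save(dst_path, pnginfo=meta)


def is_labeled_ai_generated(path: str) -> bool:
    """Check whether an image carries the provenance tag."""
    with Image.open(path) as img:
        # Only PNG images expose text chunks via the .text attribute.
        return img.text.get("ai_generated") == "true" if hasattr(img, "text") else False


# Example usage (hypothetical file names):
# label_as_ai_generated("render.png", "render_labeled.png", "example-image-model")
# print(is_labeled_ai_generated("render_labeled.png"))  # -> True
```

In practice, such labels are easy to strip, which is why the guidelines and industry proposals pair metadata with more robust signals such as invisible watermarking.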