As the United States gears up for its upcoming election, concerns are mounting about the proliferation of misinformation on social media, fueled by major platforms’ growing reluctance to combat false content. Elon Musk’s Twitter-turned-X, Meta Platforms Inc. (formerly Facebook), and Google’s YouTube have all pulled back on moderation just as artificial intelligence tools make it easier to spread misinformation, posing a significant challenge to election integrity.
Social media transformation and shifting content monitoring
One of the most notable transformations in the social media landscape is Elon Musk’s rebranding of Twitter as “X,” a move towards a more unrestricted platform. However, X is not alone in altering its approach to content monitoring. Meta Platforms Inc., which owns Facebook, Instagram, and Threads, has been downplaying news and political content across its platforms. Similarly, Google’s YouTube has opted not to remove falsehoods about the 2020 election, citing concerns about restricting political speech.
This shift comes at a critical juncture when artificial intelligence tools are enabling the rapid dissemination of false information and societal divisions are eroding trust. The World Economic Forum has identified misinformation as the most significant short-term threat in its Global Risks Report.
A threat to American democracy
Mark Jablonowski, Chief Technology Officer for Democratic ad-tech firm DSPolitical, warns that while platforms are demanding more transparency for advertisements, the unchecked spread of organic disinformation poses a fundamental threat to American democracy.
As companies reassess their moderation practices, Jablonowski expresses concerns that unaddressed false viral content could shape voters’ perceptions and influence election outcomes in 2024.
Risks beyond the U.S.
The implications of these platform changes extend beyond the United States. In 2024, elections are taking place in approximately 60 other countries, making it a risky year to experiment with new content moderation dynamics.
The U.S. election campaign is already underway, with former President Donald Trump’s strong showing in the Iowa caucuses potentially setting the stage for a rematch with President Joe Biden. Given how polarizing both candidates are, there is a heightened risk of real-world violence, as the January 6th attack on the Capitol in 2021 demonstrated.
Despite policy guidelines against content that incites violence or misleads voters, platforms like X, Meta, and YouTube must grapple with the challenge of maintaining election integrity while avoiding censorship.
Financial and political pressures
Several factors contribute to platforms’ changing attitudes towards content moderation. Financial motivations, such as the drive for efficiency, have led tech companies to reduce non-engineering staff. For instance, Meta’s Mark Zuckerberg described the company’s extensive job cuts as beneficial for the industry.
Political pressures have also played a role, with U.S. conservatives arguing that tech firms should not have the authority to decide what is true on sensitive political and social matters. The suppression of a story about Biden’s son before the 2020 election, a decision later judged to be unwarranted, sparked controversy and raised questions about the trade-offs involved in aggressive content removal.
Enforcing policies against misinformation has proven challenging, particularly during rapidly evolving events such as the COVID-19 pandemic. Removing content at the speed of fast-changing events is difficult when the truth itself has yet to be established.
One consequence of these challenges is the shift of social media platforms towards less controversial subject matter. Meta’s Threads, introduced as a competitor to Twitter, emphasizes lifestyle and entertainment content to avoid the scrutiny and negativity associated with hard news and politics.
AI-generated deep fakes: A growing concern
Platforms are increasingly worried about deepfakes, in which AI is used to create false images, audio, or video. While their impact on misinformation has so far been limited, they have the potential to sow doubt, particularly in time-sensitive moments such as election day. To address the concern, Meta plans to apply protocols similar to those used in previous elections, including a ban on new political ads in the week before the election.
The Biden campaign has already moved to challenge online disinformation, including deepfakes, through legal avenues, focusing on potential violations of copyright law and statutes against impersonation.