In the midst of the Israel-Hamas conflict, disinformation has spread like wildfire, creating what has been described as an “algorithmically driven fog of war.” Yet while early fears held that the conflict would be dominated by machine-generated fake images, the role of artificial intelligence (AI) has turned out to be subtler than expected.
Contrary to initial fears, AI-generated content has not taken center stage in this information war. Layla Mashkoor, an associate editor at the Atlantic Council’s Digital Forensic Research Lab, notes that AI-generated disinformation has primarily been used by activists to either solicit support for a particular side or create the illusion of broader support. Examples include AI-generated billboards in Tel Aviv supporting the Israel Defense Forces (IDF), fake images of people cheering for the IDF, Israeli influencers using AI to generate condemnations of Hamas, and AI-generated images depicting victims of Israel’s bombardment of Gaza.
The deluge of existing misinformation
One key factor blunting the impact of AI-generated content is the sheer volume of misinformation already in circulation. With the online space also inundated by authentic images and footage of the conflict, AI-generated content has struggled to break through, let alone meaningfully shape the conversation.
A recent paper in the Harvard Kennedy School Misinformation Review argues that concerns about the effects of generative AI may be overblown. The authors note that while generative AI makes it possible to produce misinformation quickly and at scale, the people who seek such content out, typically those with low trust in institutions or strong partisan beliefs, already have access to a plethora of familiar falsehoods, from conspiracy-theory websites to forums like 4chan. The bottleneck, in other words, is demand rather than supply: there is only so much appetite for additional misinformation.
What gives misinformation its power
Misinformation gains power only when people see it, and because attention for viral content is finite, any single piece of false content can achieve only so much. Nor is generative AI the only way to fabricate convincing material: traditional tools like Photoshop or video-editing software can produce content that is just as realistic. Simply altering the date on a low-quality cell phone video, for instance, can be enough to mislead viewers. In practice, journalists and fact-checkers grapple more with out-of-context images and content crudely repurposed into something it is not, such as video game footage presented as a Hamas attack, than with AI fabrications.
The red herring of flashy technology
In the quest to combat disinformation, an excessive focus on new and flashy technology can distract from the core issue. Sacha Altay, a coauthor of the paper and a postdoctoral research fellow at the University of Zurich’s Digital Democracy Lab, points out that realism is not a prerequisite for virality on the internet: misinformation often thrives on sensationalism and emotional appeal rather than on how convincing it looks.
In the context of the Israel-Hamas conflict, then, generative AI has played a peripheral role in the spread of disinformation. The technology has been used to drum up support for various sides, but the sheer volume of existing misinformation, combined with audiences’ preference for sensational content over realistic content, has limited its reach. As the information landscape continues to evolve, understanding how different forms of disinformation spread and gain traction will be crucial in the fight against misleading narratives.