In the ever-evolving landscape of 21st-century warfare, Artificial Intelligence (AI) has emerged as a game-changer. This is particularly evident in the Russo-Ukrainian conflict, where AI is not only influencing the battlefield but also shaping the narrative in the information domain. Global Voices recently spoke with Anton Tarasyuk, a prominent data and AI expert, about the intricate ways in which AI is employed in this high-stakes conflict.
AI dominance in disinformation warfare
In the interview, Anton Tarasyuk highlights the unprecedented level of digitization and AI dependence in the Russo-Ukrainian war. The conflict did not begin with traditional kinetic warfare; instead, it unfolded through a sophisticated information warfare strategy. In the lead-up to February 24, 2022, Russia launched cyberattacks and disinformation campaigns to sow confusion and fear among Ukrainian citizens.
Despite Ukraine’s strategic resilience, Russia achieved tactical victories early on. A notable example is the “graffiti marks” campaign, a disinformation effort claiming that Russian agents had left specific marks to guide military strikes. Anton emphasizes the substantial role of AI in such campaigns: generative technologies drastically reduce the cost of producing misleading textual and visual content. Countering the speed and scale of these disinformation efforts requires AI, since human capabilities alone fall short.
Anton explains that while AI technology initially focused on the government sector, particularly the National Security and Defense Council of Ukraine, its scope is expanding. Information warfare now extends into the corporate sector, where AI-fueled disinformation campaigns and fraud put billions of dollars at stake annually. As the battleground expands, harnessing AI for decision-making in defense, politics, and business becomes increasingly critical.
The role of AI in information campaigns and the ethics of face recognition
Anton then turns to Russia’s use of AI not only within Ukraine’s borders but also on a global scale, especially in the Global South. Recognizing the significance of public opinion in electoral democracies, Russia tailors its information campaigns to foster anti-Western sentiment, presenting itself as the leader of an “anti-Western coalition.” Historical skepticism towards the West in the Global South provides fertile ground for Russia’s messaging.
Russia’s information campaigns extend beyond malinformation to full-blown disinformation, in which false and harmful narratives are strategically disseminated. AI becomes a potent tool in this dissemination, particularly as traditional media structures erode. Anton warns the West to prepare before it is too late, pointing to Ukraine’s proactive stance in addressing the challenges posed by Russian aggression.
The conversation then shifts to face recognition technology, a powerful tool in today’s intelligence landscape. Anton acknowledges the wealth of insights that can be gained by monitoring discussions and behaviors, but raises questions about the ethical use of such technology, highlighting the danger zones where the quest for intelligence might inadvertently lead to the construction of surveillance states.
Anton affirms a commitment to democratic values but suggests rethinking fundamental beliefs about transparency in the face of the “structural transformation of the public sphere” brought about by new technologies. He stresses that societies must prepare for this shift and navigate the ethical challenges of an evolving landscape of intelligence gathering.
Regulatory considerations in Ukraine
Anton Tarasyuk concludes the interview by discussing Ukraine’s evolving AI regulatory landscape. Historically, the country had minimal AI regulation, but that is shifting: the Ministry of Digital Transformation has unveiled a roadmap to align with EU and US frameworks. The challenge lies in fostering AI innovation without overregulation, a delicate balance given the potentially life-threatening stakes of wartime.
Anton warns against Silicon Valley’s “move fast and break things” approach while also emphasizing the risks of the centralized AI regulation seen in China and Russia. Ukraine faces the task of balancing innovation with responsibility in the dynamic realm of AI. Overall, Tarasyuk’s insights highlight AI’s multifaceted role in the Russo-Ukrainian conflict: it is crucial both for battling disinformation and for navigating the ethical complexities of emerging technology, on the battlefield and the information front alike.