China is using cognitive warfare tactics to influence public beliefs and opinions without directly engaging in military confrontation, according to recent public reports. The term reflects the evolving nature of warfare: rather than contesting territory, the strategy centers on manipulating what people think.
China is also relying on artificial intelligence-generated disinformation, such as realistic but fake audio and video, as part of this strategy. What is especially concerning is that, for the first time, state-backed actors are working to manipulate foreign elections, because AI now makes this possible with an ease that did not exist even a year ago.
China is trying to mold opinions
The China-Taiwan rivalry has brought the term cognitive warfare into the mainstream, as China has focused on influence operations for roughly the last two decades. Given the geopolitical situation and realities on the ground, China may have concluded that direct military intervention in Taiwan would be a costly affair.
According to a report published by Microsoft last month, China's use of AI-generated content in Taiwan's general election was a test run, and it is now trying to deploy the same tactics against the US, South Korean, and Indian elections.
North Korea may also partner with China to target this year's US elections through their state-backed cyber groups. The report read,
“Meanwhile, as populations in India, South Korea, and the United States head to the polls, we are likely to see Chinese cyber and influence actors, and to some extent, North Korean cyber actors, work toward targeting these elections.”
Source: Microsoft.
It is assumed that China will create and distribute AI-generated content through social media to sway these elections in its favor. At the moment, fake content has only a limited impact on public opinion, but that could change as the technology advances and China's experimentation grows.
State-backed cyber actors are at the forefront
A Chinese state-backed cyber group called Storm-1376, also known as Dragonbridge or Spamouflage, is said to have been highly active during Taiwan's presidential election. The same group is thought to be behind fake audio of candidate Terry Gou, who had withdrawn from the race. YouTube removed the clip once it was reported, but it had likely reached many users by then.
Pro-sovereignty candidate William Lai, viewed by Beijing as anti-China, was also targeted with a series of AI-generated memes accusing him of stealing state funds. Alongside the memes, there was a surge in AI-generated TV anchors making false claims about candidates, such as the allegation that Lai had fathered illegitimate children.
The anchors were reportedly generated with CapCut, a tool made by the Chinese tech giant ByteDance, which also owns TikTok. Back in February, a report by the Institute for Strategic Dialogue described an account on the X platform with a Western name that shared a video from RT, the Russian network, claiming that Biden and the CIA had sent a gangster to fight in Ukraine. To look legitimate, the account presented itself as being run by a 43-year-old Trump supporter in Los Angeles, using a profile picture taken from a Danish blog.
Many other accounts were identified that routinely repeated content published by Chinese groups such as Storm-1376. Meta, the parent company of Facebook, Threads, and Instagram, removed thousands of suspicious accounts likely linked to the group. But newer accounts are harder to identify, as they adopt organic posting styles to build a following and appear to be operated by humans. China denies the allegations, saying it does not support any activity aimed at influencing elections in any region, but the campaigns attributed to these groups continue.