AI, Deepfakes, and Misinformation: The 2024 Election & China's Digital Influence Operations
In 2024, media coverage and awareness of deepfakes surged as they became more prevalent in mainstream and social media. This rise in visibility brought with it a range of alarming examples: a Hong Kong finance worker defrauded of millions by scammers using deepfake video to pose as company executives, fake AI robocalls impersonating Joe Biden and urging voters to avoid the polls, and viral AI-generated memes featuring President Trump. Deepfakes and AI-generated content have become so pervasive that they’ve carved out a new subgenre in our political landscape.
As the 2024 election cycle approached, speculation grew about whether deepfakes and AI-generated content would have a significant impact. Many feared that synthetic videos of candidates saying or doing things they hadn’t done could become widespread and hard to distinguish from reality, but that wasn’t the case.
According to the Centre for Emerging Technology and Security (CETaS) report, AI-Enabled Influence Operations: Safeguarding Future Elections, there is little evidence that AI-enabled disinformation measurably changed the outcome of the 2024 US presidential election, though the report cautions that research into AI's effect on voter behavior remains limited.
However, AI-generated disinformation still shaped election discourse by reinforcing existing falsehoods and injecting synthetic content into political debate. According to the CETaS report, this election cycle saw extensive use of automated social media bots, primarily linked to foreign interference from Russia and China, with some domestic bot activity also reported. These bots played a significant role in spreading misleading content on key issues, including Russia’s war in Ukraine, the July 2024 assassination attempt on Donald Trump, and false information about disaster relief efforts after Hurricane Helene. They also amplified conspiracy theories and targeted specific candidates with smear campaigns or favorable coverage.
Bot activity was more sophisticated than in previous elections, with AI-enhanced tools used to create fake profiles, generate posts, and amplify content. These bots acted as force multipliers, reposting, commenting, and boosting disinformation to maximize viewership. They also targeted down-ballot candidates, who receive less scrutiny and fact-checking, increasing the risk of influencing outcomes in smaller-scale races.
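One common research approach to spotting this kind of bot-driven amplification is coordination detection: flagging accounts that post near-identical text at scale. The sketch below is a minimal, purely illustrative example of that idea (it is not drawn from the CETaS report's methodology), using word-trigram Jaccard similarity on hypothetical sample posts:

```python
import re
from itertools import combinations

def shingles(text: str, n: int = 3) -> set:
    """Break a post into overlapping n-word 'shingles' for fuzzy comparison."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Overlap between two shingle sets: 1.0 = identical, 0.0 = disjoint."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical posts from three accounts (illustrative data, not real posts).
posts = {
    "acct1": "breaking the relief funds were all diverted overseas sources say",
    "acct2": "BREAKING: the relief funds were all diverted overseas, sources say!",
    "acct3": "lovely weather at the lake this weekend",
}

sh = {name: shingles(text) for name, text in posts.items()}
for a, b in combinations(posts, 2):
    score = jaccard(sh[a], sh[b])
    if score > 0.5:
        # → possible coordination: acct1 <-> acct2 (similarity 1.00)
        print(f"possible coordination: {a} <-> {b} (similarity {score:.2f})")
```

Real coordination analysis layers in posting-time correlation, account-creation patterns, and network structure; text similarity alone is just one weak signal.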
China’s Digital Influence Campaign
According to the U.S. Department of Defense’s annual report on Military and Security Developments Involving the People’s Republic of China, the misuse of AI to spread disinformation and disrupt democracy fits into the mission space of the People’s Republic of China.
The Chinese Communist Party (CCP) continues to prioritize psychological warfare within its military strategy. The People’s Liberation Army (PLA) developed the concept of cognitive domain operations (CDO), which “combines psychological warfare with cyber operations to shape adversary behavior and decision making” (U.S. Department of Defense 2024). The PLA will likely use this strategy to deter the U.S. and other nations from conflict, to shape the perceptions of target populations, and to polarize their societies.
While the CCP primarily deploys these influence operations domestically, it could also direct them at the U.S. and other nations to further polarize public opinion. In this context, psychological warfare uses propaganda, deception, and coercion to exert pressure and change the target audience's behavior.
The evolving strategy of CDO likely includes synthetic media in the form of generative AI content and deepfakes. The PLA is continuing to develop voice synthesis technology that will enable it to create realistic-sounding deepfakes of “political and military leadership to mislead adversaries and shape their decision-making process” (U.S. Department of Defense 2024). The PLA recognizes the efficiency of generative AI for producing synthetic media, including deepfakes: it demands less human input, often yields more convincing output, and takes far less time to deploy against a target audience. As early as 2020, “elements of the PLA had reportedly created a deepfake to mislead the U.S. public” (U.S. Department of Defense 2024).
The PLA, then, has not only the ability to ignite the wildfire of synthetic media and deepfakes but also to spread it effectively through its bot accounts on social media. Detecting AI and deepfake content helps inform audiences of what’s real, but detection alone lacks the explainability and insight government officials need to understand the context of these incidents. Authentica, our media vetting tool, offers in-depth forensic analysis of synthetic media, giving government officials information about who created it and how, enabling them to take actionable steps toward a solution. For government officials looking to enhance their media vetting capabilities, sign up here for a free demo of Authentica.
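To give a flavor of what one tiny slice of media forensics looks like in practice, here is a minimal, purely illustrative Python heuristic (not Authentica's actual method): camera JPEGs almost always carry an EXIF APP1 segment, while AI-generated images usually do not. Absence of EXIF proves nothing on its own, since social platforms routinely strip metadata, so real pipelines treat it as one weak signal among many:

```python
def looks_like_exif_stripped(jpeg_bytes: bytes) -> bool:
    """Weak heuristic: the JPEG lacks an EXIF APP1 segment.

    Camera firmware almost always embeds EXIF (an APP1 segment tagged
    'Exif\\x00\\x00'); most AI image generators do not. This is only one
    weak indicator, never a verdict on its own.
    """
    is_jpeg = jpeg_bytes[:2] == b"\xff\xd8"           # JPEG start-of-image marker
    has_exif = b"Exif\x00\x00" in jpeg_bytes          # EXIF payload signature
    return not (is_jpeg and has_exif)

# Two tiny hand-built stand-ins (just the markers this heuristic inspects,
# not complete image files):
camera_like = b"\xff\xd8\xff\xe1\x00\x10Exif\x00\x00" + b"\x00" * 8
generated_like = b"\xff\xd8\xff\xdb" + b"\x00" * 8

print(looks_like_exif_stripped(camera_like))     # → False
print(looks_like_exif_stripped(generated_like))  # → True
```

Production forensic tools go far beyond this, combining compression artifacts, frequency-domain fingerprints, provenance signatures, and distribution patterns to build the explainable picture described above.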
Sources
Stockwell, Sam, et al. “AI-Enabled Influence Operations: Safeguarding Future Elections.” Centre for Emerging Technology and Security, 13 Nov. 2024, cetas.turing.ac.uk/publications/ai-enabled-influence-operations-safeguarding-future-elections.
U.S. Department of Defense. Military and Security Developments Involving the People’s Republic of China, 18 Dec. 2024, media.defense.gov/2024/Dec/18/2003615520/-1/-1/0/MILITARY-AND-SECURITY-DEVELOPMENTS-INVOLVING-THE-PEOPLES-REPUBLIC-OF-CHINA-2024.PDF.