Authors: Piero Soave and Wesley Issey Romain - AI, Cyber Security & Space Team

The year 2024 is sure to be remembered when it comes to elections: first, never before have so many people around the world been called to cast their vote; second, these elections will be the first to take place in a world of widespread Generative Artificial Intelligence (GenAI). The combined effect of these two elements is likely to leave a lasting mark on democracy. This article looks at how GenAI can influence the outcome of elections, reviews examples of risks from recent elections, and investigates possible mitigations.

The year of high-stakes elections

In over 70 elections throughout 2024, some 800mn voters will head to the polls in India, 400mn in Europe, 200mn in the United States of America, and many more across Indonesia, Mexico, and South Africa [1]. In many cases, these elections will be polarized and will feature candidates from populist backgrounds. Previous electoral rounds have scarcely been an example of moderation, featuring instead accusations of foreign interference and a deadly assault on the US Capitol. Whoever wins the most votes will make decisions on topics as consequential as the US-EU relationship, the future of NATO, trade wars, the geopolitical equilibrium in the Middle East, Hindu-Muslim relations, and more. With so much at stake, the risk of election interference warrants a closer look.

Enter GenAI

The launch of OpenAI’s ChatGPT at the end of 2022 brought GenAI to the mainstream. GenAI refers to AI systems with the ability to create content in the form of text, audio or video. Since ChatGPT popularized the technology, thousands of applications have become readily available at minimal to no cost. These systems have been trained on billions of pieces of text, audio and video, and are able to respond to a user query by creating synthetic content in those formats.
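To illustrate how low the barrier to entry has become, the sketch below generates campaign-style text with OpenAI’s official Python client. This is a minimal illustration, not anyone’s production setup: the model name and prompt are assumptions chosen for the example.

```python
# Minimal sketch: producing synthetic text from a one-line user query.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the model name and prompt are illustrative choices, not recommendations.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{
        "role": "user",
        "content": "Write a short, upbeat campaign slogan for a fictional candidate.",
    }],
)
print(response.choices[0].message.content)
```

A few lines like these, repeated across prompts and languages, are all it takes to mass-produce persuasive content - which is precisely why this capability matters for elections.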

The existing legal and regulatory frameworks are poorly suited to mitigate the risks deriving from GenAI. Since the launch of ChatGPT, there have already been lawsuits related to intellectual property [2], sanctions against corner-cutting lawyers [3], egregious reinterpretations of historical facts [4], as well as general concern about the bias inherent in these systems [5]. One specific problem related to GenAI is that of deepfakes, i.e. audio or video files that show people saying or doing things they never in fact said or did. This content is so realistic that it is all but impossible to determine whether what is in front of us is reality or an artificial creation. The consequences are far-reaching, from the potential increase in financial and other fraud [6], to the infringement of privacy and individual rights [7]. But it is in the domain of politics that deepfakes are particularly troubling. They can be used for a variety of bad purposes, from misleading voters about where, when and how they can vote, to spreading fake content from easily recognizable public figures, to generating inflammatory messages that lead to violence [8].

GenAI and misinformation in elections

Misinformation is not a new phenomenon, and it is certainly older than artificial intelligence. However, technology can exacerbate and multiply its effects. By some accounts, “25% of tweets spread during the 2016 US presidential elections were fake or misleading” [9]. GenAI has the potential to turbocharge the creation of fake content, as this no longer requires sophisticated tools and expertise - anyone with an internet connection can do it.

Examples of deepfake interference in the political process abound [10], despite the relatively young age of the technology. In what is perhaps the most consequential event to date, Gabon’s President Ali Bongo appeared in a 2019 video in good health, despite having recently suffered a stroke. The media started questioning the veracity of the video - which is still being debated - ultimately triggering an attempted coup [11]. Crucially, Schiff et al. suggest that “the mere existence of deepfakes may allow for plausible claims of misinformation and lead to significant social and political harms, even when the authenticity of the content is disputed or disproved” [12].

During Argentina’s 2023 presidential elections, both camps made extensive use of AI-generated content. Ads featured clearly fake propaganda images of candidates as movie heroes, dystopian villains or zombies. In an actual deepfake video - labeled as AI-generated - “Mr Milei explains how a market for human organs would work, something he has said philosophically fits in with his libertarian views” [13]. Also in 2023, synthetic content featured in mayoral elections in Toronto and Chicago [14], the Republican primaries in the US, and Slovakia’s parliamentary elections - all the way to New Zealand [15].

In the run-up to general elections in India, the Congress party shared a deepfake video of a Bharat Rashtra Samiti leader calling to vote for Congress. The video was shared on social media and messaging apps as voters went to the polls, and was viewed over 500,000 times before the opposing campaign could contain the damage. AI is being widely used in India to create holograms of candidates and to translate speeches across multiple local languages - as well as for less ethical and transparent objectives [16].

In an attempt to simulate bad actors’ efforts to generate misinformation, researchers tested four popular AI image generators and found that the tools “generated images constituting election disinformation in 41%” of cases. This is despite these tools having policies in place that should prevent the creation of misleading materials about elections. The same researchers looked for evidence of misuse and found that individuals “are already using the tool to generate content containing political figures, illustrating how AI image tools are already being used to produce content that could potentially be used to spread election disinformation” [17].


Controls and mitigations

Regulation around AI is moving fast in response to even faster technological advancements. Perhaps the most thorough attempt at creating a regulatory framework is the EU AI Act [18], approved in March 2024. In the US, a mix of federal and state initiatives seeks to address several AI-related concerns, from bias to data privacy to GenAI. These include the 2023 Presidential Executive Order and related OMB guidance; the NIST AI Risk Management Framework; and state legislation, from the early New York City Local Law 144 to the more recent California guidance and proposed bills. Other countries, from Singapore to Australia and China, have approved similar rules.

Looking at election integrity specifically, the EU adopted in March a new regulation “on the transparency and targeting of political advertising, aimed at countering information manipulation and foreign interference in elections”. This focuses mostly on making political advertising clearly recognizable, but most of the provisions won’t enter into force before the autumn of 2025 [19]. Also in March, the European Commission leveraged the Digital Services Act - which requires very large online platforms to mitigate the risks related to electoral processes - to issue guidelines aimed at protecting the June European Parliament elections. The guidelines include labeling of GenAI content. Although these are just best practices, the Commission can start formal proceedings under the Digital Services Act if it suspects a lack of compliance [20]. In the US, two separate bipartisan bills have been introduced in the Senate: the AI Transparency in Elections Act [21] and the Protect Elections from Deceptive AI Act [22].

These frameworks have yet to stand the test of time, and the proliferation of open-source models and APIs makes it an uphill struggle for regulators. Regulation around deepfakes specifically is scarce and complex, as it needs to address two separate issues: the creation of the synthetic material, and its distribution. What regulation does exist tends to focus on sexual content [23], although in some cases political content is also covered [24]. Existing norms around privacy, defamation or cybercrime can offer some support, but are ultimately inadequate to prevent harm [25]. Some tech solutions are available, such as watermarks, detection algorithms to verify authenticity, or the embedding of provenance tags into content [26] (a simple sketch of the provenance idea follows below). Whether these techniques are able to prevent or counter the creation and spread of deepfakes at scale remains an open question - and some of them may have unintended drawbacks [27]. The experience of social media platforms in tackling the spread of harmful content and misinformation is mixed at best [28]. Platforms’ efforts to mitigate harm (from content moderation to the provision of trustworthy information), and solutions proposed by other parties (such as the removal of the reshare option), are steps in the right direction - but seem unlikely to move the needle.
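To make the provenance-tag idea concrete, here is a deliberately simplified Python sketch. Real schemes, such as the C2PA standard promoted by the Content Authenticity Initiative [26], use public-key signatures and embed a manifest inside the media file; the symmetric key and tag format below are illustrative assumptions only.

```python
# Simplified provenance tagging: bind media bytes to a publisher's key.
# Assumption: real deployments (e.g. C2PA) use public-key signatures and
# embed the manifest in the file itself, not a bare HMAC kept alongside it.
import hashlib
import hmac

PUBLISHER_KEY = b"hypothetical-publisher-signing-key"

def make_provenance_tag(media: bytes) -> str:
    """Hash the content, then authenticate the hash with the publisher's key."""
    digest = hashlib.sha256(media).digest()
    return hmac.new(PUBLISHER_KEY, digest, hashlib.sha256).hexdigest()

def verify_provenance_tag(media: bytes, tag: str) -> bool:
    """Recompute the tag; any edit to the content invalidates it."""
    return hmac.compare_digest(make_provenance_tag(media), tag)

video = b"...original video bytes..."
tag = make_provenance_tag(video)
print(verify_provenance_tag(video, tag))         # True: content is intact
print(verify_provenance_tag(video + b"!", tag))  # False: content was altered
```

Even this toy version makes the limitation visible: a tag proves the content was not altered after signing, but says nothing about whether it was synthetic to begin with, and it is lost as soon as the content is re-encoded or screenshotted.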

It is possible that tech developments in the near future will make it easier to detect and disrupt the flow of disinformation, fake news and deepfakes that threaten to sway elections - such as the recently released OpenAI detector [29]. But the best tool available right now might be literacy interventions, which can make readers more alert to fake news [30, 31]. For example, news media literacy aims to provide the tools to assess information more critically and to identify false information. Hameleers found that this type of intervention is effective at reducing the perceived accuracy of false information, although importantly it does not reduce agreement with it when the reader’s beliefs align with its message [32].

Conclusions

2024 will be a critical year for liberal democracies and election processes worldwide, from the Americas and Europe to Africa and Asia. Election outcomes will shape the direction of the most pressing issues in world affairs.

The advent of tools such as GenAI threatens electoral processes in democratic countries, as it increases the risk of disinformation and can potentially sway voting outcomes. GenAI effectively gives anyone the ability to create synthetic content and deploy it in the form of robocalls, phishing emails, realistic deepfake photography or video, and more. Once this content is online, experience teaches that it is very difficult to moderate or eliminate, especially on social media platforms.

While continuing to support tech-based initiatives to detect or tag synthetic content, governments and educational institutions should invest in information literacy programs to equip people with the tools to critically evaluate information and make informed electoral decisions.


  1. Keating, Dave. “2024: the year democracy is voted out?” Gulf Stream Blues (blog), Substack. Dec 29, 2023. <https://davekeating.substack.com/p/2024-the-year-democracy-is-voted?r=wx462&utm_campaign=post&utm_medium=web&triedRedirect=true>
  2. Grynbaum, Michael M., and Ryan Mac. “The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work.” New York Times. Dec 27, 2023. <https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html>
  3. Merken, Sara. “New York lawyers sanctioned for using fake ChatGPT cases in legal brief.” Reuters. June 26, 2023. <https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22>
  4. Grant, Nico. “Google Chatbot’s A.I. Images Put People of Color in Nazi-Era Uniforms.” New York Times. Feb 22, 2024. <https://www.nytimes.com/2024/02/22/technology/google-gemini-german-uniforms.html>
  5. Nicoletti, Leonardo, and Dina Bass. “Humans Are Biased. Generative AI Is Even Worse.” Bloomberg. June 9, 2023. <https://www.bloomberg.com/graphics/2023-generative-ai-bias/>
  6. Sheng, Ellen. “Generative AI financial scammers are getting very good at duping work email.” CNBC. Feb 14, 2024. <https://www.cnbc.com/2024/02/14/gen-ai-financial-scams-are-getting-very-good-at-duping-work-email.html>
  7. Weatherbed, Jess. “Trolls have flooded X with graphic Taylor Swift AI fakes.” The Verge. Jan 25, 2024. <https://www.theverge.com/2024/1/25/24050334/x-twitter-taylor-swift-ai-fake-images-trending>
  8. Alvarez, R. Michael, Frederick Eberhardt, and Mitchell Linegar. “Generative AI and the Future of Elections.” California Institute of Technology Center for Science, Society, and Public Policy (CSSPP). July 21, 2023. <https://lindeinstitute.caltech.edu/documents/25475/CSSPP_white_paper.pdf>
  9. Bovet, Alexandre, and Hernán A. Makse. “Influence of fake news in Twitter during the 2016 US presidential election.” Nature Communications. Vol. 10, Article 7. Jan 2, 2019. <https://pubmed.ncbi.nlm.nih.gov/30602729/>
  10. Bontcheva, Kalina, Symeon Papadopoulos, Filareti Tsalakanidou, Riccardo Gallotti, et al. “Generative AI and Disinformation: Recent Advances, Challenges, and Opportunities.” European Digital Media Observatory (EDMO). February 2024. <https://edmo.eu/edmo-news/new-white-paper-on-generative-ai-and-disinformation-recent-advances-challenges-and-opportunities/>
  11. Delcker, Janosch. “Welcome to the age of uncertainty.” Politico. Dec 17, 2019. <https://www.politico.eu/article/deepfake-videos-the-future-uncertainty/>
  12. Bueno, Natalia, Daniel Schiff, and Kaylyn Jackson Schiff. “The Liar’s Dividend: The Impact of Deepfakes and Fake News on Politician Support and Trust in Media.” Georgia Institute of Technology GVU Center. <https://gvu.gatech.edu/research/projects/liars-dividend-impact-deepfakes-and-fake-news-politician-support-and-trust-media>
  13. Nicas, Jack, and Lucia Cholakian Herrera. “Is Argentina the First A.I. Election?” New York Times. Nov 15, 2023. <https://www.nytimes.com/2023/11/15/world/americas/argentina-election-ai-milei-massa.html>
  14. Wirtschafter, Valerie. “The Impact of Generative AI in a Global Election Year.” Brookings Institution. Jan 30, 2024. <https://www.brookings.edu/articles/the-impact-of-generative-ai-in-a-global-election-year>
  15. Hsu, Tiffany, and Steven Lee Myers. “A.I. Use in Elections Sets Off a Scramble for Guardrails.” New York Times. June 25, 2023. <https://www.nytimes.com/2023/06/25/technology/ai-elections-disinformation-guardrails.html>
  16. Sharma, Yashraj. “Deepfake democracy: Behind the AI trickery shaping India’s 2024 election.” Al Jazeera. Feb 20, 2024. <https://www.aljazeera.com/news/2024/2/20/deepfake-democracy-behind-the-ai-trickery-shaping-indias-2024-elections>
  17. “Fake image factory: How image generators threaten election integrity and democracy.” Center for Countering Digital Hate (CCDH). March 6, 2024. <https://counterhate.com/wp-content/uploads/2024/03/240304-Election-Disinfo-AI-REPORT.pdf>
  18. Abdurashitov, Oleg, and Caterina Panzetti. “AI Regulatory Landscape in the US and the EU: Regulating the Unknown.” ITSS Verona. Jan 18, 2024. <https://www.itssverona.it/ai-regulatory-landscape-in-the-us-and-the-eu-regulating-the-unknown-ai-cybersecurity-space-group>
  19. “EU introduces new rules on transparency and targeting of political advertising.” Council of the European Union. March 11, 2024. <https://www.consilium.europa.eu/en/press/press-releases/2024/03/11/eu-introduces-new-rules-on-transparency-and-targeting-of-political-advertising/>
  20. “Commission publishes guidelines under the DSA.” European Commission. March 26, 2024. <https://ec.europa.eu/commission/presscorner/detail/en/ip_24_1707>
  21. “Murkowski, Klobuchar Introduce Bipartisan Legislation to Require Transparency in Political Ads with AI-Generated Content.” Office of Lisa Murkowski, United States Senator for Alaska. March 6, 2024. <https://www.murkowski.senate.gov/press/release/murkowski-klobuchar-introduce-bipartisan-legislation-to-require-transparency-in-political-ads-with-ai-generated-content>
  22. “Klobuchar, Hawley, Coons, Collins Introduce Bipartisan Legislation to Ban the Use of Materially Deceptive AI-Generated Content in Elections.” Office of Amy Klobuchar, United States Senator. September 12, 2023.
  23. UK Ministry of Justice, and Laura Farris MP. “Government cracks down on ‘deepfakes’ creation.” Press release. April 16, 2024. <https://www.gov.uk/government/news/government-cracks-down-on-deepfakes-creation>
  24. Ahmed, Trisha. “Minnesota advances deepfakes bill to criminalize people sharing altered sexual, political content.” Associated Press (AP). May 11, 2023. <https://apnews.com/article/deepfake-minnesota-pornography-elections-technology-5ef76fc3994b2e437c7595c09a38e848>
  25. Jodka, Sara H. “Manipulating reality: the intersection of deepfakes and the law.” Reuters. Feb 1, 2024.
  26. Content Authenticity Initiative website: <https://contentauthenticity.org/>
  27. Wirtschafter, Valerie. “The Impact of Generative AI in a Global Election Year.” Brookings Institution. Jan 30, 2024. <https://www.brookings.edu/articles/the-impact-of-generative-ai-in-a-global-election-year>
  28. Aïmeur, Esma, Sabrine Amri, and Gilles Brassard. “Fake news, disinformation and misinformation in social media: a review.” Social Network Analysis and Mining. Vol. 13, Article 30. 2023. <https://link.springer.com/article/10.1007/s13278-023-01028-5#Fn18>
  29. Metz, Cade, and Tiffany Hsu. “OpenAI Releases ‘Deepfake’ Detector to Disinformation Researchers.” New York Times. May 7, 2024. <https://www.nytimes.com/2024/05/07/technology/openai-deepfake-detector.html>
  30. Jones-Jang, S. Mo, Tara Mortensen, and Jingjing Liu. “Does Media Literacy Help Identification of Fake News? Information Literacy Helps, but Other Literacies Don’t.” American Behavioral Scientist. Vol. 65(2). 2021. <https://journals.sagepub.com/doi/10.1177/0002764219869406>
  31. Helmus, Todd C. “Artificial Intelligence, Deepfakes, and Disinformation: A Primer.” RAND Corporation. July 2022. <http://www.jstor.org/stable/resrep42027>
  32. Hameleers, Michael. “Separating truth from lies: comparing the effects of news media literacy interventions and fact-checkers in response to political misinformation in the US and Netherlands.” Information, Communication & Society. Vol. 25(1). 2022. <https://www.tandfonline.com/doi/full/10.1080/1369118X.2020.1764603>