Technology · 5 min read · 8 November 2024

AI and the 2024 US Election: What Actually Happened

The 2024 US election was the first national election in the era of widely available generative AI. The doomsday scenarios about AI-generated misinformation were less consequential than expected.

AI · Election · Deepfakes · Misinformation · Policy

The 2024 US election was the first national election conducted entirely in the era of widely available generative AI. For the preceding eighteen months, articles, papers, and commentary had warned about the potential consequences: deepfakes of candidates that voters could not distinguish from real footage, AI-generated articles flooding information channels, voice cloning used in robocalls to impersonate politicians, and targeted persuasion content produced at a scale that would overwhelm traditional fact-checking.

Then the election happened. Some of the predicted phenomena occurred, but most did so at a smaller scale and with less effect than the most alarming forecasts had suggested. The reasons were a mix of effective platform responses, public skepticism that proved more robust than expected, and the fact that AI-generated content was often less effective than traditional misinformation techniques that had been refined over years.

Several predicted incidents did occur. AI-generated robocalls impersonating Joe Biden were used in the New Hampshire primary in January 2024. AI-generated images of Donald Trump in fabricated scenarios circulated on social media. Deepfake videos of various candidates appeared in different contexts. Each incident attracted attention but did not, in any documented case, swing meaningful numbers of votes.

The platform responses had improved significantly compared to previous elections. Watermarking, provenance verification, and automated detection of synthetic media had all advanced enough that obvious deepfakes were caught and labelled or removed relatively quickly on major platforms. Smaller platforms and messaging apps remained vulnerable, but the most-used distribution channels were not as wide open as they had been in 2016 or 2020.

Public skepticism about content authenticity had also evolved. Voters who had been hearing about deepfakes for two years were more likely to view unusual content with suspicion. The same skepticism had downsides, including authentic content being dismissed as AI-generated. But on balance, the broader population had become more careful about taking video and audio at face value.

The non-event nature of the AI misinformation story did not mean that AI had no effect on the election. The use of AI in legitimate campaign operations, including voter targeting, fundraising, and content production, was significant. The question of how those legitimate uses had affected outcomes was harder to study and was still being analysed long after the votes had been counted.

The election demonstrated that the worst-case scenarios for AI in elections had not materialised as predicted. The gradual scenarios, in which AI quietly changed the economics and operations of campaigning, were the more durable story.
