This November’s election is the first in which generative AI is widely available, challenging policymakers to establish guardrails on the use of this technology. In particular, in the absence of federal legislation regulating the use of deepfakes – realistic yet fabricated media – in political ads, states have spearheaded this effort. This year, 17 states have enacted legislation on the issue, bringing the total number of states with deepfake-related legislation to 23.
While traditional media continues to dominate political ad spending, digital platforms are growing in importance. Connected TV (CTV), which refers to televisions and devices that stream content over the internet, is expected to receive 45 percent of digital ad spending in 2024, up from 19 percent in 2020.
State laws fall into two categories: disclosure requirements and temporary bans.
In July, the FCC issued a Notice of Proposed Rulemaking that would effectively require television and radio political ads to disclose whether they include AI-generated content. However, the rule is unlikely to take effect before the election.
The Deepfake Threat
The rapid growth in the quantity and sophistication of generative AI is cause for concern as the campaign season for the November election moves into high gear.
Bad actors have already used this technology to create deepfakes intended to deceive voters about candidates and the voting process. A deepfake robocall impersonating President Biden’s voice, sent to Democratic voters in New Hampshire telling them not to vote in the state’s primary, is perhaps the most high-profile example of how the proliferation of this technology is creating new avenues for fraud, scams, and manipulation. The spread of deepfakes is likely to confuse and disillusion voters in an already hyperpolarized political environment.
The spread of mis- and disinformation also presents a threat to civic engagement, as voters may choose not to participate in our nation’s most fundamental process if they feel that they cannot access trustworthy information on the election. Yet mitigating the threat of AI-generated election disinformation online is complicated, as social media platforms vary in their willingness to detect, ban, or require disclosures on AI-generated content.
Some platforms have adopted their own rules in advance of a federal framework, with Meta requiring AI disclosures for all of its advertising and prohibiting any new political ads in the week leading up to the election. Google requires any political ad with modified content that “inauthentically depicts real or realistic-looking people or events” to include a disclosure of that fact but does not require AI disclosures on all political ads.
State Efforts to Combat Deepfakes
Election administrators across the country are working to mitigate the threat of disinformation ahead of the November election. States are hard at work on incident response preparation, also known as tabletop exercises.
Chief election officials have launched public awareness campaigns in preparation for the spread of election falsehoods. New Mexico Secretary of State Maggie Toulouse Oliver has made educating the public on the deepfake threat a priority, building out a webpage offering tips on how to spot AI-generated content. In addition to the awareness campaign, New Mexico in March enacted HB 182, which requires political campaigns to disclose when AI is used in ads. “Hopefully, just increasing awareness and requiring disclosure . . . will heavily discourage any of that potential activity. But if it were to happen, we’re hoping that forewarned is forearmed,” said Toulouse Oliver.
New Mexico is not the only state to have recently passed legislation in this area. State legislatures have become increasingly active in the last several months, and legislation regulating political deepfakes has received bipartisan backing in every state where such laws have passed.
As of July 2024, 23 states have passed legislation on the issue, with 17 states passing bills or amendments this year.
State bills tackling political deepfakes tend to fall into two categories: disclosure requirements and temporary bans. Legislation on disclosure requirements varies but most commonly requires a disclaimer to be placed on any political media created with the use of AI. For instance, in March, Wisconsin enacted A.B. 664. Under the law, campaign-affiliated entities already regulated under state law must add a disclaimer noting the use of generative AI to any content they release. Failure to comply is punishable by a $1,000 fine per violation. The law does not address AI-generated content released by entities that are not campaign-affiliated and not already regulated under state law.
“The use of AI to create a political ad is not inherently good or bad. Generative AI could be used to create a clever animation to illustrate a candidate’s views, or it could be used to create a realistic-looking video clip that makes it look like their opponent said something they never did,” said State Senator Mark Spreitzer (D-Beloit). In contrast, Arizona’s approach to disclosure is narrower: S.B. 1359 and H.B. 2394, unlike Wisconsin’s law, apply only to digital impersonation of a candidate or elected official.
However, H.B. 2394 would also create a civil cause of action, allowing an aggrieved party to file suit against the publisher creating and driving the spread of the material and, under certain circumstances, to seek monetary damages.
Several states, including Hawaii (S.B. 2687), California (A.B. 730), and Michigan (H.B. 5144), have enacted legislation banning AI-generated media that wrongfully depicts a candidate for public office within specific time frames before an election unless the media contains a disclosure identifying it as manipulated. Hawaii’s law, for instance, prohibits the distribution of “materially deceptive material” during election years between February and Election Day without a clearly visible or audible disclaimer. Some experts worry that this type of legislation – mandating disclosures only during a certain period leading up to an election – does not offer enough protection. “[I]nstead of restricting such depictions only during an election year, surely misleading depictions of real individuals (politician or not) should always be required to bear such a disclaimer,” said Travis Mandel, Associate Professor of Computer Science at the University of Hawaii at Hilo, who was awarded a federal grant in 2020 to research AI.
Federal Efforts Against Election Disinformation
While there is bipartisan interest in Congress on the need for AI guardrails in political ads, federal legislation has not yet advanced to floor action. The proposed bipartisan Protect Elections from Deceptive AI Act in the Senate would amend the Federal Election Campaign Act (FECA) to prohibit distributing materially deceptive AI-generated video, images, or audio related to candidates for federal office. Other prominent legislation includes the proposed REAL Political Advertisements Act, which would require disclosures of AI-generated media, and the DEEPFAKES Accountability Act, which would require watermarked disclosures on such content and establish a task force to advance public-private efforts in developing deepfake detection technologies. Federal agencies with the authority to regulate AI have been more active.
In August 2023, the Federal Election Commission (FEC) began a process to potentially regulate AI-generated deepfakes in political ads, voting to advance a petition to initiate a rulemaking that would clarify that FECA’s prohibition on fraudulent misrepresentation applies to deceptive advertisements created with generative AI. However, while comments on the proposal were due October 16, the FEC has not announced a formal rulemaking or adopted a new rule, which would require at least four Commissioners (and therefore at least one from each political party) to vote to advance a rulemaking.
At this point, the FEC seems unlikely to do so before the November election. However, during the comment period, 52 Members of Congress encouraged the FEC to adopt rules confirming that FECA’s fraudulent misrepresentation provision applies to ads using generative AI and requiring disclosures on those ads.
John Gardner & Mallory Block, Courtesy Conference Board