“[A]I Approve This Message.” AI, Deep Fakes and Political Ads

With state and local election season in full swing and the 2024 presidential election just around the corner, political advertising spend continues to grow. And the advertising stakes continue to rise—not just because there are millions of dollars on the line, but because we’re starting to see what elections with AI-generated ads and deep fakes might look like. 

As we’ve written about previously, a number of federal and state political advertising laws govern how, and to what extent, political ads can appear online. In December 2022, the Federal Election Commission (FEC) adopted final rules addressing online political ads (“internet public communications”) and related disclosure requirements, stemming from a rulemaking it initiated in 2018. While the FEC’s rulemaking was pending, several states passed their own laws governing online political advertising. Generally, these laws place obligations on both the online platform on which the political ads run and the advertiser (or the party paying for the ad). These laws tend to regulate state (not federal) election ads and include various recordkeeping and ad disclosure requirements. In some states (California and Washington, for example), online platforms are required to collect and maintain certain information about both the content of the ad and the purchaser. The FEC’s new online political ad regulations impose somewhat similar requirements for online federal election ads.

However, one area where the states seem to be outpacing federal law is in regulating the use of AI or deep fakes in political advertising. Currently, only a handful of states have any kind of law that regulates the use of deep fakes with respect to political advertising. 

State Laws Restricting the Use of AI in Political Advertising

Minnesota 

HF 1370 criminalizes the use of deep fakes to influence an election or injure a political candidate. The law defines a deep fake as any video recording, motion-picture film, sound recording, electronic image or photograph, or any technological representation of speech or conduct, that is so realistic that a reasonable person would believe it depicts speech or conduct in which the depicted person actually engaged, and that was created using technical means (rather than, for example, mere impersonation).

The law applies broadly to local, state and even federal elections, prohibiting the use of deep fakes of any individual who seeks to be nominated or elected to a federal, statewide, legislative, judicial or local office (including special districts, school districts, towns, home rule charter and statutory cities, and counties). Importantly for online platforms, the law also criminalizes disseminating, or entering into a contract or agreement to disseminate, an ad that includes a deep fake, meaning that a platform that accepts an online political ad containing a deep fake could itself violate the law. Violations can carry fines ranging from $1,000 to $5,000, or up to five years in prison, depending on the nature of the violation.

Texas

Enacted in 2019, Section 255.004 of the Texas Election Code makes it a misdemeanor to create a deep fake video and cause it to be published or distributed within 30 days of an election with the intent to injure a candidate or influence the result of an election. Violations are punishable by up to a year in jail or a fine of up to $4,000.

Washington 

Washington recently passed SB 5152, a new deep fake political ad law that took effect in July. Washington’s law focuses on “synthetic media,” defined as an audio or video recording of an individual’s appearance, speech or conduct that has been intentionally manipulated, using generative adversarial network techniques or other digital technology, in a manner that creates a realistic but false image, audio or video. To be actionable under the law, the “synthetic media” (deep fake) must appear to depict a real person, action or speech and must produce a fundamentally different understanding than a reasonable person would have from seeing the unaltered or original version of the image or recording. Interestingly, the law does not outright ban the use of synthetic media in political advertising; instead, it requires that synthetic media include a disclosure stating, “This (image/video/audio) has been manipulated.” The sponsor of the electioneering communication can be held liable under the law (plaintiffs can also obtain an injunction or damages), but the law specifically states that a platform may be liable only if it transmits a communication subject to the FCC’s “equal time” rules (47 U.S.C. Sec. 315) and it removes the required synthetic media disclosure or otherwise alters the ad such that it qualifies as “synthetic media.”

California 

Until AB 730 sunset on Jan. 1, 2023, California had a law that prohibited the distribution of materially deceptive audio or visual material within 60 days of an election unless the material included a disclosure stating, “This (image/video/audio) has been manipulated.” Under the law, “materially deceptive audio or visual media” meant an image or an audio or video recording of a candidate’s appearance, speech or conduct that had been intentionally manipulated such that it would falsely appear authentic to a reasonable person and would cause a reasonable person to have a fundamentally different understanding than if they had heard or seen the unaltered, original version of the image or recording.

Looking Ahead to 2024

Over the past year, Illinois (SB 1742), Massachusetts (H 72) and New Jersey (SB 5510) all introduced bills addressing the use of deep fakes and election interference, but none passed. Michigan has several deep fake and AI-related election bills pending.

We expect an uptick in legislation regulating the use of deep fakes and AI in political advertising. Among the few state laws that have passed, some common concepts and definitions are emerging. States either prohibit the use of deep fakes and AI in political advertising outright (particularly close to an election) or, as in California and Washington, permit it but require prominent disclosures that the content has been altered.

It’s important to note that not all uses of AI in political ads are necessarily detrimental to candidates or elections. For instance, AI could be used to translate political ads into a number of languages, making the material more accessible to voters. It’s possible that new state laws will begin to expressly permit these narrower AI uses.

Key Considerations for Online Platforms Accepting Political Advertisements

Potential AI issues aside, it’s more important than ever for online platforms to have the infrastructure and policies required to accept and legally disseminate online political ads.

Going into this election season, online platforms should:

  • Understand the various state requirements and which obligations rest with the platform or the advertiser.
  • Consider how to collect and capture the required recordkeeping information from political advertisers (see the illustrative sketch following this list).
  • Review contracts with advertisers to ensure that the political advertisements provided to the platform include the required disclaimers and that the advertiser provides the required disclosures.
  • Implement policies and procedures for accepting and vetting online political ads, including obtaining assurances from advertisers that the ads do not include deceptively altered images or audio or other deep fakes.
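
To make the recordkeeping and vetting items concrete, below is a minimal, purely illustrative sketch in TypeScript of the kind of intake record a platform might keep. Every field name and the passesIntakeCheck helper are our own assumptions, not statutory requirements; the information a platform must actually collect varies by state (California and Washington, for example) and, for federal ads, under the FEC’s rules. The disclosure check assumes a Washington-style rule.

```typescript
// Hypothetical intake record for an online political ad.
// Field names are illustrative assumptions, not statutory text.
interface PoliticalAdRecord {
  purchaserName: string;            // who paid for the ad
  purchaserContact: string;         // contact information collected at intake
  adContent: string;                // the creative, or a reference to it
  amountPaid: number;               // spend, in USD
  datesRun: { start: Date; end: Date };
  candidateOrMeasure: string;       // what the ad supports or opposes
  usesSyntheticMedia: boolean;      // advertiser attestation at intake
  manipulationDisclosure?: string;  // e.g., "This (video) has been manipulated."
}

// Minimal vetting check, assuming a Washington-style disclosure rule:
// if the advertiser attests that the ad uses synthetic media, require
// the manipulation disclosure before accepting the ad.
function passesIntakeCheck(ad: PoliticalAdRecord): boolean {
  return !ad.usesSyntheticMedia || Boolean(ad.manipulationDisclosure);
}

// Example with hypothetical values:
const ad: PoliticalAdRecord = {
  purchaserName: "Example PAC",
  purchaserContact: "contact@example.org",
  adContent: "creative-12345",
  amountPaid: 2500,
  datesRun: { start: new Date("2024-10-01"), end: new Date("2024-10-15") },
  candidateOrMeasure: "Example Candidate for Governor",
  usesSyntheticMedia: true,
  manipulationDisclosure: "This (video) has been manipulated.",
};

console.log(passesIntakeCheck(ad)); // true: the disclosure is present
```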