States Forging Ahead with New AI Laws Despite Federal Opposition

Artificial intelligence (AI) innovation has moved from experimental to operational at breathtaking speed. Businesses are deploying generative and agentic AI tools in marketing campaigns, leveraging machine learning to drive pricing models, using chatbots for customer engagement, and integrating AI into employment, health care and creative workflows.

As questions about how AI models are built, trained, deployed and disclosed to consumers grow louder, businesses developing or using AI tools are operating within an uncertain regulatory landscape. No comprehensive federal law governing AI has been enacted, leaving state legislatures to chart their own paths, all in the shadow of the Trump administration’s clear preference for a light-touch approach to regulation—most recently expressed through an executive order issued at the end of December.

Nevertheless, hundreds of AI-related bills were proposed at the state level in 2025, and nearly every state—44 at last count—has at least one AI law on the books. As a result, a patchwork of laws is being enacted across the country to address a wide range of AI uses, from chatbot disclosures and digital performer labeling to algorithmic pricing notices and rules for use in making employment decisions.

Federal AI Regulation

At the federal level, the only major law that squarely addresses AI focuses on a very specific harm: nonconsensual intimate imagery, including AI-generated images or videos known as “deepfakes.”

The Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks (TAKE IT DOWN) Act, signed into law May 19, 2025, criminalizes nonconsensual publication of intimate images, requires platforms to remove covered content within 48 hours of notification and imposes penalties for distribution.

Congress overwhelmingly supported the TAKE IT DOWN Act, a feat no other federal AI bill has achieved. The only other proposal that received close to the same support was an attempt to place a 10-year moratorium on states’ ability to enact and enforce AI-related laws, which failed to pass last summer. Of note, however, the Senate recently passed another bill similar to the TAKE IT DOWN Act, called the Disrupt Explicit Forged Images and Non-Consensual Edits, or DEFIANCE, Act, which would enable victims of deepfakes to bring civil suits directly against AI providers. That bill is now being considered by the House.

States Filling the Void

In the absence of comprehensive federal regulation, states are filling the legislative void. Last year, three states—Colorado, Utah and Texas—enacted their own overarching AI laws. While each of these state laws differs in its specifics, all three attempt to address AI development and deployment in general, rather than focusing on a narrow issue or type of use.

The Colorado AI Act, for example, sets out a framework approach to protecting consumers from “algorithmic discrimination” by “high-risk” AI systems used to make decisions in key areas, including employment, health care, insurance and housing. The first of its kind in the nation, the law was set to take effect Feb. 1 but has been pushed back to June 30 in order to give the Legislature time to consider concerns raised about the law’s scope.

The law seemed poised to lead a wave of comprehensive state AI laws last year, but the initial momentum has fizzled in favor of a more cautious, piecemeal approach.
Among the AI laws enacted by states so far, other common regulatory trends have nevertheless emerged, including:

  • Transparency and disclosure. Many state laws aim to prevent consumer deception by requiring businesses to ensure consumers know when AI is used, such as when they are interacting with a chatbot or when certain content or data (such as images or videos used in specific contexts) is generated by AI. Some laws even mandate periodic reminders that an AI chatbot is not a human being.
  • Automated decision-making technologies. To protect consumer privacy, a number of states have enacted laws requiring businesses to offer consumers the option to opt out of the processing of their personal information for “profiling” used to make decisions that produce a legal or other significant effect on the consumer. Some states also require businesses to conduct a risk assessment for profiling that carries certain foreseeable risks.
  • Nonconsensual explicit images and political deepfakes. Many states are enacting laws similar to the federal TAKE IT DOWN Act and other protections in order to crack down on nonconsensual, explicit AI-created content and its distribution. Similarly, states commonly outlaw certain types of AI-generated deepfakes about political candidates or use in the context of elections.
  • Deepfakes and “digital replicas.” Several states have clarified that rights of publicity extend to AI-generated versions of a person’s voice, image or likeness, including granting protections to deceased individuals and their estates. Others have enacted laws that specifically address contracting with individuals for the creation and use of digital replicas using AI. Some states also expressly address and criminalize the use of deepfakes of individuals created for the purpose of committing fraud or other crimes.
  • Algorithmic discrimination. States are beginning to scrutinize algorithmic, data-driven decision-making and pricing models, including by requiring prescribed disclosures when companies use algorithms that rely on consumers’ personal information to set prices.
  • AI frontier models. To keep pace with the security risks associated with evolving, large-scale AI tools, a handful of states have passed or are considering laws regulating complex “frontier” models that are trained using massive computing power.

Executive Order Challenges State Laws

Complicating matters as states continue to pass their own AI regulations, the Trump administration issued its AI executive order, “Ensuring a National Policy Framework for Artificial Intelligence,” in December.

The executive order signals a federal preference for a centralized, minimally burdensome AI framework and expresses concern about the growing patchwork of state laws. It criticizes state laws as “increasingly responsible for requiring entities to embed ideological bias within models” and “sometimes impermissibly regulat[ing] beyond State borders, impinging on interstate commerce.”

The executive order directs the attorney general and federal agencies to challenge state laws that may be federally preempted, to consider restricting discretionary funding to states whose AI laws conflict with federal policy, and to develop recommendations for a uniform federal legislative framework. An executive order cannot itself invalidate state statutes, but it can influence enforcement priorities, shape future federal legislation and chill state legislative action. For now, state law continues to drive most AI compliance obligations.

The EU AI Act Under Review

AI regulation in the European Union, once at the leading edge of comprehensive legal frameworks, now appears both delayed and less certain, driven by the same concern expressed in the U.S. executive order: that AI regulation may stifle AI innovation and global competitiveness.

The EU AI Act, a comprehensive framework rolled out in August 2024, had a two-year timeline and a target for final implementation in August 2026. On Nov. 19, 2025, however, the European Commission announced its Digital Omnibus package to amend the EU AI Act with an aim to reduce business costs and improve innovation.

Notable proposed changes include simplifying the requirements for some high-risk AI systems, delaying the rules on high-risk AI originally due to take effect in August 2026, simplifying the rules applicable to small and midsized businesses, and granting a six-month grace period for certain transparency and marking obligations.

The official publication of the proposal in November triggered a formal legislative process that is currently underway. Most recently, on March 11, members of the European Parliament reached a preliminary deal on amendments to the EU AI Act, which will be reflected in a report and voted on this month by the Committee on Civil Liberties, Justice and Home Affairs and the Committee on Internal Market and Consumer Protection.

2026 Trends to Watch

As the end of the first quarter approaches, legislative trends will continue to focus on increasing transparency through expanded disclosure obligations, strengthening privacy around automated decision-making technology, regulating high-risk AI frontier models and addressing AI use in specific industries or other narrower AI use cases.

In particular, we predict 2026 will see:

  • Increased scrutiny of frontier AI models, with some states having already enacted frontier model frameworks
  • New proposed laws governing chatbots, particularly focusing on minors’ interactions with companion bots
  • More activity in the health care space, specifically dealing with consumers’ mental health
  • Continued activity around automated decision-making and price-setting
  • Continued uncertainty as the push and pull plays out between federal and state influence, between AI competitiveness on the global stage and consumer protection at home, and between political pressure and legislative action (or inaction, in the case of the U.S. Congress)

Future U.S. Framework and Compliance Strategy

We may eventually see a federal framework in the U.S. that preempts conflicting state laws, but with a politically divided Congress, that seems unlikely to materialize in the short term. In the meantime, we will be watching the Trump administration as it rolls out the initiatives set forth in its executive order and whether those efforts succeed.

But until Congress acts or the administration’s efforts to preempt or discourage state regulation come to fruition (and survive the inevitable constitutional and other state challenges), states remain the primary regulators of AI development and deployment.

While watching this space, businesses would be wise to continue their state law compliance efforts rather than pausing until the shifting regulatory landscape settles. Many state laws are already in effect and applicable to U.S. businesses, and some form of AI regulation will ultimately survive the current uncertainty. Abandoning compliance programs that may need to be rebuilt later could prove more costly than keeping them in place for the near term. To that end, Loeb’s state AI Legislation Tracker offers an interactive map to support your compliance efforts and help you follow state law developments around the country.