Client Alerts & Insights

Where AI Regulation Stands Today

Part 1: Will Federal AI Policy Preempt State AI Laws? What Companies Should Expect

April 1, 2026

Key Takeaways

  • The White House has released a National Artificial Intelligence Legislative Framework and new executive orders aiming to establish a single, nationwide standard for AI regulation, but for now, companies must still navigate a patchwork of state and federal AI laws.
  • Despite federal efforts to preempt state laws, true uniformity will take time—executive orders alone cannot override state authority. This means businesses face ongoing legal uncertainty, dual compliance burdens and increased litigation risk as federal and state rules continue to evolve and sometimes conflict.
  • Building robust AI governance, documentation and risk assessment processes now will help manage overlapping obligations and position organizations for future regulatory changes. Companies should closely monitor both federal and state AI developments, maintain flexible compliance programs and prepare for regionally variable enforcement.

On March 20, 2026, the White House released its National Artificial Intelligence Legislative Framework (“AI Framework”) addressing six key objectives: (1) protecting children and empowering parents; (2) safeguarding and strengthening American communities; (3) respecting intellectual property rights and supporting creators; (4) preventing censorship and protecting free speech; (5) enabling innovation and ensuring American AI dominance; and (6) educating Americans and developing an AI-ready workforce.

For the last two years, AI governance, privacy and regulatory enforcement have resembled a recurring cycle: rapid bursts of state legislative activity; uneven standards; varying state enforcement theories; and ultimately, federal officials announcing plans to resolve fragmentation through a national framework. Federal AI policy has shifted from broad, agency-driven governance efforts toward an explicit push for a uniform national standard—accompanied by a litigation strategy aimed at challenging state laws deemed onerous or extraterritorial. States, in turn, are likely to challenge federal preemption on a piecemeal basis. As a result, companies may face extended periods of dual compliance and should closely monitor regulatory developments.

Two executive actions form the backbone of the current federal approach: Executive Order (“EO”) 14110, “Safe, Secure and Trustworthy Development and Use of Artificial Intelligence” (Oct. 30, 2023), and EO 14365, “Ensuring a National Policy Framework for Artificial Intelligence” (Dec. 11, 2025).

EO 14110 directed federal agencies to promote “safe, secure and trustworthy” development and use of AI through standards-setting, civil rights protection and cross-government risk-management frameworks. EO 14365, however, is far more direct in setting the foundation for a central AI governance framework. It states that the United States should adopt a “minimally burdensome national standard—not 50 discordant State ones,” and takes several steps toward that objective.

First, EO 14365 directs the Attorney General to establish an AI Litigation Task Force to identify and challenge state laws that may be preempted by federal law, including under the Commerce Clause, or that the Attorney General deems inappropriate. This marks a significant shift from EO 14110, which largely operated through existing agency authority and did not purport to displace state law. Second, EO 14365 instructs the Commerce Department to evaluate “onerous” state AI laws, including those compelling altered “truthful outputs” or disclosures that may implicate the First Amendment. Third, it seeks to restrict Broadband Equity Access and Deployment Program funding for states with identified “onerous” AI laws. Fourth, it directs the Federal Communications Commission to consider whether to adopt federal reporting and disclosure standards that would preempt conflicting state requirements. Fifth, it tasks the Federal Trade Commission (“FTC”) with issuing a policy statement addressing when state laws requiring altered model outputs may be preempted under the FTC’s Unfair and Deceptive Acts and Practices (“UDAP”) authority. Finally, it instructs senior White House advisors to jointly prepare a legislative proposal for a uniform federal AI policy framework that would preempt state AI laws.

The AI Framework further emphasizes the importance of a uniform nationwide standard, reasoning that a “patchwork of conflicting state laws” would undermine AI development.

While EO 14365 repeatedly asserts the need for a national standard and criticizes the patchwork of different state laws, the question remains whether executive orders and policy frameworks can actually establish preemption.

They cannot—at least not directly. Federal preemption arises in three ways: (1) express preemption, where Congress explicitly displaces state law; (2) field preemption, where federal regulation is so pervasive that it occupies an entire field; and (3) conflict preemption, where state law conflicts with federal law or frustrates federal objectives. Executive orders fall outside the first two categories. They do not carry the force of a statute expressly displacing state authority, nor does federal AI regulation yet “occupy the field.” Even EO 14110’s whole-of-government framework relies on sector-specific agency powers (e.g., FTC, CFPB, EEOC), which vary widely. Meanwhile, states continue legislating at scale: the National Conference of State Legislatures’ Artificial Intelligence 2025 Legislation report shows all 50 states, several U.S. territories and D.C. introduced AI-related bills, with dozens now enacted.

In the near term, conflict preemption and targeted invalidation are the most realistic avenues for displacing specific state AI laws. EO 14365 offers insight into likely federal flashpoints:

  • Extraterritorial Regulation and Interstate Commerce: Federal policy statements highlight concerns that some state AI laws effectively regulate conduct beyond state borders or functionally impose national standards. EO 14365 explicitly directs challenges to state laws that “unconstitutionally regulate interstate commerce.” Where a state requirement cannot be confined to in-state operations, dormant Commerce Clause and conflict-preemption challenges are likely. Companies deploying AI systems nationwide should watch litigation involving state laws that dictate model design or outputs on a nationwide basis.
  • Compelled Outputs and Federal Consumer Protection Policy: EO 14365 signals that mandates requiring altered or “corrected” outputs may conflict with UDAP principles. State requirements compelling modifications that federal regulators view as misleading or inconsistent with federal standards may face conflict-preemption challenges.
  • Disclosure and Reporting Requirements: Federal agencies are evaluating uniform disclosure and reporting standards for AI systems. If such federal standards emerge, state regimes imposing materially different or incompatible requirements may be challenged. However, the federal process is likely to be incremental, and companies should expect a prolonged period of overlapping federal and state disclosure obligations.

Nevertheless, conflict-preemption lawsuits will face challenges. Many states have grounded their AI laws in traditional police powers and tailored them to discrete harms—making them far less susceptible to federal displacement absent congressional action. See Rice v. Santa Fe Elevator Corp., 331 U.S. 218, 230 (1947) (preemption analysis begins “with the assumption that the historic police powers of the States were not to be superseded… unless that was the clear and manifest purpose of Congress”). Even federal policy statements that argue for greater preemption, such as EO 14365, expressly contemplate carve-outs for child protection and state procurement and use of AI.

The AI Framework similarly calls on Congress to act—suggesting that federal officials recognize that executive action alone is insufficient to displace the bulk of state regulation.

The federal initiative to challenge state AI laws will be a long-term project, likely producing a lengthy period of litigation-driven clarification rather than immediate uniformity. The AI Litigation Task Force will play a central role in shaping the boundary between permissible and preempted state regulation. This environment will create significant uncertainty for regulated entities, as courts in different jurisdictions may reach divergent conclusions about similar state provisions—turning compliance into a complex, jurisdiction-specific puzzle requiring careful legal analysis.

Companies should anticipate regionally variable enforcement and compliance expectations until appellate courts resolve key constitutional and preemption questions or Congress reacts to the AI Framework. In the meantime, companies should be prepared for heightened enforcement risk and parallel federal-state obligations, particularly in areas such as data governance, model transparency and litigation exposure.

For example, robust AI governance programs—including system inventories, risk assessments, testing protocols and documentation—will remain essential regardless of whether specific state laws are preempted. Companies should map state AI requirements against their likelihood of being federally challenged and design compliance programs with sufficient flexibility to align with emerging federal standards. Practical steps include incorporating audit, documentation and compliance-cooperation provisions into vendor agreements to manage evolving obligations. Such measures can be aligned with federal expectations reflected in EO 14110.

Part 2 of this series will discuss the current state of enforcement and how state attorneys general are becoming the primary AI enforcers.

The Benesch team stands ready to help you navigate overlapping federal and state AI laws and to advise on regulatory developments and compliance strategies. If you have any questions, please contact the author or your contact at Benesch.