Three Issues with the Trump Administration’s Proposed Preemption of State AI Laws

As more states introduce laws governing the use of AI across employment, housing, health care, and consumer protection, the resulting patchwork is becoming increasingly complex and difficult for national developers to navigate. However, the Trump administration’s response—seeking to bar states from adopting any AI-related rules, including in domains where they have long exercised primary authority—goes well beyond the problem it is trying to solve. Instead of building a federal framework and identifying where national standards are genuinely necessary, this approach rests on uncertain constitutional footing and risks weakening the federal government’s capacity to regulate AI coherently over the long term.

Against this backdrop, the executive order published today proposes a significantly expanded federal role in AI regulation. It instructs the Department of Justice to challenge selected state laws in court, directs the Department of Commerce to catalog state laws deemed “onerous” or inconsistent with federal policy, and authorizes agencies across government to condition discretionary grants on a state’s agreement not to enforce its own AI rules. It also calls for the Federal Communications Commission to consider a federal AI reporting and disclosure standard that would supersede conflicting state requirements, and asks the Federal Trade Commission to issue a policy statement describing when state laws requiring alterations to “truthful outputs” are preempted under the FTC Act. Taken together, these measures seek to centralize most AI-related rulemaking in Washington—sweeping in both the statewide governance statutes that fill a federal vacuum and the long-standing sectoral rules through which states regulate hiring, housing, health care, insurance, and consumer protection.

The first difficulty is that the EO treats “state AI laws” as a single category, when in practice they fall into two fundamentally different groups that emerged for distinct reasons. One consists of broad, cross-cutting governance requirements—impact assessments, transparency obligations, and documentation standards—such as Colorado’s SB 24-205 (2024), Connecticut’s SB 1103 (2023), and Vermont’s H.121 (2024). States enacted these largely because Congress has not yet established a federal baseline; they are stopgap measures rather than efforts to set national AI policy. The other consists of sector-specific rules governing how AI is used in domains that states have regulated for more than a century, irrespective of the underlying technology—most prominently employment, housing, health care, insurance, and consumer protection, as in New York City’s audit regime for automated employment decision tools or California’s insurance-underwriting guidance. These measures extend long-standing state responsibilities rather than attempting to regulate AI as a technology class. By collapsing these categories, the EO ignores a structural distinction that courts have long treated as constitutionally significant: they routinely apply a stronger presumption against preemption when federal action intrudes on areas historically regulated by the states.

In the cross-cutting category, preemption falters not because Congress lacks authority, but because it has not exercised it. An executive order cannot preempt state law on its own, as preemption must rest on congressional statutory authority. Without a federal framework for AI governance—risk classifications, documentation obligations, and transparency standards—there is no baseline against which state provisions could conflict. In the sector-specific category, the obstacle is different: long-standing state authority over employment, housing, health care, insurance, and consumer protection cannot be displaced by executive action alone. Preemption in these domains requires clear congressional authorization, which an executive order cannot supply.

A second difficulty lies in the mechanisms the EO asks federal agencies to deploy. The EO instructs the FTC to issue a policy statement asserting that state laws requiring changes to the “truthful outputs” of AI models are preempted under the FTC Act’s prohibition on deceptive practices—a novel, untested theory that would immediately invite First Amendment and administrative-law challenges. It directs the FCC to consider establishing a federal AI reporting and disclosure standard that would supersede conflicting state requirements, even though the agency lacks a clear statutory mandate over AI regulation and courts have recently been skeptical of expansive assertions of agency preemptive authority. The order further instructs agencies across government to condition discretionary grants—including BEAD funds—on a state’s agreement not to enforce its own AI laws, raising constitutional concerns under the Supreme Court’s limits on coercive funding conditions. Taken together, these mechanisms ask agencies to stretch their authority far beyond what courts have been willing to uphold, creating a gap between the EO’s ambition and the legal tools available to sustain it.

A third problem is that the EO focuses on constraining states rather than building the federal capacity that is actually missing. It offers no substantive federal standards for impact assessments, transparency, documentation, or sector-specific safeguards; instead, it instructs agencies to litigate, challenge state measures, and tie grant programs to state compliance. Although the EO gestures toward future legislation, it never articulates the federal framework against which preemption could operate.

What is absent is the architecture a functional national framework requires: shared taxonomies, documentation and testing baselines, interagency coordination mechanisms, sector-specific evaluative capacity, and AI sandboxes that generate evidence. Preempting state law without building these foundations does not create a national system—it simply strips away existing obligations without supplying federal standards to replace them. The federal posture then becomes largely reactive, and courts are left with no substantive basis on which to uphold preemption. The result is not national coherence but a regulatory vacuum—precisely the condition that led states to legislate in the first place.

A more effective federal strategy would begin by recognizing the distinction the EO overlooks. The cross-cutting elements of AI governance—risk assessments, documentation standards, and transparency obligations—implicate systems and markets that operate across state lines, and thus function as interstate regulatory questions rather than matters tied to state or local conditions. These are precisely the areas for which Congress is institutionally best placed to establish baseline requirements, both because the Constitution assigns interstate regulation to the federal level and because fragmented state-by-state regimes cannot realistically govern the deployment of large-scale AI systems. 

By contrast, the sector-specific rules governing AI use in employment, housing, health care, insurance, and consumer protection fall squarely within long-standing state regulatory authority. These are domains in which states have exercised primary competence for decades, and they remain state responsibilities unless Congress clearly establishes conflicting federal baselines. 

A coherent national framework does not require these elements to be sequenced rigidly, but it does require that federal standards and any associated preemption be developed in tandem rather than in reverse order. In practice, that means building the missing federal architecture—shared definitions, documentation and testing expectations, and sector-specific evaluative capacity—before or alongside decisions about where targeted preemption is warranted. Federal baselines and preemption can proceed in parallel, but preemption cannot operate meaningfully in the absence of the federal standards it is meant to uphold.

The administration is right to identify fragmentation as a challenge, and the need for national consistency is real. But a durable framework cannot be built by sweeping aside state rules before developing the federal standards that would replace them. A more credible path would pair federalization of the truly interstate elements of AI governance with targeted preemption where conflicts arise, anchored in the institutional architecture that only Congress can provide. That approach would give developers the clarity needed for innovation while preserving the state responsibilities that have long structured the U.S. regulatory system.