State vs Federal AI Regulation: Who Wins?
Here's the situation in early 2026: while Washington talks about AI innovation and deregulation, states are actually writing the rules. Colorado passed comprehensive AI legislation. California is building its own framework. New York, Illinois, Texas, and others are moving independently. And the federal government's response? Essentially: "We'll get to it eventually." The result is a regulatory landscape that resembles early internet governance — chaotic, contradictory, and potentially unworkable for companies operating nationally.
This isn't just a policy wonk debate. The tension between state and federal AI regulation has real consequences for every company that builds, deploys, or uses AI systems. And the constitutional question — who actually has the authority to regulate AI — is heading toward a confrontation that could reshape American technology policy for decades.
The State-Level Rush
States aren't waiting for Congress to figure out AI. They're acting because their constituents are experiencing the real-world impacts of AI right now. When a hiring algorithm discriminates against job candidates in Colorado, when a facial recognition system misidentifies someone in New York, when an AI-powered insurance claim gets denied unfairly in Illinois — those problems don't wait for federal legislation.
Colorado's SB 24-205 was particularly significant. It requires companies to conduct impact assessments for "high-risk" AI systems — those used in employment, lending, housing, education, and other consequential decisions. It mandates bias testing, transparency requirements, and consumer notification when AI is used in decisions that significantly affect them. It's the most comprehensive state-level AI law in the country, and it's likely to serve as a template for other states.
The state-level landscape so far includes:
- Colorado SB 24-205 — Comprehensive AI governance for high-risk systems, including impact assessments and bias testing
- California AI Transparency Act — Requirements for disclosure of AI-generated content and AI use in consequential decisions
- Illinois AI Video Interview Act — Consent and disclosure requirements for AI analysis of video interviews
- New York City Local Law 144 — Mandatory bias audits for automated employment decision tools
- Texas AI advisory proposals — Emerging framework focused on government AI use and accountability
The Federal Preemption Push
Tech companies and Republican lawmakers are pushing hard for federal preemption — a national AI law that would override state regulations. Their argument is straightforward: a patchwork of 50 different AI laws makes compliance impossible and creates a competitive disadvantage for American companies. A single federal standard, they say, would provide clarity and enable innovation.
But federal preemption is a double-edged sword. The most likely federal standard under the current administration would be significantly weaker than what states like Colorado and California have enacted. That means preemption wouldn't just create uniformity — it would eliminate some of the strongest consumer protections in the country. Democratic lawmakers, consumer advocacy groups, and state attorneys general are fighting to preserve state authority.
The Constitutional Question
The legal framework for this fight is surprisingly murky. AI doesn't fit neatly into traditional regulatory categories. It's not purely a telecommunications issue (FCC jurisdiction), not purely a consumer protection issue (FTC jurisdiction), not purely an employment issue (EEOC jurisdiction). It touches all of these and more. This ambiguity gives both states and the federal government plausible claims to regulatory authority.
The Commerce Clause of the Constitution gives Congress broad power to regulate interstate commerce, which would cover most AI systems. But the Supreme Court has also recognized states' rights to regulate in areas of traditional state concern — including consumer protection, employment, and civil rights. AI used in hiring decisions, for instance, is closely tied to state employment law, even if the AI system was developed in another state.
Who Actually Wins?
The honest answer is that neither side "wins" cleanly. States will continue to regulate AI applications that affect their residents, and companies will continue to lobby for federal preemption. The most likely outcome is a messy coexistence: a relatively weak federal framework overlaid on stronger state laws in some areas, with ongoing legal battles over where federal authority ends and state authority begins.
For companies building and deploying AI, the smart play is to design for the strongest regulatory requirements they're likely to face — which increasingly means designing to Colorado and California standards. If federal preemption eventually arrives, they'll already be compliant. If it doesn't, they won't be caught flat-footed.