As the nation celebrates Independence Day, Washington just missed a rare opportunity to act with foresight and constitutional restraint. On Tuesday morning, the US Senate voted 99–1 to strike a provision from President Trump’s sweeping “One Big Beautiful Bill” that would have paused state and local regulation of artificial intelligence (AI) for five years — but only in jurisdictions that voluntarily accepted federal infrastructure funding.
The final version of the proposal was a far cry from the sweeping, 10-year national moratorium originally introduced in the House. In its Senate form, the AI provision embodied federalist principles: a targeted, temporary pause, tied to federal subsidies, that allowed states to retain their sovereignty by simply declining the funds. In other words, no state was compelled to comply, and essential laws — such as those protecting children or enforcing general consumer protections — were clearly exempt.
Unfortunately, political pressure and misunderstanding prevailed. Despite efforts by Sens. Marsha Blackburn and Ted Cruz to craft a narrow and defensible compromise, the Senate struck the measure. The loss is significant — not just for AI governance, but for the broader question of how Congress should respond to rapidly evolving technologies without trampling innovation or constitutional limits.
Artificial intelligence is already reshaping the American economy. From medical diagnostics to logistics and financial modeling, AI is driving change across industries. That pace of change has prompted growing calls for regulation, many of them preemptive and some misguided. Without a clear framework for cooperation between federal and state actors, we risk building a patchwork of conflicting local mandates that confuses developers, deters investment, and isolates jurisdictions from national progress.
That was precisely what the proposed Senate AI moratorium sought to prevent. The policy introduced a five-year pause on AI-specific regulations — but only in states and localities that accepted new federal funding through the Broadband Equity, Access, and Deployment (BEAD) Program. Jurisdictions that declined the money would retain full autonomy. It was not a national ban. It reflected a longstanding model of how Congress uses federal funding to align national priorities while preserving state choice.
Had it passed, the moratorium would have clarified the line between federal and state roles in regulating interstate technologies. The Constitution gives Congress the authority “to regulate Commerce… among the several States.” Artificial intelligence is the textbook example of an interstate, and often global, technology. Cloud-based infrastructure, machine learning platforms, and real-time data systems operate seamlessly across state lines. If every county or state imposes its own design or liability requirements, we will return to the kind of regulatory balkanization the Commerce Clause was designed to prevent.
Conservatives rightly defend the 10th Amendment and the rights of states. But federalism also requires clarity about which matters are truly national in scope. The Senate’s moratorium respected that distinction. It left room for general protections — such as fraud, consumer safety, child welfare, and likeness rights — to continue unimpeded. It didn’t silence states; it simply encouraged a pause on AI-specific rules that could disrupt nationwide deployment.
There is also a real cost to premature regulation. We’ve seen it before. The Sarbanes-Oxley Act, though well-intentioned, created compliance burdens that discouraged startups from going public. The 2015 net neutrality rules, grounded in 1930s telephone law, quickly became outdated, leading to regulatory whiplash. In both cases, lawmakers acted before fully understanding the technologies and ended up with rules that either backfired or collapsed.
The AI provision rejected by the Senate would have avoided those mistakes. It would have created breathing room: five years to observe the technology’s trajectory, assess its implications, and craft long-term frameworks that are effective and durable.
In return, participating jurisdictions would have received access to $500 million in federal funding for AI infrastructure, along with $25 million to negotiate master service agreements that reduce deployment costs, a benefit especially valuable for small towns and rural communities.
This wasn’t heavy-handed government. It was restrained, conditional, and constitutionally sound. It upheld conservative principles of limited government and responsible use of the spending power. And it offered a path forward on AI governance that prioritized deliberation over panic.
Congress should have passed it.
In rejecting this framework, the Senate didn’t just strike a policy provision — it walked away from an opportunity to lead with balance and wisdom, two things in short supply in Washington.
As artificial intelligence continues to reshape our economy and society, lawmakers will have to revisit this issue. When they do, they would be wise to begin where this proposal left off: with a federalist framework that honors state sovereignty, fosters national innovation, and gives the nation time to think before it regulates.