
Trump’s Push for a Federal AI Standard Is Back
President Trump just reignited one of the most contentious debates in Washington: whether states should be allowed to pass their own AI laws. His public call for Congress to block or curb state-level AI regulation signals the start of a new regulatory chapter, one with direct implications for employers, developers, and any company deploying automated systems.
State AI rules are accelerating and fragmenting quickly. For multi-state employers and tech firms, this patchwork means inconsistent definitions, divergent timelines, and fast-rising compliance costs. A federal preemption bill, even a partial one, could reshape compliance strategies overnight. And if Congress fails to act, states will surge ahead with even more aggressive requirements in 2026.
AI innovation may be national, but AI regulation is becoming hyper-local. Trump is pushing hard to reverse that trend, urging Congress to create one federal AI standard instead of juggling fifty different regulatory regimes.
A sweeping proposal emerged in late May: House Republicans passed a 10-year ban on state AI laws.
The Senate immediately softened it: negotiators shifted to a narrower penalty, withholding federal broadband funding from states that regulate AI.
The ban shrank again: the proposed moratorium dropped from ten years to five.
Then the Senate stepped away entirely: by a 99–1 vote in July, senators removed the pause, citing the need for more study.
Trump’s renewed push brings this fight back to center stage.
1. Rising pressure for a national standard
Republican leaders, major tech CEOs, and the White House are aligned on one message: fragmentation risks slowing innovation and weakening the country’s competitiveness with China.
2. State AI laws are multiplying fast
Colorado’s AI Act, California’s disclosure mandates, New York City’s hiring audit rules, Illinois’s notice and bias obligations, and a new Democratic majority in Virginia are just the beginning. Dozens of ADMT-style bills are queued up for 2026.
3. Employers are caught between regimes
Companies already face conflicting transparency rules, diverging definitions of automated decision tools, different requirements for AI-assisted hiring and monitoring, and growing expectations for bias testing and documentation.
Scenario 1: The NDAA becomes the vehicle
A highly plausible path. If Congress folds an AI standard into the NDAA, expect a shorter moratorium, carve-outs for safety and discrimination, and language centered on national competitiveness.
Scenario 2: A standalone federal AI bill
Possible, but harder to execute. It would likely include a narrower preemption clause, outcome-based requirements, and compromises on labor-market issues like hiring algorithms and workplace monitoring.
Scenario 3: No federal action and states surge ahead
The National Conference of State Legislatures is actively fighting preemption. If Congress stalls, expect a 2026 wave of California- and Colorado-style ADMT rules, expanded disclosure mandates, New York-style hiring audits, and sector-specific laws in healthcare, insurance, finance, and workforce management. Virginia is a likely test case.
1. Map your AI tools, especially high-risk systems
Build a centralized inventory covering hiring and promotion tools, productivity scoring, performance monitoring, sentiment and voice analysis, predictive scheduling, and safety prediction systems.
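To make this concrete, here is a minimal inventory sketch in Python; the schema, field names, and sample tools are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    """One row in a centralized AI tool inventory (illustrative schema)."""
    name: str                         # e.g., "ResumeRanker" (hypothetical tool)
    vendor: str                       # supplier name, or "internal"
    use_case: str                     # "hiring", "monitoring", "scheduling", ...
    makes_employment_decisions: bool  # does it screen, score, or rank people?
    jurisdictions: list[str] = field(default_factory=list)  # where it is deployed
    last_bias_audit: str | None = None  # ISO date of most recent audit, if any

    @property
    def high_risk(self) -> bool:
        # Simple flag: decision-making tools that are actually deployed.
        return self.makes_employment_decisions and bool(self.jurisdictions)

inventory = [
    AIToolRecord("ResumeRanker", "AcmeAI", "hiring", True, ["CO", "NYC"]),
    AIToolRecord("ShiftForecaster", "internal", "scheduling", False, ["IL"]),
]

# Surface high-risk tools with no audit on record for review first.
for tool in inventory:
    if tool.high_risk and tool.last_bias_audit is None:
        print(f"REVIEW: {tool.name} ({tool.use_case}) deployed in {tool.jurisdictions}")
```

Even a spreadsheet works; the point is one authoritative list with enough fields to answer which tools make decisions about people, and where.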
2. Build a state patchwork strategy
Monitor developments in key jurisdictions such as California, Colorado, Illinois, Virginia, and New York. Track emerging ADMT bills and prepare for overlapping obligations.
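A lightweight way to track the overlap is a compliance matrix keyed by jurisdiction. The sketch below reuses the obligations named in this article as labels; real entries need effective dates and legal review.

```python
# Illustrative compliance matrix; labels mirror the rules discussed above.
PATCHWORK: dict[str, list[str]] = {
    "CO":  ["Colorado AI Act duties for high-risk systems"],
    "CA":  ["AI disclosure mandates"],
    "NYC": ["Automated hiring tool bias audits"],
    "IL":  ["Notice and bias obligations for AI-assisted hiring"],
    "VA":  ["Pending ADMT-style bills to monitor"],
}

def obligations_for(deployed_in: list[str]) -> dict[str, list[str]]:
    """Return tracked obligations for each jurisdiction a tool touches."""
    return {j: PATCHWORK.get(j, []) for j in deployed_in}

# A tool deployed in CO, NYC, and TX: TX returns [] because nothing is tracked yet.
print(obligations_for(["CO", "NYC", "TX"]))
```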
3. Prepare for bias testing requirements
Even with preemption, discrimination and applicant-screening issues will likely be carved out. Start developing data retention plans, bias measurement protocols, and documented rationales for each tool’s use.
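One widely used screen is the four-fifths rule from the EEOC’s Uniform Guidelines, the same impact-ratio math New York City’s hiring audit rules build on: compare each group’s selection rate to the highest group’s rate and flag ratios below 0.8. A minimal sketch with hypothetical numbers:

```python
def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total applicants).
    Returns each group's selection rate divided by the highest group's rate."""
    rates = {g: sel / total for g, (sel, total) in outcomes.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical screening results for a resume-ranking tool.
data = {"group_a": (48, 100), "group_b": (30, 100)}

for group, ratio in impact_ratios(data).items():
    flag = "FLAG for review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} [{flag}]")
# group_b: 0.30 / 0.48 = 0.62 -> below 0.8, so document the finding and response
```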
4. Update vendor contracts
Clarify training data sources, secure audit rights, require risk-mitigation commitments, and align vendors with the NIST AI RMF or comparable frameworks.
5. Stand up a cross-functional AI governance team
Bring HR, Legal, IT, and Security together. Companies that can demonstrate intentional governance will be better positioned regardless of how the federal fight plays out.
6. Watch the NDAA closely
If AI language is added, the law could move fast. Employers will need clarity on whether hiring tools, monitoring systems, or other workplace AI uses fall under any exemptions.
Whether Congress revives a federal AI standard or leaves the field to the states, the regulatory wave is already moving. Employers that map their tools, tighten governance, and prepare for bias testing now will be ready for whichever path Washington chooses next.
Fisher Phillips, founded in 1943, is a leading law firm dedicated to representing employers in labor and employment matters. With nearly 600 attorneys across 38 offices in the U.S. and 3 in Mexico, it combines deep expertise with innovative solutions to help businesses navigate workplace challenges.
