Federal Enforcement Phase Escalates

Trump Orders Federal Agencies to Stop Using Anthropic

On February 27, 2026, President Trump directed all federal agencies to cease using Anthropic, with a six-month phase-out requirement for the Department of Defense. The Pentagon simultaneously designated Anthropic a “supply-chain risk,” citing a contract dispute over safety guardrails.

The dispute: Anthropic sought to limit Claude’s use in mass surveillance of Americans and fully autonomous weapons systems. The Pentagon rejected these guardrails as incompatible with its procurement requirements. Trump threatened to use the “Full Power of the Presidency” if Anthropic did not comply with the transition.

Impact: This is the first time the Trump administration has used federal procurement authority as an enforcement mechanism against a major AI vendor for refusing to remove safety guardrails. The designation could block tens of thousands of federal contractors from using Anthropic’s AI. Anthropic has stated it will challenge the designation in court.

Context: The action undercuts Trump’s stated “freedom to innovate” philosophy, demonstrating willingness to use government authority to enforce AI policy constraints. OpenAI simultaneously announced a new Pentagon deal with similar human-responsibility principles.

FTC Policy Statement Deadline: March 11, 2026

The Trump administration directed the Federal Trade Commission to issue a policy statement by March 11, 2026, describing how the FTC Act applies to AI and when state laws requiring “alteration of truthful outputs” are preempted by federal law barring deceptive practices.

Significance: This policy statement could serve as the federal government’s legal justification for challenging state AI transparency and safety disclosure laws. The statement is expected to argue that certain state requirements (particularly California’s AI chatbot safety rules requiring disclosure of self-harm protocols) contradict federal authority over deceptive practices.

Federal AI Model Disclosure Standard: March 16, 2026

David Sacks (AI and Crypto Policy Officer) has been directed to explore, by March 16, 2026, a federal reporting or disclosure standard for AI models that would preempt state requirements. This standard would establish federal baseline requirements intended to prevent a patchwork of state disclosure laws.

Implication: If a federal standard is issued, the administration will likely argue it preempts California’s AB 2013 (training data transparency) and similar state requirements.

Federal Pressure on States to Block Child Safety Bills

White House Directly Pressures Florida and Utah on AI Child Protection Laws

Between February 24 and 26, the Trump administration contacted state legislators to oppose AI child safety bills:

  • Florida: White House contacted House Speaker Daniel Perez to oppose Gov. DeSantis’ AI Bill of Rights, which would establish protections against AI-generated non-consensual explicit images of minors, ban Chinese AI tools in Florida government, and provide parental controls on AI for minors.

  • Utah: The White House sent an official letter (dated Feb 16-17) to Utah legislators opposing HB 286, which would require AI safety and child protection transparency.

Reports indicate the White House plans to expand pressure to Tennessee and Nebraska.

Controversy: The Trump administration claims support for child safety but explicitly opposes these specific bills, citing the need for a federal “uniform framework.” However, Congress has not yet passed comprehensive federal AI legislation, raising questions about what federal framework would replace state protections.

Political risk: Utah Governor Spencer Cox (Republican) called preemption efforts “preposterous” on February 23, signaling that Trump’s preemption strategy faces resistance even from GOP-aligned governors.

State Legislative Activity Continues Despite Federal Threats

New Bills Introduced in Georgia, Louisiana, Minnesota, Rhode Island

On March 2, 2026, six new bills were introduced:

  • Georgia: Two bills on AI pricing/algorithmic disclosure
  • Louisiana: Two bills on AI regulation
  • Minnesota: One bill on AI in hiring/employment
  • Rhode Island: One bill on AI transparency

These introductions signal that despite federal preemption threats, states are continuing to advance AI legislation.

Connecticut: Algorithmic Pricing Transparency Advances

Connecticut is amending its data privacy law to add disclosure requirements covering algorithmic pricing, facial recognition, employment profiling, and precise geolocation data. A hearing is scheduled for March 4, 2026.

Trump EO 90-Day Evaluation Deadline: March 11, 2026

The Commerce Department and AI Czar David Sacks are required to publish an evaluation of “onerous” state AI laws by March 11, 2026. This evaluation will trigger the next phase of federal action: DOJ lawsuits targeting specific state laws and potential withholding of BEAD broadband funding.

Legal status: Harvard Law professor Carmel Shachar has stated the Trump EO is “possibly unconstitutional,” as preemption authority generally rests with Congress, not the executive branch. Congress has twice failed to pass preemption language, weakening the legal foundation for the EO.

State response: Colorado, California, and Texas have all publicly stated they will defend their AI laws. Bipartisan opposition from Republican Gov. Cox (Utah) strengthens states’ legal position.

What to Watch

  • March 4: Connecticut hearing on algorithmic pricing and facial recognition amendments
  • March 11: FTC policy statement due (preemption arguments); Commerce Dept evaluation of “onerous” state laws due (triggers lawsuits)
  • March 16: Federal AI model disclosure standard exploration due
  • June 30: Colorado AI Act (SB 24-205) takes effect — likely the primary target for DOJ lawsuit
  • August 2, 2026: EU AI Act high-risk system obligations take effect, applying also to non-EU providers serving the EU market

Sources