Letter to Congress: DoD's Threat Against Anthropic
I wrote this letter to my representatives in Congress about the Department of Defense's threat to designate Anthropic as a "supply chain risk" for refusing to remove safety restrictions from its military AI contract. You can read the full context in my post The DoD Wants AI Without Guardrails. Gamers Should Care.
Personalize it, find your representative at house.gov or your senators at senate.gov, and send it.
Dear [Representative/Senator] [Last Name],
I am writing as your constituent to express serious concern about the Department of Defense's reported threat to designate Anthropic — an American AI company headquartered in San Francisco — as a "supply chain risk" for maintaining standard contractual safety restrictions on its technology.
Under the Biden Administration, Anthropic negotiated terms for the use of its AI system, Claude, in classified military settings. These terms included two restrictions: that Claude would not be used for mass surveillance of American citizens, and that it would not be used to deploy lethal autonomous weapons systems. The Trump Administration initially accepted these terms in July 2025, then reversed its position.
I find this reversal alarming for several reasons:
Contractual safety restrictions are standard practice in defense. Defense contractors across every industry — aerospace, pharmaceuticals, munitions — routinely negotiate use limitations. Singling out an AI company for doing the same sets a dangerous precedent.
The "supply chain risk" designation is unprecedented for a domestic company. This designation has historically been reserved for foreign adversaries. Wielding it against an American company as retaliation for maintaining safety terms is an abuse of the designation's purpose and a threat to the rule of law in government contracting.
This undermines American AI competitiveness. If the U.S. government punishes domestic companies for exercising basic safety diligence, it will drive AI talent, investment, and development overseas. It sends a chilling message to every technology company that does business with the federal government.
The underlying safety concerns are legitimate. Restrictions on mass surveillance of Americans and on lethal autonomous weapons are not exotic demands — they reflect widely held values and existing legal principles. No private company should be coerced into abandoning them.
I urge you to:
- Investigate the Department of Defense's threatened use of the "supply chain risk" designation against Anthropic.
- Demand transparency on the Administration's reversal of previously accepted contractual terms.
- Support legislation clarifying that domestic companies cannot be designated as supply chain threats for maintaining lawful safety restrictions.
- Affirm Congress's role in overseeing the military's use of AI, particularly the surveillance of U.S. citizens and the deployment of autonomous weapons.
The strength of American industry depends on the government being a trustworthy contracting partner. If officials can accept terms one month and retaliate against a company for those same terms the next, no business — in any sector — is safe from arbitrary government action.
Thank you for your time and attention to this matter.
Respectfully,
[Your Name]