Who Decides
On Karp, Congress, and the governance vacuum at the center of military AI
from Global Drafts
The room in the photograph is empty. It has microphones, chairs, wood paneling, and overhead lighting. It has everything a decision requires — except the people authorized to make it.
That room is where this story should have been resolved.
It wasn't.
On Friday at 5:01pm ET, a deadline expired.
By 7pm ET, Secretary of War Pete Hegseth had designated Anthropic a "Supply Chain Risk to National Security" — a label previously reserved for foreign adversaries — and ordered every federal contractor, supplier, and partner that does business with the United States military to immediately cease all commercial activity with the company.
Donald Trump posted on Truth Social in capital letters.
Anthropic held its line.
And the room stayed empty.
The Question Nobody Asked Congress
This week, two parties — a San Francisco AI company and the executive branch of the United States government — fought a public battle over who gets to define the limits of artificial intelligence in warfare.
They fought with ultimatums, deadlines, open letters, Truth Social posts, and a designation that cascades through the entire defense industrial base.
What they did not do — what neither side could do — was pass a law.
Congress has not passed a single piece of legislation governing military AI. There are no statutes defining what "autonomous weapons" means in the context of large language models. There are no laws specifying what authorization mass surveillance requires in an era of automated deanonymization pipelines. There is no legislative framework establishing who — the state, the contractor, the model developer — bears responsibility when an AI system makes a decision that kills someone.
The governance vacuum is not an accident. It is a choice — made by omission, over years, as the technology outpaced the institution's willingness to engage with it.
Into that vacuum, those two parties stepped. And resolved the question with a Friday deadline.
Karp's Question
Alex Karp, the CEO of Palantir, said something this week that cut closer to the structural truth than anything either side in the Anthropic dispute produced.
"The core issue is: who decides?"
Not whether the policy is right. Not whether you agree with the mission. Who decides.
Karp's answer is clear and consistent: elected leaders, through democratic processes, with accountability to the people. Not Silicon Valley executives. Not terms of service. Not a model card.
"A small island in Silicon Valley that would love to decide what you eat, how you eat, and monetize all your data should not also decide who lives in your country and under what conditions."
It is a strong argument. And it would be more convincing if the elected leaders had actually decided.
They haven't.
The Pentagon is using a Cold War emergency law — the Defense Production Act, written to commandeer steel mills — to resolve a question about AI governance that the legislative branch has declined to address. The executive branch is not executing a democratic mandate. It is filling a legislative void with emergency powers.
"Who decides" is the right question.
The answer this week was: not Congress.
What Hegseth Actually Said
Read Hegseth's statement carefully — past the rhetoric about arrogance and betrayal — and the structural claim is precise.
"The Department of War must have full, unrestricted access to Anthropic's models for every LAWFUL purpose in defense of the Republic."
Lawful. The word does the work.
Legality, in Hegseth's framework, is the Pentagon's responsibility as end user. Anthropic's role is to provide the capability. The state determines the limits of its own use.
This is a coherent position. It is also the position that produces the ESRC pipeline problem — the one we analyzed earlier this week. When a state actor determines the limits of its own surveillance capabilities, the limits tend to expand.
Anthropic's counter-argument is also coherent: some use cases are "simply outside the bounds of what today's technology can safely and reliably do." The company is not claiming moral authority over military decisions. It is claiming technical authority over what its own system can safely perform.
Two coherent positions. No arbiter.
The room is empty.
The Designation and What It Means
"Supply Chain Risk to National Security."
Until Friday, that designation had been applied to Huawei, to certain Chinese semiconductor manufacturers, to companies with direct ties to foreign state actors.
It is now applied to Anthropic — an American company, founded by Americans, funded by American capital, that refused to remove two contractual restrictions the Pentagon itself had originally agreed to.
The cascade effect is significant. Every Boeing supplier that uses Claude for documentation. Every Lockheed Martin contractor that runs Claude on internal systems. Every defense-adjacent company that built workflows on Anthropic's API. All of them must now certify that they do not touch Anthropic's products — or lose their military contracts.
The designation is not just punitive. It is architectural. It rewires the entire defense industrial base's relationship with one of the most capable AI systems currently available.
And it was done without legislation. Without judicial review. Without a vote.
The Patriotism Standard
Hegseth's statement ended with a phrase that deserves to be read slowly.
Anthropic will provide services for six months "to allow for a seamless transition to a better and more patriotic service."
Patriotic.
This is the new procurement criterion for AI systems that will run on classified military networks, process intelligence data, and inform decisions about targets and operations.
Not capability. Not safety. Not reliability.
Patriotism.
Jack Shanahan, the retired Air Force lieutenant general who led the Pentagon's Joint Artificial Intelligence Center, said this week that large language models are "not ready for prime time in national security settings" — particularly not for autonomous weapons. He called Anthropic's redlines "reasonable."
The system replacing Claude will not be selected because it is more capable, or safer, or more reliable. It will be selected because it does not have redlines.
That is the standard the word "patriotic" is being used to establish.
The Governance Gap Is the Story
Step back from the drama — the capital letters, the Truth Social posts, the Community Notes, the open letters — and the structural pattern is clear.
A technology became load-bearing infrastructure for national security before the institutions responsible for governing it developed the capacity to do so.
The Manhattan Project resolved this problem by nationalizing the entire supply chain. No private redlines. No terms of service. Federal ownership, federal accountability, federal decision-making.
The AI equivalent was not nationalized. It was commercialized — and the governance frameworks that should have accompanied commercialization were not built.
Congress did not act.
The courts have not been asked.
The executive branch filled the void with emergency powers and a Friday deadline.
The question "who decides" was answered this week by default — not by design.
What Comes Next
Anthropic will transition out of Pentagon systems over six months.
The classified networks that ran Claude will migrate to Grok, or a custom model, or whatever system agrees to operate without the two restrictions Anthropic would not remove.
The open letter from 200+ Google and OpenAI employees will not change Google's or OpenAI's contract negotiations. The solidarity was real. The leverage was not.
The ESRC pipeline — the $1 deanonymization tool co-developed with Anthropic — will continue to exist. The capability does not disappear with the contract.
And the room will remain empty.
Until Congress acts — or until the next company refuses, and the next deadline is set, and the next designation is issued — the governance vacuum will be filled the same way it was filled this week.
By whoever shows up.
With whatever authority they can claim.
On a Friday afternoon.
— Global Drafts