The AI industry just had its defining ethical moment, and it played out not in a research paper or a congressional hearing but in the gap between what companies say about safety and what they do when money is on the table. The Pentagon designated Anthropic a supply-chain risk after the company refused to allow its models to be used for mass surveillance and autonomous weapons applications. Within days, OpenAI signed its own Department of Defense deal. More than thirty employees from OpenAI and Google signed a public statement supporting Anthropic's position. Dario Amodei, Anthropic's CEO, called OpenAI's approach to military AI "safety theater."
This is not a contract dispute. This is the moment where the AI industry's stated commitment to responsible development is being tested against real consequences, and the results are revealing.
The Sequence of Events
The timeline matters because it tells a story about incentives. Anthropic had an existing relationship with the Department of Defense. When contract renewal discussions turned to expanded use cases -- specifically, applications involving surveillance of domestic populations and integration into autonomous weapons systems -- Anthropic drew two explicit red lines. These were not ambiguous edge cases. Mass surveillance of citizens and autonomous lethal systems are among the most clearly defined risks in AI ethics literature. Every major AI safety framework, from the OECD AI Principles to the EU AI Act, treats them as high-risk or prohibited applications.

The Pentagon's response was not to negotiate further or seek alternative vendors for those specific use cases. It was to designate Anthropic as a supply-chain risk -- a classification that effectively blacklists the company from the entire federal defense ecosystem. This designation is typically reserved for foreign adversaries or companies with demonstrated security vulnerabilities. Applying it to a domestic company for refusing a use case is, as far as I can determine, without precedent.
OpenAI's subsequent DOD deal was not explicitly framed as a response to Anthropic's situation, but the timing was unmistakable. The market read it clearly: there is a large and growing pool of defense AI spending, and companies that are willing to work within the Pentagon's terms will capture it while those that are not will be shut out.
The Employee Statement and What It Reveals
The public statement from more than thirty employees at OpenAI and Google supporting Anthropic's position is perhaps the most significant element of this entire episode. These are not activists or outsiders -- they are engineers, researchers, and product managers at Anthropic's direct competitors. Their statement explicitly supported the principle that AI companies should be able to maintain ethical red lines on use cases even when dealing with government customers.
This cross-company solidarity is remarkable for several reasons. First, it contradicts the competitive incentives of the individuals involved. If Anthropic is shut out of defense contracts, OpenAI and Google stand to gain market share. Second, it suggests that the people who build these systems share a broadly consistent understanding of where the ethical boundaries should be, even if their employers' commercial strategies diverge. Third, it signals that the talent pool for frontier AI development has ethical preferences that companies ignore at their peril. AI researchers and engineers are in extraordinarily high demand. Companies that are perceived as abandoning safety commitments for revenue risk losing the people who make their technology possible.
Dario Amodei's "Safety Theater" Accusation
Amodei's characterization of OpenAI's approach as "safety theater" is pointed and, I think, largely accurate. Safety theater is the practice of performing safety-related activities -- publishing principles, forming advisory boards, issuing press releases about responsible AI -- without those activities actually constraining behavior when it matters. The test of a safety commitment is not what you do when safety and profit align. It is what you do when they conflict.
OpenAI has published extensive documentation on its approach to AI safety. It has a safety advisory board. It has usage policies that, on paper, restrict harmful applications. But if those restrictions do not extend to its largest and most lucrative customer relationships, they function as marketing rather than governance. The question is not whether OpenAI has safety policies but whether those policies have teeth when a multi-billion-dollar defense relationship is at stake.
I want to be fair here: it is possible that OpenAI's DOD contract includes meaningful use-case restrictions that are not public. Defense contracts are not typically disclosed in full. But the optics of signing a defense deal immediately after a competitor was punished for refusing defense use cases are difficult to interpret charitably.
Google's Quiet Expansion
While Anthropic and OpenAI have dominated the headlines, Google has been quietly expanding its own Pentagon AI work. In 2018, employee protests forced Google to withdraw from Project Maven, a Pentagon drone surveillance initiative. Eight years later, the landscape has shifted. Google's defense AI work has expanded significantly, and the employee activism that once constrained it has diminished -- partly through layoffs and partly because the cultural moment has evolved. This pattern -- public ethical commitments that erode under sustained commercial pressure -- is a structural feature of how profit-driven organizations interact with lucrative government contracts.
Why AI Is Different
I believe AI warrants stronger constraints than conventional defense technology. Conventional weapons systems have defined physical capabilities and predictable behavior envelopes. An autonomous AI system operating in a surveillance or weapons context has a capability envelope that is difficult to characterize fully and is prone to unexpected behaviors. A fighter jet that malfunctions does so in physically bounded ways. An AI surveillance system that malfunctions can produce false positives that ruin lives at a scale and speed that outpace human oversight. AI-powered mass surveillance is qualitatively different from traditional surveillance because it can operate continuously, at population scale, with minimal human involvement. The civil liberties implications are not speculative -- we can observe them in countries that have already deployed these systems.
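To make the scale problem concrete, here is a back-of-the-envelope calculation. Every number in it is hypothetical -- the monitored population, the true-threat base rate, and the classifier's false-positive rate are assumptions chosen only to illustrate how the base-rate problem plays out at population scale:

```python
# Back-of-the-envelope sketch of the base-rate problem in automated
# surveillance. All numbers are hypothetical, for illustration only.

population = 300_000_000       # people under continuous monitoring
true_threat_rate = 1e-6        # assume 1 in a million is a genuine threat
false_positive_rate = 0.001    # assume a classifier that is 99.9% specific

true_threats = population * true_threat_rate
false_flags = (population - true_threats) * false_positive_rate

print(f"Genuine threats flagged (at best): {true_threats:,.0f}")
print(f"Innocent people falsely flagged:  {false_flags:,.0f}")
# Result: roughly 300 genuine threats versus roughly 300,000 false flags.
```

Even with a classifier that is wrong only one time in a thousand, roughly a thousand innocent people are flagged for every genuine threat -- a volume no human review process can meaningfully oversee. That asymmetry is what "minimal human involvement" means in practice.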
The Structural Problem
The deeper issue exposed by this episode is that we have no adequate governance framework for military AI. The decisions about which AI applications are acceptable in defense contexts are currently being made through commercial negotiations between vendors and the Pentagon, with no democratic oversight, no public accountability, and no independent review.

Anthropic made its decision based on its own ethical framework. OpenAI made its decision based on different calculations. Google is making its decisions largely outside of public view. None of these companies were elected. None of them are accountable to the public for these decisions. And the Pentagon, while accountable in theory through congressional oversight, is making procurement decisions about AI use cases without the kind of transparent policy debate that this topic deserves.
What we need is a legislative framework that defines acceptable and unacceptable military AI applications with democratic legitimacy. Until that exists, we are relying on the individual ethical commitments of private companies -- and as this episode demonstrates, those commitments are only as durable as the commercial consequences companies are willing to bear.
The Precedent Being Set
The precedent being set right now will shape the AI industry for decades. If refusing a military use case results in a supply-chain blacklist, the message to every AI company is unambiguous: compliance is not optional, and ethical red lines are commercially untenable. If Anthropic's legal challenge to the designation succeeds -- and the constitutional questions it raises about government coercion of private companies are substantial -- it establishes that AI companies have legal standing to maintain use-case restrictions even against their most powerful customer.
I do not know which outcome is more likely. I know which one I think is better for the long-term development of AI technology and for the society that technology will shape. The companies that build the most powerful technology in human history should be able to say no to uses they believe are dangerous. That principle is worth defending, even when -- especially when -- it is expensive.