When AI Met the Military: OpenAI's Pentagon Deal and Why It Matters

Two weeks ago, OpenAI signed a classified deal with the Pentagon. The kind that puts AI inside the military’s most secure systems, right where war plans get made.
What made it explosive was how it happened.
On February 27, a rival AI company called Anthropic walked away from the negotiating table with the Department of War. Anthropic drew two hard lines: its AI would not spy on Americans without a judge's approval, and it would not power weapons that kill without a human pressing the button. The Pentagon wanted full freedom. Anthropic said no.
Within hours, Secretary of War Pete Hegseth labeled Anthropic a "supply chain risk to national security," a phrase normally reserved for foreign enemies. President Trump called them a "woke company" trying to dictate how the military fights. Federal agencies were ordered to stop using Anthropic's products immediately.
That same night, OpenAI announced it had signed a deal to fill the gap.
From “no military use” to Pentagon supplier in two years
OpenAI used to ban military use of its technology outright. Then in January 2024, they quietly rewrote those rules. Instead of banning the military as a customer, they banned only specific actions, like building weapons. That opened the floodgates. Within months, OpenAI was running pilot programs with the Pentagon worth up to $200 million. By early 2026, more than a million military personnel were using OpenAI's tools daily through a platform called GenAI.mil.
What the deal actually says (and doesn’t)
OpenAI CEO Sam Altman claimed the contract includes three “red lines”: no mass domestic surveillance, no autonomous weapons, and no high-stakes automated decisions.
But the contract only bans “intentional” surveillance. Under existing intelligence laws, the government already collects enormous data on Americans “incidentally” while targeting foreign subjects. Nothing stops the military from running that data through AI. A former Department of Justice official told The Intercept the word “intentional” is the “get out of jail free card.”
On weapons, OpenAI says their models run on cloud servers only, not on battlefield devices. But if an AI generates targeting data and a stressed commander rubber-stamps the recommendation without truly checking it, does “human in the loop” mean anything?
The full contract has never been made public.
The fallout
Caitlin Kalinowski, OpenAI’s head of robotics, resigned on March 7. She wrote that “surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.”
Over 200 employees from OpenAI and Google signed an open letter standing with Anthropic. ChatGPT uninstalls spiked 295%. Anthropic’s Claude app shot to number one on the App Store.
At an all-hands meeting, Altman reportedly told staff: “You do not get to make operational decisions. So maybe you think the Iran strike was good and the Venezuela invasion was bad. You don’t get to weigh in on that.”
Why OpenAI took the deal
OpenAI said it signed the contract to support the military and to calm tensions between the government and the AI industry after the Anthropic blowup. But there’s more to the picture.
In October 2025, OpenAI completed a major corporate restructuring. It went from a nonprofit with a capped-profit subsidiary to a for-profit public benefit corporation valued at $500 billion. For the first time, CEO Sam Altman received direct equity in the company. A massive defense contract doesn’t just bring in revenue. It boosts the company’s valuation, and that benefits everyone holding shares.
There’s also the political angle. OpenAI co-founder Greg Brockman donated $25 million to MAGA Inc., Trump’s super PAC, in September 2025. The AI industry has poured money into politics more broadly: OpenAI helped launch a super PAC effort aimed at blocking state-level AI safety laws in favor of lighter federal rules. By becoming the Pentagon’s go-to AI provider, OpenAI positions itself squarely inside the Trump administration’s “AI Action Plan,” which pushes for fast deployment and minimal regulation.
None of this means the deal was purely about money or politics. National security is a real concern, and AI will inevitably play a role in defense. But it’s worth noticing that the company saying yes to the Pentagon also happens to be the one with the most to gain from saying yes.
Why this matters
Right now, AI is being used in actual military operations. Anthropic’s Claude was reportedly used for intelligence during U.S. strikes against Iran before getting kicked off military systems. OpenAI is replacing it. And they’re already in talks with NATO to expand into 32 countries.
We’re living through a moment where the lines between tech companies and the military are disappearing. There aren’t clear laws governing any of this. Both companies have asked Congress to write AI legislation, but right now the rules are being set in classified contracts nobody can read.
This isn’t just a tech story. It’s a story about what kind of world we’re building, and whether anyone’s actually steering.