OpenAI Signed Defense Contract Hours Before US-Israel War on Iran. GPT Suffers Mass Exodus 

On March 7, OpenAI faced internal and public backlash after signing a massive Pentagon defense contract allowing military use of its AI for defense and intelligence. The deal ignited high-level resignations and a widespread consumer boycott of ChatGPT, which lost almost 2 million users in the early hours of the announcement. 

Large language models (LLMs), such as ChatGPT, Anthropic’s Claude, and Big Tech’s other intelligent models, are altering the balance of power in wartime analysis. For civilians, the matter demands urgent answers, and accountability, as their subscriptions help financially sustain these intelligent models. 

Users are demanding answers as to why AI for defense and intelligence is expanding at such a scale. In parallel, the broligarchs – tech oligarchs – seem indifferent to the ethical and moral limits of societies and to international law, blinded by nothing but revenue growth and AI world domination. 

Since 2023, the world has been experiencing first-hand how rapidly AI has entered national security systems. OpenAI’s defense agreement stands under that very same umbrella, and it is birthing an even deeper issue. 

Why are governments investing so heavily in AI for defense and intelligence, and how do these systems shape the information civilians receive when they ask about the use of AI in warfare, especially in developing countries – or simply countries in the war-torn Middle East? 

Militaries Desperately Need LLMs During War 

LLMs are attractive to military planners because they can process vast volumes of information faster than human analysts. For governments, it could hardly be more appealing. 

The Pentagon, or as Secretary Pete Hegseth likes to call it, the Department of War, views the adoption of AI in the defense industry, specifically LLMs, as a source of extremely intelligent tools for battlefield decision analysis and the rapid interpretation of complex data streams. 

In the US and Israel’s war against Iran, AI for defense and intelligence became central to the execution of the attacks on Iranian – and Lebanese – soil. The Pentagon’s Golden Child, Palantir, helped US commanders identify 1,000 Iranian targets through the data analytics giant’s Maven Smart System. 

With Anthropic’s Claude integrated into it, Maven helped identify Iranian targets within the war’s first 24 hours, simply by folding the LLM into the Pentagon’s AI defense systems. 

Within the US, these systems are tied to the Department of Defense’s AI strategy, in which policymakers see generative models as part of a revolutionary war AI vision that integrates data analysis, cyber operations, and automated decision support into military planning. 

OpenAI in Cahoots with Department of War 

With Sam Altman’s OpenAI, the story gets messier. 

On February 28 – the day of the US-Israel attack on Iran and just hours before the US strike on Tehran – President Donald Trump directed federal agencies to drop Anthropic, and OpenAI announced a new deal with the Department of War to deploy its models in classified settings. 

OpenAI confirmed its models could be deployed in classified environments under strict safeguards, arguing the US military needs advanced AI systems “especially in the face of growing threats from potential adversaries who are increasingly integrating AI technologies into their systems.” 

Even though CEO Altman admitted the negotiations were “definitely rushed,” the GPT-parent insisted its agreement protected against autonomous weapons use and mass domestic surveillance (in the US only). 

According to The Jerusalem Post, former Under Secretary of the Army Brad Carson told The Intercept he was “not confident in the language at all,” suggesting that safeguards blocking spy agencies, such as the National Security Agency (NSA), from using the models could actually hinder intelligence analysis during the war with Iran. 

OpenAI’s robotics chief, Caitlin Kalinowski, who previously led Meta’s augmented reality (AR) glasses program, resigned over the speed and governance of the deal. 

“Surveillance of Americans without judicial oversight and lethal autonomy without human authorisation are lines that deserved more deliberation than they got,” Kalinowski said in a post. 

Her resignation came just hours after rival Anthropic refused to authorize unconditional military use of its Claude model, fueling tensions between AI companies and government agencies and highlighting competition among emerging AI defense contractors seeking to shape AI for defense and intelligence capabilities. 

The framework, OpenAI argues, fits within its Department of Defense AI strategy and supports a broader agentic AI cyber defense strategy designed to strengthen AI and cyber defense capabilities while maintaining oversight. 

How Wartime AI Partnerships May Influence Public Answers 

The growing integration between AI companies and defense institutions raises concerns about how these systems could shape information flows during conflicts. Analysts say these partnerships increasingly blur the line between civilian technology platforms and defense AI solutions used for national security. 

Large language models like ChatGPT are already widely used by civilians to ask questions about geopolitics, wars, and military developments. When these systems are connected to national security partnerships, critics worry about potential biases, restricted information, or subtle framing changes in responses related to war, particularly as governments integrate them into Department of Defense AI initiatives and broader agentic AI cyber defense programs. 

According to tech data cited by TechCrunch, ChatGPT uninstalls surged 295% after the Pentagon deal became public, while Anthropic’s Claude rose to the top of the US App Store’s free charts. 

The episode revealed a new dynamic in the AI industry: public trust can shift quickly when users believe AI systems may influence surveillance policy or wartime narratives. For governments building AI for defense and intelligence, the technology promises faster analysis and strategic advantages. 

For civilians, however, the same systems are increasingly tied to emerging agentic AI cyber defense strategy frameworks, meaning the answers they receive from AI tools may be shaped by technologies simultaneously advancing military AI and cyber defense capabilities. 


Inside Telecom provides you with an extensive list of content covering all aspects of the tech industry. Keep an eye on our Intelligent Tech sections to stay informed and up-to-date with our daily articles.