Anthropic’s AI chatbot Claude didn’t just climb to the top of Apple’s US App Store in late February; it became the flashpoint in a high-stakes battle over whether consumer AI companies can resist military contracts without losing ground in the marketplace.
The surge, which multiple reports tied to a heated debate over AI weaponization and its use in Pentagon operations, quickly turned the app rankings into a proxy fight over where companies can draw the line on defense work.
According to data from Sensor Tower, one of the world’s leading app-ranking firms, Claude began the year outside the US top 100 but rapidly climbed the charts, reaching No. 6 on Wednesday, Feb. 25, and then No. 1 on Saturday, Feb. 28.
Anthropic said its free user base grew 60% and its paid subscriptions doubled, setting daily records for both.
That momentum came alongside the company’s hard-line stance on military uses, as Anthropic sought safeguards to prevent the US Defense Department from using its models for “mass surveillance” or “autonomous weapons” that kill without a human decision.
The stance drew a sharp response from President Donald Trump’s administration, which instructed federal agencies to stop doing business with the company. Defense Secretary Pete Hegseth went further, labeling Anthropic a “Supply-Chain Risk to National Security,” a designation aimed at boxing it out of defense contracting.
OpenAI took a different path. The maker of ChatGPT moved quickly to unveil its own deal with the Pentagon to deploy its models inside “classified military networks,” systems that operate in highly sensitive security environments.
The company said in a blog post explaining its approach that its rules prohibit mass domestic surveillance, autonomous weapons systems, and high-stakes automated decisions such as “social credit” scores.
Some experts questioned how meaningful those promises are. Mike Masnick, founder of Techdirt, which covers the intersection of technology with law, public policy, and civil liberties, argued that contract language could create back doors for surveillance under existing intelligence frameworks, such as Executive Order 12333 and other rules governing collection framed as foreign but capable of touching data tied to people inside the United States. The result, he argued, could be wider data collection even if the procedures remain legally defensible on paper.
OpenAI pushed back on that broader framing. Katrina Mulligan, who leads the company’s national security partnerships, said limiting operations to “cloud interfaces” technically prevents embedding AI directly into field hardware or weapons, arguing that deployment architecture matters more than the contract text.
The dispute then spilled into the tech companies themselves, with more than 360 employees at Google and OpenAI signing an open letter urging leadership at both firms to support Anthropic and reject what the signatories called a “divide and conquer” dynamic that pushes each company to compromise its principles in pursuit of government contracts.
The controversy quickly spread to social media, where reports tracked calls to boycott ChatGPT and switch to Claude as a “more ethical” option. The backlash intensified after OpenAI CEO Sam Altman said the Defense deal was “definitely rushed,” and that the “optics don’t look good.”
Despite Trump’s move to cut ties with Anthropic, press reports said the US military had already used Claude in intelligence operations during the joint US-Israeli attack on Iran on Feb. 28.
According to the Guardian’s reporting, the tool played a role in intelligence work, helped select targets, and supported field simulations. The episode underscores how hard it can be to remove a technology already embedded in operational workflows, even when a political decision orders it stopped.
Other reports tied the rift in part to Claude’s use in a prior US operation targeting Venezuelan President Nicolás Maduro, a backdrop that helped ignite the dispute over who gets to define “acceptable” national security use: the state, companies, or some mix of the two under legal oversight.
The ethical and political fight is unfolding as investment pours in. OpenAI disclosed a massive $110 billion funding round that lifted its valuation to $840 billion, signaling that the contest is not only about ethics and policy but also about who can finance the enormous computing infrastructure these models require, and who secures an early seat inside both government and markets.