
Why Everyone Is Uninstalling ChatGPT

Michael Willson

The AI Industry’s Biggest Consumer Revolt — And What It Means for the Future

Something extraordinary happened in the last week of February 2026. Within 48 hours, uninstalls of the ChatGPT mobile app surged 295 percent day-over-day, one-star reviews on the App Store jumped 775 percent, and a grassroots campaign called “Cancel ChatGPT” claimed over 1.5 million participants. For the first time in its history, Anthropic’s Claude dethroned ChatGPT to claim the number one spot on the Apple App Store in the United States.

This was not a product failure. It was a values revolt — and it began with a $200 million government contract.

The Contract That Changed Everything

At the heart of this story is a dispute between Anthropic, the AI safety company behind Claude, and the U.S. Department of Defense (DoD). In July 2025, the Pentagon awarded Anthropic a contract worth up to $200 million to provide AI capabilities — notably making Claude the first AI model ever deployed on classified military networks.

However, Anthropic had long maintained two critical restrictions on the use of its technology: Claude could not be used for mass domestic surveillance of American citizens, and it could not power fully autonomous weapons systems. These were not fine-print limitations — they were central to Anthropic’s foundational safety philosophy.

In January 2026, Defense Secretary Pete Hegseth issued a directive requiring all DoD AI contracts to adopt “any lawful use” language, effectively demanding the removal of Anthropic’s restrictions. Hegseth then met directly with Anthropic CEO Dario Amodei, delivering an ultimatum: drop the guardrails, or lose the contract and be designated a national security “supply chain risk” — a label typically reserved for companies with ties to foreign adversaries.

Amodei’s response was unambiguous. “Threats do not change our position: we cannot in good conscience accede to their request,” he wrote in a public statement. Anthropic held firm, even as the Pentagon made clear that the financial and reputational consequences would be severe.

OpenAI Steps In — And the Internet Explodes

On Friday, February 27, 2026, President Donald Trump ordered every federal agency to immediately cease use of Anthropic’s technology. Defense Secretary Hegseth followed by formally designating Anthropic a “supply chain risk to national security” — an unprecedented action against a domestic American company.

Within hours, OpenAI CEO Sam Altman announced that his company had reached its own deal with the DoD to deploy ChatGPT on classified military networks. Altman maintained that the agreement included safeguards against mass surveillance and autonomous weapons — restrictions that were, ironically, nearly identical to the ones for which Anthropic had just been punished. The internet, however, was not convinced. For millions of users, the optics were damning: while Anthropic had taken a principled stand and paid for it, OpenAI had moved swiftly to fill the void.

The backlash was immediate and historic. Posts flooded Reddit and X with the message “Cancel ChatGPT.” The r/ChatGPT subreddit saw one of its most upvoted posts of all time: “You are now training a war machine. Let’s see proof of cancellation.” By Saturday, Claude’s installs had surged 51 percent day-over-day — while ChatGPT’s downloads fell by 14 percent.

Why Users Were Already Frustrated

The Pentagon deal was the tipping point, but the frustration had been building for months. FEC filings in January 2026 revealed that OpenAI President Greg Brockman and his wife had each donated $12.5 million to MAGA Inc., the pro-Trump super PAC — one of the largest individual contributions in the campaign’s history. Around the same time, it emerged that U.S. Immigration and Customs Enforcement was using a resume screening tool powered by GPT-4. For many users, the combination was deeply uncomfortable: the product they paid $20 a month for was now intertwined with both partisan politics and immigration enforcement.

Then came the Pentagon announcement. The dam broke.

“How did you go from ‘a tool for the betterment of the human race’ to ‘let’s work with the department of WAR’?” one user posted, a sentiment that resonated across platforms. Even pop star Katy Perry publicly shared a screenshot of Claude’s pricing page on X, appearing to signal a switch.

Claude’s Masterstroke: Solving the Migration Problem

As the “Cancel ChatGPT” campaign went viral and Claude rocketed to the top of the App Store, Anthropic faced a practical challenge that has historically prevented users from switching AI tools: the context problem.

Many loyal ChatGPT users had spent two or more years building a conversational memory with the tool: personal preferences, professional context, communication styles, and countless hours of conversation history. Switching to a new assistant meant rebuilding all of that from nothing, and that friction has kept users loyal to AI tools even when they grew frustrated with the platform.

Anthropic addressed this directly. The company launched a memory import tool that allows users to extract their saved memories, custom instructions, and conversation context from ChatGPT, Gemini, Copilot, and other tools, then feed that data directly into Claude. The process is straightforward: export your ChatGPT data, review your stored memories, and paste the relevant context into Claude with a simple prompt. Claude then takes approximately 24 hours to assimilate the new context and begins to know you the way your previous assistant did.
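For users who prefer a do-it-yourself route, the same export-review-paste flow can be scripted. The sketch below is a hypothetical illustration, not Anthropic’s actual tool: it assumes the `conversations.json` file produced by ChatGPT’s data export, where each conversation holds a `mapping` of message nodes, and it collects the user-authored messages into a single context prompt to paste into Claude.

```python
import json


def extract_user_messages(conversations):
    """Pull the text of every user-authored message out of a parsed
    ChatGPT data export (the conversations.json file). Each conversation
    stores its messages in a "mapping" of node-id -> node."""
    messages = []
    for convo in conversations:
        for node in convo.get("mapping", {}).values():
            msg = node.get("message") or {}
            role = (msg.get("author") or {}).get("role")
            if role != "user":
                continue
            parts = (msg.get("content") or {}).get("parts") or []
            # Parts can mix strings and non-text payloads; keep the text.
            text = " ".join(p for p in parts if isinstance(p, str)).strip()
            if text:
                messages.append(text)
    return messages


def build_claude_prompt(messages, limit=50):
    """Wrap the extracted messages in a prompt asking Claude to absorb
    the user's context. `limit` caps how much history is pasted."""
    history = "\n".join(f"- {m}" for m in messages[:limit])
    return (
        "Here is context exported from my previous AI assistant. "
        "Please remember my preferences and working style:\n" + history
    )


# Example usage (assumes conversations.json sits in the current directory):
#   with open("conversations.json", encoding="utf-8") as f:
#       print(build_claude_prompt(extract_user_messages(json.load(f))))
```

The review step matters: capping and skimming the extracted history before pasting keeps accidental secrets (passwords, client names) out of the new assistant’s memory.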

According to Anthropic, daily sign-ups hit record highs, free users grew by more than 60 percent since January, and paid subscribers more than doubled over the course of the year. Claude stood at number one on the Apple App Store not just in the U.S., but also in Germany and Canada.

You can find a step-by-step guide to migrating your ChatGPT memories to Claude here: Migration Guide

What This Means for the AI Industry

The Cancel ChatGPT movement marks something genuinely new in the AI landscape: the first time a consumer backlash has materially shifted market share in the industry. Previous controversies — copyright lawsuits, deepfake concerns, data privacy disputes — generated headlines but rarely moved the needle on downloads or subscriptions. This one did.

Several lessons are already emerging. First, the ethics premium is real. For years, AI safety advocates argued that responsible development would eventually become a competitive advantage. The events of late February 2026 are the most powerful evidence yet that this thesis holds in the consumer market.

Second, switching costs in AI are lower than the industry assumed. Despite the belief that users would never abandon tools they had trained over years, Anthropic’s memory import feature demonstrated that migration barriers can be dissolved almost overnight — and that users are willing to make the switch when their values are at stake.

Third, the Anthropic-DoD standoff has broader implications for the entire technology sector. The designation of a major domestic AI company as a “supply chain risk” — a label with no legal precedent against an American firm — has alarmed legal scholars and set the stage for a landmark court battle. Anthropic has vowed to challenge the designation.

The Bigger Picture

To be clear, the situation is not without complexity. Critics have pointed out that Anthropic’s own existing partnerships — including a 2024 deal with Palantir and Amazon Web Services granting U.S. intelligence agencies access to Claude — raise similar questions about how AI is used in national security contexts. Sam Altman, for his part, later admitted the Pentagon deal was “rushed” and amended the contract. The full story is still unfolding.

But in the court of public opinion, the narrative is clear: an AI company drew a line in the sand to protect civil liberties, paid an enormous price for it, and gained the trust of millions of users in return. The AI race is no longer just about who builds the smartest model. Increasingly, it is about who users trust to build the right one.
