OpenAI’s decision to accept a government contract that many people believed had originally been intended for Anthropic triggered a backlash that spread quickly across social platforms. What might have been a tense but manageable announcement instead collided with a pair of online campaigns that gave the moment real momentum.
“Cancel ChatGPT” and “QuitGPT” became rallying points for users who felt the company had crossed a line by cooperating with the current administration on a project tied to national security. The result was a nearly 300 percent spike in uninstalls, a number that reflected not just frustration but a deeper fear that OpenAI was capitulating in ways that could have long-term consequences.
The uninstall surge did not happen in a vacuum. Users were already uneasy about the idea of an AI company working closely with the federal government, especially after reports that Anthropic had declined the same contract due to concerns about domestic surveillance and weapons development. When OpenAI stepped in, the timing made it look like the company was willing to accept terms another lab had rejected on principle.
That perception fueled the online campaigns. “Cancel ChatGPT” threads filled with users posting screenshots of their cancellations, while “QuitGPT” encouraged people to uninstall the app entirely and switch to alternatives. These campaigns framed the contract as a dangerous alignment with government power rather than a neutral business decision. For many participants, uninstalling was a way to signal that they did not want their everyday tools tied to policies they viewed as overreaching.
The government’s interest in advanced AI systems has been growing for years, and both OpenAI and Anthropic have been navigating that relationship carefully. Anthropic’s refusal to loosen restrictions on surveillance and autonomous weapons became a defining moment in its public identity. When the company was dropped from the contract, it reinforced the idea that it was willing to walk away from lucrative opportunities to maintain its principles.
OpenAI’s acceptance of the contract immediately afterward created a sharp contrast. Even if the company believed it could shape the work responsibly, the sequence of events made it appear as though it had stepped into a role another lab had rejected for ethical reasons. That narrative spread quickly, and once it took hold, the online campaigns amplified it far beyond the initial announcement.
Sam Altman’s attempt to regain control of the story
Sam Altman responded by acknowledging that the rollout had been mishandled. He described the announcement as rushed and poorly communicated, and he emphasized that OpenAI was not trying to undercut Anthropic or compromise its own values. The company revised the contract language to include explicit limits on domestic surveillance and to restrict certain intelligence agencies from using its systems without separate agreements.
These changes were meant to reassure users, but reactions were mixed. Some appreciated the clarification. Others argued that the revisions did not address the core concern, which was the company’s willingness to take the contract in the first place. Former OpenAI researchers added to the skepticism by urging the company to release more details if it wanted to rebuild trust.
Users are no longer treating these platforms as neutral tools. They are treating them as political and ethical actors whose decisions carry real weight. Given the timing, the optics, and the speed of online organizing, the strength of the reaction is understandable. It is also clear that OpenAI is trying to balance its desire to contribute to national security with its stated commitment to safety and civil liberties.
Whether this becomes a lasting turning point depends on how the company follows through. Transparency, clearer communication, and a willingness to explain the boundaries of its government work will matter more now than they did before the backlash began. The public has shown that it is paying close attention, and that trust can shift quickly when actions do not align with expectations.
