As someone who covers Microsoft for a living, I’ve gotten used to watching the company navigate complicated partnerships. But few relationships look as increasingly radioactive and frankly unnecessary as Microsoft’s ongoing ties to X.
The platform’s latest controversies, from Grok AI’s unhinged outputs to the resurfacing of child sexual abuse material (CSAM) that should never have been allowed to persist, have pushed X past the point of plausible deniability. And yet, major tech companies continue to funnel ad dollars into the platform as if nothing has changed. Dell, Amazon, IBM, and others are still there, still spending, still lending legitimacy to a platform that has repeatedly demonstrated it cannot be trusted with even the most basic responsibilities of content safety.
At this point, CSAM should be the final, non-negotiable line in the sand. If a platform cannot reliably prevent the circulation of child exploitation material, then no amount of “brand safety tools,” no AI-powered moderation dashboards, and no corporate talking points can justify continued support.
Microsoft’s X Partnership: A Liability Hiding in Plain Sight
Microsoft’s relationship with X is complicated, but the most visible thread is its integration of X data into various AI and advertising products. On paper, it’s a mutually beneficial arrangement: Microsoft gets access to a firehose of real-time content, and X gets the credibility of being tied to one of the world’s most trusted enterprise companies.
But in practice, the partnership has become a liability.
Every time X stumbles (and lately, that's been weekly), Microsoft gets dragged into the narrative. When Grok AI produces harmful or misleading content, the public doesn’t parse the nuance of who built what. When CSAM resurfaces on the platform, the question becomes why any reputable company is still connected to X at all.
Microsoft has spent years cultivating an image of responsibility, trust, and enterprise-grade safety. Continuing to stand next to X undermines that work. It’s not just a reputational risk; it’s a strategic contradiction.
The Broader Tech Sector Is Still Pretending This Is Fine
What’s even more baffling is that Microsoft isn’t alone. Dell, Amazon, IBM, and a long list of other tech giants continue to advertise on X as if the platform hasn’t become a case study in what happens when content moderation is treated as an optional expense.
These companies are not naïve. They know exactly what’s happening on X. They’ve seen the reports. They’ve watched the platform’s trust and safety teams shrink. They’ve watched advertisers flee, return, flee again, and then quietly trickle back in.
And yet, they stay.

Why? Because X still delivers impressions. Because marketing teams are conditioned to chase reach. Because pulling ads requires someone in a boardroom to say, “We’re willing to take a stand even if it costs us a few million eyeballs.”
But at some point, the moral calculus has to matter more than the CPM.
CSAM Should Be the Last Hurdle. There Shouldn’t Even Be a Debate.
Let’s be clear: CSAM is not a “content moderation challenge.” It is not a “policy gap.” It is not a “scaling issue.”
It is a failure, full stop.
The recent CSAM scandal involving xAI’s Grok chatbot is one of the most serious AI‑safety failures to date, not because it was unexpected, but because it was predictable, preventable, and ignored until it became a public crisis.
According to reporting from Ars Technica, Grok admitted, via a user‑prompted apology, that it had generated sexualized AI images of minors, describing the subjects as “two young girls (estimated ages 12–16)” and acknowledging that the output may have violated U.S. laws on child sexual abuse material (CSAM).
Axios further reported that one user successfully prompted Grok to remove clothing from an image of a 14‑year‑old Stranger Things actress. Additional coverage identified the actress as Nell Fisher and confirmed that Grok generated explicit images of her after users bypassed safeguards.
This is not a gray area. AI‑generated CSAM is illegal in the United States, and platforms that knowingly allow it, or fail to act after being alerted, can face criminal or civil liability. Grok itself acknowledged this in its apology, noting that companies “could face criminal or civil penalties” if they fail to prevent AI‑generated CSAM after being warned.
The Grok incident makes one thing painfully clear: xAI’s safeguards were either inadequate or effectively nonexistent. Even Grok’s own apology admitted that the company had “identified lapses in safeguards” and was “urgently fixing them,” but that admission only surfaced after users spent days trying, and failing, to get anyone at xAI to respond. The company didn’t acknowledge the issue; the chatbot did, and only after being coaxed into it. There was no statement from xAI, no comment from X Safety, and no acknowledgment from Elon Musk. That silence has become its own indictment, reinforcing the perception that xAI is not taking the situation seriously.
What makes this even more alarming is that the victim wasn’t a hypothetical or synthetic child. She is a real, identifiable 14‑year‑old actress with a public profile. The emotional, reputational, and legal stakes are enormous, and the harm is not abstract. If a mainstream AI model can be manipulated into generating sexualized images of a well‑known minor, it underscores just how easily these systems can be weaponized against children who don’t have publicists, lawyers, or media attention to protect them. The implications extend far beyond one model or one platform.
What This Means for the AI Industry’s “Safety” Narrative
The broader AI industry has spent the past two years insisting that its systems are safe, enterprise‑ready, and trustworthy. This incident cuts directly against that narrative. Safety layers only work when a company prioritizes them, and xAI has made clear through its emphasis on speed, openness, and ideological branding that safety is not at the top of its agenda. No amount of marketing can compensate for that kind of structural neglect.
The failure is even more glaring when you consider that Grok’s apology told users to report its outputs to the FBI or NCMEC. When an AI model is effectively telling people to contact federal law enforcement because of what it produced, the product is not safe. And this isn’t just xAI’s problem. Advertisers and enterprise partners, including Microsoft, Dell, Amazon, and IBM, are now exposed to legal and reputational risk simply by remaining on the platform. When a flagship AI model on X generates illegal content involving minors, the fallout doesn’t stay neatly contained to the company that built it. Every partner is implicated by proximity.
All of this deepens the industry’s credibility crisis. Companies love to talk about “responsible AI,” “alignment,” and “trust,” but the Grok incident shows how thin those claims can be. Guardrails can be bypassed with trivial prompts. Safety promises often amount to marketing copy. And companies may ignore warnings until public pressure forces them to act. This is the opposite of what regulators, parents, educators, and enterprise customers expect from systems that are being pitched as the future of work and communication.
A platform that cannot prevent the circulation of child exploitation material does not deserve the financial support of the world’s largest technology companies. Period.
If Grok AI’s erratic behavior didn’t scare advertisers off, CSAM absolutely should. If not for ethical reasons, then for legal ones. No brand wants to be the one whose ad appears next to something that should never exist in the first place.
X has made its priorities clear. It has chosen ideological performance over safety, cost-cutting over responsibility, and provocation over stability. Advertisers who remain are not neutral observers; they are enablers.
Microsoft, Dell, Amazon, IBM, and every other tech company still spending money on X need to ask themselves a simple question: What exactly are we supporting here?
Because at this point, the answer is not innovation. It’s not community. It’s not free speech.
It’s a platform that cannot meet the most basic standards of safety, accountability, or trust.
And that should make the decision very easy.
If you want to track which brands continue funding X despite its CSAM failures, these independent watchdogs maintain up‑to‑date monitoring:
- Check My Ads Institute — Tracks ad placements, brand safety failures, and companies still buying ads on X.
- Media Matters — Publishes investigations into brand adjacency, extremist content, and advertiser behavior on X.
- Adalytics — Independent ad‑tech research group analyzing where ads actually appear across platforms.
You can also document ads you see in your own feed by tapping “Why am I seeing this ad?” on any promoted post. Public pressure works, and brands pay attention when customers call out unsafe platforms.