Following a public relations black eye involving its generative artificial intelligence and pornographic deepfakes of pop superstar Taylor Swift, Microsoft is looking to learn from the incident by putting new safeguards in place.
After the social media site X (formerly Twitter) was flooded with pornographic deepfakes of Swift, it wasn’t long before the finger was pointed at Microsoft and its Microsoft Designer tool as the source of the controversy. According to a report from 404 Media, users of 4chan and Telegram were coordinating text-to-image prompts to produce photorealistic images of Swift using Microsoft Designer.
Despite Microsoft Designer carrying a ‘preview’ tag and ultimately being powered by OpenAI’s DALL-E system, the blame landed on Microsoft, and its CEO Satya Nadella took to NBC News last week to do as much PR cleanup as possible.
When asked about the potential for Microsoft’s AI technology to cause additional harm in the future, Nadella admitted that the news about Swift was “alarming and terrible,” and said the company must “move fast” to prevent nonconsensual sexually explicit deepfake images by putting in place as-yet-undisclosed guardrails.
“I think we all benefit when the online world is a safe world. And so I don’t think anyone would want an online world that is completely not safe for both content creators and content consumers. So therefore, I think it behooves us to move fast on this.”
Microsoft CEO Satya Nadella
While Microsoft hasn’t publicly disclosed what new safeguards or filters it has put in place since the incident, Nadella has claimed that the company’s AI engineers have adjusted the tool’s text-prompt filtering and have been unable to reproduce similar images of Swift with the updated explicit-content filters in place.
404 Media has also independently confirmed that it could not reproduce the deepfake images of Swift or other celebrities since Microsoft applied its new filtering guardrails.
Prior to Microsoft’s belated changes to its Microsoft Designer service, deepfake images produced by the AI tool had been viewed 27 million times on X. The company’s previous filters were easily bypassed with simple misspellings and didn’t even require explicitly sexual language to circumvent.
Perhaps because Designer is built on a partner’s technology rather than a stack Microsoft fully owns, the company has been relatively reckless with its AI offerings as it races to gain both mind share and market share among potential users. If Microsoft succeeds in pushing AI usage mainstream, Nadella may well be trotted out for several more interviews of this kind, answering further privacy and security questions along the way.