As Microsoft attempts to steer its massive software portfolio ship into more secure waters, a former company security architect has poked holes in its Copilot AI vessel.
During the Black Hat USA 2024 cybersecurity conference, former Microsoft security architect Michael Bargury held two sessions demonstrating how a hacker could exploit specific loopholes in the security controls of the company’s flagship AI platform to access commercial users’ sensitive information.
Bargury’s sessions, titled “15 Ways to Break Your Copilot” and “Living Off Microsoft Copilot,” were accompanied by supplemental material on the Zenity Labs website highlighting successful instances of spear-phishing enabled by Copilot’s unique generative features.
Bargury’s spear-phishing hack, dubbed LOLCopilot, is a self-replicating mass-email distribution loop that leverages Copilot’s ability to generate automated emails mimicking the compromised user’s writing style, lending the messages credibility once they land in recipients’ inboxes. Beyond annoyingly dumping spam into others’ inboxes, the LOLCopilot hack solicits sensitive user information with compelling prompts, asking for banking details or insider trading information, among other things.
“I can do this with everyone you have ever spoken to, and I can send hundreds of emails on your behalf. A hacker would spend days crafting the right email to get you to click on it, but they can generate hundreds of these emails in a few minutes.”
Michael Bargury – former Microsoft Security Architect
Like the ethical hacker who pointed out the security flaws in Microsoft’s other AI-led effort, Windows Recall, Bargury withheld the public specifics of his hack and the exploits he used to bypass the company’s current security measures. Instead, Bargury met with Microsoft privately to discuss and highlight his LOLCopilot hack.
Microsoft’s head of AI Incident Detection and Response, Phillip Misner, graciously acknowledged Bargury’s work and thanked him for pointing out where the company was falling short in securing Copilot.
While Microsoft already runs its own internal hacking scenarios through red-team exercises, there are edge-case security scenarios the company cannot anticipate on its own, which is why securing the help of outside researchers at legitimate conferences such as Black Hat is vital to the company.
It should be noted that Bargury’s Copilot hack does require that a system running Copilot already be compromised.
“The risks of post-compromise abuse of AI are similar to other post-compromise techniques. Security prevention and monitoring across environments and identities help mitigate or stop such behaviors.”
Phillip Misner – Microsoft head of AI Incident Detection and Response
Over the past four months, Microsoft CEO Satya Nadella and accompanying executives have repeatedly stressed that the company is realigning its development efforts toward delivering more secure software and services to users and customers. However, its new push into AI opens the Redmond business to yet another vulnerability vector.
It’ll be interesting to see how quickly Microsoft responds to threats that exploit AI, and what new measures it can put in place to rein in Copilot’s inherent generative abilities without compromising the expansion of its future potential.