Every time Microsoft tries to prove it is leading the AI revolution, it somehow stumbles into reminding everyone why the old Microslop nickname still has life. The latest example is almost poetic. The company used AI to recreate a well-known Git branching diagram and published the result on Microsoft Learn. The AI did not just distort the diagram. It coined a new term: “continvoucly morged.”
This is not the kind of AI innovation anyone asked for.
The Diagram That Should Have Been Impossible to Mess Up
Vincent Driessen, the developer behind the influential Git branching model, has spent fifteen years watching his diagram travel the world. It appears in books, talks, wikis, and tutorials. He even released the source file so others could adapt it. He never minded. That was the point.
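For readers who have never used the model the diagram depicts, the heart of Driessen's git-flow can be sketched in a few commands. This is a minimal sketch, not his full workflow: the branch names follow his original article, the repository lives in a throwaway temp directory, and the --no-ff merge is the detail his arrows encode (every feature merge stays visible in history).

```shell
set -e
repo=$(mktemp -d)                       # throwaway repo for the demo
cd "$repo"
git -c init.defaultBranch=main init -q
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"

git checkout -q -b develop              # long-lived integration branch
git checkout -q -b feature/login develop   # feature branches fork from develop
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "add login feature"

git checkout -q develop
# --no-ff forces a merge commit, so the feature's history stays visible
git -c user.email=demo@example.com -c user.name=demo \
    merge -q --no-ff -m "merge feature/login into develop" feature/login
git branch -d feature/login
git branch                              # lists develop and main
```

The point of the --no-ff flag, and the reason the original diagram draws explicit merge arrows, is that fast-forward merges would erase the fact that a feature branch ever existed.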
What he did mind was waking up to messages from Bluesky and Hacker News users pointing him to a Microsoft Learn page featuring a warped, AI-generated imitation of his work. The colors were wrong. The arrows were wrong. The layout was wrong. And the text was not just wrong. It was surreal.
Driessen described the whole thing as “careless, blatantly amateuristic, and lacking any ambition… Microsoft unworthy.” He also called it “proper AI slop,” which feels like the most accurate label possible.
Enter the Newest Microsoft AI Term: “Morge”
Microsoft has spent the last two years trying to coin serious AI vocabulary. Copilot. Recall. Phi. Prism. All very futuristic. All very intentional.
Then along comes “morged,” a word that sounds like a typo, a threat, and a medieval medical procedure. And it came not from a branding team but from an AI model that was supposed to be improving documentation quality.
If Microslop ever needed a mascot for its AI era, “morge” might be it.
Driessen is not upset that Microsoft used his diagram. He has seen it reused for years. What bothers him is the process. Someone took a clean, well-crafted diagram, ran it through a machine to scrub off the fingerprints, and shipped the result without attribution or oversight.

He points out that this case was easy to catch because the original is famous and the AI output was hilariously bad. The next time, the source might be obscure and the AI might be better at hiding its tracks. That is the part that should worry people.
A Pattern That Keeps Repeating
This incident fits neatly into a broader pattern. Microsoft is racing to inject AI into everything it touches. In the process, quality control sometimes takes a vacation. We have seen AI-generated support articles that tell users to delete system files. We have seen Windows updates that break basic functionality. We have seen Copilot hallucinate legal citations with confidence.
Now we have AI documentation that invents new verbs.
It is not the kind of innovation that inspires trust.
Driessen is not asking for a parade. A link back to his original work would have been enough. A quick review of the AI output would have prevented the “morging” of Git history. A basic editorial check would have caught the wrong-way arrows.
Instead, Microsoft published the diagram as if nothing was wrong. The result is a perfect example of how AI can amplify sloppiness when no one is watching.
This is not just about one diagram. It is about the tension between human-made work and AI-generated shortcuts. When a company as large as Microsoft treats AI as a content-washing machine rather than a tool guided by human judgment, it signals something troubling. It suggests that quality, authorship, and care are optional in the rush to scale.
Developers notice. Designers notice. Users notice. And they are right to wonder whether the future of documentation will be filled with diagrams that are not just wrong but “continvoucly morged” into something unrecognizable.