The artificial intelligence revolution is still waiting to find its ‘ah-ha’ moment as companies continue to throw AI-powered spaghetti at the wall. At Ignite 2024, Microsoft is offering new tools to developers to help throw their own AI spaghetti at the wall as well.
Earlier today, Microsoft corporate vice president of Windows + Devices, Pavan Davuluri, told developers at the company's Ignite 2024 conference that Microsoft has new tools to help them take advantage of AI integrations in Windows.
Beyond the Windows Copilot Runtime (WCR), introduced during Build 2024, which gives developers access to silicon such as GPUs and NPUs to accelerate their user experiences, Microsoft is also going to give developers access to new imaging APIs via the WCR.
In the Windows App SDK 1.7 Experimental 2 release, scheduled to drop sometime in January 2025, developers will get access to some of the AI-powered imaging APIs that Windows users have been testing in in-box apps such as Photos and Paint:
- Image super resolution: This API increases the fidelity of an image while upscaling its resolution, and can be used to enhance the clarity of blurry images.
- Image segmentation: This API separates the foreground and background of an image, and can remove specific objects or regions within it. Creativity apps such as image and video editors can easily add background removal capabilities using this API.
- Object erase: This API erases unwanted objects from an image and blends the erased area with the rest of the background.
- Image description: This API provides a text description of an image.
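The article doesn't show the API surface itself, but conceptually the segmentation API's background-removal flow boils down to applying a model-produced foreground mask to the source image. A minimal sketch of that idea, using hypothetical names and a toy 2D-list "image" (the real WCR APIs operate on bitmaps via on-device models, not plain lists):

```python
# Conceptual sketch only -- NOT the Windows App SDK API surface.
# An "image" here is a 2D list of pixel values, and the segmentation
# "model" output is a binary mask marking foreground pixels.

def remove_background(image, mask, fill=0):
    """Keep pixels where mask is 1; replace background pixels with `fill`."""
    return [
        [pixel if keep else fill for pixel, keep in zip(row, mask_row)]
        for row, mask_row in zip(image, mask)
    ]

image = [
    [10, 20, 30],
    [40, 50, 60],
]
mask = [  # 1 = foreground (the subject), 0 = background
    [0, 1, 0],
    [1, 1, 0],
]

print(remove_background(image, mask))
# → [[0, 20, 0], [40, 50, 0]]
```

In a real app, the mask would come from the on-device segmentation model, and the background could be filled with transparency rather than a constant value.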
Microsoft is also tweaking its Windows Subsystem for Linux with AI goodies for developers who use WSL and WinGet to manage their business applications.
As a refresher, Microsoft's WCR already offers a set of APIs that access over forty on-device models for Windows via the company's Phi 3.5 Silica, a derivative of its Phi series of Small Language Models (SLMs). The Phi 3.5 Silica APIs cover Optical Character Recognition (OCR), summarization, text completion, and prediction.
Developers who choose to leverage these new tools in January will get a definite leg up, and it will be interesting to see whether they can finally provide the AI space with its flagship use case.