In what feels like a no-brainer move, Microsoft is adding web search query citations to help foster a sense of ‘trustworthiness’ with its AI-powered Copilot service.
While Microsoft Copilot already offered links in its footnote section when responding to certain prompts, the company just announced a new ‘web search query citations’ resource that will show users the exact web search query Copilot used to generate its response, as well as the sites it sourced.
Microsoft sees its new web search query feature as a happy two-way street between its AI service and users: it provides the transparency the company is seeking to leverage as a differentiator against competitors, and it gives users a way to understand how to optimize their prompts over time.
> This provides users with valuable feedback as to exactly how their prompts are used to generate web queries that are sent over to Bing search. This transparency helps users with the information they need to improve prompts and use Copilot more effectively.

(Microsoft Tech Community)
Microsoft plans to roll its new web query feature out next month to both Microsoft 365 and consumer-grade versions of Copilot. In addition, admins overseeing prosumer accounts with Copilot support will be able to search, audit, and run eDiscovery on web search queries resulting from user prompts. Audit logging is on Microsoft’s roadmap for Q4 and will offer admins a new CopilotInteraction audit log that can be filtered on AISystemPlugin.Id and exported for offline review.
To make use of the eDiscovery tool, admins can use Copilot Activity filters in the condition builder to examine Copilot conversations, including the Bing search queries they generated, in greater detail.
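To illustrate the kind of offline review the audit-log export could enable, here is a minimal sketch that filters exported records down to the Bing search queries behind Copilot interactions. The JSON structure, field names (`AISystemPlugin`, `Query`, `UserId`), and the plugin Id value are assumptions for illustration, not the documented Microsoft Purview schema.

```python
import json

# Hypothetical export: a JSON array of unified audit log records. The
# AISystemPlugin list and its fields are assumed here for illustration.
SAMPLE_EXPORT = """[
  {"RecordType": "CopilotInteraction",
   "UserId": "alice@contoso.com",
   "AISystemPlugin": [{"Id": "BingWebSearch",
                       "Query": "quarterly revenue trends"}]},
  {"RecordType": "ExchangeItem",
   "UserId": "bob@contoso.com"}
]"""

def bing_queries(records, plugin_id="BingWebSearch"):
    """Collect (user, query) pairs from CopilotInteraction records
    whose plugin entry matches the given Id."""
    hits = []
    for rec in records:
        if rec.get("RecordType") != "CopilotInteraction":
            continue
        for plugin in rec.get("AISystemPlugin", []):
            if plugin.get("Id") == plugin_id:
                hits.append((rec["UserId"], plugin.get("Query")))
    return hits

records = json.loads(SAMPLE_EXPORT)
print(bing_queries(records))
```

The same filter-by-record-type-then-plugin pattern would apply whatever the final exported schema looks like; only the field names would change.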
In a broader effort to deliver on its promise of fortifying its software and services, Microsoft also listed a handful of new features that assist its AI efforts while also expanding its security-focused pledge to customers and users.
The recently listed security, safety, and privacy features from Microsoft include Evaluations in Azure AI Studio, Corrections, Embedded Content Safety, Protected Material Detection for Code, Confidential Inferencing, Confidential VMs that support NVIDIA’s H100 Tensor Core GPUs, and Azure OpenAI Data Zones.
Safety
- A Correction capability in Microsoft Azure AI Content Safety’s Groundedness detection feature that helps fix hallucination issues in real time before users see them.
- Embedded Content Safety, which allows customers to embed Azure AI Content Safety on devices. This is important for on-device scenarios where cloud connectivity might be intermittent or unavailable.
- New evaluations in Azure AI Studio to help customers assess the quality and relevancy of outputs and how often their AI application outputs protected material.
- Protected Material Detection for Code is now in preview in Azure AI Content Safety to help detect pre-existing content and code. This feature helps developers explore public source code in GitHub repositories, fostering collaboration and transparency, while enabling more informed coding decisions.
Privacy
- Confidential inferencing in preview in our Azure OpenAI Service Whisper model, so customers can develop generative AI applications that support verifiable end-to-end privacy. Confidential inferencing ensures that sensitive customer data remains secure and private during the inferencing process, which is when a trained AI model makes predictions or decisions based on new data. This is especially important for highly regulated industries, such as healthcare, financial services, retail, manufacturing and energy.
- The general availability of Azure Confidential VMs with NVIDIA H100 Tensor Core GPUs, which allow customers to secure data directly on the GPU. This builds on our confidential computing solutions, which ensure customer data stays encrypted and protected in a secure environment so that no one gains access to the information or system without permission.
- Azure OpenAI Data Zones for the EU and U.S. are coming soon and build on the existing data residency provided by Azure OpenAI Service by making it easier to manage the data processing and storage of generative AI applications. This new functionality offers customers the flexibility of scaling generative AI applications across all Azure regions within a geography, while giving them the control of data processing and storage within the EU or U.S.
Security
- Evaluations in Azure AI Studio to support proactive risk assessments.
Today’s announcements coalesce a number of initiatives Microsoft has begun over the past six months to address mounting concerns from investors, users, customers, and governments over the company’s perceived lackadaisical approach to security in recent years as it pivots toward becoming a cloud services and AI provider.