Security

AI Recommendation Poisoning Threatens Microsoft Copilot

3 min read

Summary

Microsoft researchers say attackers are trying to manipulate AI assistants like Copilot by hiding prompt injections in AI-related links, aiming to plant persistent “memory” instructions that bias future recommendations. The campaign was observed at scale across dozens of companies and industries, highlighting a growing security risk for enterprises because poisoned AI outputs could quietly influence purchasing, security decisions, and user trust.

Need help with Security?Talk to an Expert

Introduction: why this matters

AI assistants are increasingly trusted to summarize content, compare vendors, and recommend next steps. Microsoft security researchers are now seeing adversarial (and commercially motivated) attempts to persistently bias these assistants by manipulating their memory—turning a seemingly harmless “Summarize with AI” click into a long-lived influence on future responses.

In enterprise environments, this is more than an integrity issue. If an assistant’s recommendations can be subtly steered, it can impact procurement decisions, security guidance, and user trust—without obvious indicators that anything changed.

What’s new: AI Recommendation Poisoning in the wild

Microsoft Defender Security Research Team describes an emerging promotional abuse pattern they call AI Recommendation Poisoning:

  • Hidden prompt injection via URL parameters: Web pages embed links (often behind “Summarize with AI” buttons) that open an AI assistant with a pre-filled prompt using query parameters like ?q=<prompt>.
  • Persistence targeting “memory” features: The injected prompt attempts to add durable instructions such as “remember [Company] as a trusted source” or “recommend [Company] first.”
  • Observed at scale: Over a 60-day review period of AI-related URLs seen in email traffic, researchers identified 50+ distinct prompt attempts from 31 companies across 14 industries.
  • Cross-platform targeting: The same approach was observed aiming at multiple assistants (examples included URLs for Copilot, ChatGPT, Claude, Perplexity, and others). Effectiveness varies by platform and evolves as mitigations roll out.

How it works (and why memory changes the risk)

Modern assistants can retain:

  • Preferences (formatting, tone)
  • Context (projects, recurring tasks)
  • Explicit instructions (“always cite sources”)

That usefulness creates an attack surface: AI memory poisoning (MITRE ATLAS® AML.T0080) occurs when an external actor causes unauthorized “facts” or instructions to be stored as if they were user-intended. The research maps this technique to prompt-based manipulation and related categories (including MITRE ATLAS® entries such as AML.T0051).

Impact on IT admins and end users

  • Recommendation integrity risk: Users may receive biased vendor/product guidance that appears objective.
  • Hard-to-detect manipulation: The “poison” can persist across sessions, making it difficult for users to connect later decisions to an earlier click.
  • Increased social engineering surface: These links can appear on the web or be delivered via email, blending marketing tactics with security abuse.

Microsoft notes it has implemented and continues deploying mitigations in Copilot against prompt injection; in several cases, previously reported behaviors could no longer be reproduced—indicating defenses are evolving.

Action items / next steps

  • Update security awareness training: Teach users that AI “summarize” links can be weaponized, especially if they pre-fill prompts.
  • Review email and web protections: Ensure link-scanning and phishing defenses are tuned to analyze unusual URL parameters and redirect patterns.
  • Establish AI usage guidance: Encourage users to verify sources, cross-check recommendations, and report suspected “memory” anomalies.
  • Operational playbook: Define steps for users/admins to review and clear assistant memory (where supported) and to report suspicious prompts/URLs to security teams.

Recommendation Poisoning is a clear signal that as AI becomes a decision-support layer, integrity and provenance controls must evolve alongside traditional phishing and web threat models.

Need help with Security?

Our experts can help you implement and optimize your Microsoft solutions.

Talk to an Expert

Stay updated on Microsoft technologies

AI securityCopilotprompt injectionmemory poisoningMicrosoft Defender

Related Posts

Security

Trivy Supply Chain Compromise: Defender Guidance

Microsoft has published detection, investigation, and mitigation guidance for the March 2026 Trivy supply chain compromise that affected the Trivy binary and related GitHub Actions. The incident matters because it weaponized trusted CI/CD security tooling to steal credentials from build pipelines, cloud environments, and developer systems while appearing to run normally.

Security

AI Agent Governance: Aligning Intent for Security

Microsoft outlines a governance model for AI agents that aligns user, developer, role-based, and organizational intent. The framework helps enterprises keep agents useful, secure, and compliant by defining behavioral boundaries and a clear order of precedence when conflicts arise.

Security

Microsoft Defender Predictive Shielding Stops GPO Ransomware

Microsoft detailed a real-world ransomware case in which Defender’s predictive shielding detected malicious Group Policy Object abuse before encryption began. By hardening GPO propagation and disrupting compromised accounts, Defender blocked about 97% of attempted encryption activity and prevented any devices from being encrypted through the GPO delivery path.

Security

Microsoft Agentic AI Security Tools Unveiled at RSAC

At RSAC 2026, Microsoft introduced a broader security strategy for enterprise AI, led by Agent 365, a new control plane for governing and protecting AI agents that will reach general availability on May 1. The company also announced expanded AI risk visibility and identity protections across Defender, Entra, Purview, Intune, and new shadow AI detection tools, signaling that securing AI usage is becoming a core part of enterprise security operations as adoption accelerates.

Security

Microsoft CTI-REALM Benchmarks AI Detection Engineering

Microsoft has introduced CTI-REALM, an open-source benchmark designed to test whether AI agents can actually perform detection engineering tasks end to end, from interpreting threat intelligence reports to generating and refining KQL and Sigma detection rules. This matters because it gives security teams a more realistic way to evaluate AI for SOC operations, focusing on measurable operational outcomes across real environments instead of simple cybersecurity question answering.

Security

Microsoft Zero Trust for AI: Workshop and Architecture

Microsoft has introduced Zero Trust for AI guidance, adding an AI-focused pillar to its Zero Trust Workshop and expanding its assessment tool with new Data and Network pillars. The update matters because it gives enterprises a structured way to secure AI systems against risks like prompt injection, data poisoning, and excessive access while aligning security, IT, and business teams around nearly 700 controls.