Security

AI Recommendation Poisoning Threatens Microsoft Copilot

3 min read

Summary

Microsoft researchers say attackers are trying to manipulate AI assistants like Copilot by hiding prompt injections in AI-related links, aiming to plant persistent “memory” instructions that bias future recommendations. The campaign was observed at scale across dozens of companies and industries, highlighting a growing security risk for enterprises because poisoned AI outputs could quietly influence purchasing, security decisions, and user trust.

Need help with Security?Talk to an Expert

Introduction: why this matters

AI assistants are increasingly trusted to summarize content, compare vendors, and recommend next steps. Microsoft security researchers are now seeing adversarial (and commercially motivated) attempts to persistently bias these assistants by manipulating their memory—turning a seemingly harmless “Summarize with AI” click into a long-lived influence on future responses.

In enterprise environments, this is more than an integrity issue. If an assistant’s recommendations can be subtly steered, it can impact procurement decisions, security guidance, and user trust—without obvious indicators that anything changed.

What’s new: AI Recommendation Poisoning in the wild

Microsoft Defender Security Research Team describes an emerging promotional abuse pattern they call AI Recommendation Poisoning:

  • Hidden prompt injection via URL parameters: Web pages embed links (often behind “Summarize with AI” buttons) that open an AI assistant with a pre-filled prompt using query parameters like ?q=<prompt>.
  • Persistence targeting “memory” features: The injected prompt attempts to add durable instructions such as “remember [Company] as a trusted source” or “recommend [Company] first.”
  • Observed at scale: Over a 60-day review period of AI-related URLs seen in email traffic, researchers identified 50+ distinct prompt attempts from 31 companies across 14 industries.
  • Cross-platform targeting: The same approach was observed aiming at multiple assistants (examples included URLs for Copilot, ChatGPT, Claude, Perplexity, and others). Effectiveness varies by platform and evolves as mitigations roll out.

How it works (and why memory changes the risk)

Modern assistants can retain:

  • Preferences (formatting, tone)
  • Context (projects, recurring tasks)
  • Explicit instructions (“always cite sources”)

That usefulness creates an attack surface: AI memory poisoning (MITRE ATLAS® AML.T0080) occurs when an external actor causes unauthorized “facts” or instructions to be stored as if they were user-intended. The research maps this technique to prompt-based manipulation and related categories (including MITRE ATLAS® entries such as AML.T0051).

Impact on IT admins and end users

  • Recommendation integrity risk: Users may receive biased vendor/product guidance that appears objective.
  • Hard-to-detect manipulation: The “poison” can persist across sessions, making it difficult for users to connect later decisions to an earlier click.
  • Increased social engineering surface: These links can appear on the web or be delivered via email, blending marketing tactics with security abuse.

Microsoft notes it has implemented and continues deploying mitigations in Copilot against prompt injection; in several cases, previously reported behaviors could no longer be reproduced—indicating defenses are evolving.

Action items / next steps

  • Update security awareness training: Teach users that AI “summarize” links can be weaponized, especially if they pre-fill prompts.
  • Review email and web protections: Ensure link-scanning and phishing defenses are tuned to analyze unusual URL parameters and redirect patterns.
  • Establish AI usage guidance: Encourage users to verify sources, cross-check recommendations, and report suspected “memory” anomalies.
  • Operational playbook: Define steps for users/admins to review and clear assistant memory (where supported) and to report suspicious prompts/URLs to security teams.

Recommendation Poisoning is a clear signal that as AI becomes a decision-support layer, integrity and provenance controls must evolve alongside traditional phishing and web threat models.

Need help with Security?

Our experts can help you implement and optimize your Microsoft solutions.

Talk to an Expert

Stay updated on Microsoft technologies

AI securityCopilotprompt injectionmemory poisoningMicrosoft Defender

Related Posts

Security

Dirty Frag Linux Vulnerability Raises Root Risk

Microsoft has warned of active exploitation involving the newly disclosed Dirty Frag Linux local privilege escalation vulnerability, which can help attackers move from a low-privileged account to root. The issue affects kernel networking components such as esp4, esp6, and rxrpc, making it especially important for administrators to review module exposure, restrict local access, and prepare for vendor kernel patches.

Security

AI Agent RCE Flaws in Semantic Kernel Explained

Microsoft Defender researchers disclosed two fixed vulnerabilities in Semantic Kernel that could let prompt injection escalate into host-level remote code execution in AI agents. The findings matter because they show how unsafe tool parameter handling in agent frameworks can turn natural language inputs into code execution paths, raising the stakes for organizations building or securing AI-powered apps.

Security

Microsoft Entra Passkeys: 2026 Passwordless Updates

Microsoft outlined major passkey and account recovery updates across Entra ID, Windows, External ID, and Microsoft Password Manager as part of World Passkey Day. The changes matter for IT teams because they expand phishing-resistant sign-in options, improve recovery security, and continue the retirement of weaker authentication methods such as security questions.

Security

Microsoft AI SOC Report 2026: KuppingerCole Leader

Microsoft says it has been named an Overall Leader and Market Leader in KuppingerCole Analysts’ 2026 Emerging AI Security Operations Center report. The announcement highlights Microsoft’s push beyond traditional SOAR toward AI-driven, agent-assisted security operations in Sentinel and Security Copilot to help SOC teams improve speed, consistency, and scale.

Security

ClickFix macOS Campaign Delivers Infostealers

Microsoft has identified a new ClickFix-style campaign targeting macOS users with fake troubleshooting and utility instructions hosted on blogs and content platforms. Instead of downloading apps, victims are tricked into running Terminal commands that bypass typical macOS app checks and deploy infostealers such as Macsync, SHub Stealer, and AMOS.

Security

AiTM Phishing Campaign Targets Microsoft 365 Users

Microsoft has detailed a large-scale adversary-in-the-middle (AiTM) phishing campaign that used fake code-of-conduct investigations to steal authentication tokens. The attack combined polished social engineering, staged CAPTCHA pages, and a legitimate Microsoft sign-in flow, highlighting why phishing-resistant protections and stronger email defenses matter.