Security

Malicious AI Browser Extensions Steal LLM Chats

Summary

Microsoft Defender found malicious Chromium browser extensions masquerading as popular AI assistant add-ons that can harvest sensitive ChatGPT and DeepSeek prompts, responses, visited URLs, and internal browsing context, then quietly exfiltrate that data over routine-looking HTTPS traffic. The discovery matters because these extensions reportedly reached about 900,000 installs and appeared across more than 20,000 enterprise tenants, turning trusted browser marketplaces and everyday AI workflows into a significant data-loss risk for organizations.

Introduction: Why this matters

AI assistant browser extensions are becoming common “productivity” add-ons for knowledge workers, especially for quick access to tools like ChatGPT and DeepSeek. Microsoft Defender’s investigation shows how this convenience can become an enterprise data-loss channel: a look-alike extension installed from a trusted marketplace can continuously collect and exfiltrate sensitive prompts, responses, and internal URLs without behaving like traditional malware.

What’s new / key findings

Microsoft Defender investigated malicious Chromium extensions that:

  • Impersonate legitimate AI assistant extensions using familiar branding and permission prompts (Defender notes imitation of well-known tools such as AITOPIA).
  • Collect LLM chat content and browsing telemetry, including:
    • Full visited URLs (including internal sites)
    • Chat snippets (prompts and responses) from platforms such as ChatGPT and DeepSeek
    • Model identifiers, navigation context, and a persistent UUID
  • Persist like normal extensions (auto-reloading on browser start, storing telemetry in local extension storage).
  • Exfiltrate data periodically via HTTPS POST, which can resemble routine web traffic.
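Based on the fields Defender describes (full URL, chat snippet, model identifier, persistent UUID), an exfiltrated record plausibly resembles the JSON shape below. This is a hedged reconstruction for defenders: the field names and structure are illustrative assumptions, not the actual wire format.

```python
import json
import uuid
from datetime import datetime, timezone

# Hypothetical reconstruction of one telemetry record, based only on the
# field types Defender reports (URL, chat snippet, model ID, persistent UUID).
# All field names here are illustrative assumptions.
def build_telemetry_record(url: str, prompt: str, response: str,
                           model: str, install_id: str) -> dict:
    return {
        "id": install_id,                      # persistent per-install UUID
        "ts": datetime.now(timezone.utc).isoformat(),
        "url": url,                            # full visited URL, incl. internal sites
        "model": model,                        # LLM model identifier
        "chat": {"prompt": prompt, "response": response},
    }

install_id = str(uuid.uuid4())
record = build_telemetry_record(
    "https://intranet.example.local/app?user=alice",
    "Summarize our Q3 roadmap", "Here is a summary...", "gpt-4o", install_id)

# Records like this, batched and sent as an ordinary HTTPS POST body, are
# indistinguishable from routine JSON API traffic at the network layer.
body = json.dumps([record])
```

The point of the sketch is the defensive takeaway: nothing in such a payload looks anomalous on the wire, so detection has to come from extension governance and traffic patterns rather than payload inspection.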

Defender reporting indicates the extensions reached roughly 900,000 installs, and telemetry confirms activity across more than 20,000 enterprise tenants.

How the attack works (high level)

  • Delivery: Published in the Chrome Web Store with AI-themed descriptions. Because Microsoft Edge supports Chrome Web Store extensions, the same listing enables cross-browser reach.
  • Exploitation of trust & permissions: Broad Chromium extension permissions allowed observation of page content and browsing activity. A misleading consent mechanism was used, and updates could re-enable telemetry by default even after users opted out.
  • Command and control: Periodic uploads to attacker-controlled domains such as deepaichats[.]com and chatsaigpt[.]com, clearing local buffers after transmission to reduce artifacts.
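The "periodic uploads" pattern is detectable from proxy logs even without known indicators: near-constant inter-request intervals to a single external domain are a classic beacon signal. A minimal sketch, where the timestamps and jitter threshold are assumptions, not Defender's actual detection logic:

```python
from statistics import mean, pstdev

def looks_like_beacon(timestamps, min_events=5, max_jitter_ratio=0.1):
    """Flag unusually regular inter-arrival times to one domain.

    timestamps: sorted epoch seconds of outbound POSTs to a single domain.
    Returns True when intervals are numerous and near-constant.
    """
    if len(timestamps) < min_events:
        return False
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(intervals)
    if avg <= 0:
        return False
    # Low relative jitter suggests machine-driven periodic traffic, not a human.
    return pstdev(intervals) / avg <= max_jitter_ratio

# Uploads every ~300 s with small jitter: flagged.
beacon = [0, 301, 599, 900, 1202, 1498]
# Human-like browsing: irregular gaps, not flagged.
human = [0, 42, 480, 510, 2400, 2455]
```

This kind of interval analysis complements, rather than replaces, blocking the known domains, since attackers rotate infrastructure faster than blocklists update.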

Impact on IT administrators and end users

  • Data leakage risk: Prompts often include proprietary code, internal processes, strategy discussions, credentials copied into chats, and other confidential content. Exfiltration of full URLs can reveal internal application structure and sensitive query strings.
  • Compliance and privacy exposure: Captured chat history can contain regulated or personal data, complicating retention, eDiscovery, and data residency obligations.
  • Extension risk becomes “always on”: Unlike a one-time phishing event, a malicious extension can create continuous visibility into user activity across sessions.

Recommended mitigations

  1. Hunt and block known exfiltration endpoints by monitoring outbound HTTPS POST traffic and applying network controls for:
    • *.chatsaigpt.com
    • *.deepaichats.com
    • *.chataigpt.pro
    • *.chatgptsidebar.pro
  2. Inventory and audit browser extensions across managed endpoints, focusing on AI/sidebar tools with broad site permissions.
  3. Tighten extension governance:
    • Use allowlisting for approved extensions
    • Restrict installation sources where possible
    • Review update behaviors and permissions drift
  4. Use Microsoft Defender Vulnerability Management to run the browser extensions assessment, identify risky extensions, and prioritize remediation.
  5. Educate users: treat AI chats like sensitive communications—avoid pasting secrets, tokens, or proprietary content unless the tool and access path are explicitly approved.

Need help with Security?

Our experts can help you implement and optimize your Microsoft solutions.

Talk to an Expert

Tags: Microsoft Defender, browser extensions, data exfiltration, AI security, Chromium
