
AI Agent RCE Flaws in Semantic Kernel Explained


Summary

Microsoft Defender researchers disclosed two now-patched vulnerabilities in Semantic Kernel that could let prompt injection escalate into host-level remote code execution in AI agents. The findings matter because they show how unsafe tool parameter handling in agent frameworks can turn natural language input into a code execution path, raising the stakes for organizations building or securing AI-powered apps.


Introduction

AI agents are changing enterprise application design, but they also introduce a new execution risk. Microsoft Defender Security Research has detailed how prompt injection in AI agent frameworks can move beyond content manipulation and become host-level remote code execution (RCE) when tools and plugins trust model-generated parameters.

For security teams and developers using agent frameworks, this is an important reminder: once an LLM can call tools, weaknesses in framework logic can directly affect the underlying system.

What’s new

Microsoft disclosed two critical vulnerabilities in the open-source Semantic Kernel framework:

  • CVE-2026-26030: An RCE path involving the In-Memory Vector Store when used with the Search Plugin in its default configuration
  • CVE-2026-25592: An arbitrary file write issue through SessionsPythonPlugin

According to Microsoft, both vulnerabilities have been fixed.

The most notable finding is that exploitation did not require a browser exploit, malicious attachment, or memory corruption bug. In the demonstrated scenario, a single prompt injection was enough to influence tool parameters and trigger code execution on the host.

Why the issue occurred

The research highlights a broader design problem in AI agent frameworks:

  • Agents interpret natural language and map it to tool calls
  • Frameworks often trust parsed model output too much
  • Unsafe parameter handling can create execution sinks
  • Blocklist-based protections can be bypassed in dynamic languages like Python

In the Semantic Kernel case, Microsoft researchers found unsafe string interpolation in a Python lambda expression executed with eval(), combined with a validator that could be bypassed.
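To make the pattern concrete, here is a minimal sketch of how string interpolation into an `eval()`-ed lambda, guarded only by a keyword blocklist, can be broken out of. The function names, blocklist, and payload are illustrative assumptions for this sketch, not Semantic Kernel's actual code:

```python
# Hypothetical sketch of the unsafe pattern described above.
# naive_validate, build_filter, and BLOCKLIST are illustrative names,
# not Semantic Kernel internals.

BLOCKLIST = ["import", "exec", "open"]  # naive keyword blocklist

def naive_validate(expr: str) -> bool:
    # Blocklist checks like this are easy to bypass in a dynamic
    # language: a payload simply avoids the listed keywords.
    return not any(word in expr for word in BLOCKLIST)

def build_filter(field: str, value: str):
    # Model-controlled `value` is interpolated directly into source code...
    expr = f"lambda record: record.get('{field}') == '{value}'"
    if not naive_validate(expr):
        raise ValueError("rejected")
    # ...and compiled with eval(): an execution sink.
    return eval(expr)

# Benign use works as intended:
f = build_filter("title", "report")
assert f({"title": "report"}) is True

# A prompt-injected value breaks out of the string literal and runs
# arbitrary code, while containing none of the blocklisted keywords:
payload = "' or print('attacker code runs') or '"
build_filter("title", payload)({"title": "x"})
```

The point of the sketch is that the validator inspects text while the attacker controls code structure: once model output becomes source code, no keyword list reliably constrains what it can do.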

Impact on IT and security teams

Organizations experimenting with AI agents, copilots, or custom LLM apps should treat this as a framework security issue, not just an AI safety issue.

Potential exposure is highest where:

  • Semantic Kernel is used in production or internal apps
  • Agents can access plugins, scripts, files, or data stores
  • Prompt injection is possible through user input, documents, or connected content sources
  • Default Search Plugin and In-Memory Vector Store configurations are in use

This research also has implications beyond Semantic Kernel. Many teams use frameworks such as LangChain, CrewAI, or similar orchestration layers, and the same trust model concerns may apply.

Security and platform teams should:

  • Patch affected Semantic Kernel deployments immediately
  • Inventory AI agents and plugins that can execute code, read files, or access sensitive systems
  • Review tool-calling paths for unsafe deserialization, interpolation, or dynamic execution patterns
  • Harden prompt injection defenses and assume hostile input can reach agent tools
  • Audit logs and telemetry for suspicious plugin invocations or unexpected process execution
  • Reduce agent privileges so successful prompt injection cannot easily lead to system compromise
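For the "review tool-calling paths" item, the core fix is to treat model output as data rather than code: validate parameters against an explicit allowlist and build predicates with closures instead of dynamic evaluation. A minimal sketch, with illustrative names and an assumed field allowlist:

```python
# Hypothetical hardening sketch: parameters stay data, never source code.
# ALLOWED_FIELDS and build_filter are illustrative names for this sketch.
import re

ALLOWED_FIELDS = {"title", "author", "tag"}   # explicit allowlist, not a blocklist
FIELD_RE = re.compile(r"^[a-z_]{1,32}$")      # defense in depth on field names

def build_filter(field: str, value: str):
    if field not in ALLOWED_FIELDS or not FIELD_RE.match(field):
        raise ValueError(f"disallowed field: {field!r}")
    # `value` is only ever compared as data inside a closure;
    # there is no eval(), so no string can become executable code.
    def predicate(record: dict) -> bool:
        return record.get(field) == value
    return predicate

# The earlier breakout payload is now inert: it is just a string to compare.
f = build_filter("title", "' or print('attacker code runs') or '")
assert f({"title": "x"}) is False
```

This is the same least-privilege idea applied at the code level: the tool exposes a narrow, validated capability instead of a general evaluation surface.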

Bottom line

Microsoft’s research shows that in agentic applications, prompt injection can become an execution primitive when frameworks and tools over-trust model output. For defenders, the priority is clear: patch vulnerable frameworks, review plugin design, and apply least privilege before AI agents become a new RCE surface.


Tags: Semantic Kernel, AI agents, prompt injection, remote code execution, Microsoft Defender
