Security

Copilot Studio Agent Security: Defender Detects the Top 10 Misconfigurations

3 min read

Summary

Microsoft has disclosed the 10 most common security misconfigurations in Copilot Studio agents and now ships ready-made detections in the Community Queries of Microsoft Defender Advanced Hunting, helping organizations find issues such as overly broad sharing, no-authentication access, high-risk HTTP requests, email exfiltration paths, hardcoded credentials, and orphaned agents. This matters because AI agents are rapidly entering business workflows: these seemingly routine configuration mistakes can bypass existing identity and data-governance controls and create new high-risk access paths, so security teams should baseline their environments soon and prioritize the highest-risk findings.


Introduction: why this matters

Copilot Studio agents are rapidly being embedded into operational workflows: querying data, triggering actions, and interacting with systems at scale. The Defender Security Research Team warns that seemingly reasonable, well-intentioned configuration choices (overly broad sharing, weak authentication, high-risk actions) can silently turn into high-impact exposure points. The good news: Microsoft Defender can help you detect these early through Advanced Hunting Community Queries.

What’s new: 10 misconfigurations to hunt for

Microsoft has published a "one-page view" summarizing the most common Copilot Studio agent risks observed in real environments, along with corresponding detections in Microsoft Defender Advanced Hunting (Security portal → Advanced hunting → Queries → Community Queries → AI Agent folder).
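Beyond the portal, advanced hunting queries can also be run programmatically through the Microsoft Graph advanced hunting API (`POST /security/runHuntingQuery`). A minimal sketch, assuming you already have a Graph access token with the `ThreatHunting.Read.All` permission; the KQL string below is a placeholder (the actual AI Agent community query text lives in the Defender portal):

```python
import json
from urllib import request

GRAPH_HUNT_URL = "https://graph.microsoft.com/v1.0/security/runHuntingQuery"

def build_hunting_request(kql: str, token: str) -> request.Request:
    """Build a POST request for the Microsoft Graph advanced hunting API."""
    body = json.dumps({"Query": kql}).encode("utf-8")
    return request.Request(
        GRAPH_HUNT_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Placeholder KQL; paste the real AI Agent community query text here.
req = build_hunting_request("CloudAppEvents | take 5", token="<ACCESS_TOKEN>")
# resp = request.urlopen(req)        # uncomment once a valid token is supplied
# print(json.load(resp)["results"])  # hunting results come back as rows
```

Running the same query on a schedule makes it easy to feed the results into the baselining step described under action items below.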

Key risks include:

  • Overly broad sharing (shared with the entire organization or large groups): expands the attack surface and increases unintended use.
  • No authentication required: turns the agent into a public/anonymous entry point that may expose internal data or logic.
  • High-risk HTTP Request actions: using non-HTTPS, non-standard ports, or directly calling endpoints that should be governed through connectors, thereby bypassing policy and identity protections.
  • Email-based exfiltration paths: the agent can send mail to attacker-controlled input or external mailboxes (especially dangerous in prompt injection scenarios).
  • Dormant agents, actions, or connections: "forgotten" published agents and stale connections create hidden, highly privileged access.
  • Maker authentication in production: breaks separation of duties and may run with elevated maker permissions.
  • Hardcoded credentials in topics/actions: direct risk of credential leakage.
  • Model Context Protocol (MCP) tools configured: may introduce undocumented access paths and unintended system interactions.
  • Generative orchestration without instructions: increases the likelihood of behavioral drift or prompt-driven execution of unsafe actions.
  • Orphaned agents (no active owner): weak governance, no one responsible for maintenance, and greater risk from outdated logic.
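To make the HTTP Request risk concrete, the same rule of thumb can be checked locally. A hypothetical helper, not part of Defender, that flags action targets using non-HTTPS schemes or non-standard ports, in the spirit of the detection described above:

```python
from urllib.parse import urlparse

STANDARD_PORTS = {None, 443}  # None means the default port for the scheme

def is_risky_http_action(url: str) -> bool:
    """Flag HTTP Request action targets that bypass HTTPS or standard ports."""
    parsed = urlparse(url)
    if parsed.scheme != "https":  # plain HTTP (or anything else) is risky
        return True
    return parsed.port not in STANDARD_PORTS  # e.g. https://host:8443

print(is_risky_http_action("http://internal.example/api"))    # True
print(is_risky_http_action("https://api.example.com/v1"))     # False
print(is_risky_http_action("https://api.example.com:8443/"))  # True
```

A governed connector makes this check unnecessary, which is exactly why the guidance prefers connectors over raw HTTP Request actions.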

Impact on IT admins and security teams

For admins responsible for Power Platform and Microsoft 365 security, the key takeaway is that agent security posture is now part of identity and data governance. Misconfigurations can create new access paths that traditional app inventories, existing Conditional Access assumptions, or connector policies may not fully cover, especially when makers are creating agents at a fast pace.

Action items / next steps

  1. Run the Community Queries (AI Agent folder) in Defender Advanced Hunting and baseline the findings for each environment.
  2. Prioritize fixes for: agents with no authentication, org-wide sharing, maker-auth agents, and anything with outbound email capability.
  3. Review HTTP Request usage and replace it with governed connectors where possible; enforce HTTPS and standard ports.
  4. Clean up dormant/orphaned assets: retire unused agents/actions and rotate or remove stale connections.
  5. Establish operational guardrails: require named owners, documented purpose, least-privilege connections, and mandatory instructions for generative orchestration.
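For step 1, baselining can be as simple as diffing each new query run against the previous one so that only new findings surface for triage. A minimal sketch, assuming each finding has been reduced to a hypothetical (agent, misconfiguration) pair exported from the hunting results:

```python
def new_findings(previous: set[tuple[str, str]],
                 current: set[tuple[str, str]]) -> set[tuple[str, str]]:
    """Return findings present in the current run but absent from the baseline."""
    return current - previous

# Illustrative agent names and findings, not real data.
baseline = {("HR-Helpdesk", "org-wide sharing"), ("IT-Bot", "maker auth")}
today = {("HR-Helpdesk", "org-wide sharing"),
         ("IT-Bot", "maker auth"),
         ("Sales-Agent", "no authentication")}

print(new_findings(baseline, today))  # {('Sales-Agent', 'no authentication')}
```

Persisting the baseline per environment keeps recurring known issues from drowning out genuinely new exposures.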


Tags: Copilot Studio, Microsoft Defender, Advanced Hunting, Power Platform, AI security
