Security

Copilot Studio Agent Security: Defender Reveals the Top 10 Misconfigurations

3 min read

Summary

Microsoft's Defender Security Research team has disclosed the top 10 common misconfigurations in Copilot Studio agents, including over-sharing, missing authentication, high-risk HTTP requests, hard-coded credentials, and orphaned or dormant agents, and has published detection queries that can be run directly in Microsoft Defender Advanced Hunting. This matters because these issues rarely trigger traditional alerts, yet they can turn AI agents into entry points for data exfiltration, privilege escalation, and governance bypass. Organizations should establish a baseline review as soon as possible and tighten their sharing, authentication, and connector usage policies.

Introduction: why this matters

Copilot Studio agents are rapidly becoming embedded in operational workflows: pulling data at scale, triggering actions, and interacting with internal systems. That same automation opens new attack paths when an agent is shared incorrectly, runs with excessive privileges, or bypasses standard governance controls. Microsoft's Defender Security Research team is observing these issues in real-world environments, and they often do not trigger obvious alerts, which makes proactive discovery and security posture management essential.

What's new: 10 common Copilot Studio agent risks (and how to detect them)

Microsoft has published a practical Top 10 list of agent misconfigurations and mapped each one to Community Queries in Microsoft Defender Advanced Hunting (Security portal → Advanced hunting → Queries → Community queries → AI Agent folder); a minimal illustrative hunting sketch follows the list below. The key risks include:

  1. Overly broad sharing (org-wide or to large groups): expands the attack surface and leads to unintended use.
  2. No authentication required: creates a public/anonymous entry point and can lead to data leakage.
  3. High-risk HTTP Request actions: calling connector endpoints, using non-HTTPS, or using non-standard ports can bypass connector governance and identity controls.
  4. Email-based exfiltration paths: agents that send mail to AI-controllable variables or external mailboxes can enable prompt-injection-driven data exfiltration.
  5. Dormant agents/actions/connections: stale components become hidden attack surface and retain residual permissions.
  6. Maker (author) authentication: weakens separation of duties and can lead to privilege escalation.
  7. Hard-coded credentials in topics/actions: raises the risk of credential leakage and reuse.
  8. Configured Model Context Protocol (MCP) tools: can introduce undocumented access paths and unintended system interactions.
  9. Generative orchestration without instructions: higher risk of behavioral drift and prompt abuse.
  10. Orphaned agents (no active owner): weak governance, with access drifting out of control over time.
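
The published Community queries are the authoritative detections for each of these items. Purely as an illustration of the general shape such a hunt takes, the minimal KQL sketch below surfaces recent Copilot Studio agent (bot) configuration activity from the cloud audit trail. CloudAppEvents is a real Advanced Hunting table, but the Application and ActionType filter values used here are assumptions and should be replaced with whatever the AI Agent Community queries actually use in your tenant.

    // Sketch only: recent agent (bot) configuration activity in the audit trail.
    // The Application name and the "Bot" ActionType pattern are assumptions --
    // validate them against the AI Agent Community queries before relying on this.
    CloudAppEvents
    | where Timestamp > ago(30d)
    | where Application == "Microsoft Power Platform"   // assumed application name
    | where ActionType has "Bot"                        // assumed event-name pattern for agents
    | extend Raw = parse_json(RawEventData)             // raw audit payload for deeper triage
    | project Timestamp, ActionType, AccountDisplayName, ObjectName, IPAddress, Raw
    | order by Timestamp desc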

Implications for IT admins and security teams

  • Visibility gap: these misconfigurations usually do not look malicious when created and may not trigger traditional alerts.
  • Identity and data exposure: unauthenticated access, maker credentials, and broad sharing can turn an agent into a low-friction pivot straight into organizational data (see the outbound-mail sketch after this list).
  • Governance bypass: direct HTTP actions can sidestep Power Platform connector protections (validation, throttling, identity enforcement).
  • Operational risk: orphaned or dormant agents retain business logic and access over time, with unclear ownership and intent.
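
On the data-exposure point, and anticipating action item 4 below, one rough way to watch agent-driven outbound mail is sketched here. EmailEvents is a real Advanced Hunting table and "Outbound" is a real EmailDirection value, but the sender address is a hypothetical placeholder for whatever mailbox or connection account your agents actually send from, and the daily volume roll-up is only one example of spotting unusual recipient activity.

    // Sketch only: volume of outbound mail from a mailbox used by an agent's send-email action.
    // Replace the hypothetical sender address with your agent's actual sending account.
    EmailEvents
    | where Timestamp > ago(7d)
    | where EmailDirection == "Outbound"
    | where SenderFromAddress == "copilot-agent@contoso.com"   // hypothetical agent mailbox
    | summarize Messages = count(), DistinctRecipients = dcount(RecipientEmailAddress) by SenderFromAddress, bin(Timestamp, 1d)
    | order by Timestamp desc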

Action items / next steps

  1. Run the AI Agent Community Queries now and baseline the results (start with org-wide sharing, unauthenticated agents, maker authentication, and hard-coded credentials).
  2. Tighten sharing and authentication: enforce least-privilege access and require authentication for all production agents.
  3. Review HTTP Request usage: prefer governed connectors; flag non-HTTPS and non-standard ports for immediate remediation.
  4. Govern outbound email scenarios: restrict external recipients, validate dynamic inputs, and monitor for prompt-injection-like patterns.
  5. Establish lifecycle governance: inventory agents, remove or reassign ownership of orphaned agents, and retire dormant connections/actions (see the inventory sketch below).
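
For the lifecycle step, the sketch below outlines one way to baseline per-agent activity and flag components with nothing recent, as a starting point for an orphan/dormancy review. As above, the table is real but the Application and ActionType values are assumptions, and the 90-day lookback and 60-day dormancy thresholds are arbitrary examples to tune.

    // Sketch only: baseline per-agent activity and flag dormancy candidates.
    // Filter values are assumptions; time thresholds are examples to adjust.
    CloudAppEvents
    | where Timestamp > ago(90d)
    | where Application == "Microsoft Power Platform"   // assumed application name
    | where ActionType has "Bot"                        // assumed event-name pattern for agents
    | summarize LastActivity = max(Timestamp), Events = count() by ObjectName, ObjectId
    | where LastActivity < ago(60d)                     // no activity in the last 60 days
    | order by LastActivity asc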

Treating agent configuration as part of your security posture, and hunting continuously for these patterns, reduces exposure before attackers can weaponize it.

Copilot Studio, Microsoft Defender, Advanced Hunting, AI security, Power Platform governance
