
Microsoft Cyber Pulse: AI Agent Governance and Zero Trust Security Acceleration

3 minute read

Summary

Microsoft's latest Cyber Pulse report finds that AI agents have spread rapidly across large enterprises, while many organizations' ability to inventory, govern, and protect them lags well behind: 29% of employees have used unsanctioned AI agents at work, introducing data exposure, privilege misuse, and compliance risks. The report urges extending Zero Trust principles to these "non-human identities" and prioritizing five capabilities (a unified registry, access control, visualization, interoperability, and security), which is essential for enterprises that want to accelerate AI adoption while remaining auditable and secure.


Introduction: Why This Matters Now

AI agents are no longer experimental: they are embedded in everyday workflows across sales, finance, security operations, and customer service. Microsoft's latest Cyber Pulse report identifies a critical gap: many organizations are adopting agents faster than they can inventory, govern, and protect them. For IT and security teams, the immediate challenge is visibility, because you cannot protect (or audit) what you cannot see.

What's New in the Report / Key Takeaways

AI agents have gone mainstream, and not just among developers

  • 80%+ of Fortune 500 organizations are running active AI agents, often built with low-code/no-code tools.
  • Adoption spans multiple industries (especially software/technology, manufacturing, financial services, and retail) and every global region.
  • Agents increasingly run in autonomous modes, taking action with little or no human intervention, which changes their risk profile compared with traditional applications.

An emerging blind spot: "shadow AI"

Microsoft notes that many leaders cannot answer basic questions:

  • How many agents actually exist across the enterprise?
  • Who owns them?
  • What data and systems can they access?
  • Which are sanctioned vs. unsanctioned?

This is not a theoretical concern. The report notes that 29% of employees have used unsanctioned AI agents for work tasks, opening new paths to data exposure, policy violations, and abuse of inherited permissions.

Zero Trust principles now need to be applied at scale to non-human users

The report stresses applying mature Zero Trust principles consistently to agents:

  • Least privilege access (agents get only the permissions they need)
  • Explicit verification (verify identity and context for every access request)
  • Assume compromise (design as if a breach has occurred, and enable rapid containment)
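To make the first two principles concrete, here is a minimal sketch of an authorization check for an agent identity. All names (`AgentIdentity`, `authorize`, the scope strings) are illustrative assumptions, not a Microsoft API; real platforms such as Microsoft Entra expose their own policy engines.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    """A non-human identity, modeled like a service account (hypothetical schema)."""
    agent_id: str
    owner: str                   # accountable human or team
    granted_scopes: frozenset    # least privilege: only explicitly approved scopes

def authorize(agent: AgentIdentity, requested_scope: str, token_valid: bool) -> bool:
    """Apply Zero Trust to a single agent request."""
    # Explicit verification: never act on a request without a verified identity token.
    if not token_valid:
        return False
    # Least privilege: the scope must be in the agent's approved grant;
    # nothing is inherited from the user who created the agent.
    return requested_scope in agent.granted_scopes
```

The "assume compromise" principle would add audit logging and revocation around every decision, so a misbehaving agent can be contained quickly.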

Observability First: Five Essential Capabilities

Microsoft outlines five core capabilities needed to build real observability and governance for AI agents:

  1. Registry: a centralized inventory / single source of truth for all agents (including third-party and shadow agents)
  2. Access control: identity- and policy-driven controls that enforce least privilege consistently
  3. Visualization: dashboards and telemetry to understand behavior, dependencies, and risk
  4. Interoperability: consistent governance across Microsoft, open-source, and third-party ecosystems
  5. Security: protections that detect misuse, drift, and compromise early
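The first capability, a registry, is essentially a structured inventory. The sketch below shows one possible record schema and a query that surfaces shadow agents; the field names are assumptions for illustration, not a product schema.

```python
from dataclasses import dataclass

@dataclass
class AgentRecord:
    """One entry in the agent inventory (hypothetical fields)."""
    agent_id: str
    owner: str           # accountable human or team
    platform: str        # e.g. "low-code", "third-party"
    data_scopes: tuple   # data/systems the agent can touch
    sanctioned: bool     # approved vs. shadow

class AgentRegistry:
    """Single source of truth for every agent, sanctioned or not."""

    def __init__(self):
        self._records = {}

    def register(self, record: AgentRecord) -> None:
        self._records[record.agent_id] = record

    def shadow_agents(self) -> list:
        """Unsanctioned agents discovered in the environment."""
        return [r for r in self._records.values() if not r.sanctioned]
```

In practice the registry would be populated both from sanctioned build platforms and from discovery tooling that finds unsanctioned use.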

Implications for IT Admins and End Users

  • Identity becomes the control plane for agents: treat an agent like an employee or service account, with governed access and accountability.
  • Compliance and audit pressure will rise, especially in regulated industries (finance, healthcare, public sector).
  • End users will keep adopting tools if no sanctioned option exists, so enablement and guardrails are both essential.

Action Items / Next Steps

  • Establish an agent inventory/registry practice now (start with sanctioned platforms, then extend to discovery of unsanctioned use).
  • Define clear ownership and lifecycle for agents (creation, approval, change control, retirement); governance is not the same as security.
  • Enforce least privilege for agent identities (review access paths, secrets, connectors, and data scopes).
  • Implement monitoring and telemetry to detect anomalous behavior and access drift.
  • Build a cross-functional team (IT, security, legal, compliance, HR, business owners) to manage AI risk as enterprise risk.
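The monitoring step above can start very simply: compare the scopes an agent was granted against the scopes its telemetry shows it actually using. This sketch (names are illustrative assumptions) flags both unused grants, which are candidates for privilege reduction, and unapproved use, which warrants investigation.

```python
def access_drift(granted: set, observed_calls: list) -> dict:
    """Compare granted scopes with scopes seen in telemetry (minimal sketch)."""
    used = set(observed_calls)
    return {
        "unused_grants": granted - used,    # shrink these to tighten least privilege
        "unapproved_use": used - granted,   # anomalies: possible drift or compromise
    }
```

Running this regularly against real telemetry turns least privilege from a one-time grant into an ongoing control.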

