Security

Microsoft Security Strategy at RSAC 2026: Agentic AI and Ambient Defense

3 min read

Summary

At RSAC 2026, Microsoft will center its security strategy on agentic AI and ambient, autonomous defense, showing how its AI-first security platform, informed by more than 100 trillion security signals per day, delivers observability, governance, and protection across identity, endpoints, data, cloud, and SecOps. This matters because, as autonomous agents and AI-driven attacks scale in parallel, enterprise security teams must shift toward more automated, governable, end-to-end-visible defenses to reduce the risks of agent misuse, runaway permissions, and operational complexity.


Introduction: why this matters

AI is rapidly changing how work gets done, and it is also scaling up attacks. Microsoft frames this shift as the rise of the "Frontier Firm": organizations led by people and run by agents, where security must be as autonomous and always-on as the AI systems it protects. At RSAC™ 2026, Microsoft will show how its AI-first security platform provides deep observability, governance, and protection at every layer of the AI stack.

What’s new at RSAC 2026 (Microsoft highlights)

Microsoft Pre‑Day (Sunday, March 22)

Microsoft Pre‑Day will take place at the Palace Hotel and is positioned as the "starting line" for the week. Microsoft plans to share how it is advancing agentic defense, built on insight from more than 100 trillion security signals per day, and to show how products such as Agent 365 provide observability across layers (identity, endpoints, data, cloud, and SecOps). Sessions will combine strategic perspective with practical guidance, focusing on cyber resilience and the transformation of security operations.

Keynote: Ambient and Autonomous Security

  • **Keynote by Vasu Jakkal (CVP, Microsoft Security)**: Ambient and Autonomous Security: Building Trust in the Agentic AI Era (Monday, March 23)
  • Focus: how security platforms are evolving to counter AI-driven threats through autonomous operations and pervasive observability.
  • Security, Governance, and Control for Agentic AI (Monday, March 23): principles for keeping autonomous agents governable and secure, avoiding sprawl, misuse, and unintended behavior.
  • Advancing Cyber Defense in the Era of AI Driven Threats (Tuesday, March 24): how AI raises threat sophistication, and how resilient, intelligence-driven defenses should be built.

Booth experience (Microsoft Booth #5744)

Microsoft's booth at the Moscone Center will feature theater sessions and interactive demos aimed at modern security operations, connecting identity, data, cloud, and endpoint protection to governance and threat response in the AI era.

Impact on IT administrators and security teams

  • Security architecture: expect a stronger emphasis on end-to-end control of AI systems, with visibility, governance, and policy enforcement across AI apps, agents, and data flows.
  • SecOps workflows: agent-assisted, autonomous triage and response scenarios will shape how teams design playbooks, escalation paths, and operational metrics.
  • Identity and access: as agents act on behalf of users and services, Conditional Access tuning and identity-centric security remain critical.
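The identity-centric pattern above can be sketched in miniature: every agent-initiated action carries the identity it acts on behalf of, passes a policy gate before it runs, and leaves an audit trail either way. This is a minimal illustrative sketch, not a Microsoft API; all names (`AgentContext`, `PolicyGate`, the example identities) are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical model: an agent acts on behalf of a user, and every
# requested action is evaluated against an identity-scoped allow-list
# before execution. All decisions are recorded for auditability.

@dataclass
class AgentContext:
    agent_id: str
    on_behalf_of: str          # the human or service identity the agent represents
    requested_action: str
    target_resource: str

@dataclass
class PolicyGate:
    # Illustrative allow-list: (action, resource-prefix) pairs per delegating identity.
    allowed: dict = field(default_factory=dict)
    audit_log: list = field(default_factory=list)

    def evaluate(self, ctx: AgentContext) -> bool:
        rules = self.allowed.get(ctx.on_behalf_of, [])
        decision = any(
            ctx.requested_action == action and ctx.target_resource.startswith(prefix)
            for action, prefix in rules
        )
        # Auditability: log every decision, allowed or denied.
        self.audit_log.append((ctx.agent_id, ctx.on_behalf_of,
                               ctx.requested_action, ctx.target_resource, decision))
        return decision

gate = PolicyGate(allowed={"alice@contoso.com": [("read", "sharepoint://finance/")]})
ok = gate.evaluate(AgentContext("agent-42", "alice@contoso.com",
                                "read", "sharepoint://finance/q1.xlsx"))
denied = gate.evaluate(AgentContext("agent-42", "alice@contoso.com",
                                    "delete", "sharepoint://finance/q1.xlsx"))
```

The design choice worth noting is that denials are logged as eagerly as approvals: in agent-driven workflows, the pattern of refused actions is often the earliest signal of misuse or misconfiguration.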

Action items / next steps

  1. Identify where agents are in use (or scope a pilot) and define governance requirements: allowed actions, data boundaries, and auditability.
  2. Inventory observability gaps: find weaknesses across identity, endpoints, cloud, and data that could hinder detection and response in agent-driven workflows.
  3. Plan your RSAC coverage: prioritize the keynote and governance-focused sessions, and schedule booth time for hands-on demos relevant to your environment.
  4. Align stakeholders (SecOps, IAM, compliance, and application teams) on a shared model of AI risk ownership and operational readiness.
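Steps 1 and 2 above amount to a gap analysis over an agent inventory. A minimal sketch, assuming a hand-maintained inventory (all field names, agent names, and required sets here are illustrative, not tied to any Microsoft tooling):

```python
# Hypothetical inventory check: flag agents whose governance requirements
# (allowed actions, data boundaries, auditing) or telemetry coverage
# (identity, endpoint, cloud, data signals) are not yet defined.

REQUIRED_GOVERNANCE = {"allowed_actions", "data_boundaries", "audit_enabled"}
REQUIRED_TELEMETRY = {"identity", "endpoint", "cloud", "data"}

agents = [
    {"name": "hr-copilot",
     "governance": {"allowed_actions": ["read"],
                    "data_boundaries": ["hr/*"],
                    "audit_enabled": True},
     "telemetry": {"identity", "endpoint", "cloud", "data"}},
    {"name": "finance-triage",
     "governance": {"allowed_actions": ["read", "summarize"]},  # boundaries/audit undefined
     "telemetry": {"identity", "cloud"}},                       # endpoint/data signals missing
]

def governance_gaps(agent: dict) -> list:
    """Governance requirements the agent has not yet defined."""
    return sorted(REQUIRED_GOVERNANCE - set(agent["governance"]))

def telemetry_gaps(agent: dict) -> list:
    """Signal layers where the agent has no observability coverage."""
    return sorted(REQUIRED_TELEMETRY - agent["telemetry"])

report = {a["name"]: {"governance": governance_gaps(a),
                      "telemetry": telemetry_gaps(a)}
          for a in agents}
```

Even a spreadsheet-grade inventory like this gives the stakeholder alignment in step 4 something concrete to assign ownership against before any platform tooling is in place.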

