
GPT-5.5 in Microsoft Foundry for Enterprise AI


Summary

Microsoft is making OpenAI GPT-5.5 generally available in Microsoft Foundry, giving Azure customers a new frontier model designed for long-context reasoning, agentic execution, and lower token usage. The update matters for enterprises because Foundry adds the security, governance, identity, and deployment controls needed to run production AI agents at scale.

Need help with Azure? Talk to an Expert

Introduction

Microsoft is bringing OpenAI's GPT-5.5 to Microsoft Foundry, expanding Azure's enterprise AI platform with a model built for more reliable reasoning, agent execution, and production-scale efficiency. For IT leaders, developers, and platform teams, the bigger story is not just model access—it is the ability to operationalize advanced AI with governance, identity, and security built in.

What's new with GPT-5.5 in Microsoft Foundry

GPT-5.5 becomes generally available in Microsoft Foundry, alongside a premium GPT-5.5 Pro option for more demanding workloads.

Key improvements highlighted by Microsoft include:

  • Deeper long-context reasoning for large documents, codebases, and multi-session work
  • More reliable agentic execution for multi-step tasks and professional workflows
  • Improved computer-use accuracy when interacting with software interfaces
  • Better token efficiency to reduce cost and latency at scale
  • Stronger support for coding, research, document creation, and analysis

Microsoft positions GPT-5.5 for scenarios where precision matters, including software engineering, DevOps, legal, health sciences, and professional services.
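As a concrete starting point, the sketch below shows one way to call a GPT-5.5 deployment through Azure's OpenAI-compatible chat completions API. The deployment name `gpt-5.5`, the environment variable names, and the `api_version` value are illustrative assumptions, not details from this announcement; substitute whatever your own Foundry resource exposes.

```python
def build_chat_request(deployment: str, prompt: str,
                       max_output_tokens: int = 1024) -> dict:
    """Assemble the request body for a chat completions call.

    In Azure, the `model` field carries the *deployment* name you chose
    when deploying the model (here the placeholder "gpt-5.5"), not a
    fixed model identifier.
    """
    return {
        "model": deployment,
        "messages": [
            {"role": "system", "content": "You are a code-review assistant."},
            {"role": "user", "content": prompt},
        ],
        "max_tokens": max_output_tokens,
    }


# Sending the request requires the `openai` package and live credentials;
# the endpoint, key variable, and api_version below are assumptions:
#
#   from openai import AzureOpenAI
#   import os
#   client = AzureOpenAI(
#       azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
#       api_key=os.environ["AZURE_OPENAI_API_KEY"],
#       api_version="2024-10-21",
#   )
#   request = build_chat_request("gpt-5.5", "Summarize this diff: ...")
#   response = client.chat.completions.create(**request)
#   print(response.choices[0].message.content)
```

Keeping the request builder separate from the client call makes it easy to unit-test prompts and parameters without touching the network.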

Why Foundry matters for enterprises

Microsoft Foundry is the platform layer that helps organizations move from AI experiments to governed production use. Rather than just exposing a model endpoint, Foundry provides:

  • Enterprise-grade security, compliance, and governance
  • Broad model choice and interoperable agent frameworks
  • Integration with enterprise systems and productivity tools
  • Easier evaluation, deployment, and scaling of new models

Microsoft also emphasized Foundry Agent Service as the operating environment for running agents at scale. Hosted agents can run in isolated sandboxes with:

  • A persistent filesystem
  • A distinct Microsoft Entra identity
  • Scale-to-zero pricing
  • Support for frameworks such as LangGraph, OpenAI Agents SDK, Claude Agent SDK, Microsoft Agent Framework, and GitHub Copilot SDK

Pricing at a glance

Microsoft published token pricing for both models:

  • GPT-5.5: $5/M input, $0.50/M cached input, $30/M output
  • GPT-5.5 Pro: $30/M input, $3/M cached input, $180/M output
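To make the comparison tangible, here is a small back-of-envelope cost estimator built on the published rates. Only the per-million-token rates come from the pricing above; the example token counts are made up.

```python
def request_cost(input_tokens: int, cached_tokens: int, output_tokens: int,
                 rates: tuple) -> float:
    """Estimate the USD cost of one call.

    `rates` is (input, cached input, output) in dollars per 1M tokens.
    `input_tokens` counts only the uncached portion of the prompt.
    """
    in_rate, cached_rate, out_rate = rates
    return (input_tokens * in_rate
            + cached_tokens * cached_rate
            + output_tokens * out_rate) / 1_000_000


# Published rates: ($/1M input, $/1M cached input, $/1M output)
GPT_55 = (5.00, 0.50, 30.00)
GPT_55_PRO = (30.00, 3.00, 180.00)

# Hypothetical call: 40k-token prompt (30k served from cache), 2k-token answer.
base = request_cost(10_000, 30_000, 2_000, GPT_55)      # → $0.125
pro = request_cost(10_000, 30_000, 2_000, GPT_55_PRO)   # → $0.75
```

At these rates a Pro call costs six times the standard call, so the cached-input discount matters most for long, repeated prompts.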

Impact on IT administrators and platform teams

For Azure and AI platform admins, this release means more choices for production AI workloads without giving up enterprise controls. Teams can standardize how agents are deployed, secured, and managed while supporting multiple frameworks and identities through Foundry.

Developers also benefit from a clearer path to production, especially for use cases involving coding assistants, research agents, and workflow automation.

Next steps

If your organization is already testing AI agents on Azure, now is the time to:

  • Evaluate GPT-5.5 for high-precision workflows
  • Review Foundry Agent Service for secure hosted agent deployment
  • Compare token costs against current model usage
  • Validate governance, identity, and compliance requirements before wider rollout

For enterprises moving beyond pilots, GPT-5.5 in Microsoft Foundry looks like a significant step toward scalable, governed agentic AI in production.



Tags: Azure, Microsoft Foundry, GPT-5.5, AI agents, enterprise AI
