The Frontier Model Arrives
OpenAI’s GPT-5.5 is now generally available in Microsoft Foundry, and this isn’t just another incremental model release. GPT-5.5 represents a clear evolution from “smart chatbot” to “production agent”—built specifically for sustained, multi-step professional work where the cost of imprecision is high.
The improvements matter for anyone building agentic systems. Better computer-use accuracy means fewer hallucinated UI actions. Deeper long-context reasoning means agents can hold architectural intent across large codebases. Token efficiency means lower costs at scale. GPT-5.5 Pro extends this further for the most complex enterprise workflows, though at a premium: $30/M input tokens and $180/M output tokens versus GPT-5.5’s $5 and $30 respectively.
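To make the pricing gap concrete, here's a back-of-envelope cost comparison using the per-million-token rates above. The request sizes are illustrative, not a benchmark:

```python
# Back-of-envelope cost comparison using the per-million-token
# rates quoted above; token counts are illustrative.
RATES = {                      # (input $/M tokens, output $/M tokens)
    "gpt-5.5":     (5.0, 30.0),
    "gpt-5.5-pro": (30.0, 180.0),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request for the given model."""
    in_rate, out_rate = RATES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# A long-context agent turn: 200K tokens in, 10K tokens out.
base = request_cost("gpt-5.5", 200_000, 10_000)      # 1.0 + 0.3 = $1.30
pro  = request_cost("gpt-5.5-pro", 200_000, 10_000)  # 6.0 + 1.8 = $7.80
print(f"GPT-5.5: ${base:.2f}  GPT-5.5 Pro: ${pro:.2f}  ({pro / base:.0f}x)")
```

At these rates the Pro premium is a flat 6x per token, so the decision is really about whether a workflow's failure cost justifies the multiplier.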
But here’s what actually matters: Microsoft Foundry is positioning itself as the operating system for agents at scale. The blog post makes this explicit—you can define agents in YAML, LangGraph, Claude Agent SDK, OpenAI Agents SDK, GitHub Copilot SDK, or Microsoft Agent Framework, and Foundry Agent Service runs them all with isolated sandboxes, persistent filesystems, distinct Entra identities, and scale-to-zero pricing. That’s the infrastructure play that makes frontier models actually operationalizable.
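As a sketch of what a declarative agent definition might look like, here's a hypothetical YAML fragment. The field names are my invention for illustration, not the actual Foundry Agent Service schema:

```yaml
# Hypothetical sketch only -- not the actual Foundry schema.
name: invoice-triage-agent
model: gpt-5.5
instructions: >
  Classify incoming invoices and route exceptions to a human reviewer.
tools:
  - type: mcp
    server: https://mcp.example.com/finance   # hypothetical endpoint
identity:
  entra: dedicated          # each agent gets its own Entra identity
runtime:
  sandbox: isolated
  filesystem: persistent
  scaling: scale-to-zero
```

The point is the shape, not the syntax: identity, isolation, and scaling become declared properties of the agent rather than infrastructure you wire up yourself.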
MCP Arrives in Copilot Studio
Speaking of infrastructure plays, Copilot Studio is adding Model Context Protocol (MCP) support in public preview this month (May 2026), with general availability planned for October.
If you’ve been following the agent ecosystem, you know MCP is Anthropic’s open protocol for connecting AI systems to data sources and tools. It’s becoming the standard way to extend agents without building bespoke connectors for every integration. With MCP in Copilot Studio, you can point agent workflows at any MCP-compliant server—proprietary systems, dynamic knowledge sources, custom actions—and they’ll discover and invoke tools with structured inputs and outputs.
This is a smart move. Instead of Microsoft building walled-garden integrations, they’re embracing an emerging standard that already has ecosystem momentum. The same MCP server works across multiple agents and workflows, reducing duplication and accelerating extensibility while keeping workflow governance intact.
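Under the hood, MCP is JSON-RPC 2.0: a client discovers a server's tools with `tools/list` and invokes one with `tools/call`. A minimal sketch of the message shapes, where the tool name and arguments are hypothetical examples:

```python
import json

# MCP is JSON-RPC 2.0. A client first discovers what a server offers...
list_tools = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# ...then invokes a discovered tool with structured arguments.
# "lookup_order" and its arguments are hypothetical examples.
call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "lookup_order",
        "arguments": {"order_id": "A-1042"},
    },
}

print(json.dumps(call_tool, indent=2))
```

Because every MCP server speaks these same two methods, an agent platform like Copilot Studio can discover and invoke tools generically instead of shipping a bespoke connector per integration.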
For context, I wrote about how the GitHub Copilot SDK enables agents for every app—MCP support in Copilot Studio follows the same pattern of making agent capabilities composable and reusable across platforms.
Databricks Goes Agent-First
Azure Databricks shipped several updates this month that signal where data engineering workflows are headed. The Lakeflow Pipelines Editor is now GA, and it’s explicitly built as an “agent-first experience” with Genie Code integrated directly into the pipeline development flow.
You write ETL pipelines with AI assistance, side by side with the pipeline graph and metrics. The GitHub connector for Lakeflow Connect also hit beta, meaning you can now ingest GitHub data directly into Databricks. This matters if you’re building data pipelines that pull from code repositories—issue tracking, PR metadata, code metrics, contributor activity.
Databricks Runtime 18.2 went GA as well, and they added native data profiling for notebook result tables—small quality-of-life improvements that reduce context switching.
Storage Gets Smarter
Azure Blob and Data Lake Storage’s smart tier is now generally available. This is a fully managed auto-tiering capability that continuously optimizes data placement based on access patterns without operational overhead.
Since the public preview at Ignite 2025, over 50% of smart-tier-managed capacity has automatically moved to cooler (cheaper) tiers. You pay standard hot, cool, and cold capacity rates with no additional charges for tier transitions, early deletion, or retrieval. The only extra cost is a monitoring fee for orchestration.
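To see why that 50% migration matters, here's a rough savings estimate. The per-GB-month rates below are illustrative placeholders, not official Azure pricing; check the Azure Storage pricing page for real numbers:

```python
# Illustrative per-GB-month capacity rates -- NOT official Azure pricing.
HOT, COOL = 0.018, 0.010

def monthly_cost(total_gb: float, cool_fraction: float) -> float:
    """Capacity cost with cool_fraction of data auto-tiered to cool."""
    hot_gb = total_gb * (1 - cool_fraction)
    cool_gb = total_gb * cool_fraction
    return hot_gb * HOT + cool_gb * COOL

before = monthly_cost(500_000, 0.0)   # 500 TB, everything hot
after  = monthly_cost(500_000, 0.5)   # 50% auto-moved to cool
print(f"${before:,.0f} -> ${after:,.0f}/month "
      f"({(1 - after / before):.0%} saved, before the monitoring fee)")
```

Even at these placeholder rates, halving the hot footprint cuts the capacity bill by roughly a fifth, and since there are no transition, early-deletion, or retrieval charges, the monitoring fee is the only offset.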
If you’re managing large data estates and still manually lifecycle-managing blobs, smart tier is a no-brainer. Set it and forget it.
Reserved VM Instances: Act Before July 1
One pricing change to watch: Microsoft is discontinuing new purchases and renewals of Reserved VM Instances for select VM series starting July 1, 2026.
One-year RIs are ending for Av2, Amv2, Bv1, D, Ds, Dv2, Dsv2, F, Fs, Fsv2, G, Gs, Ls, and Lsv2. Both one-year and three-year RIs are ending for Dv3, Dsv3, Ev3, and Esv3. If you have workloads on these series and don’t take action before July 1, you’ll be billed at pay-as-you-go rates once your RI expires—even if auto-renew is enabled.
Existing RIs will honor their full term, but new purchases are done. This is Microsoft nudging customers toward newer VM families and potentially Azure Savings Plans, which offer more flexibility across compute services.
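If you want to triage your fleet, the affected series from the announcement boil down to two sets. A quick inventory-check sketch (the example series names at the bottom are just for illustration):

```python
# Series lists from the announcement above, normalized to lowercase.
ONE_YEAR_ONLY = {
    "av2", "amv2", "bv1", "d", "ds", "dv2", "dsv2", "f", "fs",
    "fsv2", "g", "gs", "ls", "lsv2",
}
BOTH_TERMS = {"dv3", "dsv3", "ev3", "esv3"}

def ri_status(series: str) -> str:
    """What happens to new RI purchases for a VM series after July 1, 2026."""
    s = series.lower()
    if s in BOTH_TERMS:
        return "1-year and 3-year RIs discontinued"
    if s in ONE_YEAR_ONLY:
        return "1-year RIs discontinued"
    return "unaffected"

for series in ["Dsv3", "Fsv2", "Dsv5"]:
    print(f"{series}: {ri_status(series)}")
```

Run this against your actual VM inventory export and anything that doesn't come back "unaffected" needs a renewal decision before July 1.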
The Bottom Line
This week’s Azure updates reveal a consistent direction: agent infrastructure is becoming a first-class platform concern. GPT-5.5 in Foundry with hosted agent services, MCP support in Copilot Studio, and agent-first tooling in Databricks all point to the same shift—AI capabilities are moving from experimental notebooks to production systems with real governance, identity, and scale requirements.
The storage and pricing changes are table stakes, but the agent story is where Azure is making its biggest bet. If you’re building agentic systems, the infrastructure gap between “demo” and “production” just got a lot smaller. Foundry Agent Service, MCP extensibility, and isolated agent execution with Entra identities are the primitives that actually matter when you’re running thousands of agents, not dozens.
The race is on to see which cloud provider builds the best operating system for agentic AI. This week, Azure shipped real infrastructure.