
Microsoft Foundry Tackles the AI Agent Tool Problem Nobody Talks About

AIOps, applications, Tabnine AIOps

Building AI agents sounds straightforward until you actually do it. You need an agent to onboard a new employee. It has to create an Entra ID account, provision GitHub access, spin up cloud resources, create tasks in Azure DevOps, and send a welcome message in Teams. Five tools. Five different authentication models. Five different teams managing those tools.

Now multiply that across every agent your organization is building.

That’s the problem Microsoft is addressing with Toolboxes in Foundry, now available in public preview.

What Toolboxes Actually Do

A Toolbox is a named, reusable bundle of tools managed in Microsoft Foundry. You define your tools once, configure authentication centrally, and expose everything through a single MCP-compatible endpoint. Any agent that can consume an MCP endpoint can use a Toolbox — regardless of the framework it was built on.

The endpoint looks like this:

https://zava.services.ai.azure.com/api/projects/<project>/toolbox/<toolbox-name>/mcp?api-version=v1

One endpoint. Every tool in the bundle. No per-tool wiring.
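Because a Toolbox is exposed over MCP, a client reaches it with ordinary JSON-RPC. As a rough sketch of the pattern (the host, project, and toolbox names below are placeholder examples, and the full handshake follows the MCP streamable HTTP transport, which this snippet does not perform), here is how a client could build the endpoint URL and the `tools/list` request that enumerates every tool in the bundle:

```python
import json

def toolbox_mcp_url(host: str, project: str, toolbox: str,
                    api_version: str = "v1") -> str:
    """Build the single MCP endpoint Foundry exposes for a toolbox."""
    return (f"https://{host}/api/projects/{project}"
            f"/toolbox/{toolbox}/mcp?api-version={api_version}")

def tools_list_request(request_id: int = 1) -> str:
    """MCP uses JSON-RPC 2.0; "tools/list" asks for every tool in the bundle."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/list",
        "params": {},
    })

# "hr-agents" and "onboarding" are illustrative names, not real resources.
url = toolbox_mcp_url("zava.services.ai.azure.com", "hr-agents", "onboarding")
print(url)
print(tools_list_request())
```

The point of the sketch is the shape of the contract: one URL per Toolbox, and standard MCP methods behind it, so the agent never needs per-tool wiring.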

Today, Toolboxes support built-in tools like Web Search, Code Interpreter, File Search, and Azure AI Search. They also support Model Context Protocol (MCP), Agent-to-Agent (A2A), and OpenAPI integrations. Authentication is handled centrally through OAuth identity passthrough and Microsoft Entra managed identity — agents don’t manage credentials themselves.

Why This Matters for DevOps Teams

The problem Toolboxes solve isn’t just developer friction. It’s an operational and governance problem.

When every team wires its own tools, credentials get duplicated. Governance is inconsistent. There’s little visibility into which tools exist across the organization, who’s using them, or whether they’re being called correctly. Security teams can’t enforce consistent policies when every agent has its own integration pattern.

Toolboxes shift that model. Tool owners define and publish once. Consuming teams connect once. And when a Toolbox is updated, agents using it don’t need to change their code.

Microsoft has organized the Toolbox lifecycle into four pillars: Discover, Build, Consume, and Govern. Build and Consume are available today. Discover (which helps teams find existing approved tools rather than rebuild them) and Govern (which provides centralized observability and controls across all tool calls) are coming soon.

According to Mitch Ashley, VP and practice lead for software lifecycle engineering at The Futurum Group, “Tool wiring has become the operational bottleneck for enterprise agent deployment, and the integration layer is now control plane territory. Foundry Toolboxes position Microsoft to govern that layer regardless of which runtime executes the agent.”

Ashley continues, “The strategic value rests on the Discover and Govern pillars, still on the roadmap. Without centralized registries and observability across tool calls, enterprises retain credential sprawl and policy gaps that already constrain agent autonomy. Procurement should weigh Toolboxes against the governance roadmap, because preview tooling alone leaves those gaps in place.”

Not Locked to Foundry

One concern teams often raise with Microsoft-native tooling is vendor lock-in. Here, the answer is fairly clear. Toolboxes are created and governed in Foundry, but any agent runtime that supports MCP can consume them. That includes agents built with Microsoft Agent Framework, LangGraph, GitHub Copilot, Claude Code, or custom code.

You can manage tools centrally in Foundry and still use them across your existing agent infrastructure.
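To make that concrete, here is a minimal sketch of pointing a non-Microsoft runtime at a Toolbox. It assumes the langchain-mcp-adapters package used with LangGraph, whose MultiServerMCPClient takes a mapping of server names to connection settings; the endpoint values and server name are placeholders, and the actual client call is left as a comment so the snippet stands alone:

```python
# Placeholder endpoint; in practice this is the URL Foundry generates.
TOOLBOX_ENDPOINT = (
    "https://zava.services.ai.azure.com/api/projects/"
    "my-project/toolbox/onboarding/mcp?api-version=v1"
)

# Connection settings in the shape MultiServerMCPClient expects:
# one named entry per MCP server the agent should draw tools from.
mcp_servers = {
    "onboarding_toolbox": {
        "url": TOOLBOX_ENDPOINT,
        "transport": "streamable_http",  # MCP's HTTP transport
    }
}

# With the package installed, loading the bundled tools would look roughly like:
#   from langchain_mcp_adapters.client import MultiServerMCPClient
#   client = MultiServerMCPClient(mcp_servers)
#   tools = await client.get_tools()
print(mcp_servers["onboarding_toolbox"]["url"])
```

Swapping in a different framework changes only this wiring block, not the Toolbox itself.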

Getting Started

Getting a Toolbox up and running takes a few steps. You create an AIProjectClient, define your toolbox with the tools and authentication it needs, grab the unified MCP endpoint that Foundry generates, and attach it to your agent. Versioning is built in — you can promote a tested version to production, share the endpoint broadly, and continue iterating without breaking downstream agents.

Microsoft has published sample code for Microsoft Agent Framework, LangGraph, and Copilot SDK, along with an AZD CLI quickstart and support in the Foundry Toolkit for VS Code.

The Bigger Picture

Toolboxes reflect something the industry is starting to accept: the AI agent problem isn’t just about models. The integration and infrastructure layer matters just as much. Right now, tool wiring is often the bottleneck — not model capability.

If Toolboxes deliver on the governance and discoverability features still on the roadmap, this could be a meaningful step toward making enterprise agent development more reliable and repeatable. The public preview is a good place to test that assumption.

Documentation is available at Microsoft Learn, and the Foundry Portal is open for exploration at ai.azure.com.



from DevOps.com https://ift.tt/J3WOsYf
