
SUSE Extends AI Agent Reach via MCP Server Integration

SUSE today revealed it is collaborating with multiple providers of artificial intelligence (AI) agents that can manage IT infrastructure resources via integrations with the Model Context Protocol (MCP) server embedded in its platforms.

Announced at the SUSECON 2026 conference, AI agents from Fsas Technologies, n8n, Revenium, Stacklock and Amazon Web Services (AWS) can invoke the MCP server that SUSE has embedded in its Rancher Prime and SUSE Multi-Linux Manager offerings.

Rick Spencer, general manager for engineering at SUSE, said that capability makes it possible, for example, for the Amazon Quick AI agent developed by AWS to automate workflows for managing IT infrastructure resources such as Linux servers and Kubernetes clusters.

Ultimately, any AI agent that can access the SUSE MCP server should be able to, for example, identify system faults in Kubernetes clusters or Linux servers, correlate system logs, and submit a pull request (PR) or a patch to restart a service or apply updates.
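At the protocol level, those invocations take the form of JSON-RPC 2.0 messages defined by the MCP specification. The sketch below shows the shape of a `tools/call` request an agent might send; the tool name `restart_service` and its arguments are hypothetical stand-ins, since the specific tools exposed by SUSE's MCP server are not detailed in the announcement.

```python
import json

# Hypothetical tool name and arguments -- the actual tools exposed by
# the SUSE MCP server are not documented here. The envelope itself
# (jsonrpc/id/method/params) follows the MCP specification's JSON-RPC
# 2.0 framing for a "tools/call" request.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "restart_service",
        "arguments": {"host": "web-01", "service": "nginx"},
    },
}

# An MCP client serializes the request and sends it over the active
# transport (stdio or HTTP); the server's response carries the same "id".
print(json.dumps(request, indent=2))
```

Because every tool call shares this envelope, any MCP-capable agent can discover and invoke the server's tools without SUSE-specific client code.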

The goal is to provide a secure way for AI agents to monitor, troubleshoot and optimize infrastructure across any distribution of Linux or Kubernetes using SUSE Multi-Linux Manager or the Rancher Prime management platform, said Spencer.

The degree to which organizations will enable AI agents to manage IT infrastructure will naturally vary by use case, but it's clear that with the advent of MCP, not every platform is going to require its own dedicated AI agent. In some instances, an AI agent developed by a third party will simply invoke MCP to automate a task. In other instances, an AI agent might communicate with another AI agent that has been specifically trained to perform IT infrastructure tasks.

Regardless of approach, the days when IT teams relied on graphical tools to manage IT environments are coming to an end, said Spencer. Instead, IT teams will use natural language interfaces to instruct AI agents to perform specific tasks, he added.

In the meantime, MCP itself remains a work in progress. The next iteration of MCP will enable IT teams to deploy stateless servers that will make it easier to deploy AI tools and applications at greater scale. The technical oversight committee for MCP, now being advanced under the auspices of the Agentic AI Foundation (AAIF), is also working on a task capability to support long-running autonomous workflows, rather than continuing to rely on a request-and-response mechanism that would otherwise need to be re-invoked multiple times.
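The practical difference is that a client starts a workflow once and polls a single task, instead of re-issuing the full request each time. The sketch below illustrates that pattern with a stand-in server object; the method names and task fields are placeholders, not the final MCP task API.

```python
# Hypothetical illustration of the task pattern described above: one
# call starts the workflow, and the client then polls its status.
# Method names and fields are placeholders, not the final MCP task API.

class FakeTaskServer:
    """Stands in for an MCP server with a long-running task capability."""

    def __init__(self, steps):
        self.steps = steps      # polls needed before the work finishes
        self.progress = 0       # server-side state survives between polls

    def create_task(self, name):
        # A single call starts the workflow; the server keeps the state,
        # so the client never re-sends the original request.
        return {"task_id": "t-1", "status": "running"}

    def get_task(self, task_id):
        # Each poll is a cheap status check, not a re-invocation.
        self.progress += 1
        status = "completed" if self.progress >= self.steps else "running"
        return {"task_id": task_id, "status": status}

server = FakeTaskServer(steps=3)
task = server.create_task("apply-kernel-updates")
state = server.get_task(task["task_id"])
while state["status"] != "completed":
    state = server.get_task(task["task_id"])
print(state["status"])  # -> completed
```

The same contrast applies to the stateless-server work: if the server holds no per-connection state, any replica can answer a poll, which is what makes larger-scale deployments simpler.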

Additionally, maintainers of the MCP project are working on adding a triggers capability that will enable servers to initiate an action, rather than always having to rely on an MCP client. Support for retry semantics, expiration policies, native streaming and reusable skills that are based on domain knowledge are additional capabilities that are expected to be added in 2026.

Finally, updates to the Python and TypeScript software development kits (SDKs) will provide access to more efficient MCP clients and servers.

Meanwhile, DevOps teams should expect the number of AI agents that can manage IT infrastructure to proliferate rapidly in the months ahead.



from DevOps.com https://ift.tt/kT0NYo9
