
Red Hat Previews AI Agent Integration with Ansible Automation Platform

Red Hat today revealed it is extending the reach of its Ansible Automation Platform for IT operations to artificial intelligence (AI) agents, in addition to making it simpler to build AI agents using existing application development tools.

Announced at the Red Hat Summit conference, version 2.7 of the Ansible Automation Platform adds a technology preview of an orchestration engine for AI agents that are able to invoke capabilities via an integrated Model Context Protocol (MCP) server.

Sathish Balakrishnan, vice president and general manager for Ansible at Red Hat, said these capabilities provide AI agents with a trusted execution layer through which they can automate IT operations. The overall goal is to make new and existing libraries of automation playbooks available to AI agents in a way that can be governed using a set of policies enforced via the Red Hat Ansible Automation Platform, he added.

As part of that effort, the Red Hat Ansible Automation Platform can now serve as an OpenID Connect (OIDC) authentication provider for HashiCorp Vault, which, like Red Hat, is now a subsidiary of IBM. That capability makes it possible to issue short-lived, job-specific tokens for event-driven tasks to reduce potential risks.
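As a rough illustration of that pattern, Vault can be configured to trust an external OIDC identity provider and mint short-lived, narrowly scoped tokens for each job. The sketch below uses Vault's standard JWT/OIDC auth method; the discovery URL, audience, role name and policy name are placeholders, not documented Ansible Automation Platform values.

```shell
# Enable Vault's JWT/OIDC auth method (standard Vault CLI).
vault auth enable jwt

# Point Vault at the OIDC issuer. The URL here is a placeholder
# standing in for whatever endpoint the automation platform exposes.
vault write auth/jwt/config \
    oidc_discovery_url="https://aap.example.com" \
    bound_issuer="https://aap.example.com"

# Define a role that issues short-lived, job-scoped tokens:
# 5-minute TTL and a tightly scoped policy, so a token leaked by
# an event-driven job expires quickly and can do very little.
vault write auth/jwt/role/aap-job \
    role_type="jwt" \
    bound_audiences="vault" \
    user_claim="sub" \
    token_ttl="5m" \
    token_max_ttl="10m" \
    token_policies="aap-job-secrets"
```

The short TTL is the point of the design: each automation job authenticates, pulls the secret it needs, and its credential lapses on its own rather than lingering as a standing risk.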

At the same time, Red Hat announced that a version of its Red Hat Desktop tool for building applications now includes an instance of Red Hat's Podman tool for building and deploying containers, which can be used to isolate AI agents in a sandbox environment.
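Sandboxing an agent in a container typically means stripping away everything the workload does not need. A minimal sketch using standard `podman run` flags might look like the following; the image name and command are hypothetical, but the flags are stock Podman options.

```shell
# Run a hypothetical agent image with a locked-down profile:
# no network, a read-only root filesystem, no Linux capabilities,
# no privilege escalation, and hard resource ceilings.
podman run --rm \
    --network=none \
    --read-only \
    --cap-drop=ALL \
    --security-opt=no-new-privileges \
    --pids-limit=256 \
    --memory=512m \
    agent-sandbox:latest /usr/bin/agent-task
```

In practice an agent usually needs some egress, so teams tend to relax `--network=none` toward an allow-listed network rather than dropping the restriction entirely; the rest of the profile can stay intact.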

Additionally, Red Hat has enhanced Red Hat Advanced Developer Suite to include a set of Red Hat Trusted Libraries and a set of AI tools to determine if known vulnerabilities in generated code are relevant to a specific application runtime.

Red Hat also announced the general availability of Red Hat Hardened Images, which provide application developers with a set of more secure containers for building cloud-native applications. In addition, a Technical Supportability Review capability for Red Hat with AI transforms manual environment audits into an automated, self-service function that can validate more than 600 touchpoints.

Finally, Red Hat also announced it is making available a Red Hat Enterprise Linux Long-Life Add-On option for organizations that need support for longer than three years.

It’s not clear to what degree DevOps teams are adding AI agents to workflows, but the question is now more one of how much trust they will place in them rather than whether they will adopt them at all. The proof of the proverbial pudding will be the degree to which AI agents enable software engineers to reliably automate workflows at scale across increasingly complex application environments with large numbers of interdependencies.

Ultimately, a small army of AI agents that are supervised by software engineers should enable organizations to deploy more applications than ever at higher levels of scale. The challenge, of course, will be ensuring that the AI agents confine their activities to the narrow set of tasks they have been assigned versus, for example, deleting a production database for one unexplained reason or another.



from DevOps.com https://ift.tt/gQDO5He
