
Incredibuild Unveils Islo Sandbox to Isolate AI Coding Agents

Incredibuild this week unveiled Islo, a sandbox that makes it possible to safely run artificial intelligence (AI) coding agents.

Company CEO Shimon Hason said Islo provides an isolated execution environment that enables DevOps teams to limit access to sensitive data, codebases, resources and services. Each AI coding agent is then provided with its own dedicated, isolated environment that operates independently and can be centrally managed by a DevOps team.
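To make the idea concrete, here is a minimal sketch of what a per-agent isolated execution environment amounts to in practice. This is an illustration of the general pattern, not Islo's actual API (which Incredibuild has not published here): each agent task runs in a throwaway working directory with an environment stripped down to an explicit allow-list, so sensitive data on the host is never inherited.

```python
import os
import subprocess
import sys
import tempfile

def run_agent_task(command, allowed_env=("PATH", "LANG")):
    """Run an agent's command in a throwaway working directory with an
    environment restricted to an explicit allow-list of variables."""
    env = {k: v for k, v in os.environ.items() if k in allowed_env}
    with tempfile.TemporaryDirectory() as workdir:
        result = subprocess.run(
            command,
            cwd=workdir,         # the agent never sees the host codebase
            env=env,             # secrets such as API keys are not inherited
            capture_output=True,
            text=True,
            timeout=60,          # bound long-running tasks
        )
    return result.returncode, result.stdout
```

A production sandbox adds far stronger isolation (namespaces, network policy, filesystem controls), but the allow-list-by-default posture shown here is the core design choice.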

DevOps teams can deploy Islo independently of the Incredibuild platform or, alternatively, use the Incredibuild software development lifecycle (SDLC) management platform to apply policy controls, manage agent identities, enforce guardrails, enable observability, ensure performance and, most importantly, control costs, said Hason.
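The guardrail and cost controls described above can be sketched generically as a central policy that every agent action is checked against before it runs. The field names below are hypothetical and illustrative only, not Incredibuild's schema:

```python
# Hypothetical central policy; field names are illustrative, not Islo's schema.
POLICY = {
    "allowed_services": {"git", "package-registry"},
    "blocked_paths": {"/secrets", "/prod-config"},
    "max_monthly_cost_usd": 500.0,
}

def is_action_allowed(action, spent_usd):
    """Return True only if an agent action passes every guardrail:
    an allowed service, no blocked path, and within the cost budget."""
    if action["service"] not in POLICY["allowed_services"]:
        return False
    if any(action["path"].startswith(p) for p in POLICY["blocked_paths"]):
        return False
    if spent_usd + action.get("est_cost_usd", 0.0) > POLICY["max_monthly_cost_usd"]:
        return False
    return True
```

The point of centralizing the check is that every agent, regardless of which environment it runs in, is governed by the same deny-by-default rules and a shared spending budget.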

Collectively, these capabilities also make it simpler to assign long-running tasks to an AI agent that can be governed and managed without an application developer needing to be physically present at their machine, he added.

At the same time, Incredibuild also revealed today that it is partnering with the Harbor Framework community, a provider of open-source infrastructure for authoring and executing agent benchmarks and evaluations. Benchmark authors and engineers can run their tasks on Islo’s cloud sandboxes with a single configuration change to create reproducible environments that can execute tests in parallel.
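Harbor's actual configuration keys are not reproduced in the announcement, but the pattern it describes, reproducible environments executing tests in parallel, looks roughly like the sketch below: each benchmark task runs in its own fresh temporary directory, and tasks fan out concurrently.

```python
import concurrent.futures
import subprocess
import sys
import tempfile

# Stand-ins for benchmark tasks; a real harness would load these from a suite.
TASKS = ["print(1 + 1)", "print(sorted([3, 1, 2]))"]

def run_in_fresh_env(snippet):
    """Execute one task in its own temporary working directory so runs
    are reproducible and cannot interfere with one another."""
    with tempfile.TemporaryDirectory() as workdir:
        proc = subprocess.run(
            [sys.executable, "-c", snippet],
            cwd=workdir, capture_output=True, text=True, timeout=60,
        )
    return proc.stdout.strip()

# Fan the tasks out in parallel, one isolated environment per task.
with concurrent.futures.ThreadPoolExecutor() as pool:
    results = list(pool.map(run_in_fresh_env, TASKS))
```

Because each task gets a clean directory, results do not depend on what a previous run left behind, which is the property that makes parallel benchmark execution trustworthy.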

Now that using AI to write code is all but a solved problem, the next major challenge is ensuring those tools are effectively managed, said Hason. The overall goal is to make it simpler to safely deploy AI coding agents at scale in a way that ensures DevOps teams maintain control over the application development environment, he added.

It’s not clear to what degree organizations are now relying on AI coding tools to generate code that actually winds up in a production environment. A recent Futurum Group survey, however, found a full 60% of respondents said their organization is now actively using AI to build and deploy software. The top areas of investment are AI Copilot/AI code tools (38%), AI agent development (37%), AI-assisted testing (37%), followed closely by DevOps (37%), automated deployment (34%) and software security testing (31%).

Regardless of the approach taken to application development in the AI era, one thing is clear: legacy approaches to managing software engineering are not going to scale to the level that AI coding tools and associated agents will require. The challenge then becomes determining how best to modernize platforms and workflows in a way that eliminates bottlenecks rather than creating additional ones for application developers to navigate.

Hopefully, there will come a day when, rather than simply generating more code, AI substantially improves the pace at which applications are developed and deployed. The issue, of course, is that writing the code is only a relatively small element of a much larger, complex workflow that in many organizations still desperately needs to be automated.



from DevOps.com https://ift.tt/9WkoIqH
