CloudBees Delivers on AI Promise to Improve Application Testing

CloudBees has made generally available an add-on for continuous integration/continuous deployment (CI/CD) platforms that uses artificial intelligence (AI) to determine which tests should be run first based on how likely they are to fail.

Shawn Ahmed, chief product officer at CloudBees, said CloudBees Smart Tests eliminates the need to run an entire test suite on every change. Instead, this extension to a CI/CD platform surfaces which specific tests are most likely to fail, allowing a DevOps team to run those first rather than waiting hours, or sometimes even days, for a full suite to complete.
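The selection approach described above can be sketched in miniature. This is a hypothetical illustration, not the Smart Tests implementation: it ranks tests by a simple smoothed historical failure rate, where a real product would use a trained ML model over richer signals (code changes, authorship, flakiness history). All names here (`TestCase`, `prioritize`) are invented for the example.

```python
# Hypothetical sketch of failure-likelihood test prioritization.
# A smoothed historical failure rate stands in for a real ML model's score.
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    runs: int       # how many times the test has run historically
    failures: int   # how many of those runs failed

    @property
    def failure_rate(self) -> float:
        # Laplace smoothing: tests with no history get a neutral 0.5
        # score instead of zero, so new tests are not starved.
        return (self.failures + 1) / (self.runs + 2)

def prioritize(tests: list[TestCase]) -> list[TestCase]:
    """Return tests ordered so the most failure-prone run first."""
    return sorted(tests, key=lambda t: t.failure_rate, reverse=True)

suite = [
    TestCase("test_checkout", runs=100, failures=1),
    TestCase("test_login", runs=100, failures=20),
    TestCase("test_new_feature", runs=0, failures=0),
]
for t in prioritize(suite):
    print(t.name, round(t.failure_rate, 3))
```

Run in this order, a failure in the risky subset surfaces in minutes, and the long tail of historically stable tests can be deferred or skipped for that commit.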

Additionally, DevOps teams can run those tests in parallel to further reduce the time required to vet an application workload, which in turn reduces overall CI/CD processing overhead, Ahmed added.
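A minimal sketch of that parallel step, assuming the risky subset has already been selected: the `run_test` function here is a placeholder for invoking a real test runner (for example, one pytest subprocess per test), and the test names are invented.

```python
# Hypothetical sketch: run the selected, risk-ranked tests in parallel
# to cut wall-clock time. run_test is a stand-in for a real runner.
from concurrent.futures import ThreadPoolExecutor

def run_test(name: str) -> tuple[str, bool]:
    # Placeholder result: a real implementation would shell out to
    # the test framework and report its exit status.
    passed = not name.endswith("_flaky")
    return name, passed

selected = ["test_login", "test_search_flaky", "test_checkout"]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(run_test, selected))

failures = [name for name, ok in results.items() if not ok]
print("failed:", failures)
```

In a real pipeline the worker count would be tuned to the CI agents available, and failing tests would fail the build immediately rather than just being collected.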

That capability is especially critical in the age of AI, as the amount of code being generated continues to increase exponentially, he noted. Because most of that code is machine-generated, the only practical way to understand how it was constructed is to apply machine learning (ML) algorithms at scale to test it, he added.

CloudBees Smart Tests is built on the Launchable platform that CloudBees acquired in 2024. Its ML algorithms have been trained to identify patterns that predict which tests a given workload is most likely to fail. Compatible with multiple CI/CD platforms, the approach ultimately makes it possible to complete testing as much as 30 to 50 times faster, said Ahmed.

Mitch Ashley, vice president and practice lead for software lifecycle engineering at the Futurum Group, said AI code generation is compressing commit-to-deployment timelines, and test execution is emerging as the bottleneck that determines whether that speed holds. CloudBees Smart Tests shifts test selection from sequential execution to risk-weighted intelligence, he added.

For teams absorbing higher volumes of AI-generated code, running full test suites will compound delays, noted Ashley. Test selection is now a pipeline governance decision, and teams that lack ML-based prioritization will see CI/CD overhead grow in proportion to their AI adoption, he said.

Testing, of course, is one of the first things DevOps teams cut back on whenever a deadline looms. As a result, the number of applications deployed without a complete battery of tests is much higher than most organizations care to admit. If it becomes simpler to run tests faster, however, the overall quality of deployed applications should improve. The challenge is that, in the short term, the volume of code being generated is already overwhelming the existing workflows relied on to test software.

Each organization will need to determine to what degree to rework its DevOps workflows in the age of AI, but change at this point is inevitable. In the meantime, using AI to accelerate testing is low-hanging fruit that, hopefully, doesn't require as much reengineering to achieve. The challenge, of course, is summoning the will to apply ML to testing in the face of existing inertia.



from DevOps.com https://ift.tt/YNCczjG
