
IREN to Acquire Mirantis to Reduce IT Infrastructure Management Friction

IREN Ltd, a provider of cloud infrastructure services, today announced its intent to acquire Mirantis, a provider of open source OpenStack and Kubernetes software deployed in both cloud and on-premises IT environments.

Under the terms of the agreement, valued at $625 million, Mirantis will operate as an independent subsidiary of IREN, a former provider of bitcoin mining services that now specializes in hosting artificial intelligence (AI) workloads.

Dominic Wilde, senior vice president of marketing for Mirantis, said that while Mirantis will continue to engage with the more than 1,500 IT organizations it supports directly, the combined entity will reduce friction as IT teams move to deploy AI workloads that require deeper levels of infrastructure integration. That level of alignment will benefit IT organizations that are customers of both companies, he added.

Privately held Mirantis has long provided a distribution of OpenStack and, more recently, has been curating a distribution of Kubernetes for cloud-native application environments. The company also launched its k0rdent AI platform, a control plane that integrates the management of infrastructure across bare-metal, virtual machine and Kubernetes environments. Earlier this year, Mirantis made available a reference architecture, based on k0rdent AI, for building and deploying AI workloads on Kubernetes clusters.

The overall goal is to make it simpler to build multi-tenant environments spanning multiple classes of processors, using a set of reusable templates that cover compute, storage, networking and graphics processing units (GPUs) from NVIDIA, AMD and Intel.

IREN, meanwhile, will be able to leverage those frameworks to deploy workloads faster in a way that also optimizes workload performance and reduces total costs.

It’s unclear to what degree the management of infrastructure will further unify in the age of AI, but cost sensitivity is growing. The GPUs needed to run AI workloads remain scarce. Paradoxically, overall GPU utilization rates remain relatively low, creating an opportunity to use platforms such as those from Mirantis to distribute AI workloads across servers and clusters in a way that should reduce total IT costs.
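The utilization argument can be sketched in a few lines. The greedy placement policy and the numbers below are illustrative assumptions, not how any particular platform schedules workloads; the point is simply that packing workloads onto shared servers uses fewer machines than assigning one workload per server.

```python
# Toy illustration of distributing GPU workloads across servers.
# Policy and figures are invented for the example.
def greedy_pack(workload_gpus: list[int], gpus_per_server: int) -> list[list[int]]:
    """Place each workload on the first server with enough free GPUs."""
    servers: list[list[int]] = []  # each entry holds placed GPU requests
    for need in sorted(workload_gpus, reverse=True):
        for server in servers:
            if sum(server) + need <= gpus_per_server:
                server.append(need)
                break
        else:
            servers.append([need])  # no fit: provision another server
    return servers

workloads = [4, 2, 2, 1, 1]  # GPUs requested per workload
placement = greedy_pack(workloads, gpus_per_server=8)
busy = sum(sum(s) for s in placement)
print(len(placement), "servers;", f"{busy}/{8 * len(placement)} GPUs busy")
# -> 2 servers; 10/16 GPUs busy (vs. 5 servers if each workload got its own)
```

Even this naive first-fit approach cuts the server count from five to two for the same workloads, which is the cost argument for consolidation platforms in miniature.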

There is, of course, no shortage of Kubernetes and OpenStack distributions. The challenge has always been determining which providers offer the tools and frameworks that DevOps teams need to deploy and manage these open source platforms at scale.

Unfortunately, many IT teams lack the infrastructure management expertise required to deploy and manage AI workloads. In fact, many organizations underestimate the total cost of AI by not factoring in the amount of IT infrastructure that will be required to run these workloads at scale. As the pressure to operationalize AI increases, the infrastructure challenges IT teams encounter will only multiply, especially as responsibility for managing the infrastructure that runs these applications shifts from data science teams to IT operations teams. The issue then becomes not just finding the best way to streamline the management of that infrastructure, but also justifying the level of investment that will ultimately be required.



from DevOps.com https://ift.tt/a94ylRN
