Posts

Showing posts from March, 2026

The SaaS Observability Era is Ending: Why BYOC Is the Future of Telemetry 

For years, observability was supposed to be the great equalizer: the way every team could understand their systems, debug faster, and ship with confidence. But somewhere along the way, it became the opposite: complex, expensive, and increasingly constrained. What was meant to empower developers has become a system governed by egress costs, ingestion pricing, and sampling limits. Teams do not stop observing because they want to. They stop because they are forced to make tradeoffs to stay within budget. The good news? The pendulum is swinging back. A quiet architectural revolution is already underway, one that puts observability back inside your cloud, under your control. It’s called bring your own cloud (BYOC), and it’s redefining how telemetry is stored, processed, and paid for. The Problem: Observability Got Too Expensive and Too Centralized In the early days, sending all your telemetry to a SaaS platform felt like a superpower. Datadog, New Relic and ...

Secure Code Warrior AI Agent Applies Policies to AI Generated Code

Secure Code Warrior (SCW) this week added an artificial intelligence (AI) agent that both identifies code generated by an AI coding tool and automatically applies the appropriate governance policies. Company CEO Pieter Danhieux said the SCW Trust Agent makes it possible for DevSecOps teams to use AI to verify which AI models influenced specific commits, correlate that influence to vulnerability exposure, and take corrective action before insecure code is added to a production environment. DevSecOps teams can also use the AI agent to discover any Model Context Protocol (MCP) servers that might have been deployed without permission. Finally, SCW benchmark data can also be used to evaluate models and enforce approved AI usage policies based on measurable output, noted Danhieux. For example, a developer may be using one AI model to reduce costs without realizing they are also generating more vulnerabilities that would not otherwise be created if they relied on a different AI model. A...

SpyCloud’s 2026 Identity Exposure Report Reveals Explosion of Non-Human Identity Theft

Austin, TX, USA, March 19th, 2026, CyberNewswire. The new report highlights a surge in exposed API keys, session tokens, machine identities, and more. SpyCloud, the leader in identity threat protection, today released its annual 2026 Identity Exposure Report, one of the most comprehensive analyses of stolen credentials and identity exposure data circulating in the criminal underground, highlighting a sharp expansion in non-human identity (NHI) exposure. Last year, SpyCloud saw a 23% increase in its recaptured identity data lake, which now totals 65.7B distinct identity records. The report shows attackers are increasingly targeting machine identities and authenticated session artifacts in addition to traditional username and password combinations and personally identifiable information (PII). “We’re witnessing a structural shift in how identity is exploited,” said Trevor Hilligoss, Chief Intelligence Officer at SpyCloud. “Attackers are no longer just targeting credentials. The...

Open SWE Captures the Architecture That Stripe, Coinbase and Ramp Built Independently for Internal Coding Agents

Stripe built Minions. Ramp built Inspect. Coinbase built Cloudbot. Three engineering organizations, working independently, arrived at similar architectural decisions for their internal AI coding agents. LangChain noticed the convergence and open-sourced the pattern. Open SWE, released March 17, is an open-source framework built on LangChain’s Deep Agents and LangGraph that provides the core architectural components for internal coding agents. The MIT-licensed project isn’t trying to be another AI coding assistant. It’s a customizable foundation for organizations that want to build their own, the way Stripe, Ramp and Coinbase already have. The Convergence What caught LangChain’s attention was that these independently developed systems share the same architectural decisions: isolated cloud sandboxes where tasks run with full permissions inside strict boundaries; curated toolsets (Stripe reportedly maintains around 500 carefully selected tools); and subagent orchestration where complex...
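The teaser describes the converged pattern in prose; a minimal Python sketch of two of those decisions, curated toolsets and subagent orchestration, might look like this. All names here (Subagent, Orchestrator, TOOLS) are illustrative and are not Open SWE’s or LangChain’s API:

```python
# A generic sketch of the pattern, not Open SWE's implementation:
# a lead agent delegates tasks to subagents, each restricted to a
# curated toolset it was explicitly granted.

TOOLS = {
    "read_file": lambda path: f"<contents of {path}>",
    "run_tests": lambda target: f"tests passed for {target}",
}

class Subagent:
    def __init__(self, name, allowed_tools):
        self.name = name
        # Curated toolset: the subagent only sees tools it was granted.
        self.tools = {t: TOOLS[t] for t in allowed_tools}

    def run(self, tool, arg):
        if tool not in self.tools:
            raise PermissionError(f"{self.name} may not call {tool}")
        return self.tools[tool](arg)

class Orchestrator:
    """Lead agent that routes work to narrowly scoped subagents."""
    def __init__(self):
        self.subagents = {
            "reader": Subagent("reader", ["read_file"]),
            "tester": Subagent("tester", ["run_tests"]),
        }

    def delegate(self, agent_name, tool, arg):
        return self.subagents[agent_name].run(tool, arg)

orchestrator = Orchestrator()
```

In a real system each subagent would run inside an isolated sandbox; here the toolset restriction alone illustrates the boundary.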

Arcjet Extends Runtime Policy Engine to Block Malicious Prompts

Arcjet today added an ability to detect and block risky prompts before they are shared with a large language model (LLM) embedded within an application. The Arcjet AI prompt injection protection capability is based on an LLM that the company has been specifically training to detect patterns indicative of risky prompts that can then be blocked using a runtime policy engine built using WebAssembly (Wasm). That approach makes it simpler to embed the Arcjet policy engine into application code and apply it to endpoints built with JavaScript, Python or frameworks such as the Vercel AI software development kit (SDK) or LangChain. Arcjet CEO David Mytton said that the overall goal is to prevent malicious prompts from being used to, for example, discover the underlying components of an application environment or delete data. Alternatively, a prompt might also expose sensitive data to an AI model in a way that shouldn’t be allowed. Initially, Arcjet is focused on prompt-extraction and shel...
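The gating described here, evaluating a prompt against policy before it ever reaches the model, can be sketched generically. This is not Arcjet’s SDK: the real engine is a purpose-trained LLM compiled to Wasm, while this illustration substitutes a few regex patterns just to show where the decision point sits in application code:

```python
import re

# Illustrative patterns standing in for a trained detection model;
# the function names and patterns are hypothetical.
RISKY_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"reveal .*system prompt", re.I),
    re.compile(r"\brm -rf\b", re.I),
]

def is_risky(prompt: str) -> bool:
    """Return True if the prompt matches a known-risky pattern."""
    return any(p.search(prompt) for p in RISKY_PATTERNS)

def guarded_llm_call(prompt: str, llm=lambda p: f"LLM answer to: {p}"):
    # The policy decision happens before the prompt reaches the model,
    # so a blocked prompt never leaves the application boundary.
    if is_risky(prompt):
        return "blocked: prompt violates policy"
    return llm(prompt)
```

The same gate can sit in front of any endpoint that forwards user input to a model, which is the point of embedding the engine in application code rather than at the model provider.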

Google, Microsoft and Peers Donate to Support Overloaded Open Source Maintainers

A coalition of major tech companies has committed $12.5 million to strengthen the security of open source software, an effort aimed at coordinating responses to the growing pressures created by AI. The funding is provided by Anthropic, AWS, GitHub, Google, Microsoft and OpenAI. It will be administered by the Linux Foundation through its Alpha-Omega Project and the Open Source Security Foundation (OpenSSF). The funding arrives at a moment when AI tools are reshaping both software development and cybersecurity. Automated systems can now identify vulnerabilities at a scale that was previously unattainable. While that offers huge benefits, it also creates new headaches for the developers who maintain widely used open source projects. Maintainers are increasingly inundated with security reports, many of them generated by AI systems. The volume of these findings has outpaced the ability of small teams (and in many cases, individual contributors) to assess and respond effectively. In re...

Spacelift Intelligence Vibe-Codes Infrastructure

Whether DevOps shops like it or not, they are feeling the pressure from AI. They’re expected to move more quickly, alongside their dev counterparts. The gruntwork that used to take weeks can be automated away, leaving time for fast prototyping, or so the managers think. According to Google Cloud’s 2025 DORA State of AI-assisted Software Development Report, 90% of developers now use AI tools, and 25% are now working alongside AI assistants. Users of the Spacelift Infrastructure-as-Code platform now have some help with this automation, thanks to a new feature offering a conversational interface that purports to explain what is going on with their IT operations, and even make changes on the user’s behalf if necessary. “Platform teams are expected to respond at the speed of experimentation while still maintaining security, compliance, and operational consistency,” wrote Technical Senior Product Manager Tim Davis in a blog post published today. It is an app...

Komodor Extends Reach of AI SRE Orchestration Framework

Komodor today extended the reach of its orchestration framework for artificial intelligence (AI) agents by adding support for Model Context Protocol (MCP) servers and the OpenAPI specification. Company CTO Itiel Shwartz said those capabilities will make it possible for IT teams to more broadly orchestrate AI agents that are being used to investigate and remediate issues affecting IT infrastructure. Komodor has already developed more than 50 AI agents that automate the management of Kubernetes clusters running cloud-native applications. By adding support for MCP and OpenAPI, that orchestration framework can now be used to manage hybrid IT environments running more complex applications, said Shwartz. For example, IT teams can use the Komodor orchestration framework to invoke third-party AI agents found on the Komodor Marketplace that have been trained to automate network and storage management tasks or provision graphics processing units (GPUs), he noted. Alternatively, the orches...

Policy as Code for Cost Control, Not Just Compliance

Policy as code is usually framed as a compliance tool. It blocks insecure configurations, enforces internal standards, and helps teams prove they meet audit or regulatory requirements. That framing is accurate, but incomplete. The same mechanism can also reduce waste. In many organizations, cloud cost is still reviewed after resources are live and spend is already visible on the bill. By then, the expensive decision has already been made. Policy as code gives platform teams a way to shape those decisions earlier, before waste becomes part of the default path.

Why Cost Problems Grow Quietly

Cloud overspend rarely comes from one spectacular mistake. More often, it grows through small, routine decisions:

- Dev environments left running over the weekend
- Instance sizes chosen for peak demand and never revisited
- Snapshots, volumes, and logs retained long after anyone needs them
- Kubernetes requests increased “just in case”
- Premium managed services used for workloads that are usefu...
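Shifting the cost decision earlier can be made concrete with a small policy check that runs in CI against a planned resource, before anything is provisioned. The plan shape, instance-type allowlist, and retention limit below are all hypothetical, not tied to any particular policy engine:

```python
# Hypothetical cost policy evaluated before resources go live.
# The resource dict format and the limits are illustrative only.
ALLOWED_DEV_INSTANCE_TYPES = {"t3.micro", "t3.small", "t3.medium"}
MAX_LOG_RETENTION_DAYS = 30

def check_cost_policy(resource: dict) -> list[str]:
    """Return a list of cost-policy violations for a planned resource."""
    violations = []
    if (resource.get("env") == "dev"
            and resource.get("instance_type") not in ALLOWED_DEV_INSTANCE_TYPES):
        violations.append(
            f"dev instance type {resource.get('instance_type')} exceeds allowed sizes"
        )
    if resource.get("log_retention_days", 0) > MAX_LOG_RETENTION_DAYS:
        violations.append(
            f"log retention exceeds {MAX_LOG_RETENTION_DAYS} days"
        )
    return violations
```

Failing the pipeline when the list is non-empty turns each of the quiet decisions above into an explicit, reviewable exception rather than a default.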

Harness Extends AI Security Reach Across Entire DevOps Workflow

Harness today added an ability to automatically secure code as it is being written by an artificial intelligence (AI) coding tool in addition to adding a module to its DevOps platform that discovers, tests, and protects AI components within applications. Secure AI Coding is an extension of the static application security testing (SAST) and software composition analysis (SCA) capabilities that Harness already provides. Additionally, Secure AI Coding leverages a Code Property Graph (CPG) developed by Harness to trace how data flows through the entire application to surface complex vulnerabilities such as injection flaws and insecure data handling. The AI Security module, meanwhile, discovers every call to a large language model (LLM), Model Context Protocol (MCP) server or AI agent that is being made over an application programming interface (API). At the same time, Harness today also revealed it has partnered with Wipro Ltd. to help organizations accelerate AI-native software del...

Sauce Labs Makes AI Agent for Creating and Running Tests Available

Sauce Labs today made generally available an artificial intelligence (AI) agent that translates a natural language intent into a set of executable test suites that can run anywhere. Company CEO Dr. Prince Kohli said the Sauce AI for Test Authoring agent closes a gap that has emerged between the rate at which code is being written in the age of AI and the ability of application developers and software engineering teams to validate it. Testing has now become a major bottleneck that is preventing DevOps teams from realizing many of the promises of AI coding, he added. Without the ability to test higher volumes of code effectively, more applications than ever now have limited test coverage, noted Kohli. In general, Sauce Labs research suggests that even prior to the rise of AI coding, automated test coverage for complex journeys typically plateaus at under 35%. Trained using 8.7 billion real-world test runs to enable 41% faster root-cause analysis than a g...

Java 26 Arrives With AI Integration and a New Ecosystem Portfolio — What It Means for DevOps Teams

Oracle released Java 26 on March 17, 2026, and while every six-month release comes with its own set of improvements, this one carries a broader message: Java isn’t just keeping pace with the AI era — it’s actively positioning itself as the infrastructure layer where AI workloads will run. For DevOps teams managing large Java estates, that’s worth paying attention to. The Scale of What You’re Already Running Before getting into what’s new, it helps to remember what’s already in place. According to a 2025 VDC study, Java is the number one language for overall enterprise use and for cloud-native deployments. There are 73 billion active JVMs running today, with 51 billion of those in the cloud. That scale matters when you’re thinking about where AI fits in. Most of the systems where agentic AI will eventually operate — transactional platforms, backend services, data pipelines — are already running on Java. The question for DevOps teams isn’t whether to adopt Java for AI. It’s how to ...

Gemini CLI Plan Mode Separates Thinking From Doing — and Makes Read-Only the Default

The pattern across AI coding tools this week has been clear: the industry is building governance, review, and safety mechanisms as fast as it’s building capabilities. Google’s latest contribution is plan mode for Gemini CLI, announced March 11, and now enabled by default for all users. Plan mode puts Gemini CLI in a read-only state where the agent can navigate your codebase, search for patterns, read documentation, and map dependencies — but it cannot modify any files except its own internal plans. The agent researches your request, asks clarifying questions, and proposes a strategy for your review before any code changes are made. The idea is simple: Think before you act. The implementation has some features that make it more interesting than it sounds. How it Works Enter plan mode by typing /plan, pressing Shift+Tab, or asking the agent to “start a plan for” whatever you need. Gemini CLI restricts itself to read-only tools — read_file, grep_search, glob — and can use s...
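The read-only gate plan mode implies can be sketched as a tool allowlist checked at dispatch time. The tool names below mirror those mentioned in the article; the dispatcher itself, and the update_plan tool for writing the agent’s own internal plan, are hypothetical illustrations, not Gemini CLI internals:

```python
# Sketch of a plan-mode gate: while planning, only non-mutating tools
# (plus the agent's own plan file) are callable. Hypothetical names.
READ_ONLY_TOOLS = {"read_file", "grep_search", "glob"}

class Agent:
    def __init__(self):
        self.plan_mode = True
        self.internal_plan = []

    def call_tool(self, name, *args):
        # In plan mode, anything outside the allowlist is refused
        # before it can touch the filesystem.
        if self.plan_mode and name not in READ_ONLY_TOOLS | {"update_plan"}:
            raise PermissionError(f"{name} is blocked in plan mode")
        if name == "update_plan":
            # The one write the agent keeps: its own internal plan.
            self.internal_plan.append(args[0])
            return "plan updated"
        return f"{name} executed (read-only)"
```

The design choice worth noting is that the restriction lives in the tool dispatcher, not in the model’s instructions, so a confused or adversarial prompt cannot talk the agent into a write.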