
Posts

From AI Code to Production: The Case for FeatureOps 

According to the 2025 DORA State of DevOps report, three out of four developers now use AI coding tools daily, and that number keeps climbing. By the end of 2026, over 80% of individual developers will rely on AI assistants to write, review and refactor code. But here's the problem: the same research found that as AI usage increases, delivery stability tends to decrease. Code ships faster than governance can follow. When developers accept AI-generated suggestions without fully understanding subtle issues buried in the logic, the gap between writing code and comprehending its production impact widens. In other words, speed without control is a false economy.

The Control Gap

When AI generates code at the speed of a keystroke, traditional review cycles struggle to keep up. Pull requests pile up. Code reviews become bottlenecks. Teams feel pressure to approve changes faster, and subtle bugs slip through. The ...
Recent posts

Two Malicious npm Packages Aim to Steal Credentials and Other Secrets

Bad actors took over an npm maintainer account and published two malicious packages designed to steal credentials, API keys, and other secrets from the computers of victims who download them from the repository. Analysts with Sonatype's Security Research Team wrote in a report that the two packages – sbx-mask and touch-adv – are likely more than test packages, with the attackers hijacking the publisher account to exploit the trust maintainers build with developers and steal valuable information; in this case, secrets that can include credentials, certificates, or API keys. Sonatype is tracking the packages as Sonatype-2026-001276 and Sonatype-2026-001275, adding that the malware campaign is still active and under investigation. The attacks have not yet been attributed to a threat actor. Sonatype reported the packages to npm this week. The malicious packages are only the latest examples of a rising trend of bad actors targeting open code repositori...
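Because the report names the packages directly, teams can check their own dependency trees for them. A minimal sketch in Python; the lockfile layout handled here (the `packages` key in npm v2/v3 lockfiles and `dependencies` in v1) is an assumption about npm's format, not anything from Sonatype's tooling:

```python
import json

# Package names reported by Sonatype in this campaign.
MALICIOUS = {"sbx-mask", "touch-adv"}

def flagged_packages(lockfile_path: str) -> set[str]:
    """Return any known-malicious package names found in an npm lockfile."""
    with open(lockfile_path) as f:
        lock = json.load(f)
    found = set()
    # npm v2/v3 lockfiles key entries by install path, e.g. "node_modules/sbx-mask".
    for key in lock.get("packages", {}):
        name = key.rsplit("node_modules/", 1)[-1]
        if name in MALICIOUS:
            found.add(name)
    # Older v1 lockfiles key entries by package name under "dependencies".
    for name in lock.get("dependencies", {}):
        if name in MALICIOUS:
            found.add(name)
    return found
```

Running it against each repository's package-lock.json is a quick first pass; it does not replace the advisory feeds (Sonatype-2026-001276 and Sonatype-2026-001275) that track the campaign itself.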

The SaaS Observability Era is Ending: Why BYOC Is the Future of Telemetry 

For years, observability was supposed to be the great equalizer: the way every team could understand their systems, debug faster, and ship with confidence. But somewhere along the way, it became the opposite: complex, expensive, and increasingly constrained. What was meant to empower developers has become a system governed by egress costs, ingestion pricing, and sampling limits. Teams do not stop observing because they want to; they stop because they are forced to make tradeoffs to stay within budget. The good news? The pendulum is swinging back. A quiet architectural revolution is already underway, one that puts observability back inside your cloud, under your control. It's called bring your own cloud (BYOC), and it's redefining how telemetry is stored, processed, and paid for.

The Problem: Observability Got Too Expensive and Too Centralized

In the early days, sending all your telemetry to a SaaS platform felt like a superpower. Datadog, New Relic and ...

Secure Code Warrior AI Agent Applies Policies to AI Generated Code

Secure Code Warrior (SCW) this week added an artificial intelligence (AI) agent that both identifies code generated by an AI coding tool and automatically applies the appropriate governance policies. Company CEO Pieter Danhieux said the SCW Trust Agent makes it possible for DevSecOps teams to verify which AI models influenced specific commits, correlate that influence to vulnerability exposure, and take corrective action before insecure code reaches a production environment. DevSecOps teams can also use the agent to discover any Model Context Protocol (MCP) servers that might have been deployed without permission. Finally, SCW benchmark data can be used to evaluate models and enforce approved AI usage policies based on measurable output, noted Danhieux. For example, a developer may choose one AI model to reduce costs without realizing it also generates more vulnerabilities than a different model would. A...

SpyCloud’s 2026 Identity Exposure Report Reveals Explosion of Non-Human Identity Theft

Austin, TX, USA, March 19th, 2026, CyberNewswire. New report highlights a surge in exposed API keys, session tokens, machine identities, and more. SpyCloud, the leader in identity threat protection, today released its annual 2026 Identity Exposure Report, one of the most comprehensive analyses of stolen credentials and identity exposure data circulating in the criminal underground, and it highlights a sharp expansion in non-human identity (NHI) exposure. Last year, SpyCloud saw a 23% increase in its recaptured identity data lake, which now totals 65.7B distinct identity records. The report shows attackers are increasingly targeting machine identities and authenticated session artifacts in addition to traditional username-and-password combinations and personally identifiable information (PII). "We're witnessing a structural shift in how identity is exploited," said Trevor Hilligoss, Chief Intelligence Officer at SpyCloud. "Attackers are no longer just targeting credentials. The...

Open SWE Captures the Architecture That Stripe, Coinbase and Ramp Built Independently for Internal Coding Agents

Stripe built Minions. Ramp built Inspect. Coinbase built Cloudbot. Three engineering organizations, working independently, arrived at similar architectural decisions for their internal AI coding agents. LangChain noticed the convergence and open-sourced the pattern. Open SWE, released March 17, is an open-source framework built on LangChain's Deep Agents and LangGraph that provides the core architectural components for internal coding agents. The MIT-licensed project isn't trying to be another AI coding assistant. It's a customizable foundation for organizations that want to build their own, the way Stripe, Ramp and Coinbase already have.

The Convergence

What caught LangChain's attention was that these independently developed systems share the same architectural decisions. Isolated cloud sandboxes where tasks run with full permissions inside strict boundaries. Curated toolsets (Stripe reportedly maintains around 500 carefully selected tools). Subagent orchestration where complex...
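The "curated toolsets" decision these teams converged on can be illustrated with a small allowlist registry. This is a hedged sketch of the general pattern, not code from Open SWE, Minions, Inspect, or Cloudbot; the class and tool names are illustrative:

```python
from typing import Callable

class ToolRegistry:
    """Agents may only invoke tools the platform team has explicitly registered."""

    def __init__(self) -> None:
        self._tools: dict[str, Callable[..., str]] = {}

    def register(self, name: str, fn: Callable[..., str]) -> None:
        # Registration is the curation step: nothing is callable by default.
        self._tools[name] = fn

    def call(self, name: str, *args, **kwargs) -> str:
        # Any tool outside the curated set is rejected, not silently ignored.
        if name not in self._tools:
            raise PermissionError(f"tool {name!r} is not in the curated set")
        return self._tools[name](*args, **kwargs)
```

The same idea scales from a handful of tools to the roughly 500 the article attributes to Stripe: the agent sees only the registry, never the raw environment.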

Arcjet Extends Runtime Policy Engine to Block Malicious Prompts

Arcjet today added the ability to detect and block risky prompts before they are shared with a large language model (LLM) embedded within an application. The Arcjet AI prompt injection protection capability is based on an LLM the company has specifically trained to detect patterns indicative of risky prompts, which can then be blocked using a runtime policy engine built with WebAssembly (Wasm). That approach makes it simpler to embed the Arcjet policy engine into application code and apply it to endpoints built with JavaScript, Python or frameworks such as the Vercel AI software development kit (SDK) or LangChain. Arcjet CEO David Mytton said the overall goal is to prevent malicious prompts from being used to, for example, discover the underlying components of an application environment or delete data. A prompt might also expose sensitive data to an AI model in a way that should not be allowed. Initially, Arcjet is focused on prompt-extraction and shel...
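Arcjet's detector is a trained LLM, but the shape of a runtime prompt screen can be sketched with toy heuristics. The patterns below are illustrative stand-ins for the two attack classes the article names (prompt extraction and shell injection); a real policy engine would use a classifier, not keyword matching:

```python
import re

# Toy heuristics only; a production system would use a trained model.
RISKY_PATTERNS = [
    # Prompt-extraction attempts: override or reveal the system prompt.
    re.compile(r"ignore (all|previous|prior) instructions", re.I),
    re.compile(r"(reveal|print|show).{0,40}(system prompt|instructions)", re.I),
    # Shell-injection attempts smuggled into prompt text.
    re.compile(r"\b(rm -rf|curl .*\| *(sh|bash))\b", re.I),
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt should be blocked before reaching the LLM."""
    return any(p.search(prompt) for p in RISKY_PATTERNS)
```

Placed at the endpoint, such a check runs before the prompt ever reaches the model, which is the placement the runtime policy engine approach described above makes possible.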