
Waydev Adds Ability to Track How Much AI Code Winds Up in Production

Waydev today revealed it has revamped its engineering intelligence platform to provide insights into how the adoption of artificial intelligence (AI) coding tools is impacting DevOps workflows. Company CEO Alex Circei said the overall goal is to make it easier for the leaders of software engineering teams to determine the return on investment (ROI) their AI coding tools are actually providing. While there is little doubt that AI tools are capable of generating code faster than humans, the percentage of that code making it into production environments is often unknown. DevOps engineers need to understand where AI code is being accepted, rejected or rewritten, and whether AI-assisted pull requests pass CI at the same rate as those authored by a human. The Waydev platform now captures, at every checkpoint in a DevOps workflow, which AI agent wrote the code across all commits, repositories, teams, and tools, along with insights into usage costs. A Waydev AI agent then provides a natur...
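Waydev has not published how its attribution works under the hood. As a rough illustration of the general idea, one common approach is to tag commits with a trailer naming the agent and aggregate over history; the sketch below assumes a hypothetical `AI-Agent:` commit-trailer convention, not Waydev's actual mechanism:

```python
from collections import Counter

def attribute_commits(messages):
    """Count commits per author type, using a hypothetical 'AI-Agent:'
    commit trailer to identify machine-written changes."""
    counts = Counter()
    for message in messages:
        agent = "human"
        for line in message.splitlines():
            if line.lower().startswith("ai-agent:"):
                agent = line.split(":", 1)[1].strip()
        counts[agent] += 1
    return counts

# Illustrative commit messages, some carrying the assumed trailer:
commits = [
    "Fix race in worker pool\n\nAI-Agent: copilot",
    "Refactor billing module",
    "Add retry logic\n\nAI-Agent: copilot",
    "Update docs\n\nAI-Agent: claude-code",
]
print(attribute_commits(commits))
```

Aggregating these counts per repository or team is what would let a platform report where AI-written code lands, rather than only how much of it is generated.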
Recent posts

GitHub Introduces Stacked PRs to Ease Review Bottlenecks

GitHub’s new Stacked Pull Requests feature restructures how developers submit and review changes by allowing large code updates to be broken into smaller, interdependent units. With Stacked PRs, each unit can be reviewed and merged individually while still contributing to the overall feature set. The approach helps developers shift away from monolithic pull requests, which have become increasingly difficult to manage as development continues to move faster. The release of Stacked PRs is a response to the rise of AI-assisted coding tools, which have greatly increased the volume and scale of code submissions, placing new pressure on review workflows. While large pull requests spanning dozens of files used to be merely inconvenient, they can now become a systemic issue. There is a widening gap between code generation and code review, with reviewers dealing with reduced visibility and slower turnaround times. With the layered workflow of Stacked PRs, developers can sequence related...
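As a toy model of the constraint a stack imposes (not GitHub's actual API), each pull request in a stack targets either the trunk branch or the PR directly beneath it, and merges proceed base-first:

```python
def merge_order(stack):
    """Return the bottom-up merge order for a stack of (pr, base) pairs.
    A stack is valid only when each PR targets the trunk or a PR that
    appears earlier in the stack -- the invariant stacked PRs rely on."""
    merged = {"main"}
    order = []
    for name, base in stack:
        if base not in merged:
            raise ValueError(f"{name} targets unmerged base {base!r}")
        order.append(name)
        merged.add(name)
    return order

# Hypothetical three-part feature, split into reviewable units:
stack = [
    ("pr-1-schema", "main"),       # foundation change
    ("pr-2-api", "pr-1-schema"),   # builds on the schema
    ("pr-3-ui", "pr-2-api"),       # builds on the API
]
print(merge_order(stack))
```

The point of the ordering is that each reviewer only ever sees the diff between one PR and its base, rather than the whole feature at once.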

FinOps Isn’t Slowing You Down — It’s Fixing Your Pipeline 

If you work in DevOps, you’ve probably had this experience: You ship something. It works. Performance looks good. Deployment is clean. A few weeks later, someone from finance shows up asking why costs spiked 30%. Now you’re digging through logs, trying to reconstruct decisions you made weeks ago, in a completely different context. That’s not a FinOps problem. That’s a workflow problem.

The Real Issue: Cost Lives Outside the Pipeline

Most DevOps teams have spent years tightening feedback loops:

Code quality → caught in PRs
Security → caught in CI
Performance → caught in testing

Cost is the outlier. It typically shows up:

After deployment
In a separate dashboard
Owned by a different team

Which means it’s not actionable when it matters. You can’t fix what you can’t see *in context*.

Why DevOps Teams End Up Owning Cloud Cost Anyway ...
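Moving cost into the pipeline can be as simple as a gate that compares projected spend against a baseline and fails the stage on a large drift. The `cost_gate` step and the numbers below are hypothetical, chosen to mirror the 30% surprise from the article:

```python
def cost_gate(estimated_monthly_cost, baseline, threshold_pct=20.0):
    """Fail a CI stage when projected spend drifts more than
    threshold_pct percent above the agreed baseline."""
    increase_pct = (estimated_monthly_cost - baseline) / baseline * 100
    return increase_pct <= threshold_pct, increase_pct

# The 30% finance surprise, caught before deployment instead of weeks after:
ok, delta = cost_gate(estimated_monthly_cost=1300.0, baseline=1000.0)
print(ok, round(delta, 1))  # False 30.0
```

Wired into CI next to the security and performance checks, a gate like this turns the cost conversation into a failed build the author can act on immediately, not a finance escalation weeks later.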

SmartBear Extends Scope of API Lifecycle Management Ambitions

SmartBear today added capabilities to its platform for designing and managing application programming interfaces (APIs) that make it easier to both keep track of them and detect drift. A revamped Swagger Catalog, in addition to providing a unified view of APIs, also makes it possible to govern them. At the same time, SmartBear is adding Swagger Contract Testing with drift detection that verifies the API is behaving as specified in a contract. Additionally, SmartBear later this quarter plans to revamp its API editor along with artificial intelligence (AI) tools for generating APIs, a context-aware ability to create documentation, Spectral-based governance enforcement, a Model Context Protocol (MCP) Server and expanded multi-protocol support, including OpenAPI 3.1, AsyncAPI 3.0, and GraphQL. Laura Kennedy, director of product management for SmartBear, said both additions extend the API lifecycle management capabilities of the company’s platform. For example, Swagger Catalog combi...
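SmartBear has not detailed how its drift detection is implemented. As a minimal sketch of the idea, assuming a simplified contract expressed as a field-to-type map rather than a full OpenAPI document, drift checking amounts to comparing a live response against what the contract promises:

```python
def detect_drift(contract, response):
    """Compare a live API response against a contract's field/type map
    and report drift: missing fields, changed types, undocumented fields."""
    drift = []
    for field, expected_type in contract.items():
        if field not in response:
            drift.append(f"missing: {field}")
        elif not isinstance(response[field], expected_type):
            drift.append(f"type changed: {field}")
    for field in response:
        if field not in contract:
            drift.append(f"undocumented: {field}")
    return drift

# Hypothetical contract vs. a response that has drifted:
contract = {"id": int, "email": str}
response = {"id": "42", "email": "a@example.com", "plan": "pro"}
print(detect_drift(contract, response))
```

A real implementation would validate against the full schema (nested objects, formats, required/optional flags), but the failure categories are the same.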

Agentic CI/CD is Not Automation: Why the Distinction Will Define DevOps in 2026

There is a dangerous conflation happening across our industry right now. Teams are plugging LLM-powered agents into their deployment pipelines, calling it “agentic CI/CD,” and treating it as the next logical step after shell scripts and Terraform modules. It is not. Automation executes predefined instructions. An agent reasons about context, makes decisions, and takes actions that were never explicitly coded. If we continue treating intelligent agents like scripts, we will fail to build the necessary governance layer that defines this next era of CI/CD. That difference is not semantic. It is architectural, operational, and, if you get it wrong, catastrophic. Think about what happens when your Terraform plan runs. It reads state, computes a diff, and presents you with a deterministic set of changes. You review. You approve. You apply. The blast radius is knowable. Now think about what happens when an AI agent decides to scale down a service because it interpreted a cost anomaly as a ...
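A governance layer of the kind the author argues for can be sketched as an explicit policy check sitting between an agent's proposed action and its execution; the policy rules and action shape below are hypothetical:

```python
# Hypothetical policy: scaling is allowed, but with a bounded blast
# radius and a set of services an agent may never touch on its own.
POLICY = {
    "scale": {"max_delta": 2, "protected": {"payments"}},
}

def approve(action):
    """Validate an agent-proposed action against explicit policy
    before execution, instead of trusting the agent's reasoning."""
    if action["type"] not in POLICY:
        return False, "action type not in policy"
    rule = POLICY[action["type"]]
    if action["service"] in rule["protected"]:
        return False, "protected service requires human approval"
    if abs(action["delta"]) > rule["max_delta"]:
        return False, "delta exceeds allowed blast radius"
    return True, "ok"

print(approve({"type": "scale", "service": "payments", "delta": -3}))
print(approve({"type": "scale", "service": "search", "delta": -1}))
```

The contrast with Terraform is the point: the plan/apply diff is deterministic, so review happens on the change itself; an agent's decision is not, so the review has to be encoded as policy that runs on every action.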

Claude Code Can Now Run Your Desktop

For most of its short life, Claude has lived inside a chat window. You type, it responds. That model is changing fast. Anthropic recently expanded Claude Code and Claude Cowork with a new computer use capability that lets the AI directly control your Mac or Windows desktop — clicking, typing, opening applications, navigating browsers, and completing workflows on your behalf. It’s available now as a research preview for Pro and Max subscribers. The short version: Claude can now do things at your desk while you’re somewhere else.

How it Actually Works

Claude doesn’t reach for the mouse first. It prioritizes existing connectors to services like Slack or Google Calendar. When no connector is available, it steps up to browser control. Only when those options don’t apply does it take direct control of the desktop — navigating through UI elements the way a human would. Claude always requests permission before accessing any new application, and users can halt operations at any point. T...
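The escalation order described above (connector first, then browser, then desktop) can be modeled as a simple dispatcher; the task shape and connector names here are illustrative, not Anthropic's API:

```python
def route(task, connectors):
    """Pick the least invasive capability: a service connector when one
    exists, browser control when the task has a web UI, and direct
    desktop control only as a last resort."""
    if task["service"] in connectors:
        return "connector"
    if task.get("has_web_ui"):
        return "browser"
    return "desktop"

connectors = {"slack", "google-calendar"}
print(route({"service": "slack"}, connectors))                     # connector
print(route({"service": "jira", "has_web_ui": True}, connectors))  # browser
print(route({"service": "legacy-desktop-app"}, connectors))        # desktop
```

The design choice is about blast radius: a connector call is scoped and auditable, while raw UI control can do anything the user can, which is why it sits at the bottom of the fallback chain behind a permission prompt.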

GitHub Copilot Pulls Drawstring On Tighter Developer Usage Limits

GitHub Copilot is popular. The AI-powered code completion tool (originally developed by GitHub and OpenAI) works to give software application developers a so-called “AI pair programmer” buddy that offers suggested code snippets and (when called upon) entire functions – and it happens directly within an engineer’s Integrated Development Environment (IDE) of choice. All of which means that GitHub Copilot isn’t just popular in terms of total usage; the tool is reporting an increase in patterns of high concurrency (individual developers performing similar operations, but more likely different developers requesting the same types of functions) and intense usage among power-users.

No Foul Play, Probably

The GitHub blog itself doesn’t necessarily point the finger at nefarious usage techniques – the team understands that spikes “can be driven by legitimate workflows” here – but indirect prompt injection (placing malicious instructions inside a public repository or pull request) could ex...
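GitHub has not published how its limiter works; usage caps of this kind are, however, commonly built on a token bucket, which allows a steady request rate plus a bounded burst. A generic sketch under that assumption:

```python
class TokenBucket:
    """Generic token-bucket limiter: tokens refill at a steady rate up
    to a burst capacity; each request spends one token or is rejected."""
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now):
        # Refill based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=2)
# Two quick requests pass (burst), the third is throttled,
# and a later one succeeds after the bucket refills:
print([bucket.allow(t) for t in (0.0, 0.1, 0.2, 1.2)])  # [True, True, False, True]
```

Seen through this lens, the "high concurrency" pattern GitHub describes is exactly what drains a shared bucket fastest, which is why power-users feel tighter limits first.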