
Posts

Open Source Contribution is About More Than Just Altruism 

Open source software is a foundational pillar of modern technology. From operating systems and databases to cloud infrastructure and developer tooling, it is embedded across nearly every layer of the stack. Most organizations rely on it in meaningful ways, often without fully accounting for how central it has become to their ability to build, scale, and operate. And yet, for all its ubiquity, contribution to open source remains uneven. Many organizations still treat open source as something to consume rather than something to participate in. It is pulled into internal systems, adapted, and relied upon, but the relationship often stops there. A new report by the Linux Foundation found that 28% of organizations say they use but do not contribute to open source software at all; more than a quarter of organizations give nothing back. And among those that do contribute, the degree of involvement varies significantly. Within the open source world, that dynamic ...
Recent posts

GitHub’s Spec Kit Puts the Spec Back in Software Development

If you’ve spent any time working with AI coding agents, you know the routine. You describe what you want. The agent generates code that looks right. You run it. It breaks, or worse, it works but solves the wrong problem. This frustrating pattern has earned a name: vibe coding. You give the AI a vague idea and hope it guesses correctly. For quick prototypes, that’s fine. For production software, it’s a real problem. GitHub’s answer is Spec Kit, a new open-source toolkit that brings a structured, spec-driven development process to your coding agent workflows with tools including GitHub Copilot, Claude Code, and Gemini CLI. The core idea is simple: write the spec first. Specs as the Source of Truth. For decades, code has been king. Specifications served code; they were the scaffolding we built and then discarded once the “real work” of coding began. We wrote PRDs to guide development, created design docs to i...

When Should a DevOps Agent Act Without Human Approval? 

Every team deploying AI agents in DevOps eventually faces the same design question, and it’s more consequential than it first appears: How much should the agent do on its own? The question sounds like a settings dial: more autonomy here, less there. In practice, it is a governance question, an engineering question, and an organizational trust question bundled together. This article gives you a framework for thinking through the autonomy decision: which factors actually determine where on the copilot-to-autopilot spectrum a specific action should sit, and how to build the guardrails that make the decision defensible. The Spectrum Isn’t Binary. The framing of “human in the loop vs. fully autonomous” is too coarse to be useful in practice. Real DevOps agent deployments live somewhere on a more granular spectrum. Level 0 (observe only): The agent watches and logs. No output to humans, no actions. Used for baselining and evaluation before deployment. ...
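
To make the spectrum concrete, here is a minimal sketch in Python of how autonomy levels and a per-action approval gate might be encoded. The levels above Level 0, the `requires_approval` function and its risk score are hypothetical illustrations, not part of the article’s framework.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Illustrative copilot-to-autopilot spectrum; names above level 0 are hypothetical."""
    OBSERVE_ONLY = 0       # watch and log; no output to humans, no actions
    SUGGEST = 1            # propose actions for a human to review
    ACT_WITH_APPROVAL = 2  # execute only after explicit human sign-off
    ACT_AND_REPORT = 3     # execute autonomously, notify humans afterwards

def requires_approval(action_risk: int, level: AutonomyLevel) -> bool:
    """Decide whether a proposed agent action needs a human in the loop.

    action_risk is an assumed 0-3 score for the blast radius of the action.
    """
    if level <= AutonomyLevel.ACT_WITH_APPROVAL:
        # Below full autonomy, any action that changes state needs sign-off.
        return True
    # Even at the autonomous end of the spectrum, high-risk actions are gated.
    return action_risk >= 2

# A production rollback (risk 3) still requires approval under ACT_AND_REPORT.
print(requires_approval(3, AutonomyLevel.ACT_AND_REPORT))  # True
# A low-risk cache flush (risk 1) can run without a human in the loop.
print(requires_approval(1, AutonomyLevel.ACT_AND_REPORT))  # False
```

The point of a gate like this is that autonomy is decided per action, not per agent: a single deployment can run different actions at different points on the spectrum.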

NetDevOps Isn’t Stalled, It’s Stuck on the Wrong Problem

While it is true that network engineers are taking on NetDevOps roles to advance stalled automation efforts, the barrier to NetDevOps isn’t technical; it’s people. The 44% Problem. The 2025 State of Network Automation Survey from the Network Automation Forum, which gathered responses from 681 network professionals across 58 countries, paints a clear picture. When asked about barriers to automation, only 10% cited technical challenges. Meanwhile, 44% pointed to people problems: skills gaps, organizational dysfunction, cultural resistance and, yes, sometimes just personalities that don’t mesh. Let that sink in: The tools work. Python is mature. Ansible is everywhere. Source-of-truth platforms are production-ready. The tech isn’t the bottleneck; people are. And this aligns with something else the recent NetDevOps article touched on: Nearly half of organizations have no formal measurement of automation success. You can’t fund what you can’t prove, and you can’t prove...

AI-Generated Apps Without DevOps: A Security Disaster Waiting to Happen

A small internal tool was built over a weekend. An engineer used an AI coding assistant to generate most of the backend. A simple interface was added, a few API calls were wired together and within hours the app was live. The app worked. The app felt fast. The app looked like progress. No one thought much about how the tool was deployed. There was no pipeline, no review process and no structured testing. The code was generated, copied, slightly adjusted and pushed into an environment that was already running. For a while everything seemed fine. Then something subtle happened. An API key was exposed in a configuration file. A dependency pulled in by the generated code had a known vulnerability. A route that should have been protected was left open. None of these issues were visible from the outside. The system still worked. Users kept using the tool. This is the part that makes AI-generated apps risky. They do not fail loudly. They fail quietly and often too late. The Illusi...

Vercel’s deepsec Brings AI-Powered Security Scanning Into the Development Workflow

Security has long been the last item on the checklist. Code gets written, reviewed, merged—and then, somewhere down the line, a security team takes a look. That model worked when development moved at a human pace. It doesn’t work as well when AI writes and refactors code faster than any team can keep up with. Vercel is taking a direct shot at that problem with the open-source release of deepsec, an agent-powered security harness that runs on your own infrastructure and surfaces hard-to-find vulnerabilities in large codebases. How It Works. Deepsec uses Claude and Codex to conduct a tailored investigation of a codebase, starting with static analysis to identify security-sensitive files. From there, coding agents investigate each candidate, tracing data flows, checking for mitigations, and producing actionable findings with severity ratings. The process runs in five stages: scan, investigate, revalidate, enrich, and export. The scan stage runs roughly 110 regex matchers across t...
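
To illustrate the shape of a staged harness like this, here is a minimal sketch in Python. The stage names follow the article, but every function signature, data shape and finding shown here is an assumption for illustration, not deepsec’s actual API.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    file: str
    issue: str
    severity: str = "unknown"
    validated: bool = False

def scan(paths):
    """Stage 1 (scan): a cheap static pass, standing in for regex matchers."""
    return [Finding(p, "possible hardcoded secret") for p in paths if p.endswith(".env")]

def investigate(findings):
    """Stage 2 (investigate): deeper per-candidate analysis; in deepsec this is
    where coding agents trace data flows and check for mitigations."""
    for f in findings:
        f.severity = "high"
    return findings

def revalidate(findings):
    """Stage 3 (revalidate): re-check each finding to filter out false positives."""
    for f in findings:
        f.validated = True
    return [f for f in findings if f.validated]

def enrich(findings):
    """Stage 4 (enrich): attach remediation context to each validated finding."""
    return [(f, "rotate the credential and move it to a secret store") for f in findings]

def export(enriched):
    """Stage 5 (export): emit actionable results for review."""
    for finding, hint in enriched:
        print(f"[{finding.severity}] {finding.file}: {finding.issue} -> {hint}")

# The five stages run as a simple pipeline: scan -> investigate -> revalidate -> enrich -> export.
export(enrich(revalidate(investigate(scan(["app/.env", "src/main.py"])))))
```

In the real tool, the investigation and revalidation steps are driven by coding agents rather than hard-coded rules; the point of the sketch is only the staged flow from cheap static scanning to enriched, exportable findings.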

Survey Surfaces Significant Levels of IDP Investment to Reduce SDLC Friction

A survey of 954 IT decision-makers suggests more resources are now being allocated to reducing friction across the software development lifecycle (SDLC). Conducted by CDW, the survey finds more than two-thirds of respondents (68%) report their organization has adopted an internal developer platform (IDP). The primary goals are to improve operational efficiency (57%), provide better user experiences (48%) and improve observability and security (47%). However, a significant percentage of respondents also noted that their development teams are still encountering friction, with systems integration (25%) and security and compliance restrictions (23%) identified as the two primary sources. Additionally, the survey identifies testing and quality assurance (22%) and integration, deployment and implementation (18%) as the two biggest bottlenecks in their organization’s software engineering workflows. IT leaders, as a result, are investing more in automation in areas s...