
Posts

Low-Code’s New Frontier: Tailored Solutions for Each Industry

For years, most low-code platforms have focused on one primary challenge: efficiency. The goal was to help teams build applications faster and with less effort, reducing manual coding, speeding up iterations, empowering non-developers, and enabling apps to be created in just a few clicks. That focus delivered real value, but it’s no longer enough. Today, the low-code conversation is shifting. While automation and speed still matter, they are no longer what sets platforms apart. The next phase of low-code is about fit: how well a platform supports the real-world needs of specific industries. This new frontier moves beyond simply closing productivity gaps or automating workflows. It’s about building applications that reflect the realities of regulated environments, complex data models, existing systems, and industry-specific processes. Low-code is becoming more context-aware. As a result, industry alignment is emerging as a key differentiator. Platforms that understand the nuances...
Recent posts

The Risk Profile of AI-Driven Development 

In the cloud-native ecosystem, velocity is everything. We built Kubernetes, microservices, and CI/CD pipelines to ship faster and more reliably. Now, AI coding assistants and autonomous agents are pushing that accelerator to the floor. What started as simple code completion has evolved into tools that draft requirements, generate Helm charts, scaffold microservices, and optimize CI/CD pipelines. For those who care deeply about security hygiene, and especially dependency management, this acceleration requires a hard look at how we manage risk. When an AI agent can scaffold a microservice in seconds, it also makes dozens of architectural and dependency decisions in the blink of an eye. Let’s discuss how the risk profile of development is shifting in the AI era, and how we must adapt. The Pain Points: Dangerous Autonomy. Rapid Decision Velocity and Massive Volume. In traditional workflows, selecting a third-party library or container base im...
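The sheer volume of dependency decisions a scaffolding agent makes is hard to review by hand, but it can be gated mechanically. A minimal sketch of one such guardrail, diffing an agent-generated requirements manifest against an approved allowlist (the allowlist contents, package names, and function are illustrative assumptions, not from the article):

```python
# Hypothetical org-approved dependency allowlist (illustrative only).
APPROVED = {"requests", "flask", "pydantic"}

def unapproved_dependencies(requirements_text: str) -> list[str]:
    """Return names from a requirements-style manifest that are not approved.

    Handles only simple 'name==version' / 'name>=version' lines;
    a real gate would use a proper requirements parser.
    """
    flagged = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        name = line.split("==")[0].split(">=")[0].strip().lower()
        if name not in APPROVED:
            flagged.append(name)
    return flagged

# Example: an agent-generated manifest that pulls in an unvetted package.
generated = "requests==2.32.0\nleftpadx==0.1.0\n"
print(unapproved_dependencies(generated))  # prints ['leftpadx']
```

Run in CI, a check like this turns an agent’s dozens of small decisions back into one reviewable signal.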

How eBPF and OpenTelemetry Have Simplified the Observability Function 

While many IT and engineering leaders understand the benefits of a comprehensive observability practice, achieving full visibility still presents challenges. For example, instrumenting new applications or off-the-shelf software can often be a time-consuming and complex process. As a result, engineering teams may avoid observability in certain parts of their environments. When these hurdles stall observability efforts, systems are at greater risk of disruption or of going completely dark. This can lead to serious business consequences such as financial losses, legal issues, and damage to brand reputation. OpenTelemetry eBPF Instrumentation (OBI) makes getting this data a cinch. It allows engineering teams to confidently lean into observability without any manual setup steps. Consequently, teams can rapidly gain visibility into their services and infrastructure. The Challenges to Complete Visibility. There are...

AI Is Forcing DevOps Teams to Rethink Observability Data Management

As AI coding tools accelerate software delivery, they are also intensifying a problem DevOps and SRE teams have been dealing with for years: the unchecked growth of observability data. In this conversation, the founders of Sawmills argue that telemetry volume is no longer just a cost issue. It is becoming a data quality problem that affects how effectively teams can monitor systems, troubleshoot incidents and make sense of production behavior. Ronit Belson and Erez Rusovsky describe how the rise of AI-generated code is making observability harder to manage. Instrumentation is often treated as an afterthought, which means more logs, metrics and traces are being generated without much discipline around relevance, quality or downstream impact. The result is familiar to many DevOps teams: rising observability bills, more noise in monitoring systems and growing difficulty separating useful telemetry from unnecessary data. Rather than waiting until data lands in production systems and...

Zero Downtime Multicloud Migrations for Observability Control Planes

Most platform teams aren’t deciding whether they’ll run across multiple clouds. They already are, or they’ll be soon. The real question is how to migrate critical systems without turning on-call into a guessing game. Observability raises the stakes more than almost any other domain. An observability control plane isn’t just a dashboard. It’s the operational authority system. It defines alert rules, routing, ownership, escalation policy, and notification endpoints. When that layer is wrong, the impact is immediate. The wrong team gets paged. The right team never hears about the incident. Your service level indicators look clean while production burns. A typical failure pattern is painfully simple. During a migration window, an ownership change lands in one system but not the other. A routing update is processed out of order. A notification endpoint rotates, but only one store is updated. Those discrepancies can sit quietly for days. Then a real incident hits, an alert fires, and it...
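That quiet divergence between two stores is exactly what a continuous diff during the migration window can surface before an incident does. A minimal sketch, assuming a deliberately simplified routing-rule shape (the record fields and function names are hypothetical, not any vendor’s API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RoutingRule:
    """Simplified alert-routing record; real control planes carry far more fields."""
    alert: str
    owner_team: str
    endpoint: str

def diff_control_planes(old_store, new_store):
    """Report rules that differ between two control-plane stores, keyed by alert name."""
    old = {r.alert: r for r in old_store}
    new = {r.alert: r for r in new_store}
    drift = []
    for name in sorted(old.keys() | new.keys()):
        a, b = old.get(name), new.get(name)
        if a is None:
            drift.append((name, "only in new store"))
        elif b is None:
            drift.append((name, "only in old store"))
        elif a != b:
            drift.append((name, "mismatch"))  # e.g. ownership change landed in one store only
    return drift

# An ownership change that landed in only one store shows up as drift.
old_rules = [RoutingRule("HighErrorRate", "payments", "pagerduty://payments")]
new_rules = [RoutingRule("HighErrorRate", "checkout", "pagerduty://payments")]
print(diff_control_planes(old_rules, new_rules))  # prints [('HighErrorRate', 'mismatch')]
```

Scheduled throughout the migration, a diff like this turns days of silent divergence into an immediate, actionable signal.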

JetBrains Launches Air and Junie CLI to Blend Traditional IDE with AI Agents

JetBrains has launched a new “agentic” tooling stack that pairs a multi-agent development environment, Air, with a standalone, LLM-agnostic coding agent, Junie CLI. If you know JetBrains, you probably know it for Kotlin, the statically typed Java Virtual Machine (JVM) language used mostly for Android development, or for its well-known integrated development environments (IDEs), such as IntelliJ IDEA for Java, PyCharm for Python, and WebStorm for JavaScript. Going forward, JetBrains hopes you’ll also know it for its AI tools, JetBrains Air and Junie CLI. The first, Air, is pitched as an “agentic development environment” that lets developers delegate coding tasks to multiple AI agents running concurrently. Rather than bolting chat boxes onto editors, Air “builds tools around the agent,” bundling terminals, Git, previews, and code navigation into a single workspace designed to guide and correct agents rather than just prompt them. JetBrains says it’s using its 26 years of IDE ...

Microsoft Azure Skills Plugin Gives AI Coding Agents a Playbook for Cloud Deployment

AI coding agents are good at writing code. They’re not good at knowing which Azure service fits your workload, which SKU makes sense, what needs to be validated before deployment, or which permissions and quotas matter. That gap between writing code and getting it to production is exactly what Microsoft’s new Azure Skills Plugin is designed to close. Announced March 9 by Chris Harris on the All Things Azure blog, the plugin bundles 19+ curated Azure skills, the Azure MCP Server with over 200 tools across 40+ services, and the Foundry MCP Server for AI model workflows — all in a single install. It works across GitHub Copilot in VS Code, Copilot CLI, Claude Code, and other tools that support the agent plugin and skills patterns. The timing isn’t accidental. This is one of the first major plugins built on the VS Code agent plugin architecture that shipped in VS Code 1.110 just days earlier. And it demonstrates what that architecture looks like when a cloud platform vendor fills it wit...