
The Vibing Continuum: How Software Will Vibe its Way Through Agentic Engineering 


Did God vibe the universe into existence? My mind served up this strange thought at three in the morning. It may have been sparked by an incident the previous evening, when one of our team members spun up an entire e-commerce website by merely “vibing” with Codex. I tried to shush my mind, but it wouldn’t stay quiet.

God said, “Let there be light,” and there was light. Isn’t that a classic example of spinning up a whole universe by sheer vibing? For the record, my mind has never contested or undermined the Big Bang theory, but creating the world through mere words feels far less unbelievable when seen through the vibe-coding analogy. The mind prodded further.

Could God have created and then deputed (abandoned?) the world to human agents, eerily similar to how humans have deputed (are deputing) software development to AI agents?  

Possible, entirely possible! Now my eyes were wide open.   

I quickly recalled the dark software factory supposedly run by agents, operating on two cardinal rules: humans don’t write code, and humans don’t review code. Even in the event of a malfunction, the debugging is still handled by the AI agents. Humans have no say in it, just as God remains silent during awful incidents, trusting the human agents to own and correct their actions.

The next morning, in the office, I posed this question to one of my pals, a senior technical manager with decades of experience.  

“Let’s say that ten years from now, the dark software factory has become the de facto standard for producing software. Where would that leave human coders?” I had hoped to catch some signs of denial, but surprisingly, he didn’t dwell on the question. He blurted out that he didn’t see a future where agents would entirely replace humans. The grand role of planning and designing systems would always rest with the almighty humans, he asserted. “However, the future of coding would come down to merely talking to AI agents all day,” he sighed, the drop in his voice clearly audible.

What a time to be alive, I exclaimed: a great time for all those non-engineering folks who see coding purely as a means to an end. But what about the engineers who genuinely enjoy writing code line by line? The kind who like to meditate on a problem and work out good code that runs without bugs. I wanted to understand how these thoroughbred software engineers feel when AI steals their labors of love.

Again, I was surprised to receive two strikingly opposite viewpoints on how these experienced programmers perceive the AI shift.

Viewpoint 1: This group sees the intrusion of AI as a massive opportunity. “Buddy, I am running a herd of agents.” A senior systems architect, a friend of mine, elaborated on his grand AI pivot. “I am solely focused on the outcomes rather than the process,” he clarified. “Gone are the days when I would spend an entire day coming up with a few lines of working code. Now AI agents can build products in record time.” He was emphatic about what the future holds for human coders. “Remember, today’s leverage is not in building products. The real moat is in scaling them. For that, you need expertise in extracting work from AI. My sole focus these days is on learning how to switch contexts between five agents assigned to five different tasks.”

Viewpoint 2: This group is deeply concerned about the future (fate) of software engineering itself. One in particular, a senior engineer, told me that he built a working prototype in ten minutes using Replit. “I was blown away by the speed, but the whole time I was a mere spectator, with only minimal involvement. I felt less of an engineer.” He spoke bluntly at first, then gave it careful thought. “If this is how it is going to be, then we may have more products, but fewer engineers, in the future.” He based his assumption on the fact that the smart, experienced engineers who have been successful with AI were trained during the pre-AI period. The way they instruct AI, monitor its progress, and course-correct when necessary rests on decades of experience working in multiple languages on multiple platforms. But the next generation of software engineers may not have the same luxury to develop their own mental capabilities. Unless there is deliberate restraint in the use of AI, the bulk of their thinking and judgment will end up outsourced to AI, hindering them from gaining an intimate, intuitive understanding of software engineering.

Weighing Two Perspectives  

The first group believes the ends justify the means. They see software engineering as a means to solve real-life business problems. By the same token, they wouldn’t mind vibe coding an application or maneuvering a host of agents. The motivation is no longer in creating demos or prototypes; only scale inspires them. Am I creating an application that can process terabytes of data? How many projects am I running in parallel? What excites them is envisioning production-ready software that fulfills its intended purpose. They may have taken their hands off coding, but they spend a great deal of time at the macro level, designing, planning, and executing projects in optimal time.

The second group prefers to retain control over micro-level decisions, not just macro-level ones: Should I generalize this logic now, or duplicate it and refactor later if need be? Should I replace this working-but-ugly solution before merging? Is this edge case realistic enough to justify a test? Even if they use AI in some (assisted or agentic) capacity, they still prefer to micro-manage the programming process. In effect, they either remain grounded in traditional coding or, at most, step into the role of technical lead: managing a group of AI agents, carefully reading walkthroughs and codebase explanations, closely monitoring agent progress, and ensuring that the agents adhere to their blueprints from start to finish.

The Vibing Continuum  

As you know, vibe coding is declarative; you describe what you want, and the AI figures out the rest. Agentic engineering, on the other hand, is imperative: the engineer provides clear direction, detailing action items for each agent, establishing guardrails, and following up with the agents at every milestone.
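The distinction can be sketched with two contrasting prompts. Both prompts below are purely illustrative, invented for this article rather than drawn from any specific tool or workflow:

```python
# A hedged illustration of the two instruction styles.
# Neither prompt targets a real product; both are hypothetical examples.

# Vibe coding: declarative -- state the desired outcome, leave the "how" to the AI.
vibe_prompt = (
    "Build me an e-commerce site with a product catalog, a cart, "
    "and checkout. Make it look modern."
)

# Agentic engineering: imperative -- spell out steps, guardrails, and checkpoints.
agentic_prompt = (
    "Step 1: Scaffold a REST API with /products and /cart endpoints. "
    "Step 2: Write unit tests for cart totals before implementing them. "
    "Guardrail: do not modify the payments module. "
    "Checkpoint: stop and report after each step for review."
)

# The imperative style encodes process, not just outcome.
print(len(agentic_prompt) > len(vibe_prompt))
```

The declarative prompt trusts the agent with every decision; the imperative one reserves sequencing, constraints, and review points for the engineer.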

Since emerging last year, vibe coding has made writing code remarkably cheap; the cost of launching working software has dropped dramatically. However, the technique has become largely synonymous with sloppy software. It works like a charm for spinning up cute little applications, but proves far less effective for modernizing legacy systems or building novel applications. Non-engineering folks see vibe coding purely as a flashy toy for throwaway demos, as do the professionals who indulge on weekends to bring passion projects to life. As a result, there is broad consensus that producing high-quality, reliable software requires agentic engineering or AI-assisted traditional coding.

That said, as AI continues to evolve, much of the software built will be either agentically engineered or vibe-coded. In parallel, experienced software engineers will find their work divided between orchestrating agents and vibing outcomes, while junior engineers would do well to practice traditional coding, with AI as an assistant, to strengthen their fundamentals before moving into agentic engineering.

All things considered, we can safely presume that AI is here to stay, and trust that humans will hold the final say!



from DevOps.com https://ift.tt/cwClLrM
