
When you ask an AI coding agent how to solve a problem, it reaches for code. That's not just a preference: code is how software teams actually ship, backed by an ecosystem of essential tools and processes – version control, reviews, tests in CI, deploys and rollbacks.
We’ve spent the last decade pushing more of our systems into code: configuration, infrastructure, and, of course, application logic. The payoff was control, reproducibility, audit trails, and the ability to prove a change works before it hits production.
The Dashboard Problem
But a lot of “AI tooling” – especially in security – still lives outside that world. When a problem depends on a third-party system, the agent often can’t complete the loop. It can recommend steps, but it can’t reliably apply them, verify them, or keep them correct over time. Those systems sit outside the context window that is your codebase – the source of truth.
Take spam signups. If the solution is a vendor product configured in a dashboard, the agent has to bridge the gap with brittle workflows: opening a browser, creating an account, clicking through settings, copying keys, tweaking rules, maybe even asking you to change DNS. At best, you get a checklist. At worst, you’re letting an AI drive production config through a UI built for humans, not automation. To an AI agent, the web dashboard is a second-class interface.
Verification
Then you hit the real issue – verification. With code, you can run tests, but with a dashboard, what’s the equivalent of a unit test? How do you prove it blocks the bad traffic, allows the good traffic, and keeps working when the vendor changes something? Everyone has pressed a dashboard button and broken production, sometimes so subtly that the breakage is first reported by your users.
One response is “wrap the vendor in an agent-friendly API” – for example, by shipping an MCP server. That helps, but it doesn’t solve the core problem: most integrations still don’t offer a clean, testable contract. You can change settings through an API and still have no reliable way to validate behavior locally or in CI. So you end up testing in production.
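Here is a minimal sketch of what a testable contract looks like, as opposed to an API that only lets you push settings remotely. Every name below (`SpamRules`, `evaluate`) is illustrative – this is not any real vendor's SDK – but the shape is the point: the decision logic is a pure function over versioned configuration, so it can be exercised locally without touching the vendor at all.

```python
# Illustrative sketch, not a real vendor SDK: a testable contract means
# the decision logic is a pure function you can run locally and in CI,
# not just a settings endpoint you can PUT to.

from dataclasses import dataclass, field


@dataclass
class SpamRules:
    """Configuration an agent can write, review, and version in the repo."""
    blocked_domains: set[str] = field(default_factory=set)
    max_signups_per_ip_per_hour: int = 10


def evaluate(rules: SpamRules, email: str, signups_from_ip_last_hour: int) -> bool:
    """Pure decision function: return True if the signup should be blocked.

    Because this is code, CI can assert its behavior before the rules
    ever reach production - no dashboard clicks, no testing in prod.
    """
    domain = email.rsplit("@", 1)[-1].lower()
    if domain in rules.blocked_domains:
        return True
    return signups_from_ip_last_hour >= rules.max_signups_per_ip_per_hour
```

With a contract like this, "did the rule change break good traffic?" becomes an ordinary assertion in your test suite rather than a question you answer by watching production.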
Code-Native Wins
The products that will win in the era of AI coding are code-native end-to-end: integrated, configured, executed, and observed in code – and testable locally and in CI before reaching production.
If your agent can solve spam signups by writing code you can run locally – and prove in CI – you’ve given it a real tool. If it can only hand you dashboard instructions, the model is handicapped and your team is stuck doing the risky, untestable part manually.
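Concretely, "prove it in CI" can be as small as a rule expressed in code plus a unit test any CI runner can execute. The specific rule below (blocking disposable-email domains) and the domain list are illustrative assumptions, not a prescribed policy.

```python
# Minimal sketch of a CI-provable spam-signup rule. The blocklist and
# the policy itself are illustrative assumptions, not a recommendation.
import unittest

DISPOSABLE_DOMAINS = {"mailinator.com", "tempmail.dev"}  # illustrative list


def is_spam_signup(email: str) -> bool:
    """Return True when the signup email uses a known disposable domain."""
    return email.rsplit("@", 1)[-1].lower() in DISPOSABLE_DOMAINS


class SpamSignupTest(unittest.TestCase):
    def test_blocks_disposable_domain(self):
        self.assertTrue(is_spam_signup("bot@mailinator.com"))

    def test_allows_normal_domain(self):
        self.assertFalse(is_spam_signup("alice@example.com"))
```

Run it with `python -m unittest` like any other test. That is the whole difference: the agent's change to the rule fails or passes in CI, instead of surfacing as a support ticket.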
You don’t want AI that recommends changes. You want AI that can ship changes safely, with verification, and keep them correct over time. If it isn’t code, it isn’t automatable – it’s just advice.
from DevOps.com https://ift.tt/0NL3goZ