
Embracing the MCP Suck: Taming the Wild West of AI Protocols

The Model Context Protocol (MCP) is evolving faster than the developer community can keep pace, racing past its original design parameters and leaving teams scrambling to build clients that can match it. The result is an ecosystem where the protocol itself keeps shifting under everyone’s feet, and where the tooling, conventions, and security thinking that should accompany a foundational standard are still being figured out on the fly.

Joey Stout, solutions architect at Spacelift, joins Mike Vizard to make the case that this is the price of being early. Stout describes an environment that increasingly resembles a Wild West, where rogue MCP servers get spun up inside organizations without anyone in leadership knowing they exist, let alone whether they have basic guardrails wrapped around them. The convenience of standing one up in a few minutes has outrun the discipline needed to govern them.

MCP servers can give AI agents broad reach into internal systems, data and APIs, and most of the early implementations were never designed with adversarial behavior in mind. Without authentication standards, scoped permissions and observability built in, every new server becomes another piece of shadow infrastructure that can be exploited, misconfigured or simply forgotten until it causes a problem.
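To make the missing guardrails concrete, here is a minimal sketch of the kind of check an MCP server could run before executing any tool call: token authentication, per-token scoped permissions, and a log line for observability. The token names and tool names are hypothetical, and this is a generic illustration rather than the MCP SDK's actual API.

```python
import hmac
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("mcp-guard")

# Hypothetical registry: which tokens exist and which tools each may call.
TOKEN_SCOPES = {
    "team-a-token": {"read_docs", "search_tickets"},
}

def authorize(token: str, tool: str) -> bool:
    """Allow a tool call only for a known token with the matching scope."""
    for known, scopes in TOKEN_SCOPES.items():
        # Constant-time comparison avoids leaking token contents via timing.
        if hmac.compare_digest(token, known):
            allowed = tool in scopes
            # Audit trail: every decision is logged, allowed or not.
            log.info("token=%s... tool=%s allowed=%s", known[:6], tool, allowed)
            return allowed
    log.warning("rejected unknown token for tool=%s", tool)
    return False
```

Even a guard this small changes the failure mode: a forgotten server denies by default and leaves a trail, instead of silently handing an agent broad reach into internal systems.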

Stout’s advice for developers is blunt: embrace the suck. The protocol is going to keep changing, the security story is going to keep evolving, and waiting for a stable, fully governed version before getting hands-on isn’t a realistic option. The teams that learn to wrangle MCP now — messy edges and all — will be the ones positioned to set the patterns everyone else ends up following.



from DevOps.com https://ift.tt/oSVnPFv
