
Your CI/CD Pipeline Has Non-Human Identities You Forgot About

A deployment starts failing late on a Friday evening.

The initial assumption is that something changed in the application release. Teams start checking container images, Terraform plans and recent commits. Nothing looks wrong.

A few hours later, someone discovers the actual issue: a deployment token tied to an old automation workflow expired months ago. The token was still being used by a pipeline nobody realized was active.

The original engineer who created it had already moved to another team.

Situations like this are becoming normal in modern delivery environments. Not because organizations suddenly lost visibility into human access, but because CI/CD systems now create machine identities constantly. Most of them are temporary. Some become permanent without anyone planning for it.

A few years ago, infrastructure access mostly revolved around employees, administrators and service accounts that teams could track manually. That model no longer holds up very well.

Today’s pipelines rely on build runners, deployment bots, ephemeral workloads, repository integrations, infrastructure automation accounts and short-lived cloud credentials moving across multiple systems at once. Some exist for minutes. Others stay around for years after their original purpose disappeared.

The harder part is figuring out how much access quietly exists between systems.

Most organizations can tell you which employees still have production access. Tracking older automation workflows, deployment runners and pipeline credentials is usually far less clear.

Machine Access Builds Up Quickly in CI/CD Systems

Modern software delivery pipelines create and consume identities almost nonstop.

A single deployment workflow may involve:

  • repository actions
  • build runners
  • container registries
  • cloud workload identities
  • Kubernetes service accounts
  • infrastructure automation tools
  • artifact signing systems
  • secret managers
  • deployment orchestrators

Each step often introduces another credential, access dependency or permission change.

A GitHub Actions workflow might assume a cloud role during deployment. A Kubernetes controller may retrieve secrets dynamically at runtime. Terraform automation could receive temporary permissions to create infrastructure resources before handing execution to another pipeline stage.
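The chain described above is what makes pipeline access hard to reason about: each identity's effective reach is the union of everything it can assume transitively. A minimal sketch of that idea, with hypothetical identity and permission names rather than any real provider's API:

```python
# Hypothetical model of a pipeline's identity chain: each identity can
# assume others, and effective access is the union along the chain.
from typing import Dict, List, Set

ASSUMES: Dict[str, List[str]] = {
    "github-actions-workflow": ["cloud-deploy-role"],
    "cloud-deploy-role": ["k8s-service-account"],
    "k8s-service-account": [],
}

DIRECT_ACCESS: Dict[str, Set[str]] = {
    "github-actions-workflow": {"repo:read"},
    "cloud-deploy-role": {"s3:write", "ec2:run"},
    "k8s-service-account": {"secrets:read"},
}

def effective_access(identity: str, seen=None) -> Set[str]:
    """Union of permissions reachable by assuming identities transitively."""
    seen = seen if seen is not None else set()
    if identity in seen:
        return set()
    seen.add(identity)
    access = set(DIRECT_ACCESS.get(identity, set()))
    for nxt in ASSUMES.get(identity, []):
        access |= effective_access(nxt, seen)
    return access

# The workflow's effective reach includes everything the deploy role and
# the service account can touch, not just its direct repo permission.
print(sorted(effective_access("github-actions-workflow")))
```

Nothing in a single configuration file shows this full set; it only emerges when the trust relationships are walked end to end.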

None of this is unusual anymore. Most teams depend on these workflows daily.

A lot of existing access management processes still assume identities are relatively static and tied to employees.

CI/CD environments behave very differently.

Identities appear temporarily during builds. Permissions get copied between environments to speed up deployments. Automation accounts survive long after projects are retired because removing them feels risky. A pipeline integration created during an outage quietly becomes part of the environment long-term.

Over time, these workflows start depending on access paths nobody fully tracks anymore.

In many environments, the delivery pipeline itself quietly becomes one of the most privileged systems in the organization.

It shows up in small ways first. An old Jenkins runner still has production deployment permissions even though the project was retired months ago. A GitHub Actions workflow keeps using the same cloud role across development and production because splitting access would slow down releases. A Kubernetes service account originally scoped for one namespace quietly gains broader cluster permissions after multiple troubleshooting changes over time.

Why Traditional IAM Visibility Starts Breaking Down

Traditional IAM processes were built around employees joining, changing roles and eventually leaving.

Automation does not follow the same lifecycle.

One deployment workflow may use:

  • a repository token
  • a cloud role
  • a Kubernetes service account
  • a shared deployment credential
  • several secrets injected at runtime

After a while, it becomes difficult to tell who still owns what.
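One way to make the ownership question concrete is to record, for each credential, who owns it and which pipelines still consume it, then flag anything unowned or unreferenced. A sketch with hypothetical names, assuming this data can be pulled from pipeline configs and secret-manager metadata:

```python
# Hypothetical credential inventory: flag credentials nobody owns,
# and credentials no pipeline references anymore.
credentials = {
    "repo-token-legacy": {"owner": None, "used_by": ["nightly-build"]},
    "cloud-deploy-role": {"owner": "platform-team", "used_by": ["release-pipeline"]},
    "shared-staging-secret": {"owner": "app-team", "used_by": []},
}

def audit(creds):
    unowned = [name for name, c in creds.items() if c["owner"] is None]
    unused = [name for name, c in creds.items() if not c["used_by"]]
    return unowned, unused

unowned, unused = audit(credentials)
print("no owner:", unowned)     # nobody accountable for rotation or removal
print("no consumers:", unused)  # candidates for safe removal
```

Both lists are review prompts, not automatic deletions; the point is surfacing the question before an incident does.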

Security teams may know the service account exists, but not which pipeline still depends on it. Infrastructure teams may rotate one credential without realizing another automation workflow still references the old version. Permissions granted during emergency fixes remain in place because nobody wants to risk breaking deployments later.

This is how temporary access slowly becomes permanent infrastructure.

Shared automation credentials make the problem worse. In some environments, multiple pipelines reuse the same secrets across staging and production simply because separating them would require refactoring old workflows. In others, old deployment repositories still retain access tokens for cloud environments that are no longer actively maintained, because nobody is certain what might break if they are removed.

Teams also copy permissions between projects more often than they realize.

A deployment role that originally needed read-only storage access suddenly receives write permissions because another pipeline required it during a migration. Later, a different project reuses the same configuration template. The permissions continue expanding even though nobody revisits the original assumptions.

The result is usually not one catastrophic configuration mistake.

It is years of small operational shortcuts accumulating quietly across delivery systems.

Most of them look harmless individually.

Together, they create environments where organizations no longer have a reliable inventory of which non-human identities can access critical systems.

Security Problems Teams Usually Miss

The risks usually do not come from one obviously dangerous credential sitting in a repository.

Most of these problems build up slowly across multiple systems.

A deployment pipeline may still have access to production environments long after the original application was decommissioned. Shared secrets reused across environments make lateral movement easier during incidents. Build runners sometimes receive broad infrastructure permissions simply because restricting them would require redesigning older workflows.

Repository integrations create another blind spot. An automation token added temporarily during a migration may retain write access years later because nobody wants to risk breaking the pipeline by removing it.

Teams also underestimate how much production access exists inside CI/CD systems themselves. Build infrastructure often connects directly to artifact registries, cloud environments, deployment platforms and secret managers at the same time.

Once attackers gain access to a pipeline with elevated permissions, the CI/CD environment can become a path into multiple systems at once.

What Actually Helps

Another dashboard showing thousands of machine identities usually does not solve the real problem.

What teams usually need instead is clearer ownership and shorter trust chains.

A good starting point is inventorying which non-human identities still have production access and which systems actually depend on them. Many organizations discover old deployment runners, unused automation accounts or long-lived credentials that nobody intentionally kept.
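That inventory pass can start very simply: flag non-human identities whose last recorded use is older than some threshold. A sketch, assuming last-use timestamps are available from the identity provider's or cloud platform's audit logs (identity names here are hypothetical):

```python
from datetime import datetime, timedelta

# Hypothetical last-use data, e.g. exported from cloud audit logs.
identities = {
    "jenkins-runner-old": datetime(2024, 1, 10),
    "release-bot": datetime(2025, 6, 1),
    "tf-automation": datetime(2023, 11, 2),
}

def stale(ids, now, max_idle_days=90):
    """Identities unused for longer than the idle threshold."""
    cutoff = now - timedelta(days=max_idle_days)
    return sorted(name for name, last in ids.items() if last < cutoff)

print(stale(identities, now=datetime(2025, 6, 15)))
```

Anything this flags deserves a human question, who owns this and what breaks without it, before it is disabled rather than deleted outright.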

Separating build and deployment permissions also helps reduce unnecessary exposure. Build systems do not always need direct production access. In many environments, they inherited those permissions simply because the pipeline evolved faster than the security model around it.
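Separation can be enforced as a cheap policy check in review: assert that build-stage roles hold no production-deploy actions. A hedged sketch, with hypothetical role and action names standing in for whatever the real permission model uses:

```python
# Hypothetical check: build roles must not hold production actions.
PROD_ACTIONS = {"prod:deploy", "prod:secrets:read", "prod:db:write"}

def violations(role_actions):
    """Map each build role to the production actions it wrongly holds."""
    return {
        role: sorted(acts & PROD_ACTIONS)
        for role, acts in role_actions.items()
        if acts & PROD_ACTIONS
    }

build_roles = {
    "ci-build": {"artifact:write", "repo:read"},
    "ci-test": {"repo:read", "prod:secrets:read"},  # inherited by accident
}

print(violations(build_roles))
```

Running a check like this in CI turns "build systems do not need production access" from a principle into something that fails a pull request.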

Short-lived workload identities reduce some of the long-term credential problems that static secrets create. Regular reviews of repository integrations, cloud role assumptions and pipeline-to-cloud trust relationships help prevent older exceptions from becoming permanent infrastructure.
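The advantage of short-lived identities is structural: a credential that carries its own expiry cannot quietly become permanent infrastructure. A minimal illustration of the idea, not any real token format or issuance protocol:

```python
import time
import secrets

# Hypothetical short-lived credential: the token carries its own expiry
# instead of living indefinitely like a static secret.
def mint_token(ttl_seconds=900):
    return {"value": secrets.token_hex(16),
            "expires_at": time.time() + ttl_seconds}

def is_valid(token):
    return time.time() < token["expires_at"]

tok = mint_token(ttl_seconds=900)
print(is_valid(tok))  # valid right after issuance, useless after 15 minutes
```

Real implementations (OIDC-federated cloud roles, projected Kubernetes service account tokens) add signing and audience checks, but the lifecycle property is the same: forgetting to revoke the credential no longer matters.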

A few operational practices help reduce long-term access sprawl:

  • Inventory which non-human identities still have production access.
  • Separate build and deployment permissions wherever possible.
  • Regularly review old pipeline integrations and cloud role assumptions.

Most importantly, CI/CD systems should be treated as production infrastructure themselves.

They already hold privileged access across large parts of the environment. Security reviews that focus only on applications while ignoring delivery pipelines leave a significant portion of operational risk unexamined.

Most organizations already know who their employees are.

Fewer know which machines still have access to production.



from DevOps.com https://ift.tt/zchljn7
