Sysdig this week at the RSA Conference (RSAC) revealed it has created a runtime that makes it possible to securely deploy artificial intelligence (AI) coding tools. Jonas Rosland, director of the open source program for Sysdig, said the runtime monitors the activity of AI coding agents in real time, including potential credential risks, and enables investigation of incidents involving AI agent activity. Additionally, AI agents can be prevented from opening sensitive files or bypassing credential controls. The runtime also prevents risky command-line arguments that weaken safeguards, such as allowing unrestricted file writes, along with dangerous activity within developer environments, including reverse shells, binary tampering, and persistence mechanisms. As AI coding tools are made available to professional and citizen developers alike, the likelihood of a cybersecurity incident involving these tools continues to rise. DevSecOps teams...
There was a time when compliance meant a quarterly ritual. Someone from security would walk over with a spreadsheet, ask a few questions, tick a few boxes and disappear until the next audit cycle. The infrastructure team would scramble to prove that yes, encryption was enabled, and no, that S3 bucket was not public anymore. Everyone felt relieved, went back to shipping features and quietly hoped nothing would drift before the next review. That model is dead; it just hasn’t been buried yet. The problem is not that teams lack security awareness. Most engineering organizations today understand that vulnerabilities need catching early and that production environments need hardening. The problem is that compliance has historically lived outside the delivery pipeline — treated as a checkpoint rather than a continuous practice. In a world where teams deploy dozens of...