A small internal tool was built over a weekend. An engineer used an AI coding assistant to generate most of the backend. A simple interface was added, a few API calls were wired together, and within hours the app was live. The app worked. The app felt fast. The app looked like progress.

No one thought much about how the tool was deployed. There was no pipeline, no review process, and no structured testing. The code was generated, copied, slightly adjusted, and pushed into an environment that was already running.

For a while everything seemed fine. Then something subtle happened. An API key was exposed in a configuration file. A dependency pulled in by the generated code had a known vulnerability. A route that should have been protected was left open. None of these issues were visible from the outside. The system still worked. Users kept using the tool.

This is the part that makes AI-generated apps risky. They do not fail loudly. They fail quietly, and often too late.

The Illusi...
Security has long been the last item on the checklist. Code gets written, reviewed, merged, and then, somewhere down the line, a security team takes a look. That model worked when development moved at a human pace. It doesn't work as well when AI writes and refactors code faster than any team can keep up with.

Vercel is taking a direct shot at that problem with the open-source release of deepsec, an agent-powered security harness that runs on your own infrastructure and surfaces hard-to-find vulnerabilities in large codebases.

How It Works

Deepsec uses Claude and Codex to conduct a tailored investigation of a codebase, starting with static analysis to identify security-sensitive files. From there, coding agents investigate each candidate, tracing data flows, checking for mitigations, and producing actionable findings with severity ratings. The process runs in five stages: scan, investigate, revalidate, enrich, and export. The scan stage runs roughly 110 regex matchers across t...
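To make the scan stage concrete, here is a minimal sketch of the pattern-matching idea: run a set of regex matchers over source files and emit candidate findings for deeper investigation. The matcher names, patterns, and `Candidate` structure are hypothetical illustrations, not deepsec's actual matchers or data model (which spans roughly 110 patterns).

```python
import re
from dataclasses import dataclass
from pathlib import Path

# Hypothetical matchers for illustration only; deepsec's real set of
# ~110 patterns is not reproduced here.
MATCHERS = {
    "hardcoded-secret": re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]+['\"]"),
    "raw-sql": re.compile(r"(?i)execute\(\s*f?['\"]\s*select\b"),
}

@dataclass
class Candidate:
    path: str
    line: int
    matcher: str
    snippet: str

def scan(root: str) -> list[Candidate]:
    """Scan stage sketch: flag security-sensitive lines as candidates
    that a later 'investigate' stage would hand to coding agents."""
    findings: list[Candidate] = []
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), start=1):
            for name, pattern in MATCHERS.items():
                if pattern.search(line):
                    findings.append(
                        Candidate(str(path), lineno, name, line.strip())
                    )
    return findings
```

The key design point the sketch preserves is that the scan stage is cheap and over-inclusive by design: regexes produce candidates, and the later agent-driven stages (investigate, revalidate) are what separate real vulnerabilities from noise.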