

Hacktron revealed today it is developing a platform that leverages artificial intelligence (AI) to continuously test code for vulnerabilities.
Fresh off raising $2.9 million in seed capital, Hacktron founder Zayne Zhang said the company’s platform will employ multiple AI models to test every pull request and code change to identify vulnerabilities that are actually exploitable.
Once a vulnerability is identified, the platform will also surface a remediation recommendation that can be shared with an AI coding tool. The overall goal is to dramatically reduce the number of false positives that DevOps teams waste time investigating, said Zhang. In effect, AI will significantly reduce the burden DevSecOps teams currently shoulder when trying to maintain application security, he added.
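Hacktron has not published its interface, but the triage step Zhang describes — scan every code change, keep only the findings judged exploitable, and attach a machine-readable fix suggestion that a coding tool could consume — can be sketched in a few lines. All names below are illustrative assumptions, not Hacktron's actual API:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """A vulnerability reported by one of the scanning models (illustrative)."""
    rule: str          # e.g. "sql-injection"
    file: str          # path within the pull request
    exploitable: bool  # the models' judgment of real-world exploitability
    remediation: str   # suggested fix, suitable for handing to a coding tool

def triage(findings):
    """Drop likely false positives; package each exploitable finding
    with its remediation suggestion."""
    return [
        {"rule": f.rule, "file": f.file, "suggested_fix": f.remediation}
        for f in findings
        if f.exploitable
    ]

# Example: two raw findings from a pull request, one a likely false positive.
raw = [
    Finding("sql-injection", "app/db.py", True, "use parameterized queries"),
    Finding("weak-hash", "tests/fixtures.py", False, "switch to SHA-256"),
]
actionable = triage(raw)
print(len(actionable))  # only the exploitable finding survives triage
```

The point of the sketch is the filter: by discarding findings the models judge unexploitable before anything reaches a developer, the false-positive noise that plagues traditional static analysis never enters the review queue.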
The team behind Hacktron has years of expertise researching vulnerabilities. Most recently, Hacktron uncovered critical vulnerabilities in the widely used OAuth2 Proxy project, highlighting risks in open-source infrastructure relied on by enterprise teams. The company has also provided security testing services for organizations such as Perplexity AI and Supabase.
With the advent of the latest AI models from Anthropic and OpenAI, it’s apparent that vulnerabilities in code will soon be discovered within hours of an application being deployed. Once discovered, it will only take a few more hours for adversaries to find ways to exploit those vulnerabilities. The only way to prevent those cybersecurity incidents in the first place will be to leverage AI to identify and remediate vulnerabilities and weaknesses long before any application is actually deployed, noted Zhang.
It’s not clear how much time DevOps teams will soon spend remediating vulnerabilities as more of them are discovered, but there is a case to be made for replacing or modernizing legacy applications on the assumption that many of them are, from a security perspective, fundamentally flawed. The challenge then becomes ensuring that any new code generated by human developers or an AI agent is truly secure. Otherwise, DevOps teams will find themselves throwing more fuel on an application security fire that is already close to spiraling out of control.
Regardless of approach, there will soon come a day when it will no longer be acceptable to ship code that has known vulnerabilities. The irony, of course, is that the first wave of AI tools that application developers adopted tended to increase the number of vulnerabilities being introduced, simply because they were trained on flawed examples of code. However, the next generation of AI models has more advanced reasoning capabilities that make it possible to surface vulnerabilities in both new and legacy code.
Ultimately, it’s now only a matter of time before DevSecOps workflows are re-engineered using AI agents that will be better at discovering vulnerabilities than most software engineers might ever hope to be. While that may lead to some fundamental changes in how DevOps workflows are constructed, there is little doubt that most application developers who spend time creating and testing patches for applications would generally prefer to spend more of their time on higher-level tasks that add significantly more value to the business.
from DevOps.com https://ift.tt/I9FqMTJ