Secure AI-Generated Code: A Reality with the Right Tools

December 11, 2024
Insights

The rise of AI-generated code has unlocked immense productivity gains for developers. However, its adoption has also highlighted a critical challenge: security. As we noted in a piece for The New Stack titled “Don’t trust Security in AI Generated Code”, studies from leading institutions like Stanford and Georgetown have repeatedly shown that AI-generated code often prioritizes functionality over security, creating vulnerabilities that can disrupt workflows and delay deployments. Despite this, AI-driven code generation is here to stay, and for good reason: it accelerates development in ways we’ve never seen before. So how can we harness this power responsibly? By using the right tools to make secure AI-generated code achievable.

The Problem with AI-Generated Code

AI-generated code derives its output from training datasets that emphasize functionality, not security; in a study by Wuhan University, 30% of the code snippets examined were vulnerable. Unlike a human developer, AI lacks the judgment to identify potential vulnerabilities unless explicitly prompted, and even then, mistakes happen.
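For illustration, consider the kind of snippet an assistant might produce: functionally correct, yet open to SQL injection. This is a hypothetical sketch, not an example drawn from the study.

    import sqlite3

    # Hypothetical AI-generated snippet: it runs, but it is not secure.
    def get_user(username: str):
        """Look up a user record by username."""
        conn = sqlite3.connect("app.db")
        # User input is interpolated directly into the SQL string, so a
        # crafted username such as "' OR '1'='1" can inject arbitrary SQL.
        query = f"SELECT * FROM users WHERE username = '{username}'"
        return conn.execute(query).fetchone()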

Developers, for their part, care about security but face competing priorities. Their work revolves around speed and meeting deployment deadlines. This disconnect between the capabilities of AI and the realities of development workflows has created a significant problem: a growing backlog of insecure code that stalls progress.

The Problem with the Process

Currently, when AI-generated code is introduced into a developer’s workflow and issues arise, developers can quickly detect and fix functional bugs within their integrated development environments (IDEs) using existing tools such as linters. Security issues, however, are often identified much later, after the code has been committed and flagged by separate security tools.

At this point, developers are forced to revisit previously committed code, address vulnerabilities, and resubmit. This reactive process not only disrupts velocity but also creates friction between development and security teams. Traditional approaches, such as security training sessions, have failed to resolve this.

This inefficiency exacerbates the bottleneck, frustrating both developers and security professionals.

The Solution: Secure AI-Generated Code

While AI may be years away from achieving the nuanced judgment required to generate secure code by default, we don’t need to wait. The solution lies in the same kind of productivity-enhancing tools developers already use, extended to address security concerns during the drafting process.

Modern, security-focused tools now allow developers to detect and fix vulnerabilities directly in their IDEs, as easily as they run functionality checks. This integrated approach transforms the workflow:

  1. Generate code instantly using AI.
  2. Import the code into the IDE.
  3. Use in-IDE tools to flag and resolve bugs.
  4. Use in-IDE tools to flag and fix vulnerabilities (a sketch follows this list).
  5. Commit secure, functional code.
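
To make step 4 concrete, here is the hypothetical snippet from earlier after an in-IDE fix, using a parameterized query so the database driver handles escaping. Again, this is a sketch, not a depiction of any particular tool’s output.

    import sqlite3

    # The same lookup, with the injection hole closed before commit.
    def get_user(username: str):
        """Look up a user record by username, safely."""
        conn = sqlite3.connect("app.db")
        # The "?" placeholder binds the input as data, never as SQL,
        # so the vulnerability is resolved during drafting.
        query = "SELECT * FROM users WHERE username = ?"
        return conn.execute(query, (username,)).fetchone()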

This streamlined process brings remediation timelines back from days to minutes. Developers maintain their velocity, while security teams see their backlog shrink.

Symbiotic Security: Rewriting the Process

Symbiotic Security's plugin does exactly this. It eliminates friction by empowering developers to address security concerns during the code drafting phase. Vulnerabilities are resolved before they ever reach the backlog, ensuring that newly deployed code is secure from the start. Over time, teams can also address existing backlogs with unprecedented speed, transforming how organizations manage security.

The Future of AI-Generated Code

Can AI-generated code be secure? The answer is a resounding yes. By equipping developers with tools that let them address security as seamlessly as functionality, we’re not just ensuring secure code; we’re creating a future where development speed and security coexist.

The era of secure AI-generated code isn’t just possible; it’s here.

About the author
Jérôme Robert
CEO, Symbiotic Security
With over 20 years of experience in cybersecurity and 15 years as a CxO, Jérôme has a proven track record of driving successful outcomes. He has been instrumental in five successful exits, including Lexsi (acquired by Orange in 2016) and Alsid (acquired by Tenable in 2021).
