The rise of AI-generated code has unlocked immense productivity gains for developers. However, its adoption has also highlighted a critical challenge: security. As we noted in a piece for The New Stack titled “Don’t trust Security in AI Generated Code”, studies from leading institutions like Stanford and Georgetown have repeatedly shown that AI-generated code often prioritizes functionality over security, creating vulnerabilities that can disrupt workflows and delay deployments. Despite this, AI-driven code generation is here to stay, and for good reason: it accelerates development in ways we’ve never seen before. So how can we harness this power responsibly? By using the right tools to make secure AI-generated code achievable.
AI-generated code derives its output from training datasets that focus heavily on functionality, not security. The consequences are measurable: in a study by Wuhan University, 30% of the code snippets examined contained vulnerabilities. Unlike a human developer, AI lacks the judgment to identify potential vulnerabilities unless explicitly prompted, and even then, mistakes happen.
Developers, for their part, care about security but face competing priorities. Their work revolves around speed and meeting deployment deadlines. This disconnect between the capabilities of AI and the realities of development workflows has created a significant problem: a growing backlog of insecure code that stalls progress.
Currently, when AI-generated code is introduced into a developer’s workflow and issues arise, developers can quickly detect and fix functional bugs within their integrated development environments (IDEs) using existing tools such as linters. Security issues, however, are often identified much later, after the code has been committed and flagged by separate security tools.
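For illustration, here is a hypothetical AI-generated Python snippet (our own sketch, not taken from any study or tool cited above). A linter passes it without complaint because it is functionally correct, yet it carries a classic SQL injection flaw of the kind that typically surfaces only after commit, when a separate security scanner runs:

```python
import sqlite3

def get_user(db_path: str, username: str):
    """Look up a user by name. Functionally correct, so a linter is satisfied."""
    conn = sqlite3.connect(db_path)
    try:
        # VULNERABLE: the username is interpolated directly into the SQL string.
        # Input such as "x' OR '1'='1" turns the WHERE clause into a tautology
        # and returns every row: a textbook SQL injection.
        query = f"SELECT id, username FROM users WHERE username = '{username}'"
        return conn.execute(query).fetchall()
    finally:
        conn.close()
```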
At this point, developers are forced to revisit previously committed code, address vulnerabilities, and resubmit. This reactive process not only disrupts velocity but also creates friction between development and security teams. Traditional approaches, such as security training sessions, have largely failed to resolve it.
This inefficiency exacerbates the bottleneck, frustrating both developers and security professionals.
While AI may be years away from achieving the nuanced judgment required to generate secure code by default, we don’t need to wait. The solution lies in the same kind of productivity-enhancing tools developers already use, extended to address security concerns during the drafting process.
New, modern security-focused tools now allow developers to detect and fix vulnerabilities directly in their IDEs, as easily as they run functionality checks. This integrated approach transforms the workflow, shrinking security feedback loops from days to minutes. Developers maintain their velocity, while security teams see their backlog shrink.
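Returning to the hypothetical snippet above, the remediation an in-IDE security tool can surface while the code is still being drafted is a one-line change: a parameterized query that closes the injection hole without altering behavior.

```python
import sqlite3

def get_user(db_path: str, username: str):
    """Same lookup as before, remediated in the IDE before the code is committed."""
    conn = sqlite3.connect(db_path)
    try:
        # FIXED: a parameterized query passes the value to the driver, which
        # binds it safely, so user input can no longer change the SQL's meaning.
        query = "SELECT id, username FROM users WHERE username = ?"
        return conn.execute(query, (username,)).fetchall()
    finally:
        conn.close()
```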
Symbiotic Security's plugin does exactly this. It eliminates friction by empowering developers to address security concerns during the code drafting phase. Vulnerabilities are resolved before they ever reach the backlog, ensuring that newly deployed code is secure from the start. Over time, teams can also address existing backlogs with unprecedented speed, transforming how organizations manage security.
Can AI-generated code be secure? The answer is a resounding yes. By equipping developers with tools that let them address security as seamlessly as functionality, we’re not just ensuring secure code; we’re creating a future where development speed and security coexist.
The era of secure AI-generated code isn’t just possible. It’s here.