The Mistakes Didn't Change. The Speed Did.

Source: DEV Community
Everyone is measuring how fast agents write code. Few are measuring what that code introduces. This year, independent researchers tested the major AI coding agents building applications from scratch. Most pull requests contained at least one vulnerability. Inside Fortune 50 companies, AI-generated code introduced 10,000+ new security findings per month. Logic and syntax bugs went down. Privilege escalation paths jumped over 300%. Yikes!

The code improved while the vulnerabilities got worse. Agents just produce the same old mistakes faster. One customer seeing another customer's data. Login flows that leave a back door wide open. Endpoints exposed to the entire internet.

The mistakes are harder to see

The code looks clean. It follows the right patterns, uses the right frameworks, passes initial agent-driven code review. It just quietly skips the check that asks "should this user be allowed to do this?" or "has this request been authenticated?" These are judgment mistakes. The security t
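To make the pattern concrete, here is a minimal sketch of what "quietly skips the check" looks like in practice. All names (`ORDERS`, `get_order_vulnerable`, `get_order_safe`) are hypothetical, not taken from any of the tested agents' output; the point is that the vulnerable version is syntactically clean, handles errors, and passes a casual review, yet lets any user read any other customer's data:

```python
# Toy in-memory "database" of orders keyed by id (hypothetical data).
ORDERS = {
    101: {"owner": "alice", "total": 40},
    102: {"owner": "bob", "total": 99},
}

def get_order_vulnerable(order_id, current_user):
    # Looks idiomatic: validates existence, returns clean data.
    # It just never asks whether current_user owns the record,
    # so any authenticated user can read anyone's order.
    order = ORDERS.get(order_id)
    if order is None:
        raise KeyError("order not found")
    return order

def get_order_safe(order_id, current_user):
    order = ORDERS.get(order_id)
    if order is None:
        raise KeyError("order not found")
    # The one-line judgment call that gets skipped:
    # "should this user be allowed to do this?"
    if order["owner"] != current_user:
        raise PermissionError("not your order")
    return order
```

With the vulnerable handler, `get_order_vulnerable(102, "alice")` happily returns bob's order; the safe version raises `PermissionError` for the same call. No syntax error, no logic bug a linter would flag, just a missing authorization decision.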