/r/netsec - Information Security News & Discussion
found this breakdown that references Radware's research on AI-generated code security.
key findings:
- AI errors are disproportionately high-severity (injection, auth bypass) vs. human errors (typos, null checks); sketch 1 below shows the gap
- "hallucinated abstractions": AI invents fake helper functions that look professional but are fundamentally broken; sketch 1 below is an example of the pattern
- "slopsquatting": attackers register hallucinated package names and load them with malicious payloads; sketch 2 below is a cheap pre-install tripwire
- "ouroboros effect": AI training on AI-generated flawed code, steadily dragging the security baseline down; sketch 3 below is a toy model of the compounding
here's the [full case study]
the framing around maintainer burnout is interesting too: open source is getting flooded with AI PRs that take 12x longer to review than to generate.
submitted by /u/bishwasbhn