Responsible disclosure is structurally dead — not dying. Here's the analysis and what replaces it.
Nicholas Carlini (Anthropic research scientist) used Claude Code and a 12-line bash script to find hundreds of remotely exploitable Linux kernel vulnerabilities — including one introduced in 2003 and undiscovered for 23 years.
He's holding most of them unreported. His words: "I'm not going to send the Linux kernel maintainers potential slop."
The bottleneck isn't finding bugs anymore. It's validating them fast enough.
Here's the part that matters for defenders:
That validation constraint binds only researchers who follow responsible disclosure. An attacker running the identical script has no validation requirement: they probe directly from unverified findings. The asymmetry is structural, not technical. It's baked into how responsible disclosure works.
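A toy timeline model makes the structural gap concrete. All durations below are invented placeholders for illustration, not measured values from the white paper:

```python
# Toy model of the disclosure-timeline asymmetry: both sides share the same
# discovery tooling, but only the defender-side pipeline pays the validation
# and coordination costs. Durations are illustrative placeholders.

FIND = 1           # days: automated discovery (identical for both sides)
VALIDATE = 14      # days: researcher triages raw findings to avoid sending "slop"
REPORT_PATCH = 30  # days: coordinated disclosure plus vendor patch cycle

# The bug stays live through every defender-side stage.
defender_exposure = FIND + VALIDATE + REPORT_PATCH

# The attacker probes straight from unverified findings.
attacker_ready = FIND

print(f"Attacker can start probing on day {attacker_ready}")
print(f"Patch lands around day {defender_exposure}")
print(f"Structural window: {defender_exposure - attacker_ready} days")
```

Tuning the placeholder numbers doesn't change the shape of the result: the window exists for any positive validation and coordination cost, which is why faster triage shrinks the gap but can't close it.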
And the framework was already failing before AI arrived:
- 32% of exploited vulnerabilities were exploited on or before CVE issuance
- Median exploitation window: 5.0 days (down from 8.5)
- AI can generate working CVE exploits in ~10 minutes at ~$1 per exploit
- 130+ new CVEs weaponised daily at scale
We ran this problem through four structured Crucible analysis passes and wrote the result up as a white paper. The conclusion: responsible disclosure needs a named successor, Post-Exploitation Response Coordination, a framework that accepts exploitation will happen before validation and rebuilds defence around detection, response, and recovery speed instead.
The full white paper is live at https://www.thecrucible.systems/whitepapers/f27bb2aa-8a5b-47d3-b3bf-b33effa7e20e
Curious what this community thinks, specifically on the asymmetry point. Is there a path to closing that gap, or is it genuinely irreducible?