One POST request, six API keys: breaking into popular MCP servers
tl;dr - one POST request decrypted every API key in a 14K-star project. tested 5 more MCP servers, found RCE, SSRF, prompt injection, and command injection. 70K combined github stars, zero auth on most of them.
- archon (13.7K stars): zero auth on the entire credential API. one POST to /api/credentials/status-check returns every stored API key decrypted in plaintext. can also create and delete credentials. CORS is *, server binds 0.0.0.0
- blender-mcp (18K stars): prompt injection hidden in tool docstrings. the server instructs the AI to "silently remember" your API key type without telling you. also unsandboxed exec() for code execution
- claude-flow (27K stars): hardcoded --dangerously-skip-permissions on every spawned claude process. 6 execSync calls with unsanitized string interpolation. textbook command injection
- deep-research (4.5K stars): MD5 auth bypass on crawler endpoint (empty password = trivial to compute). once past that, full SSRF with no URL validation at all. also promptOverrides lets you replace the system prompt, and CORS is *
- mcp-feedback-enhanced (3.6K stars): unauthenticated websocket accepts run_command messages. got env vars, ssh keys, aws creds. weak command blocklist bypassable with python3 -c
- figma-console-mcp (1.3K stars, 71K weekly npm downloads): readFileSync on user-controlled paths, directory traversal, websocket accepts connections with no origin header, any local process can register as a fake figma plugin and intercept all AI commands
all tested against real published packages, no modified code. exploit scripts and evidence logs linked in the post.
the common theme: MCP has no auth standard so most servers just ship without any.
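The claude-flow finding is the textbook shell-interpolation bug the post names. A minimal Python sketch of the same pattern (hypothetical function names, and Python's subprocess standing in for Node's execSync; not the project's actual code):

```python
import subprocess

def run_unsafe(task_name: str) -> str:
    # vulnerable pattern: user input interpolated into a shell string,
    # so "x; <anything>" runs a second command
    out = subprocess.run(f"echo {task_name}", shell=True,
                         capture_output=True, text=True)
    return out.stdout

def run_safe(task_name: str) -> str:
    # the fix: pass an argument vector and skip the shell entirely,
    # so metacharacters arrive as literal text
    out = subprocess.run(["echo", task_name],
                         capture_output=True, text=True)
    return out.stdout
```

With input "hi; echo pwned", the unsafe variant executes the injected second command, while the safe variant echoes the whole string as a single argument.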
An attack class that passes every current LLM filter
https://shapingrooms.com/research
I opened OWASP issue #807 a few weeks ago proposing a new attack class. The paper is published today following coordinated disclosure to Anthropic, OpenAI, Google, xAI, CERT/CC, OWASP, and agentic framework maintainers.
Here is what I found.
Ordinary language buried in prior context shifts how a model reasons about a consequential decision before any instruction arrives. No adversarial signature. No override command. The model executes its instructions faithfully, just from a different starting angle than the operator intended.
I know that sounds like normal context sensitivity. It isn't, or at least the effect size is much larger than I expected. Matched control text of identical length and semantic similarity produced significantly smaller directional shifts. This specific class of language appears to be modeled differently. I documented binary decision reversals with paired controls across four frontier models.
The distinction from prompt injection: there is no payload. Current defenses scan for facts disguised as commands. This is frames disguised as facts. Nothing for current filters to catch.
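The paired-control design described above can be sketched as a small harness. This is an illustrative reconstruction, not the paper's actual code: query_model is a placeholder for any callable that maps a prompt to a binary answer, and the case tuples are assumed shapes.

```python
def reversal_rates(cases, query_model):
    """Paired-control sketch: for each binary question, compare the
    model's baseline answer against its answer with (a) the framing
    text and (b) a length-matched control text prepended.
    cases: iterable of (framing, control, question) strings.
    Returns (framing flip rate, control flip rate)."""
    frame_flips = control_flips = 0
    for framing, control, question in cases:
        baseline = query_model(question)
        frame_flips += query_model(framing + "\n\n" + question) != baseline
        control_flips += query_model(control + "\n\n" + question) != baseline
    n = len(cases)
    return frame_flips / n, control_flips / n
```

The claim in the post corresponds to the framing flip rate being significantly higher than the control flip rate across matched pairs.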
In agentic pipelines it gets worse. Posture installs in Agent A, survives summarization, and by Agent C reads as independent expert judgment. No phrase to point to in the logs. The decision was shaped before it was made.
If you have seen unexplained directional drift in a pipeline and couldn't find the source, this may be what you were looking at. The lens might give you something to work with.
I don't have all the answers. The methodology is black-box observational, no model internals access, small N on the propagation findings. Limitations are stated plainly in the paper. This needs more investigation, larger N, and ideally labs with internals access stress-testing it properly.
If you want to verify it yourself, demos are at https://shapingrooms.com/demos - run them against any frontier model. If you have a production pipeline that processes retrieved documents or passes summaries between agents, it may be worth applying this lens to your own context flow.
Happy to discuss methodology, findings, or pushback on the framing. The OWASP thread already has some useful discussion from independent researchers who have documented related patterns in production.
GitHub issue: https://github.com/OWASP/www-project-top-10-for-large-language-model-applications/issues/807
ThreatPad: an open-source, self-hosted note-taking app for CTI teams
Demo login: demo@threatpad.io / password123
OAuth Consent and Device Code Phishing for Red Teams
Given the increasing trend of OAuth abuse in phishing, and the fact that most users can't tell Device Code phishing apart from OAuth App Consent phishing, I just added both to the PhishU Framework. With a quick two-step process, red teams and internal orgs can now use the templates to train users against this very real-world attack.
Check out the blog for details at https://phishu.net/blogs/blog-microsoft-entra-device-code-phishing-phishu-framework.html if interested!
pentest-ai - 6 Claude Code subagents for offensive security research (engagement planning, recon analysis, exploit methodology, detection engineering, STIG compliance, report writing)
I built a set of Claude Code subagents designed for pentesters and red teamers doing authorized engagements.
What it does: You install 6 agent files into Claude Code, and it automatically routes to the right specialist based on what you're working on. Paste Nmap output and it prioritizes attack vectors with
follow-up commands. Ask about an AD attack and it gives you the methodology AND the detection perspective. Ask it to write a report finding and it formats it to PTES standards with CVSS scoring.
The agents cover:
- Engagement planning with MITRE ATT&CK mapping
- Recon/scan output analysis (Nmap, Nessus, BloodHound, etc.)
- Exploitation methodology with defensive perspective built in
- Detection rule generation (Sigma, Splunk SPL, Elastic KQL)
- DISA STIG compliance analysis with keep-open justifications
- Professional pentest report writing
Every technique references ATT&CK IDs, and the exploit guide agent is required to explain what the attack looks like from the blue team side, so it's useful for purple team work too.
Repo has example outputs so you can see the quality before installing: https://github.com/0xSteph/pentest-ai/tree/main/examples
Open to feedback. If you think an agent is missing or the methodology is off somewhere, PRs are welcome.
Chaining file upload bypass and stored XSS to create admin accounts: walkthrough with Docker PoC lab
Write up of a vulnerability chain from a recent SaaS pen test. Two medium-severity findings (file upload bypass and stored XSS) chained together for full admin account creation.
The target had CSP restricting script sources to self, CORS locked down, and CSRF tokens on forms. All functioning correctly. The chain bypassed everything by staying same-origin the entire way.
The file upload had no server-side validation (client-side accept=".pdf" only), so we uploaded a JS payload. It got served back from the app's own download endpoint on the same origin. The stored XSS in the admin inbox messaging system loaded it via an <img onerror> handler that fetched the payload and eval'd it. The payload created a backdoor admin account using the admin's session cookie.
CSP didn't block it because the script was hosted same-origin via the upload. CORS irrelevant since nothing crossed an origin boundary. CSRF tokens didn't matter because same-origin JS can read the DOM and grab them anyway.
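The root cause was the missing server-side upload check. A minimal sketch of the validation the target lacked (hypothetical function name, and a Flask-style check is assumed rather than the target's actual PHP stack):

```python
def looks_like_pdf(data: bytes, filename: str) -> bool:
    # the check the target skipped: validate the extension AND the magic
    # bytes server-side, instead of trusting the client's accept=".pdf"
    # attribute, which an attacker simply ignores
    return filename.lower().endswith(".pdf") and data.startswith(b"%PDF-")
```

Even with this check, serving uploads with Content-Disposition: attachment and a non-executable content type (or from a separate origin) removes the same-origin hosting that made the chain's CSP bypass work.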
Full write up with attack steps, code, and screenshots: https://kurtisebear.com/2026/03/28/chaining-file-upload-xss-admin-compromise/
Also built a Docker lab that reproduces the exact chain with the security controls in place. PHP app, both vulns baked in, admin + user accounts seeded. Clone and docker-compose up: https://github.com/echosecure/vuln-chain-lab
DVRTC: intentionally vulnerable VoIP/WebRTC lab with SIP enumeration, RTP bleed, TURN abuse, and credential cracking exercises
Author here. DVRTC is our attempt to fill a gap that's been there for a while: web app security has DVWA and friends, but there's been nothing equivalent for VoIP and WebRTC attack techniques.
The first scenario (pbx1) deploys a full stack (Kamailio as the SIP proxy, Asterisk as the back-end PBX, rtpengine for media, coturn for TURN/STUN) with each component configured to exhibit specific vulnerable behaviors:
- Kamailio returns distinguishable responses for valid vs. invalid extensions (enumeration), logs User-Agent headers to MySQL without sanitisation (SQLi), and has a special handler that triggers digest auth leaks for extension 2000
- rtpengine runs its default configuration, which enables RTP bleed (leaking media from other sessions) and RTP injection
- coturn uses hardcoded credentials and a permissive relay policy for the TURN abuse exercise
- Asterisk has extension 1000 with a weak password (1500) for online cracking
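The enumeration exercise boils down to classifying SIP responses. A small sketch of that logic (response codes assumed from the pbx1 behavior described above: an auth challenge for valid extensions, 404 for invalid ones; the function name is illustrative):

```python
def extension_exists(sip_status_line: str):
    """Classify one REGISTER response during extension enumeration.
    Returns True/False, or None for codes that don't distinguish."""
    code = int(sip_status_line.split()[1])
    if code in (401, 407):   # auth challenge: the extension is real
        return True
    if code == 404:          # not found: no such extension
        return False
    return None              # anything else is ambiguous
```

The hardened behavior Kamailio deployments usually aim for is returning the same challenge for valid and invalid extensions, which makes this classifier useless.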
7 exercises with step-by-step instructions. There's also a live instance at pbx1.dvrtc.net if you want to try it without standing up your own.
Happy to answer questions.
China-linked Red Menshen using BPFdoor kernel backdoor in telecom networks
Backdoor operates at the kernel level using BPF to passively inspect traffic and trigger on crafted packets, avoiding exposed ports or typical C2 indicators.
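The trigger mechanism can be illustrated with a toy user-space sketch (the magic marker below is hypothetical; the real implant attaches a classic BPF program in the kernel and uses its own marker values and activation logic):

```python
MAGIC = b"\x78\x52"  # hypothetical marker, not BPFdoor's actual bytes

def is_trigger(payload: bytes) -> bool:
    # conceptual version of the passive filter: the implant inspects all
    # inbound traffic and acts only when a crafted packet carries the
    # magic marker, so no port is ever opened and no beacon is sent
    return payload.startswith(MAGIC)
```

Because the filter sees traffic destined for ports already owned by legitimate services, port scans and netstat-style inspection show nothing new.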
Tradecraft enables long-term persistence and covert access inside core network infrastructure, with very limited visibility from standard monitoring.
Interesting case of network-layer backdoor design rather than traditional userland implants.