Working with a CTO on getting visibility into what's actually running locally across a 70-engineer org. (For context: there's no ZTNA implementation at the moment, but if there's a way to approach this from a ZTNA angle, I'd love to know.)
Engineers use Cursor heavily and have started adopting MCPs, so now there's a mix of verified, open-source, and basically untrusted GitHub repos running locally.
Customer creds are accessible from these environments. We want visibility first - detect what MCPs exist, where they're installed, track usage.
That part feels tractable. But from a detection/monitoring angle, once you know what's there - what's worth actually watching?
Some MCPs legitimately need local execution so you can't just block them. Full network proxying feels unrealistic for dev workflows.
How have you approached this? What can you implement after visibility?
Static analysis is necessary, but it feels incomplete when it comes to how systems behave under real conditions. How are others dealing with that gap?
When attackers use real credentials, everything they do can appear legitimate. Runtime monitoring often becomes the only way to spot it. How do you approach this in practice?
I’m approaching prompt injection less as an input sanitization issue and more as an authority and trust-boundary problem.
In many systems, model output is implicitly authorized to cause side effects, for example by triggering tool calls or function execution. Once generation is treated as execution-capable, sanitization and guardrails become reactive defenses around an actor that already holds authority.
I’m exploring an architecture where the model never has execution rights at all. It produces proposals only. A separate, non-generative control plane is the sole component allowed to execute actions, based on fixed policy and system state. If the gate says no, nothing runs. From this perspective, prompt injection fails because generation no longer implies authority. There’s no privileged path from text to side effects.
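The proposal/gate split can be sketched in a few lines. This is a minimal illustration of the trust model described above, not any particular implementation; all names (`Proposal`, `gate`, the allow-list contents) are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Proposal:
    """Model output: a *description* of an action, never the action itself."""
    tool: str
    args: dict

# Fixed policy lives outside the model, in the only component with execute rights.
ALLOWED_TOOLS = {"read_file"}            # hypothetical allow-list
FORBIDDEN_VALUES = {"/etc/shadow"}       # hypothetical deny-list

def gate(p: Proposal) -> bool:
    """Non-generative control plane: approves or rejects on policy alone."""
    if p.tool not in ALLOWED_TOOLS:
        return False
    return not (set(p.args.values()) & FORBIDDEN_VALUES)

def execute(p: Proposal):
    if not gate(p):
        return None  # if the gate says no, nothing runs
    # ...only past this point does a side effect happen...
    return f"ran {p.tool}"

# Injected text can make the model *propose* anything, but a proposal carries
# no authority: the text-to-side-effect path always crosses the gate.
assert execute(Proposal("delete_all", {})) is None
assert execute(Proposal("read_file", {"path": "/etc/shadow"})) is None
assert execute(Proposal("read_file", {"path": "README.md"})) == "ran read_file"
```

The key property is structural: there is no code path from model output to `execute` that bypasses `gate`, so sanitization quality stops being load-bearing.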
I’m curious whether people here see this as a meaningful shift in the trust model, or just a restatement of existing capability-based or mediation patterns in security systems.
Runtime threats rarely trigger obvious alerts. Usually something just feels slightly off before anything breaks. What subtle signs have tipped you off in the past?
A lot of environments look secure on paper, but runtime attacks often operate quietly. Credential misuse, app-layer abuse, and supply chain compromises tend to blend in rather than break things. What runtime signals have actually helped you catch issues early?
I am presenting a verified second-preimage collision for the SHA-256 algorithm, specifically targeting the Bitcoin Genesis Block header (Hash: 000000000019d668...).
Unlike previous theoretical differential attacks, this method utilizes a structural exploit in the message schedule (W-schedule) to manipulate internal states during the compression function. This allows for the generation of an alternative preimage (Kaoru DNA) that results in an identical 256-bit output.
Key Technical Aspects:
This discovery suggests that the collision resistance of SHA-256 is fundamentally compromised under specific state-transition conditions.
Verification Code: https://osf.io/2gdzq/files/dqghk
Every morning I find myself scrolling through 50+ tabs of RSS feeds, BleepingComputer, and CISA alerts. It’s exhausting.
I started a project called Threat Road to curate the "Top 3" most critical stories daily with a focus on immediate mitigations. I want to make it as useful as possible for the community.
I’d love your brutal honesty:
What makes a security newsletter "instant delete" for you?
Do you care about "Chili-pepper" risk ratings, or do you find them gimmicky?
Would you rather have a deep dive on one bug or a brief on three?
I'm just looking to hear what you all actually want in a daily briefing.
Little write-up for a patched WebSocket-based RCE I found in the CurseForge launcher.
It involved an unauthenticated local websocket API reachable from the browser, which could be abused to execute arbitrary code.
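For readers unfamiliar with the bug class: any webpage a victim visits can open a connection to `ws://127.0.0.1:<port>`, so "local-only" is not an authentication boundary. A minimal sketch of the standard mitigation (validating the `Origin` header on the WebSocket upgrade) is below; the trusted-origin list is my assumption, not the vendor's actual fix.

```python
# A local WS server that trusts any connecting page is reachable from every
# browser tab. Browsers always send an Origin header on the upgrade request,
# so checking it blocks drive-by pages (non-browser clients still need a token).

TRUSTED_ORIGINS = {"https://www.curseforge.com"}  # assumption for illustration

def should_accept(upgrade_headers: dict) -> bool:
    """Reject WebSocket upgrades whose Origin is not explicitly trusted."""
    origin = upgrade_headers.get("Origin", "")
    return origin in TRUSTED_ORIGINS

assert should_accept({"Origin": "https://www.curseforge.com"})
assert not should_accept({"Origin": "https://evil.example"})
assert not should_accept({})  # missing Origin: treat as untrusted
```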
Happy to answer any questions if anyone has any!
Hey r/netsec -- it's been about two years since we last published a tool for the security community. As a little festive gift, today we're happy to announce the release of certgrep, a free Certificate Transparency search tool we built for our own detection work and decided to open up.
It’s focused on pattern-based discovery (regex/substring-style searches) and quick search and drill down workflows, as a complement to tools like crt.sh.
A few fun example queries it’s useful for:
- (login|signin|account|secure).*yourbrand.*
- \*.*google.*
- yourbrand.*(cdn|assets|static).*

We hope you like it, and would love to hear any feedback you folks may have! A number of iterations will be coming up, including an API, SDKs, and integrations (e.g., Slack).
Enjoy!
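To show the kind of pattern-based discovery meant above, here is a tiny local sketch of regex filtering over CT-log domain names. The sample domains are synthetic and this is not certgrep's actual interface.

```python
import re

# Hypothetical sample of names pulled from Certificate Transparency logs
ct_domains = [
    "login-yourbrand.example.net",
    "cdn.yourbrand.com",
    "secure-payments.yourbrand-support.xyz",
    "unrelated.example.org",
]

# Login-themed lookalike pattern, in the style of the example queries above
phishing_pattern = re.compile(r"(login|signin|account|secure).*yourbrand")

hits = [d for d in ct_domains if phishing_pattern.search(d)]
# Flags the two login-themed lookalikes; the legitimate CDN host and the
# unrelated domain don't match.
assert hits == ["login-yourbrand.example.net",
                "secure-payments.yourbrand-support.xyz"]
```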
A new research paper highlights a critical implementation flaw in how major vendors (ASUS, MSI, etc.) configure IOMMU during the DXE phase of boot.
The Core Issue:
The firmware reports DMA protection as "Active" to the OS, but fails to actually enable the IOMMU translation tables during the initial boot sequence. This creates a window of vulnerability where a malicious peripheral can read/write system memory unrestricted.
I've analyzed the root cause and the discrepancy between "Reported Status" vs "Actual Enforcement" in this report:
👉 [Full Analysis & Mitigation Strategies](https://www.nexaspecs.com/2025/12/critical-uefi-flaw-exposes-motherboards.html)
Has anyone started seeing patched BIOS versions roll out yet?
Hi everyone,
Over the last month I’ve been analyzing modular addition not as a bitwise operation, but as a fractional mapping. Treating (a + b) mod 2^32 as a projection into the fractional domain [0, 1), modular “bit loss” stops behaving like noise and instead becomes predictable geometric wrapping.
This leads to what I call the Kaoru Method.
The core idea is to run a “Shadow SHA-256” in parallel using infinite precision arithmetic. By comparing the real SHA-256 state with the shadow state, it’s possible to reconstruct a Universal Carry Map (k) that fully captures all modular wraps occurring during execution.
Once k is recovered for the 64 rounds, the modular barriers effectively disappear and the compression function reduces to a system of linear equations.
In my experiments, a standard SHA-256 block produces exactly 186 modular wraps. This number appears stable and acts like a structural “DNA” of the hash computation.
Under this framework, differential cryptanalysis becomes significantly simpler, since the carry behavior is no longer hidden. I’m releasing both the theoretical framework and an extractor implementation so others can validate, attack, or extend the idea toward full collisions.
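The shadow-state comparison for a single modular addition can be sketched directly, since Python integers are already arbitrary precision. This only illustrates the carry-recovery step as described above; everything beyond it (the 64-round map, the claimed linearization) is the author's construction and isn't reproduced here.

```python
# Run the same sum at infinite precision alongside the mod-2^32 result and
# recover k, the count of 2^32 wraps lost by truncation.

MASK = 0xFFFFFFFF

def add_with_carry_map(*terms):
    """Return (sum mod 2^32, k) where k counts the modular wraps."""
    exact = sum(terms)            # shadow state: arbitrary-precision sum
    wrapped = exact & MASK        # real state: 32-bit truncation
    k = exact >> 32               # carries lost by the truncation
    assert exact == wrapped + (k << 32)  # k exactly recovers the exact sum
    return wrapped, k

w, k = add_with_carry_map(0xFFFFFFFF, 0xFFFFFFFF, 2)
assert w == 0 and k == 2  # the sum is 2^33, i.e. it wrapped twice
```

Given k, each modular addition becomes the exact integer identity `a + b = wrapped + k·2^32`, which is the sense in which recovering all carries would make the additions linear.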
Paper (theory):
https://osf.io/jd392/files/4qyxc
Code (Shadow SHA-256 extractor):
https://osf.io/n9xcw
DOI:
https://doi.org/10.17605/OSF.IO/JD392
I’m aware this challenges some long-held assumptions about modular addition as a source of non-linearity, so I’m especially interested in feedback, counterexamples, or independent replication.
Thanks for reading.
Over one year ago the Government wanted to email the victims, but Bitfinex denied it. It is not too late if we act now. Have you heard of any availability of old crypto-exchange user email addresses? Security researchers in possession of historic leak data could help return a nine-figure sum to victims soon.
Please suggest specific forums for outreach.
Thanks!
Ranked list of 2016 exchanges: Poloniex, Bitstamp, OKCoin, BTC-e, LocalBitcoins, Huobi, Xapo, Kraken, CoinJoinMess, Bittrex, BitPay, NitrogenSports-eu, Cex-io, BitVC, Bitcoin-de, YoBit-net, Cryptsy, HaoBTC, BTCC, BX-in-th, Hashnest, BtcMarkets-net, Gatecoin, Purse-io, CloudBet, Cubits, AnxPro, Bitcurex, AlphaBayMarket, Luno, BTCC, Loanbase, Bitbond, BTCJam, Bit-x, BitPay, BitBay-net, NucleusMarket, PrimeDice, BitAces-me, Bter, MasterXchange, CoinGaming-io, CoinJar, Cryptopay-me, FaucetBOX, Genesis-Mining
Mac Malware analysis
A few days ago u/broadexample pointed out that our free STIX feed was doing it wrong:
"You're creating everything as Indicator, not as IPv4Address linked to Indicator via STIX Relationship hierarchy. This works when you use just this feed alone, but for everyone using multiple feeds it would be much less useful."
They were right. We were creating flat Indicator objects instead of proper STIX 2.1 hierarchy with SCOs and Relationships.
Fixed it today. New V2 endpoint with:
- IPv4Address SCOs with deterministic UUIDs (uuid5 for cross-feed deduplication)
- Relationship objects linking Indicator → SCO ("based-on")
- Malware SDOs for 10 families (Stealc, LummaC2, Cobalt Strike, etc.)
- Relationship objects linking Indicator → Malware ("indicates")
Should actually work properly in OpenCTI now.
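The deterministic-UUID piece can be sketched with the stdlib alone. This assumes the STIX 2.1 convention of UUIDv5 over the JSON-serialized ID-contributing properties under the spec's SCO namespace; it's an illustration of the cross-feed dedup idea, not this feed's actual generator code.

```python
import json
import uuid

# STIX 2.1's fixed namespace for deterministic SCO UUIDv5 generation
STIX_SCO_NAMESPACE = uuid.UUID("00abedb4-aa42-466c-9c01-fed23315a9b7")

def ipv4_sco_id(value: str) -> str:
    """Deterministic id for an ipv4-addr SCO from its ID-contributing properties."""
    name = json.dumps({"value": value}, separators=(",", ":"), sort_keys=True)
    return f"ipv4-addr--{uuid.uuid5(STIX_SCO_NAMESPACE, name)}"

# The same IP always yields the same id, so objects from different feeds
# deduplicate instead of piling up as duplicate flat Indicators.
assert ipv4_sco_id("198.51.100.7") == ipv4_sco_id("198.51.100.7")
assert ipv4_sco_id("198.51.100.7") != ipv4_sco_id("198.51.100.8")

# A "based-on" Relationship then links the Indicator SDO to the SCO
# (source_ref is a placeholder here):
relationship = {
    "type": "relationship",
    "relationship_type": "based-on",
    "source_ref": "indicator--<uuid>",
    "target_ref": ipv4_sco_id("198.51.100.7"),
}
```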
V2 endpoint: https://analytics.dugganusa.com/api/v1/stix-feed/v2
V1 still works if you just need IOC lists: https://analytics.dugganusa.com/api/v1/stix-feed
Full writeup: https://www.dugganusa.com/post/stix-v2-reddit-feedback-opencti-ready
Thanks for the feedback. This is why we post here - you catch the stuff we miss.
During routine threat hunting on my Beelzebub honeypot, I caught something interesting: a Rust-based DDoS bot with 0 detections across 60+ AV engines at the time of capture.
The fact that no AV engine detected it suggests that Rust plus string obfuscation is making life hard for traditional detection engines.
Questions? AMA!
I’ve opened the early access waitlist for CyberCTF.space, a cybersecurity CTF platform focused on real-world attacks, not puzzle-only challenges.
- Docker-based labs
- MITRE ATT&CK-aligned techniques
- Real-world exploits
🎖 Early joiners receive Founding Hacker recognition.
I’m also looking for security practitioners interested in contributing labs, challenges, or documentation.
Join the waitlist: https://cyberctf.space/
Contributors: https://cyberctf.space/contributors
Full disclosure: I'm a researcher at CyberArk Labs.
This is a technical deep dive from our threat research team, no marketing fluff, just code and methodology.
Static analysis tools like CodeQL are great at identifying "maybe" issues, but the signal-to-noise ratio is often overwhelming. You get thousands of alerts, and manually triaging them is impossible.
We built an open-source tool, Vulnhalla, to address this. It feeds CodeQL's "haystack" of alerts into GPT-4o, which reasons about the code context to verify whether each alert is legitimate.
The sheer volume of false positives tricks us into treating a codebase as "clean enough" simply because we can't physically get through the backlog. That's frustrating, and meanwhile the real vulnerabilities remain hidden in the noise.
Once we used GPT-4o to strip away ~96% of the false positives, we uncovered confirmed CVEs in the Linux Kernel, FFmpeg, Redis, Bullet3, and RetroArch. We found these in just 2 days of running the tool and triaging the output (total API cost <$80).
Running the tool for longer periods, with improved models, can reveal many additional vulnerabilities.
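The triage pattern boils down to packing each alert plus its surrounding source into one prompt and asking the model for a verdict. The sketch below is illustrative only; the function, field names, and prompt wording are my assumptions, not Vulnhalla's actual interface.

```python
# Build a single triage prompt from a static-analysis alert and code context.
# In a real pipeline the returned string would go to a chat-completions call
# and the TRUE_POSITIVE/FALSE_POSITIVE verdict would gate human review.

def build_triage_prompt(alert: dict, source_snippet: str) -> str:
    return (
        f"Static analyzer rule: {alert['rule']}\n"
        f"Finding: {alert['message']} at {alert['file']}:{alert['line']}\n\n"
        f"Code context:\n{source_snippet}\n\n"
        "Is this a true positive? Answer TRUE_POSITIVE or FALSE_POSITIVE "
        "with a one-sentence justification."
    )

alert = {"rule": "cpp/unbounded-write", "message": "strcpy into fixed buffer",
         "file": "parser.c", "line": 42}
prompt = build_triage_prompt(alert, "char buf[16];\nstrcpy(buf, user_input);")
assert "strcpy into fixed buffer" in prompt
assert "parser.c:42" in prompt
```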
Write-up & Tool:
We don’t lack security ideas. We lack companies hiring juniors and products that are secure by default. These two problems are connected, and until we fix both, we’ll keep talking about a skills shortage while making it impossible to build a secure society.
What do you all think?
New preprint exploring unconventional cryptanalysis:
• Framework: “Inverse Dimensionalization”
• Target: SHA-256 structural analysis
• Result: 174/256 matching bits (M₁ = 88514, M₂ = 88551)
• Time: 3.8 seconds
• NOT a collision — but statistically anomalous
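Anyone wanting to sanity-check a "matching bits" figure can do it in a few lines. Note the baseline: two independent 256-bit digests agree on ~128 bits on average, so that is the bar any claimed anomaly must clear. The encoding of M₁/M₂ below (ASCII decimal strings) is my assumption; the post doesn't state how the messages are serialized.

```python
import hashlib

def matching_bits(d1: bytes, d2: bytes) -> int:
    """Count bit positions where two 256-bit digests agree."""
    x = int.from_bytes(d1, "big") ^ int.from_bytes(d2, "big")
    return 256 - bin(x).count("1")

# Encoding assumption: the messages are the ASCII decimal strings of M1/M2.
h1 = hashlib.sha256(b"88514").digest()
h2 = hashlib.sha256(b"88551").digest()

assert matching_bits(h1, h1) == 256     # identical digests agree everywhere
assert 0 <= matching_bits(h1, h2) <= 256
```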
Paper + reproducible code: https://doi.org/10.17605/OSF.IO/6YRW8
Paper: https://osf.io/6yrw8/files/wj9ze
Code: https://osf.io/6yrw8/files/zy8ck
Verification code: https://osf.io/6yrw8/files/pqne7
Device specifications used to find the 174/256-bit match in 3.8 seconds:
• Google Colab Free CPU
• Intel Xeon
• Clock speed: between 2.20 GHz and 2.30 GHz
• Cores (vCPUs): 2 virtual cores
• RAM: 12 GB
Security implications discussion welcome.
I’m traveling next week and will need to access a website that is IP-address-sensitive. My work computer’s IP address is approved for the site. If I access my work desktop remotely using something like LogMeIn or TeamViewer, will I be able to get onto the website I need to use? Or will my public IP address show up as the one I'm using from afar?
Built a threat intel platform that runs on $75/month infrastructure. Decided to give the STIX feed away for free instead of charging enterprise prices for it.
What's in it:
- 59K IOCs (IPs, domains, hashes, URLs)
- ThreatFox, OTX, honeypot captures, and original discoveries
- STIX 2.1 compliant (works with Sentinel, TAXII consumers, etc.)
- Updated continuously
Feed URL: https://analytics.dugganusa.com/api/v1/stix-feed
Search API (if you want to query it): https://analytics.dugganusa.com/api/v1/search?q=cobalt+strike
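A minimal consumer sketch, assuming the feed returns a standard STIX 2.1 bundle (the sample below is synthetic, not actual feed content). In real use you'd fetch the feed URL first, e.g. with `urllib.request`.

```python
import json

def extract_indicator_patterns(bundle: dict) -> list:
    """Pull the detection patterns out of a STIX 2.1 bundle's indicator objects."""
    return [obj["pattern"] for obj in bundle.get("objects", [])
            if obj.get("type") == "indicator"]

# Synthetic stand-in for one page of the feed
sample = json.loads("""{
  "type": "bundle",
  "id": "bundle--0be1d9e2-2a6c-4a42-9c7e-111111111111",
  "objects": [
    {"type": "indicator", "id": "indicator--aaa",
     "pattern": "[ipv4-addr:value = '203.0.113.9']"},
    {"type": "malware", "id": "malware--bbb", "name": "LummaC2"}
  ]
}""")

assert extract_indicator_patterns(sample) == ["[ipv4-addr:value = '203.0.113.9']"]
```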
We've been running this for a few months. Microsoft Sentinel and AT&T are already polling it. Found 244 things before CrowdStrike/Palo Alto had signatures for them (timestamped, documented).
Not trying to sell anything - genuinely curious if it's useful and what we're missing. Built it to scratch our own itch.
Tear it apart.
tl;dr: Ask Claude Code to tee mitmdump to a log file (with request and response). Create skills based on HackerOne public reports (downloaded from Hugging Face), then let Claude Code figure out if it can find anything in the log file.
An active phishing campaign targeting HubSpot customers has been detected by the Evalian SOC.
Just finished reading ActiveFence’s emerging threats assessment on 7 major models across hate speech, disinfo, fraud, and CSAM-adjacent prompts.
Key findings are: 44% of outputs were rated risky, 68% of unsafe ones were hate-speech-related, and only a single model landed in the safe range.
What really jumps out is how different vendors behave per abuse area (fraud looks relatively well-covered, hate and child safety really don’t).
For those doing your own evals/red teaming: are you seeing similar per-category gaps? Has anyone brought in an external research partner like ActiveFence to track emerging threats over time?
Freedom of the Press Foundation is developing Dangerzone, an open-source tool that uses multiple layers of containerization (gVisor, Linux containers) to sanitize untrusted documents. The target users are people who may be vulnerable to malware attacks, such as journalists and activists. Dangerzone received a favorable security audit in December 2023, but it has never had a bug bounty program until now.
We are kick-starting a limited bug bounty program for this holiday season that challenges the popular adage "containers don't contain". The premise is simple: send Santa a naughty letter, and his team of elves will run it through Dangerzone. If your letter breaks a containerization layer by capturing a flag, you get the associated bounty. Have fun!
For the past several years I've been trying intermittently to get Cross Translation Unit taint analysis with clang static analyzer working for Firefox. While the efforts _have_ found some impactful bugs, overall the project has burnt out because of too many issues in LLVM we are unable to overcome.
Not everything you do succeeds, and I think it's important to talk about what _doesn't_ succeed just as much (if not more) about what does.
With the help of an LLVM contractor, we've authored this post to talk about our attempts, and some of the issues we'd run into.
I'm optimistic that people will get CTU taint analysis working on projects the size of Firefox, and if you do, well I guess I'll see you in the bounty committee meetings ;)
Hey everyone, I saw this report on Hacker News, about a pretty serious privacy breach involving the Urban VPN Proxy browser extension and several other extensions from the same publisher.
According to the research:
What’s especially concerning is that Urban VPN advertises an “AI protection” feature, but that doesn’t prevent data harvesting - the extension just warns you about sharing data while quietly exfiltrating it.
If you’ve ever used this extension and chatted with an AI, it’s worth uninstalling it and treating those interactions as compromised.
Link to the report:
https://www.koi.ai/blog/urban-vpn-browser-extension-ai-conversations-data-collection
Would love to hear thoughts on this.
Microsoft has released a fix for CVE-2025-64669, addressing a local privilege escalation vulnerability we reported in Windows Admin Center.
This issue allowed low-privileged users to escalate to SYSTEM by abusing trusted components under insecure filesystem permissions. Microsoft validated the finding and shipped a fix as part of the latest update.
This CVE represents only the first vulnerability from our research.
We identified four distinct vulnerabilities during the investigation, and additional fixes and disclosures are coming.
More details soon.
Stay tuned.
I've been monitoring attacks hitting my Beelzebub research honeypots and caught what I'm calling "Operation PCPcat" - a large-scale credential theft campaign targeting Next.js deployments.
These aren't theoretical numbers. The attackers left their C2 wide open with a /stats endpoint showing real-time campaign metrics. Yes, really.
TL;DR of the attack chain:
- Theft of .env files, SSH keys, and AWS/Docker/Git credentials

What I documented:
If you're running Next.js in prod: patch immediately and rotate your credentials. Assume compromise if you were vulnerable during this window.
Happy to answer questions or share more technical details.
Delegation cannot be secured by refining identity because delegation is not an attribute of who you are. It is an operation on authority itself. Authority must be constructed, passed, and monotonically reduced as data. Capability systems are the only authorization model that treats delegation as a first-class, enforceable transformation rather than an inferred side effect.
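The "monotonic reduction" claim can be made concrete with a toy capability object (all names illustrative). The point is that delegation is a constructor, not an inference: the only way to pass authority is `attenuate()`, which can remove rights but can never mint new ones.

```python
from dataclasses import dataclass
from typing import FrozenSet

@dataclass(frozen=True)
class Capability:
    """Authority as data: an unforgeable (resource, rights) pair."""
    resource: str
    rights: FrozenSet[str]

    def attenuate(self, keep: FrozenSet[str]) -> "Capability":
        """Delegation = constructing a capability that is never greater."""
        return Capability(self.resource, self.rights & keep)

root = Capability("s3://bucket", frozenset({"read", "write", "delete"}))
# Requesting 'admin' during delegation silently fails: intersection can't add.
delegated = root.attenuate(frozenset({"read", "admin"}))

assert delegated.rights == frozenset({"read"})
assert delegated.rights <= root.rights  # monotone by construction, not by audit
```

Identity never appears here: whoever holds `delegated` has exactly those rights, and the reduction is enforced by the type of the operation rather than inferred from who performed it.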