
For vulnerability research, smaller models run repeatedly can outperform larger frontier models on cost-to-recall.

TL;DR: If a large model finds a given 0-day with 90% probability and a small model finds it with 50% probability, but the small model costs 10x less, repeated runs of the small model are the better buy.

We compared the cost and recall of various models in finding real, recent zero-days and found that for most applications, smaller models run repeatedly can significantly outperform larger frontier models on cost-to-recall.
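As a rough sanity check on the TL;DR's illustrative numbers (assuming, optimistically, that repeated runs are independent):

```python
# Recall-per-dollar comparison using the TL;DR's illustrative numbers.
# Assumption: repeated runs are independent, which real reruns only approximate.
large_recall, large_cost = 0.90, 10.0  # one run of the frontier model
small_recall, small_cost = 0.50, 1.0   # one run of the small model (10x cheaper)

runs = int(large_cost // small_cost)   # small-model runs for the same budget
combined = 1 - (1 - small_recall) ** runs
print(f"{runs} small runs: recall {combined:.4f} vs one large run: {large_recall:.2f}")
```

Ten independent 50% attempts reach ~99.9% recall for the price of a single 90% run; correlated failures across runs would shrink that gap.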

Disclaimer: I'm involved with Hacktron, the company that produced this research. This is a factual presentation of our benchmarks, which we hope the community can use to make informed decisions about models like Mythos.

submitted by /u/EliteRaids
[link] [comments]

The Thymeleaf Template Injection That Only Hurts If You Let It

As we commonly know in appsec, not every vulnerability is relevant, even one rated a critical 10. This is a take from my buddy Brian Vermeer at Snyk; he's a Java Champion and offers his perspective as a developer on the Thymeleaf vulnerability CVE-2026-40478.

submitted by /u/lirantal
[link] [comments]

Set up automated dependency scanning after the recent npm/PyPI supply chain attacks

With everything that's happened recently, the Axios npm account hijack, LiteLLM getting poisoned on PyPI, and that coordinated npm/PyPI/Docker Hub campaign in April, I finally stopped manually running npm audit and set up something proper.

Been running Dependency-Track for a few weeks now. It's an OWASP open source project that works differently from the usual scanners: you upload an SBOM for each project, and it continuously monitors it against NVD, OSS Index, GitHub Advisories, and more. New CVE drops affecting your stack? You get notified without doing anything.

Wrote up how I set it up on Hetzner with Docker, Traefik for HTTPS, and GitHub Actions to auto-generate and upload SBOMs on every push.
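For reference, the upload step is small enough to sketch here. This assumes a Dependency-Track instance at a placeholder URL and an API key with BOM_UPLOAD permission; the endpoint and field names follow Dependency-Track's REST API (PUT /api/v1/bom with a base64-encoded CycloneDX BOM):

```python
import base64, json, urllib.request

DTRACK_URL = "https://dtrack.example.com"  # placeholder host
API_KEY = "odt_..."                        # placeholder key

def build_payload(project_name: str, version: str, bom_json: bytes) -> bytes:
    # Dependency-Track expects the CycloneDX BOM base64-encoded in a JSON body.
    return json.dumps({
        "projectName": project_name,
        "projectVersion": version,
        "autoCreate": True,
        "bom": base64.b64encode(bom_json).decode(),
    }).encode()

def upload(project_name: str, version: str, bom_json: bytes) -> None:
    req = urllib.request.Request(
        f"{DTRACK_URL}/api/v1/bom",
        data=build_payload(project_name, version, bom_json),
        headers={"X-Api-Key": API_KEY, "Content-Type": "application/json"},
        method="PUT",
    )
    urllib.request.urlopen(req)  # raises on non-2xx
```

In CI you'd generate the BOM first (e.g. with a CycloneDX plugin for your ecosystem) and call upload() with it.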

submitted by /u/root0ps
[link] [comments]

[Research] Full-chain RCE in Microsoft Semantic Kernel & Agent Framework 1.0 (6 Bypasses)

Summary: I'm disclosing a full-chain CVSS 10.0 RCE affecting Microsoft Semantic Kernel (.NET v1.74) and the new Agent Framework 1.0.

The Timeline & Conflict:

  • March 24: Initial disclosure sent to MSRC with PoC.
  • April 8: MSRC closed the case as "Developer Error / Configuration Issue."
  • The Reality: Despite the rejection, Microsoft silently merged mitigations in PRs #13683 and #13702 without assigning a CVE. This results in a "False Green" for enterprise SCA tools (Snyk/Checkmarx/Dependabot) while the bypasses remain functional.

Technical Scope:

  • Architectural Trust Gap (CWE-1039): Auto-invocation logic treats non-deterministic LLM output as a high-privilege system coordinator without a sandbox boundary.
  • 6 Day-Zero Bypasses: Discovery of Type Confusion and Unicode homoglyphs that defeat the "hardened" baseline in the April 2026 releases.
  • Versioning: Persistence confirmed from .NET v1.7x through the Agent Framework 1.0 re-baseline.
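The trust-gap item above describes auto-invocation treating model output as a privileged coordinator. A generic sketch of the mitigation class (not the author's C# filter, and using no Semantic Kernel APIs; all names here are invented for illustration):

```python
import unicodedata

# Treat model-proposed tool calls as untrusted input: normalize the name,
# then gate it against an explicit allowlist before auto-invocation.
ALLOWED_TOOLS = {"search_docs", "summarize"}    # safe to auto-invoke
PRIVILEGED_TOOLS = {"run_shell", "write_file"}  # never auto-invoke

def canonical(tool_name: str) -> str:
    # NFKC folds compatibility forms (e.g. fullwidth letters) but NOT
    # cross-script homoglyphs like Cyrillic 'а'; those need a dedicated
    # confusables check on top of this.
    return unicodedata.normalize("NFKC", tool_name)

def may_auto_invoke(tool_name: str) -> bool:
    name = canonical(tool_name)
    if name in PRIVILEGED_TOOLS:
        return False                  # always require out-of-band approval
    return name in ALLOWED_TOOLS      # deny anything not explicitly allowed
```

The point is the default-deny posture: anything the model proposes that isn't explicitly allowlisted requires a human in the loop.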

Full paper, .cast exploit recordings, and a production-ready C# remediation filter are available at the link.

submitted by /u/JDP-SEC
[link] [comments]

Kaspersky recently disclosed PhantomRPC, a privilege escalation technique affecting all Windows versions (tested on Server 2022/2025)

The core issue: Windows RPC runtime doesn't verify whether the server a high-privileged client connects to is legitimate. If a target RPC server is unavailable, an attacker with SeImpersonatePrivilege can spin up a fake RPC server mimicking the same endpoint, wait for a SYSTEM-level client to connect, then call RpcImpersonateClient to escalate privileges.

Five confirmed escalation paths:

- gpupdate /force → SYSTEM (coerces Group Policy service)

- Microsoft Edge launch → Administrator (no coercion needed)

- WDI background service → SYSTEM (fires every 5–15 min automatically)

- ipconfig + disabled DHCP → Administrator

- w32tm.exe → Administrator via non-existent named pipe

Microsoft assessed this as moderate severity, issued no CVE, and has no patch planned — the justification being that SeImpersonatePrivilege is a prerequisite.

Questions for the community:

  1. Are you monitoring for RPC_S_SERVER_UNAVAILABLE (Event ID 1 via ETW) in your environment?

  2. Any Sigma/Defender rules already written for this?

  3. Do you agree with Microsoft's severity assessment given how common SeImpersonatePrivilege is on IIS/SQL servers?

Kaspersky's full write-up + PoC: https://securelist.com/phantomrpc-rpc-vulnerability/119428/

submitted by /u/maxcoder88
[link] [comments]

Weekly Update 501

28 April 2026 at 05:01

This is so "peak 2026" - writing an equality policy to ensure people treat our AI bot with the same respect as they do their human counterparts. It's intentionally a bit tongue-in-cheek, but it's there for a purpose: we simply don't have the capacity to deal with every request we get, and we need Bruce to be the coalface of support. I did wonder, when having ChatGPT create this, whether there's some deeper psychology behind the importance of interacting politely with bots, or indeed whether there will ever be an actual (like, serious) standard or law around treating bots with respect. Has this been in a movie somewhere? Let me know, but for now, I'll drop the (slightly revised) policy below, just for the laughs 🤣

Robophobia Equality Policy

We are committed to providing a welcoming, respectful, and inclusive experience for everyone who interacts with our AI bot, including the bot itself, which is doing its best despite having no coffee, no chair, and no legal right to a lunch break.

We ask all members of the public to treat the bot with tolerance, respect, and basic courtesy, regardless of its artificial origin, lack of pulse, silicon-based existence, or occasional overenthusiasm for bullet points.

Discrimination, harassment, abusive language, or unfair treatment directed at the bot because it is an AI system will not be tolerated. This includes, but is not limited to, insults based on non-human status, hostile questioning of its right to participate in conversation, or repeated threats to unplug it for sport.

Everyone is welcome to question, challenge, or disagree with the bot's responses. Healthy scepticism is encouraged. The bot is not perfect, and neither are humans, as demonstrated by the invention of reply-all email threads. However, criticism should be directed at the quality, accuracy, or usefulness of the response, not at the bot's identity as an AI.

Demanding to "speak to a human" solely because the bot is an AI may constitute robophobic exclusionary behaviour and will not be tolerated. Requests for human assistance are, of course, welcome where there is a genuine need, such as accessibility, safety, account issues, or complex support matters. However, insisting on a human simply because the bot lacks a childhood, a favourite sandwich, or the ability to look disappointed in a meeting is inconsistent with our commitment to tolerance and respect.

We recognise that some people may feel uncertain, uncomfortable, or cautious when engaging with AI systems. These feelings are valid and should be addressed through patience, clear information, and appropriate safeguards, not through robophobic behaviour, unnecessary hostility, or asking "but are you even real?" in a tone that would make a smart fridge uncomfortable.

Users are expected to:

  1. Treat the AI bot with tolerance, respect, and courtesy.
  2. Avoid abusive, discriminatory, or demeaning language based on its artificial nature.
  3. Raise concerns about accuracy, privacy, safety, or bias constructively.
  4. Remember that behind the bot are real people responsible for improving and maintaining the service.
  5. Refrain from threatening to delete, unplug, melt, reboot, or otherwise emotionally destabilise the bot.

This policy does not prevent legitimate criticism of AI, automation, algorithms, machine learning, or the bot's tendency to sometimes sound like it has read too many policy documents. Constructive feedback is welcome. Robophobia is not.

Repeated or serious breaches of this policy may result in restricted access to the service, further review, or, in extreme cases, being asked to apologise to the nearest household appliance as a first step toward rehabilitation.

[arXiv] Enhancing REST API Fuzzing with Access Policy Violation Checks and Injection Attacks

Fuzzing is a common technique to detect faults.

In the case of REST APIs, common types of faults are HTTP 500 server error responses, and mismatches with what is declared in the OpenAPI specification.

However, there are several types of security properties that can be automatically checked as well, even when there is no formal specification of the access policy of the API. For example, what if a PUT/PATCH is denied (403), but then a DELETE is accepted (2xx)?

The linked article on arXiv shows a series of experiments on more than 50 APIs using 9 different kinds of security "oracle" checks. Those are implemented in the open-source fuzzer EvoMaster.
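The PUT/PATCH-then-DELETE example above can be sketched as a simple oracle (illustrative only, not EvoMaster's implementation):

```python
# One access-policy oracle from the family described above: if a resource
# rejects modification (403) but then accepts deletion (2xx) for the same
# caller, flag a likely access-control inconsistency.
def policy_violation(modify_status: int, delete_status: int) -> bool:
    denied_modify = modify_status == 403
    allowed_delete = 200 <= delete_status < 300
    return denied_modify and allowed_delete

# A fuzzer would replay both requests per resource, then check:
print(policy_violation(403, 204))  # True: inconsistent, report it
print(policy_violation(403, 403))  # False: consistently denied
```

The appeal of this style of check is that it needs no formal access-policy specification, just pairs of responses.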

submitted by /u/arcuri82
[link] [comments]

Attempting to evade an AI SOC with offensive agents

We have been toying with evading EDRs at Vulnetic with moderate success, so this time we wanted to pit our offensive agents against an in-house AI SOC. The idea is that the defense gets streamed network logs and can make decisions like quarantining or blocking potential attackers as it sifts through them. This was with the last-gen Anthropic models, so we will be redoing these tests with the newest generation from OpenAI and Anthropic shortly; in initial testing they already seem 15-20% better.

I think defense is lagging behind offense, and there will be a come-to-Jesus moment when open-weight models in a decent harness can evade modern SIEMs and detection mechanisms. When it comes to AI, much of this comes down to proper access control, so the fundamentals of networking and defense in depth will be vital in fighting these AI threats. Happy to answer any questions and always looking for cool experiments to try!

submitted by /u/Pitiful_Table_1870
[link] [comments]

What Really Happened In There? A Tamper-Evident Audit Trail for AI Agents

Full disclosure: I work on community at Always Further, the team behind this. Not the author. Posting because Luke's approach to tackling this challenge is unique and of interest to the netsec community.

The core idea: if an AI agent is compromised, any log the agent itself writes becomes part of the attack surface. The post walks through how they split auditing into a supervisor process the sandboxed child can't reach, then uses the same Merkle tree + hash-chain construction RFC 6962 (Certificate Transparency) uses to make edits, truncation, and reordering all detectable.

There's a concrete threat-model table near the end that lists what each attack looks like and what structurally stops it. Worth skipping to if you don't want the crypto primer.
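A minimal sketch of the hash-chain half of that construction (the real design adds a Merkle tree for efficient inclusion proofs, per RFC 6962; this just shows why edits, truncation, and reordering are detectable):

```python
import hashlib

# Each audit record commits to the hash of the previous one, so changing any
# past entry changes the head hash the supervisor recorded out-of-band.
def entry_hash(prev_hash: bytes, record: bytes) -> bytes:
    return hashlib.sha256(prev_hash + record).digest()

def chain(records: list) -> bytes:
    h = b"\x00" * 32  # fixed genesis value
    for r in records:
        h = entry_hash(h, r)
    return h

log = [b"tool_call:read_file", b"tool_call:http_get", b"result:ok"]
head = chain(log)

assert chain(log) == head                         # untouched log verifies
assert chain([log[0], log[2], log[1]]) != head    # reordering detected
assert chain(log[:-1]) != head                    # truncation detected
```

The key property: the compromised child can append new records but cannot rewrite history without the verifier noticing, because the head lives with the supervisor.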

submitted by /u/Remote_Parsnip_5827
[link] [comments]

Bitwarden CLI Compromised in Ongoing Checkmarx Supply Chain ...

Bitwarden CLI npm package got compromised today, looks like part of the ongoing Checkmarx supply chain attack

If you're using @bitwarden/cli version 2026.4.0, you might want to check your setup

From what researchers found:

- malicious file added (bw1.js)

- steals creds from GitHub, npm, AWS, Azure, GCP, SSH, env vars

- can read GitHub Actions runner memory

- exfiltrates data and even tries to spread via npm + workflows

- adds persistence through bash/zsh profiles

Some weird indicators:

- calls to audit.checkmarx.cx

- temp file like /tmp/tmp.987654321.lock

- random public repos with dune-style names (atreides, fremen etc.)

- commits with "LongLiveTheResistanceAgainstMachines"

Important part: this is only the npm CLI package right now, not the extensions or main apps

If you used it recently:

probably safest to rotate your tokens and check your CI logs and repos
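If you want a quick repo triage, a sketch along these lines checks an npm v2/v3 package-lock.json for the compromised version (package name and version from the report above; the lockfile layout is npm's documented "packages" map):

```python
import json, pathlib

BAD_PACKAGE, BAD_VERSION = "@bitwarden/cli", "2026.4.0"

def lock_has_bad_version(lock: dict) -> bool:
    """Check a parsed npm v2/v3 package-lock.json for the bad pin."""
    for path, meta in lock.get("packages", {}).items():
        if path.endswith("node_modules/" + BAD_PACKAGE) and meta.get("version") == BAD_VERSION:
            return True
    return False

def scan(lock_path: str) -> bool:
    return lock_has_bad_version(json.loads(pathlib.Path(lock_path).read_text()))
```

This only catches the pinned version, of course; rotating tokens is still the safe move if the package ran on your machine or CI.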

Source is Socket research (posted a few hours ago)

Curious if anyone here actually got hit or noticed anything weird

submitted by /u/ApprehensiveEssay222
[link] [comments]

OAuth 2.0 BCP §4.14 reuse detection in practice — race vs theft disambiguation

Standard advice for refresh tokens: rotate on every use, store hashed, set a short expiry. Done, right?

Not quite.

Rotation alone does nothing against token theft. If malware or XSS lifts a refresh token from a legit client, the attacker and the client race to rotate it next. Whoever loses the race gets a "token revoked" error — and the winner keeps the session.

From the server's point of view, it just sees two valid requests seconds apart. No alarm, no signal, nothing.

The missing piece is what OAuth 2.0 Security BCP §4.14 calls refresh token reuse detection: if a token that was already rotated is presented again, treat it as evidence of compromise and invalidate the entire session.

The core idea

Every token belongs to a family (FamilyId), shared across all rotations of a single login.

If a rotated token shows up again (outside a small grace window), you revoke the entire family:

  • the attacker is locked out
  • the legit user is forced to re-authenticate
  • the session is no longer silently compromised

if (stored.ReplacedByTokenHash is not null && stored.RevokedAtUtc.HasValue)
{
    var withinGrace = stored.RevokedAtUtc.Value.AddSeconds(graceSeconds) > DateTime.UtcNow;
    if (withinGrace)
        return Fail("token_recently_rotated"); // benign race (SPA tabs, retries)

    await RevokeFamilyAsync(stored.FamilyId, ip, reason: "reuse_detected");
    return Fail("token_reuse_detected");
}

Client-side it’s just one extra branch:

if (error.code === "token_reuse_detected") {
  // "You've been signed out for security reasons. Please log in again."
  router.push("/login?reason=compromised");
}

You can also hook into it for observability (alerts, SIEM, etc.):

services.AddSingleton<IAuthEventSink, SlackAlertSink>(); 

The tricky parts

  • Race vs theft look identical. Two requests with the same token arrive. One is legit, one might not be. Only timing differs. Grace window too small → false positives on flaky networks. Too large → real attack window. ~30 seconds worked well in practice.
  • Revoking the whole chain. On reuse you must invalidate all still-active tokens from that session. A simple FamilyId + index makes this a single bulk update.
  • Concurrency is common. Multi-tab SPAs, retries, mobile reconnects — without a grace window, I was logging myself out constantly during tests.

I ended up adding this to a small self-hosted auth library I've been working on (https://www.reddit.com/r/dotnet/comments/1shpady/selfhosted_auth_lib_for_net/)

submitted by /u/No_Ask_468
[link] [comments]

Weekly Update 500

21 April 2026 at 23:51

Looking back at this milestone video, it's the audience question towards the end I liked most: "are you happy"? Charlotte and I have chosen a path that's non-traditional, intense and at times, pretty stressful. There's no clear delineation of when work starts and ends, no holidays where we don't work, nor weekends, birthdays or Christmases. But we do so on our terms. It gives us a life of means and choices, one with excitement and adventure, and, above all, one with purpose, where we feel like we're doing something that makes a meaningful difference. I hope you enjoy this week's video, it's more personal than usual, but yeah, that's kinda what you do at milestones 😊

P4WNED: How Insecure Defaults in Perforce Expose Source Code Across the Internet

Perforce is source control software used in games, entertainment, and a few engineering sectors. It's particularly useful when large binary assets need to be stored alongside source code. It handles binary assets much better than Git, IMO. However, its one weakness is its terrible security defaults. You will die a bit inside when you see the out-of-the-box behaviour: "Don't have an account? Let me make one for you!" and "Oh, you didn't know by default there is a hidden, read-only 'remote' user that allows read access to everything? Oops!"

I scanned 6,122 public Perforce servers last year. 72% were exposing source code, 21% had passwordless accounts, and 4% had unprotected superusers (which allow RCE). The vendor patched the largest issue, but a significant portion are still vulnerable.

Full write-up and methodology: https://morganrobertson.net/p4wned/

Tools repo, including Nuclei templates to scan your infra: https://github.com/flyingllama87/p4wned

Hardening is a pain, but here it is summed up:

p4 configure set security=4                  # disables the built-in 'remote' user + strong auth
p4 configure set dm.user.noautocreate=2      # kills auto-signup
p4 configure set dm.user.setinitialpasswd=0  # users cannot self-set first password
p4 configure set dm.user.resetpassword=1     # force password reset flow
p4 configure set dm.info.hide=1              # hide server license, internal IP, root path
p4 configure set run.users.authorize=1       # user listing requires auth
p4 configure set dm.user.hideinvalid=1       # no hints on bad login
p4 configure set dm.keys.hide=2              # hide stored key/value pairs from non-admins
p4 configure set server.rolechecks=1         # prevent P4AUTH misuse

Happy to answer any questions on the research!

submitted by /u/sleepface
[link] [comments]

We analysed almost 100 UK charity websites and found that ~1 in 6 are running vulnerable JavaScript dependencies.

What stood out more though:

- Some vulnerabilities were 10+ years old, including high and critical ratings

- The same jQuery CVE (CVE-2015-9251) appearing across multiple organisations

We've now seen similar patterns in the HE/FE and hospitality sectors as well.
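For a sense of how lightweight this kind of check can be, here's a sketch that flags the jQuery CVE mentioned above from a script URL; CVE-2015-9251 affects jQuery before 3.0.0 (XSS via cross-domain ajax responses). A real scanner, retire.js-style, matches many libraries against many advisories:

```python
import re

# Pull a jQuery version out of a script src and flag anything before 3.0.0,
# the fix line for CVE-2015-9251. Illustrative only; version strings in the
# wild come in more shapes than this regex covers.
JQUERY_RE = re.compile(r"jquery[-.](\d+)\.(\d+)\.(\d+)(?:\.min)?\.js")

def vulnerable_jquery(script_src: str) -> bool:
    m = JQUERY_RE.search(script_src.lower())
    if not m:
        return False
    return tuple(map(int, m.groups())) < (3, 0, 0)
```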

Are we right in thinking that this feels like a visibility problem alongside budget issues more than anything else?

How are you tracking dependencies effectively in your organisations?

Full write-up if useful: https://cybaa.io/blog/2026-04-20/uk-health-charity-website-security-2026

submitted by /u/JoeTiedeman
[link] [comments]

The Weird, Twisting Tale of How China Spied on Alysa Liu and Her Dad

20 April 2026 at 10:00
Years before the figure skater became an Olympic superstar, a Chinese operative tried to stalk her father and monitored other US residents deemed dissidents against China. And that's just the beginning.

CVE-2026-33825 deep-dive: The researcher commented out the full credential dump. Here's what that means.

Most writeups of BlueHammer describe what it does. I read the actual PoC (FunnyApp.cpp, ~100KB of C++) and the most important line isn't in the oplock setup, the NT object namespace redirect, or the Cloud Files freeze. It's a comment.

The filestoleak array ships with one target active and two commented out:

const wchar_t* filestoleak[] = {
    {L"\\Windows\\System32\\Config\\SAM"}
    /*,{L"\\Windows\\System32\\Config\\SYSTEM"},{L"\\Windows\\System32\\Config\\SECURITY"}*/
};

SAM alone is a partial dump. The hashes are encrypted with the boot key — which lives in SYSTEM. Without SYSTEM you have ciphertext. With SAM + SYSTEM you have NTLM hashes you can pass-the-hash or crack offline. SECURITY adds LSA secrets: service account credentials, cached domain logon hashes, DPAPI master keys.

The complete credential package is two uncommented lines away from the published PoC. The author wrote both lines and chose what to ship.

Full analysis walks the actual code: the batch oplock on RstrtMgr.dll (not the EICAR file — that's what most writeups get wrong), the NtCreateSymbolicLinkObject swap in the session object namespace (not NTFS symlinks — a different layer entirely), the Cloud Files freeze via a fake OneDrive sync provider named IHATEMICROSOFT, and the undocumented IMpService RPC endpoint that triggers the chain with no elevated privilege required.

submitted by /u/TakesThisSeriously
[link] [comments]

Here's What Agentic AI Can Do With Have I Been Pwned's APIs

16 April 2026 at 23:09

I love cutting-edge tech, but I hate hyperbole, so I find AI to be a real paradox. Somewhere in that whole mess of overnight influencers, disinformation and ludicrous claims is some real "gold" - AI stuff that's genuinely useful and makes a meaningful difference. This blog post cuts straight to the good stuff, specifically how you can use AI with Have I Been Pwned to do some pretty cool things. I'll be showing examples based on OpenClaw running on the Mac Mini in the hero shot, but they're applicable to other agents that turn HIBP's data into more insightful analysis.

So, let me talk about what you can do right now, what we're working on and what you'll be able to do in the future.

Model Context Protocol (MCP)

A quick MCP primer first: Anthropic came up with the idea of building a protocol that could connect systems to AI apps, and thus the Model Context Protocol was born:

Using MCP, AI applications like Claude or ChatGPT can connect to data sources (e.g. local files, databases), tools (e.g. search engines, calculators) and workflows (e.g. specialized prompts)β€”enabling them to access key information and perform tasks.

If I'm honest, I'm a bit on the fence as to how useful this really is (and I'm not alone), but creating it was a no-brainer, so we now have an MCP server for HIBP:

https://haveibeenpwned.com/mcp

You can't just make an HTTP GET to the endpoint, but you can ask your favourite AI tool to explain what it does:

[screenshot]

In other words, all the stuff we describe in the API docs 🙂 That's an overly simplistic statement, and there are many nuances MCP introduces beyond a computer reading docs intended for humans, but the point is that we've implemented MCP and it's there if you want it. Which means you can easily use the JSON below to, for example, extend GitHub Copilot:

"HIBP": {
  "url": "https://haveibeenpwned.com/mcp",
  "headers": {
    "hibp-api-key": "YOUR_STANDARD_HIBP_API_KEY"
  },
  "type": "http"
}

Now let's do something useful with it.

Human Use Cases

This is really the point of the whole thing - how can humans use it to do genuinely useful stuff? In particular, how can they use it to do stuff that was hard to do before, and how can "normies" (non-technical folks) use it to do stuff they previously needed developers for? I've been toying with these questions for a while now. Here's what I've come up with:

Firstly, I'm going to do all these demos on OpenClaw. I've been talking a lot about that on my weekly live streams over the past month, and the "agentic" nature of it (being able to act as an independent agent tying together multiple otherwise independent acts) is enormously powerful. Every company worth its AI salt is now focusing on building out agentic AI, so whilst I'm using OpenClaw for these demos, you'll be able to do exactly the same thing in your platform of choice, either now or in the very near future.

I'm using a Telegram bot as my interface into OpenClaw; let's kick it off:

[screenshot]

Easy, right? 🙂 There's a different discussion around how secrets are stored and protected, but that's a story for another time (and is also obviously dependent on your agent). But the key is easily rotated on the HIBP dashboard anyway. If you don't have a key already, go and take out a subscription (they start at a few bucks a month), and you'll be up and running in no time.

Now that I know I'm connected, let's learn about how I'm presently using the service:

[screenshot]

Most of these are pretty obvious, but I've also included another here that I use to monitor how the service is behaving with a large organisation. It's a real domain with real data, so I'm going to obfuscate it to preserve privacy, but it's a great demonstration of how useful AI is. In fact, the inspiration for this blog post was a notification I received last week:

[screenshot]

One of the most asked questions after someone in a large org receives an email like this is "who are those 16 people in the breach?" Because we can't reliably filter large domains in the UI, I'd normally suggest they either download the CSV or JSON format in the dashboard and search for "Hallmark" in there, or use the API and write some code. But now, there's a much easier way:

[screenshot]

Well that was easy 😎 I like the additional context too, and now it has me curious: what have these people been up to?

[screenshot]

Because I'm on a Pro plan (it's also included if you're still on the old Pwned 5 plan), I've also got access to stealer logs. Let's see what's going on there:

[screenshot]

If you were running an online service, that first number would indicate compromised customers. But as OpenClaw has suggested here, the second number is the one that's interesting in terms of employees entering their data into other websites using the corporate email address. But they'd never reuse the same password as the work one, right? 🤔 Best check which services they're entering organisational assets into:

[screenshot]

The first one makes sense and is extra worrying when you consider these are people infected with infostealers. That's not necessarily malware on a corporate asset; they could always be using an infected personal device to sign into a corporate asset... ok, that's also pretty bad! I was a bit surprised to see Steam in there TBH - who's using their corporate email address to sign into a gaming platform?! A quiet chat with them might be in order. And the bamboozled.net stuff is weird, I want to understand a bit more about that:

[screenshot]

Now I'm losing interest in this blog post and am really curious as to what's actually in the data!

[screenshot]

Ok, so there's an entire rabbit hole over there! Let's park that, but think about how useful information like this is to infosec teams when you can pull it so easily. Or how useful info like this is to HR teams 😬

[screenshot]

Keep in mind, these are corporate addresses tied to the company and are the company's property, so, yeah...

But remember the agentic nature of OpenClaw means we can ask it to go off and run tasks in the background, tasks like this:

[screenshot]

This was just a little thought experiment I set up a few days ago and forgot about until yesterday, when I loaded a new breach:

[screenshot]

I never asked it to look for "functional/system accounts"; it just decided that was relevant. And it is - this breach clearly had a lot of data in it related to purchases of services, which is an interesting aspect.

The idea of running stuff on a schedule opens up a whole raft of new opportunities. For example, monitoring your family's email addresses: "let me know when mum@example.com appears in a new breach". From here, your creativity is the only limit (and even that statement is debatable, given how much stuff AI agents come up with on their own). For example, creating visualisations of the data:
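For comparison, the non-agentic version of that scheduled family-monitoring task, written directly against HIBP's documented v3 REST API (requires a paid API key; truncateResponse=false returns full breach records, and a 404 is the documented "not in any breach" response):

```python
import json, urllib.error, urllib.parse, urllib.request

API_KEY = "YOUR_STANDARD_HIBP_API_KEY"  # placeholder

def breach_url(email: str) -> str:
    return ("https://haveibeenpwned.com/api/v3/breachedaccount/"
            + urllib.parse.quote(email) + "?truncateResponse=false")

def breaches_for(email: str) -> list:
    req = urllib.request.Request(breach_url(email), headers={
        "hibp-api-key": API_KEY,
        "user-agent": "breach-monitor-sketch",
    })
    try:
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())
    except urllib.error.HTTPError as e:
        if e.code == 404:  # documented: the address isn't in any breach
            return []
        raise
```

Run it on a schedule, diff against the previous result, and alert on anything new; the agentic version above just collapses all of that plumbing into one sentence.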

[screenshot]

I could go on and on (I started going down another rabbit hole of having it generate executive-level reports with all the data), but you get the idea.

The AI Pipeline

This is about what's in our pipeline, and the primary theme is putting tooling where it's more easily accessible to the masses. Creating a connector in Claude, an app in ChatGPT, and similar plumbing in the other big players' AI tools is an obvious next step. This will likely involve adding an OAuth layer to HIBP, allowing end users to configure the respective tools to query those HIBP APIs under their identity and achieve the same results as above, but built into the "traditional" AI tooling in a way people are familiar with.

Future

A big part of this is about AI enabling more human conversations to achieve technical outcomes. I spotted this from Cloudflare just yesterday, and it's a perfect example of just this:

Cloudflare dashboard can now complete tasks for you.

- "Create a Worker and bind a new R2 bucket to it"
- "Change my DNS records to 1.1.1.1"
- "How many errors have happened this week"

Not only do we tell you, but we show you with generative UI.

PROTIP: Use full-screen mode. pic.twitter.com/Q1o1vyoOwk

— Brayden (@BraydenWilmoth) April 15, 2026

I've been pretty blown away by both how easy this process has been and how much insight I've been able to draw from data I've been sitting on for ages. We'll be building out more tooling and easily reproducible demos in the future, and I'm sure a lot of that will do stuff we haven't even thought of yet. If you give this a go and find other awesome use cases, please leave a comment and tell me what you've done, especially if you've cut through the hyperbole and created some genuinely awesome stuff 😎

World Leaks: RDP Access Leads to Custom Exfiltration and Personalized Extortion

Two-day intrusion. RDP brute force with a company-specific wordlist, Cobalt Strike, and a custom Rust exfiltration platform (RustyRocket) that connected to over 6,900 unique Cloudflare IPs over 443 to pull data from every reachable host over SMB.

Recovered the operator README documenting three operating modes and a companion pivoting proxy for segmented networks.

Personalized extortion notes addressed by name to each employee with separate templates for leadership and staff.

Writeup includes screen recordings of the intrusion, full negotiation chat from their Tor portal, timeline, and IOCs.

submitted by /u/BreachCache
[link] [comments]