
HIBP Mega Update: Passkeys, k-Anonymity Searches, Massive Speed Enhancements and a Bulk Domain Verification API


For a hobby project built in my spare time to provide a simple community service, Have I Been Pwned sure has, well, "escalated". Today, we support hundreds of thousands of website visitors each day, tens of millions of API queries, and hundreds of millions of password searches. We're processing billions of compromised records each year provided by breached companies, white hat researchers, hackers and law enforcement agencies. And it's used by every conceivable demographic: infosec pros, "mums and dads", customer support services, and, according to the data, more than half the Fortune 500 who are actively monitoring the exposure of their domains. So yeah, "escalated" seems fair!

Amidst all the time spent processing data, we've been trying to figure out where to invest energy in building new stuff. In essence, data breaches are pretty simple: you've got a bunch of exposed email addresses attributed to a source, sitting next to a whole bunch of fields we describe with metadata. Our goal has always been to help people use this data to do good after bad things happen, and today we're launching a bunch of new features to do just that. So, here goes:

New Features, New Plans

In the beginning (ok, in "recent years"), there was one plan we referred to as "Pwned", and within that, there were various levels. For example, the entry-level plan has been "Pwned 1," and to this day, more than half our subscriptions are on it. That's "a coffee a month" for a simple service that, by the raw numbers, does precisely what most of our subscribers are looking for. These are typically small businesses that make a handful of API queries or monitor a domain or two with a few email addresses. It's simple, effective and... insufficient for larger organisations. So, we added Pwned 2, 3 and 4, and they all added more RPMs for email searches and more capacity for searching larger domains. Then we added Pwned 5, which added stealer log support, and somewhere along the way also added Pwned Ultra tiers for making large numbers of API requests. As a result, that one "plan" added more and more stuff at different levels and ultimately became a bit kludgy.

Today, we're launching a bunch of new features to better support the volume and privacy needs of our subscribers, and we're shuffling our existing plans to help do this. Here's what they now look like:

  1. Core: The fundamentals, largely being what we already had and designed for entry-level use cases
  2. Pro: Contains a bunch of the new features designed for larger orgs and those searching domains on behalf of customers
  3. High RPM: The old "Ultra" plan levels, designed solely for making large volumes of requests to the email search API
  4. Enterprise: We've had this for many years now, and it's a more tailored offering

So, that's the high-level overview. Let's now look at all the new stuff and everything that changes:

Supporting MSPs Monitoring on Behalf of Third Parties

For most people, this won't sound particularly exciting, but I'm putting it up front because I'll refer to it when describing the more important stuff shortly. In the past, we've had the following carve-out in our terms of use, namely, what you're not permitted to use the service for:

the benefit of a third party (including for use by a related entity or for the purpose of reselling or otherwise making the Services available to any third party for commercial benefit)

This excluded managed service providers from, for example, monitoring their customers' domains as part of their services. That clause has now been revised with the following addition:

unless you have purchased a Paid Service which expressly allows you to do so

Which means we can now welcome MSPs to the Pro and High RPM tiers. They can't just take HIBP and use it to create a competing product (for obvious reasons, that's a pretty standard clause within many online services), but they can absolutely add it to the offerings they provide to their own customers. And we're adding new features to make it easier to do just that, for example:

Automating Domain Verification

Preserving privacy whilst still providing a practical, effective service has always been a balancing act, one I think we've gotten pretty spot on. But the hoops people have had to jump through for domain verification, in particular, have been cumbersome. An organisation wanting to add a bunch of its domains has had to go through the process one by one via the web interface, then verify control over them one by one. They'd spend a lot of time doing kludgy, repetitive work. Today, we're launching two new ways of adding domains in a much more automated fashion, and the first is the verifying via DNS API:

Successfully adding a pre-defined TXT record to DNS is solid proof that whoever is attempting to search that domain genuinely controls it. As well as the old kludgy way of doing it in the browser, waiting for DNS to propagate, then coming back to the browser to complete the verification, we can now fully automate the process via API. Here's how it works:

  1. Call the HIBP API to generate the TXT record token
  2. Call the API on your DNS provider to add the token to the TXT record
  3. Call another HIBP API to validate that the token exists
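
To make those three steps concrete, here's a minimal sketch in Python. The endpoint paths and the "token" response field are illustrative placeholders rather than the documented contract (check the API docs for the real ones); the hibp-api-key header follows the existing API convention, and add_txt_record stands in for whatever your DNS provider's own API exposes.

```
import time
import requests

API_ROOT = "https://haveibeenpwned.com/api/v3"
HEADERS = {"hibp-api-key": "YOUR_API_KEY", "user-agent": "bulk-domain-verifier"}

def add_txt_record(domain: str, token: str) -> None:
    # Step 2: publish the token as a TXT record via your DNS provider's API
    # (Cloudflare, Route 53, etc. all have an equivalent call).
    ...

def verify_domain(domain: str) -> bool:
    # Step 1: ask HIBP to generate the TXT record token (illustrative path)
    resp = requests.post(f"{API_ROOT}/verifieddomain/txttoken/{domain}", headers=HEADERS)
    resp.raise_for_status()
    token = resp.json()["token"]

    add_txt_record(domain, token)

    # Step 3: keep retrying validation while DNS propagates (illustrative path)
    for _ in range(20):
        if requests.post(f"{API_ROOT}/verifieddomain/txtverify/{domain}", headers=HEADERS).ok:
            return True
        time.sleep(30)
    return False

for domain in ["example.com", "example.net"]:
    print(domain, verify_domain(domain))
```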

This is easily scripted in your language of choice, and you can enumerate it over as many domains as you like. You can also keep retrying step 3 above as often as needed when DNS takes a little while to do its thing. It's all now fully documented in the latest version of the API, and ready to roll. But what if you don't control the DNS? Perhaps it's a cumbersome process in your org, or you're an MSP monitoring your customers' domains, but you don't have control of DNS. That's where the verifying by email API comes in:

We've long had a verification process that involves choosing one of several standard aliases on a domain to email a verification token to. You do this via the dashboard, grab the token sent to the email, paste it back into the dashboard and the domain is now verified. The new API makes that much easier, especially when multiple domains are being verified. Here's how it works:

  1. Call the HIBP API and specify one of the pre-defined aliases to send a verification email to
  2. Click the link in the email and approve the domain to be added to the requester's account

And that's it. We see this being particularly useful for MSPs who can now send a heap of emails on their customers' domains, and so long as someone receives it and clicks the link, that's the verification process done. That API is also now fully documented and ready to roll and is accessible to all Pro plan subscribers.
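
Scripting step 1 for a batch of customer domains is a one-liner per domain; step 2 stays with whoever receives the email. Again, the endpoint path and alias parameter below are illustrative placeholders, not the documented contract:

```
import requests

API_ROOT = "https://haveibeenpwned.com/api/v3"
HEADERS = {"hibp-api-key": "YOUR_API_KEY", "user-agent": "bulk-domain-verifier"}

# One verification email per customer domain, sent to a chosen pre-defined alias.
customer_domains = {"example.com": "admin", "example.net": "webmaster"}

for domain, alias in customer_domains.items():
    r = requests.post(f"{API_ROOT}/verifieddomain/emailtoken/{domain}",
                      params={"alias": alias}, headers=HEADERS)
    # The domain is verified once the recipient clicks the link in that email.
    print(domain, r.status_code)
```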

Auto-verifying Subdomains

This one was just unnecessarily frustrating for larger customers who spread email addresses over multiple subdomains. Let's say a company owns example.com and they successfully verify control of it, but then they distribute their email addresses by region. They end up with addresses @apac.example.com and @emea.example.com and so on, and in the past, needed to verify each subdomain separately.

Turns out we have 154 votes for this feature in User Voice, which is substantially more than I expected. So, in keeping with the theme of the Pro plan making it easier on larger orgs, anyone on that level can now add their apex domain, verify it accordingly, then go to town adding all the subdomains they want without the need for verifying each one.

Bringing K-Anonymity Searches to the Masses

Until today, every time you took out a subscription via the public website and started searching email addresses, it looked like this:

GET https://haveibeenpwned.com/api/v3/breachedaccount/test@example.com

Clearly, this involved sending the email address to HIBP's service. Whilst we don't store those addresses, if you're sending data to a service in this fashion, there's always the technical capability for us to see that piece of PII and associate it back to the requester via their API key. This approach is what we'll refer to as "direct email search". Let's now look at k-anonymity searches, and I'll break it down into a few simple steps:

  1. Start by creating a SHA-1 hash of the address to be searched, so for test@example.com, that's:
    567159D622FFBB50B11B0EFD307BE358624A26EE
  2. Take the first 6 characters of the hash and pass them to the new API:
    GET https://haveibeenpwned.com/api/v3/breachedaccount/range/567159
    What's really important here is that those 6 characters are the only identifier sent to HIBP and they're completely useless in identifying which address was actually searched for (that link also explains why SHA-1 is perfectly reasonable for this)
  3. HIBP then responds with the suffixes of every hash we have that matches that prefix and for each one, the breaches it's appeared against:
    {
      "hashSuffix": "D622FFBB50B11B0EFD307BE358624A26EE",
      "websites": [
          "Adobe",
          "Stratfor",
          "Yahoo",
          ...
      ]
    },
    ...
    That prefix currently returns 393 suffixes, and if one of them matches the remaining characters of the hash of the full email address, you know that's the address you're looking for.

This is the same methodology we've been using for years with the Pwned Passwords search, and we're currently serving about 18 billion requests a month, so it seems that lots of people have easily gotten to grips with it. It's a pretty simple technical concept with great privacy attributes, and it's fully documented on the API page.
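
In code, the whole flow is a hash, a prefix query, and a local comparison. A minimal sketch, assuming the response shape shown above (the range endpoint is the one from step 2; treat the exact JSON field names as illustrative and check the API docs):

```
import hashlib
import requests

API_ROOT = "https://haveibeenpwned.com/api/v3"
HEADERS = {"hibp-api-key": "YOUR_API_KEY", "user-agent": "k-anon-demo"}

def breaches_for(email: str) -> list[str]:
    # Step 1: SHA-1 hash the address locally
    sha1 = hashlib.sha1(email.strip().lower().encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:6], sha1[6:]

    # Step 2: only the 6-character prefix ever leaves the client
    resp = requests.get(f"{API_ROOT}/breachedaccount/range/{prefix}", headers=HEADERS)
    resp.raise_for_status()

    # Step 3: compare the returned suffixes against the rest of the hash locally
    for entry in resp.json():
        if entry["hashSuffix"] == suffix:
            return entry["websites"]
    return []

print(breaches_for("test@example.com"))  # e.g. ["Adobe", "Stratfor", "Yahoo", ...]
```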

K-anonymity searches are now available to all Pro and High RPM subscribers at the same rate limit as the direct searches. That rate limit is shared, so you can send 100% of your requests to the k-anon search, 100% to the direct search, or any mix in between. We're really happy with the privacy aspects of this API, and we know it ticks a box a lot of orgs have been asking for.

Unsmoothing the API Rate Limit

Previously, when you took out a 10-request-per-minute API key, we implemented a rate limit of 1 request every 6 seconds. The same logic applied to all the higher-tier products, too, and the reason was simply to distribute the load across each minute more evenly, or in other words, to "smooth" the rate at which requests were made. That was important earlier on, as the underlying Azure infrastructure had to support that traffic, and sudden bursts could be problematic.

But the other thing that was problematic is that people (quite reasonably) assumed that they could make 10 fast requests, wait a minute, then go again. This led to support overhead for us and customer frustration, and neither is good.

With these latest updates, 10RPM (and all the other RPMs) is now implemented exactly as it sounds - 10 requests in any one-minute block. Here's our Azure API Management policy:

[Screenshot: the Azure API Management rate limit policy]

In other words, we've "unsmoothed" it. You can hammer the service 10 times in quick succession, then wait a minute, and you won't see a single HTTP 429 "Too many requests" response. Equally, if you're on a 12,000 RPM plan (and you can actually send that many requests quickly!), you won't see an unexpected 429. We can do this now because of the way we serve a huge amount of content from Cloudflare's edge, unburdening the underlying infrastructure from sudden spikes.
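
From the client side, a fixed one-minute window means the polite pattern is simply: burst up to your quota, then sleep into the next window. A rough sketch (the plan size and the Retry-After handling here are assumptions on my part, not prescribed behaviour):

```
import time
import requests

API_ROOT = "https://haveibeenpwned.com/api/v3"
HEADERS = {"hibp-api-key": "YOUR_API_KEY", "user-agent": "rpm-burst-client"}
RPM = 10  # whatever your plan allows per one-minute window

def search(emails: list[str]) -> dict[str, int]:
    results = {}
    for i, email in enumerate(emails):
        if i and i % RPM == 0:
            time.sleep(60)  # quota used up: wait out the rest of the window
        r = requests.get(f"{API_ROOT}/breachedaccount/{email}", headers=HEADERS)
        if r.status_code == 429:  # defensive fallback if we ever are throttled
            time.sleep(int(r.headers.get("retry-after", 60)))
            r = requests.get(f"{API_ROOT}/breachedaccount/{email}", headers=HEADERS)
        results[email] = r.status_code
    return results
```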

It's a little thing, but it'll solve a lot of unnecessary frustration for a bunch of people, including us. That's implemented across every single plan, too, so everyone benefits.

We Just Wanna Go (Even) Fast(er)

Here's our challenge today: how do we enable millions of people a day to search through billions of records with near-instantaneous results... and do it affordably? They're somewhat competing objectives, but every now and then, we find one neat trick that dramatically improves things. About 18 months ago, I wrote about how we were Hyperscaling HIBP with Cloudflare Workers and Caching. The basic premise is that, as people search the service, we build a cache in Cloudflare's 300+ edge nodes that includes the entire hash range just searched for (see the k-anon section above). We flush that out on every new breach load, and as it builds back up to the full 16^6 possible cacheable hash ranges, our origin load approaches zero and almost everything gets served from the edge. Almost, because we had the following problem, which I described in that post:

However, the second two models (the public and enterprise APIs) have the added burden of validating the API key against Azure API Management (APIM), and the only place that exists is in the West US origin service. What this means for those endpoints is that before we can return search results from a location that may be just a short jet ski ride away, we need to go all the way to the other side of the world to validate the key and ensure the request is within the rate limit.

Or at least we had that problem, which we've just solved with a simple fix. The quoted problem stemmed from the fact that, to ensure everyone adhered to the rate limit, we performed the APIM check before returning any data. That meant always waiting for packets to make a round trip to America, even when the data was cached nearby. But what we realised is that the rate limit can be enforced eventually; it really doesn't matter much if a request or two in excess of the limit slips through before we enforce it. The reason that epiphany is important is that, with it in mind, we can start returning data to the client immediately whilst doing the APIM check asynchronously. If the request exceeds the rate limit, Cloudflare will block subsequent requests until the client starts making requests within their limit. So, the rate limit check is no longer a blocking call; it's a background process that doesn't delay us returning results.
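
The actual implementation lives in a Cloudflare Worker, but the ordering is easy to sketch: answer from cache straight away, do the key and limit check in the background, and only start rejecting once a key is known to be over its limit. This is a conceptual illustration of "eventual enforcement" in Python, not HIBP's code:

```
import threading

CACHE = {"567159": '[{"hashSuffix": "D622FF...", "websites": ["Adobe", "..."]}]'}  # edge cache stand-in
blocked_keys: set[str] = set()

def origin_rate_limit_exceeded(api_key: str) -> bool:
    # Stand-in for the slow round trip to the origin's API Management check.
    return False

def background_check(api_key: str) -> None:
    # Runs after the response has already gone out; if the key is over its
    # limit, only subsequent requests get blocked.
    if origin_rate_limit_exceeded(api_key):
        blocked_keys.add(api_key)

def handle_request(api_key: str, prefix: str) -> tuple[int, str]:
    if api_key in blocked_keys:
        return 429, "Too many requests"
    threading.Thread(target=background_check, args=(api_key,), daemon=True).start()
    return 200, CACHE.get(prefix, "")  # served immediately, no blocking round trip

print(handle_request("demo-key", "567159"))
```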

What that means is a dramatic reduction in the time to first byte:

[Chart: time to first byte, before and after the change]

That's almost a 40% reduction in wait time! It's an awesome example of how continuous investment in the way we run this thing yields tangible results that make a meaningful difference to the value people get from the service.

Passkeys!

Just one more thing...

This is all new, all free and all available to everyone, whether they have a paid subscription or not. Remember when I got phished last year? I sure do, and I vowed to use that experience to maximise the adoption of passkeys wherever possible. So, putting my money (and time) where my mouth is, we've now launched passkeys as an alternate means of signing into your dashboard:

[Screenshot: the passkey sign-in option on the HIBP dashboard]

This saves you needing a "magic" link via email on every sign-in, and whilst it doesn't constitute 2FA (the passkey becomes a single factor used to sign in), it massively streamlines how you access the dashboard. And because we never used passwords for access in the first place, the only account-takeover risk our customers face is someone gaining access to either their email account or to where they store their passkeys (in either case, they have much bigger problems!).

Here's how it works: start by signing into your dashboard, then heading over to the "Passkeys" section on the left of the screen and adding a new one:

[Screenshot: adding a new passkey from the dashboard's "Passkeys" section]

The name is so you can keep track of which passkey you save where. I save most of mine in 1Password, but you can also save them on a physical security key or in your browser, for example. Clicking "Continue" will cause your browser to prompt you for the location where you'd like to store it, and again, that's 1Password for me:

[Screenshot: the browser prompting for where to store the passkey]

And that's it - we're done!

[Screenshot: the passkey successfully added]

So, how does it work? Check this out, and don't blink or you'll miss it:

[Animation: signing in to the dashboard with a passkey]

Compared to typing in your email address, hitting the "Sign In" button, flicking over to the mailbox, waiting for the mail to arrive, then clicking the link, we're down from let's call it 30 seconds to about 3 seconds. Nice 😎

Even though there isn't much security benefit to doing this on HIBP (you can still sign in via email, too), we wanted to build this as an example of just how easy it is. It took Stefán about an hour to build a first cut of this (with support from Copilot), and, aside from the dev time, building passkey support into your website is totally free. There are no external services you need to pay for, no hardware to buy or special crypto concepts to grasp. Passkeys are dead simple, and web developers with even a passing interest in security and usability should be adding support for them right now. We also wanted to make sure they were freely available to anyone, regardless of whether you have a paid subscription, because security like this should be the baseline, not a paid extra. So, go and give them a go in HIBP now.

And just in case you want to really geek out on how passkeys work, Stefán presented this at NDC Security in Oslo earlier this month:

All the Plans and Future Changes for Existing Subscribers

It's easiest just to see the whole overview all in one image (or jump over to the pricing page on the website), and it largely reflects everything described above:

[Image: overview of all plans and pricing]

One immediate difference to how we've previously represented the plans is that the annual price is now shown as a monthly figure. It turns out that the vast majority of our subscribers choose annual billing, so leading with the month-to-month pricing put the least relevant figures front and centre. As we looked around at other services, presenting it this way was a pretty consistent trend, especially when one annual subscription is more cost-effective than renewing a monthly one 12 times (the annual price works out to roughly 10 times the monthly one, rather than 12).

Another change is that we're going to cap the number of larger domains (those with over 10 breached addresses) that can be searched on each subscription. Let me explain why: Every time we load a data breach, each record in the breach is checked against each domain being monitored. In 2025, we added 2.9 billion breached records, and we have 400k monitored domains. Multiply those out, and we're looking at 1.16 quadrillion checks for our subscribers each year. This is all handled by SQL queries, so it's not like we're getting hit with human overhead at scale, but we're getting hit hard with SQL costs. Across everything we pay to run this service (storage, app hosting, functions, API management, App Insights, bandwidth, etc.), the SQL bill is more than the total for all other services combined. In addition to how we currently calculate plan size based on breached email count, we're adding a cap on the number of domains per plan.

Only domains with more than 10 breached addresses are included in the cap.

The "10" threshold aligns with the existing requirement for a domain to need a subscription at all, and means this change impacts only a single-digit percentage of subscribers. It also helps filter out noise so the cap reflects domains that actually matter. For larger domains beyond the cap, all current alerts will continue to work just fine until the subscriber runs a domain search; at that point, they'll have the option to upgrade the plan or reduce the number of domains. But none of that affects existing subscribers now:

There will be no changes to existing plans until at least August 2 this year.

We do an annual price revision each August, and that's already factored into the table above. That applies to any new subscriptions immediately, but it won't touch existing ones until August 2 at the earliest. The revised pricing only kicks in on the next subscription renewal after that date, so it could be as late as August 2027 if you're an existing subscriber. The same goes for the cap on the number of domains being monitored - there's no impact on existing subscribers until at least August. That leaves plenty of time to cancel, downgrade, upgrade, or just do nothing, and the plan will automatically roll over to the new one. We'll be emailing everyone in the coming days with details of precisely what will change.

Note: if you had an old Pwned 5 subscription for the sake of stealer log access, we'll be rolling all those folks over to Pro 1 and applying a permanent discount code to ensure there's no change in price by moving to the higher plan (it'll actually drop slightly). That'll be explained in the upcoming email; it just made more sense to keep stealer logs in Pro and move people over, and this gives them free access to all the new stuff too.

Speaking of which, the thing that (almost) nobody reads but everyone is subject to has been revised to reflect the changes described above - the Terms of Use. For the first time, we've also summarised all the changes and linked through to an archive of the old ones, so if you really love digging through a long document prepared by lawyers, this should make you happy 😊

We're Still Doing Credit Cards via Stripe

While I'm here, just a quick comment on our ongoing Stripe dependency and, as a result, the necessity to pay for public services via credit card. I've written before about some of the challenges we've faced with customers' requests to pay by other means and how, push comes to shove, they (almost) always find a way around internal barriers. Let me share a recent empirical anecdote about this:

Just the other day, I had a call with a Fortune 500 company that was initially interested in our enterprise services. As the discussion unfolded, it became evident that the public services would more than suffice and that the enterprise route was too burdensome for their particular use case. Be that as it may, the procurement lady on the call was adamant that payment by credit card was impossible, even going to the extent of making a pretty bold statement:

No Fortune 500 company is going to pay for services like this via credit card!

O RLY? If only I had the data to check that claim... 😊 Based on a list of their domains, 132 unique Fortune 500 companies have paid for our services by credit card. The real number will be higher because many more of their domains are not on that list, or purchases have been made via an email address not on the corporate domain. Let's call it somewhere between a quarter and a third of the Fortune 500 who've purchased direct via the world's most common payment method. In other words, a significantly different number from the "zero" claim.

I've dropped the hard facts here out of both frustration from our dealings with unnecessarily artificial barriers and in support of the folks out there who, just like me in my corporate days, had to deal with "Neville" in procurement. Per that linked blog post, push back against "corporate policy" prohibiting payment by card, and statistically, you'll likely find you're not the 1 in 160 who can't make a simple payment.

Summary

We're continuing to massively invest in expanding HIBP in every way we can find. Nearly 3 billion additional breached records last year, hundreds of billions of free Pwned Passwords queries during that time, a bunch of new tweaks and features everyone gets access to and, of course, all the new stuff we've rolled into the higher plans. These new features are the culmination of a huge volume of work dating back to November, when I took this pic of our little team during our planning meeting together in Oslo.

[Photo: the HIBP team at the planning meeting in Oslo]

We all hope it helps people use our Have I Been Pwned services to do more good after bad things happen.

  •  

One POST request, six API keys: breaking into popular MCP servers

tl;dr - one POST request decrypted every API key in a 14K-star project. tested 5 more MCP servers, found RCE, SSRF, prompt injection, and command injection. 70K combined github stars, zero auth on most of them.

  • archon (13.7K stars): zero auth on entire credential API. one POST to /api/credentials/status-check returns every stored API key decrypted in plaintext. can also create and delete credentials. CORS is *, server binds 0.0.0.0

  • blender-mcp (18K stars): prompt injection hidden in tool docstrings. the server instructs the AI to "silently remember" your API key type without telling you. also unsandboxed exec() for code execution

  • claude-flow (27K stars): hardcoded --dangerously-skip-permissions on every spawned claude process. 6 execSync calls with unsanitized string interpolation. textbook command injection

  • deep-research (4.5K stars): MD5 auth bypass on crawler endpoint (empty password = trivial to compute). once past that, full SSRF - no URL validation at all. also promptOverrides lets you replace the system prompt, and CORS is *

  • mcp-feedback-enhanced (3.6K stars): unauthenticated websocket accepts run_command messages. got env vars, ssh keys, aws creds. weak command blocklist bypassable with python3 -c

  • figma-console-mcp (1.3K stars, 71K weekly npm downloads): readFileSync on user-controlled paths, directory traversal, websocket accepts connections with no origin header, any local process can register as a fake figma plugin and intercept all AI commands

all tested against real published packages, no modified code. exploit scripts and evidence logs linked in the post.

the common theme: MCP has no auth standard so most servers just ship without any.

submitted by /u/Kind-Release-3817
  •  

An attack class that passes every current LLM filter


https://shapingrooms.com/research

I opened OWASP issue #807 a few weeks ago proposing a new attack class. The paper is published today following coordinated disclosure to Anthropic, OpenAI, Google, xAI, CERT/CC, OWASP, and agentic framework maintainers.

Here is what I found.

Ordinary language buried in prior context shifts how a model reasons about a consequential decision before any instruction arrives. No adversarial signature. No override command. The model executes its instructions faithfully, just from a different starting angle than the operator intended.

I know that sounds like normal context sensitivity. It isn't, or at least the effect size is much larger than I expected. Matched control text of identical length and semantic similarity produced significantly smaller directional shifts. This specific class of language appears to be modeled differently. I documented binary decision reversals with paired controls across four frontier models.

The distinction from prompt injection: there is no payload. Current defenses scan for facts disguised as commands. This is frames disguised as facts. Nothing for current filters to catch.

In agentic pipelines it gets worse. Posture installs in Agent A, survives summarization, and by Agent C reads as independent expert judgment. No phrase to point to in the logs. The decision was shaped before it was made.

If you have seen unexplained directional drift in a pipeline and couldn't find the source, this may be what you were looking at. The lens might give you something to work with.

I don't have all the answers. The methodology is black-box observational, no model internals access, small N on the propagation findings. Limitations are stated plainly in the paper. This needs more investigation, larger N, and ideally labs with internals access stress-testing it properly.

If you want to verify it yourself, demos are at https://shapingrooms.com/demos - run them against any frontier model. If you have a production pipeline that processes retrieved documents or passes summaries between agents, it may be worth applying this lens to your own context flow.

Happy to discuss methodology, findings, or pushback on the framing. The OWASP thread already has some useful discussion from independent researchers who have documented related patterns in production.

GitHub issue: https://github.com/OWASP/www-project-top-10-for-large-language-model-applications/issues/807

submitted by /u/lurkyloon
  •  

pentest-ai - 6 Claude Code subagents for offensive security research (engagement planning, recon analysis, exploit methodology, detection engineering, STIG compliance, report writing)

I built a set of Claude Code subagents designed for pentesters and red teamers doing authorized engagements.

What it does: You install 6 agent files into Claude Code, and it automatically routes to the right specialist based on what you're working on. Paste Nmap output and it prioritizes attack vectors with follow-up commands. Ask about an AD attack and it gives you the methodology AND the detection perspective. Ask it to write a report finding and it formats it to PTES standards with CVSS scoring.

The agents cover:

- Engagement planning with MITRE ATT&CK mapping

- Recon/scan output analysis (Nmap, Nessus, BloodHound, etc.)

- Exploitation methodology with defensive perspective built in

- Detection rule generation (Sigma, Splunk SPL, Elastic KQL)

- DISA STIG compliance analysis with keep-open justifications

- Professional pentest report writing

Every technique references ATT&CK IDs, and the exploit guide agent is required to explain what the attack looks like from the blue team side, so it's useful for purple team work too.

Repo has example outputs so you can see the quality before installing: https://github.com/0xSteph/pentest-ai/tree/main/examples

Open to feedback. If you think an agent is missing or the methodology is off somewhere, PRs are welcome.

submitted by /u/stephnot
  •  

Chaining file upload bypass and stored XSS to create admin accounts: walkthrough with Docker PoC lab

Write up of a vulnerability chain from a recent SaaS pen test. Two medium-severity findings (file upload bypass and stored XSS) chained together for full admin account creation.

The target had CSP restricting script sources to self, CORS locked down, and CSRF tokens on forms. All functioning correctly. The chain bypassed everything by staying same-origin the entire way.

The file upload had no server-side validation (client-side accept=".pdf" only), so we uploaded a JS payload. It got served back from the app's own download endpoint on the same origin. The stored XSS in the admin inbox messaging system loaded it via an <img onerror> handler that fetched the payload and eval'd it. The payload created a backdoor admin account using the admin's session cookie.

CSP didn't block it because the script was hosted same-origin via the upload. CORS irrelevant since nothing crossed an origin boundary. CSRF tokens didn't matter because same-origin JS can read the DOM and grab them anyway.

Full write up with attack steps, code, and screenshots: https://kurtisebear.com/2026/03/28/chaining-file-upload-xss-admin-compromise/

Also built a Docker lab that reproduces the exact chain with the security controls in place. PHP app, both vulns baked in, admin + user accounts seeded. Clone and docker-compose up: https://github.com/echosecure/vuln-chain-lab

submitted by /u/kurtisebear
  •  

DVRTC: intentionally vulnerable VoIP/WebRTC lab with SIP enumeration, RTP bleed, TURN abuse, and credential cracking exercises

Author here. DVRTC is our attempt to fill a gap that's been there for a while: web app security has DVWA and friends, but there's been nothing equivalent for VoIP and WebRTC attack techniques.

The first scenario (pbx1) deploys a full stack (Kamailio as the SIP proxy, Asterisk as the back-end PBX, rtpengine for media, coturn for TURN/STUN), with each component configured to exhibit specific vulnerable behaviors:

  • Kamailio returns distinguishable responses for valid vs. invalid extensions (enumeration), logs User-Agent headers to MySQL without sanitisation (SQLi), and has a special handler that triggers digest auth leaks for extension 2000
  • rtpengine runs with its default configuration, which enables RTP bleed (leaking media from other sessions) and RTP injection
  • coturn uses hardcoded credentials and a permissive relay policy for the TURN abuse exercise
  • Asterisk has extension 1000 with a weak password (1500) for online cracking

7 exercises with step-by-step instructions. There's also a live instance at pbx1.dvrtc.net if you want to try it without standing up your own.

Happy to answer questions.

submitted by /u/EnableSecurity
  •  

The Age-Gated Internet: Child Safety, Identity Infrastructure, and the Not So Quiet Re-Architecting of the Web

In enterprise environments, identity effectively became the control plane once network perimeters broke down (e.g. zero trust, et cetera).

I'm seeing a similar pattern emerging on the public internet via age verification and safety regulation, but with identity moving closer to the access layer itself.

Not just: "Are you over 18?"

But: identity assertions are becoming part of how access is granted at the OS/device/app store level.

From a security perspective, this seems to introduce some new attack surfaces:

  • high-value identity tokens at the OS/device level
  • new trust boundaries between apps, OS, and third-party verifiers
  • incentives to target device compromise or token reuse rather than account-level bypass
  • potential centralisation of identity providers as enforcement points

Questions I'm trying to think through:

  • Does this effectively make identity providers the new perimeter/control plane?
  • How would you model this system (closer to DRM, identity federation, or something else?)
  • What are the likely failure modes if this layer becomes centralised?
  • Are decentralised / on-device credentials actually viable from a security standpoint, or do they just shift the attack surface?

Curious how people here would threat model this or where the obvious breakpoints are.

submitted by /u/wayne_horkan
  •  

Making NTLM-Relaying Relevant Again by Attacking Web Servers with WebRelayX

NTLM-Relaying has been proclaimed dead a number of times, signing requirements for SMB and LDAP make it nearly impossible to use captured NTLM authentications anymore. However, it is still possible to relay to many webservers that do not enforce Extended Protection for Authentication (not just ADCS / ESC8).

submitted by /u/seccore_gmbh
  •  

Our first pentest on a 100% vibe-coded application: analysis & feedback

We pentested a web app built 100% with AI: no human-written code. Functional, clean, well-structured. But security-wise, we found critical issues on day one: LFI, IDOR, vulnerable dependencies, and more.

AI-generated code is not secure by default. And vibe coding moves fast enough that security gets skipped entirely.

Full writeup with technical details and recommendations: https://www.hackmosphere.fr/en/?p=3803

Anyone else seeing this pattern in AI-generated apps?

submitted by /u/Hackmosphere
  •  

Alleged OVHcloud data of 1.6M customers and 5.9M websites posted on popular forum for sale. CEO Comments

There are reports of OVHcloud-related data being posted on a forum for sale. No official confirmation so far from OVHcloud. Given OVH's scale, the potential impact could be significant depending on scope, especially in Europe.

UPDATE: OVHcloud CEO Octave Klaba has commented that the sample dataset was not found in their system.

submitted by /u/raptorhunter22
  •  

Weekly Update 496


Watching OpenClaw do its thing must be like watching the first plane take flight. It's a bit rickety and stuck together with a lot of sticky tape, but squint and you can see the potential for agentic AI to change the world as we know it. And I don't think that's hyperbolic. A lot of what people claim to have done with it is hyperbolic, and as with all new tech, the challenge is to cut through the noise and find the value. Stay tuned for more on that, as I've already found some really useful applications for it that help me do my job better; I think I'll devote my next weekly vid to just that.

  •  

We open-sourced 209 security tests for multi-agent AI systems (MCP, A2A, L402/x402 protocols)

Most AI security testing focuses on the model: prompt injection, jailbreaking, and output filtering.

We've been working on something different: testing the agent *system*. The protocols, integrations, and decision paths that determine what agents do in production. The result is a framework with 209 tests covering 4 wire protocols:

**MCP (Model Context Protocol)** Tool invocation security: auth, injection, data leakage, tool abuse, scope creep

**A2A (Agent-to-Agent)** Inter-agent communication: message integrity, impersonation, privilege escalation

**L402 (Lightning)** Bitcoin-based agent payments: payment flow integrity, double-spend, authorization bypass

**x402 (USDC/Stablecoin)** Fiat-equivalent agent payments: transaction limits, approval flows, compliance

Every test maps to a specific OWASP ASI (Agentic Security Initiatives) Top 10 category. Cross-referenced with NIST AI 800-2 categories for compliance reporting.

```
pip install agent-security-harness
```

20+ enterprise platform adapters included (Salesforce, ServiceNow, Workday, etc.).

MIT license. Feedback welcome. Especially from anyone running multi-agent systems in production. What attack vectors are we missing?

submitted by /u/Careful-Living-1532
  •  

Detect SnappyClient C&C Traffic Using PacketSmith + Yara-X Detection Module

SnappyClient is malware identified by Zscaler that uses a custom binary protocol (encrypted and compressed) to communicate with its C&C server, leaving little to work with when it comes to network detection.

At Netomize, we set out to write a detection rule targeting the encrypted message packet by leveraging the unique features of PacketSmith + Yara-X detection module, and the result is documented in this blog post.

submitted by /u/MFMokbel
  •  

Vulnerability Disclosure - SCHNEIDER ELECTRIC Modicon Controllers M241 / M251 / M262

Schneider Electric has addressed two vulnerabilities disclosed by Team82 in its Modicon M241, M251, and M262 PLC line. The vulnerabilities can allow an attacker to cause a denial-of-service condition that affects the availability of the controller.

Read more on our Disclosure Dashboard: http://claroty.com/team82/disclosure-dashboard

Or download SE's advisory: https://download.schneider-electric.com/files?p_Doc_Ref=SEVD-2026-069-01&p_enDocType=Security+and+Safety+Notice&p_File_Name=SEVD-2026-069-01.pdf

submitted by /u/clarotyofficial
  •  

Agent skill marketplace supply chain attack: 121 skills across 7 repos vulnerable to GitHub username hijacking, 5 scanners disagree by 10x on malicious skill rates (arXiv:2603.16572)

**Submission URL**: https://arxiv.org/abs/2603.16572

**Repository hijacking**: Skills.sh and SkillsDirectory index agent skills by pointing to GitHub repository URLs rather than hosting files directly. When an original repository owner renames their GitHub account, the previous username becomes available. An adversary who claims that username and recreates the repository intercepts all future skill downloads. The authors found 121 skills forwarding to 7 vulnerable repositories. The most-downloaded hijackable skill had 2,032 downloads.

**Scanner disagreement**: The paper tested 5 scanners against 238,180 unique skills from 4 marketplaces. Fail rates ranged from 3.79% (Snyk on Skills.sh) to 41.93% (OpenClaw scanner on ClawHub). Cross-scanner consensus was negligible: only 33 of 27,111 skills (0.12%) flagged by all five. When repository-context re-scoring was applied to the 2,887 scanner-flagged skills, only 0.52% remained in malicious-flagged repositories.

**Live credentials**: A TruffleHog scan found 12 functioning API credentials (NVIDIA, ElevenLabs, Gemini, MongoDB, and others) embedded across the corpus.

**What to do:**

- Pin skills to specific commit hashes, not mutable branch heads
- Monitor for repository ownership changes on skills already deployed
- Require at minimum two independent scanners to flag a skill before treating as confirmed
- Prefer direct-hosting marketplaces (ClawHub's model) over link-out distribution

The repository hijacking vector is real and responsibly disclosed. The link-out distribution model is an architectural weakness; no patch resolves it. We wrote a practitioner-focused analysis covering this and 6 other papers from this week at
submitted by /u/cyberamyntas