The payment card giant MasterCard just fixed a glaring error in its domain name server settings that could have allowed anyone to intercept or divert Internet traffic for the company by registering an unused domain name. The misconfiguration persisted for nearly five years until a security researcher spent $300 to register the domain and prevent it from being grabbed by cybercriminals.
A DNS lookup on the domain az.mastercard.com on Jan. 14, 2025 shows the mistyped domain name a22-65.akam.ne.
From June 30, 2020 until January 14, 2025, one of the core Internet servers that MasterCard uses to direct traffic for portions of the mastercard.com network was misnamed. MasterCard.com relies on five shared Domain Name System (DNS) servers at the Internet infrastructure provider Akamai [DNS acts as a kind of Internet phone book, by translating website names to numeric Internet addresses that are easier for computers to manage].
All of the Akamai DNS server names that MasterCard uses are supposed to end in “akam.net” but one of them was misconfigured to rely on the domain “akam.ne.”
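Mistakes like this are straightforward to check for programmatically. As a minimal sketch, assuming Node.js and an illustrative expected-suffix list (not MasterCard's actual configuration), an NS record audit might look like this:

// Flag name servers that don't end in an expected suffix - e.g. the
// "akam.ne" vs "akam.net" typo described above. Node.js built-ins only.
const { resolveNs } = require('node:dns/promises');

const EXPECTED = ['.akam.net']; // illustrative; adjust to your DNS provider

async function auditNameServers(domain) {
  for (const ns of await resolveNs(domain)) {
    const ok = EXPECTED.some((suffix) => ns.toLowerCase().endsWith(suffix));
    console.log(`${ok ? 'ok ' : 'BAD'} ${ns}`);
  }
}

auditNameServers('mastercard.com').catch(console.error);

Run against a zone carrying the typo described above, the "BAD" line would have flagged the stray akam.ne record years earlier.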
This tiny but potentially critical typo was discovered recently by Philippe Caturegli, founder of the security consultancy Seralys. Caturegli said he guessed that nobody had yet registered the domain akam.ne, which is under the purview of the top-level domain authority for the West African nation of Niger.
Caturegli said it took $300 and nearly three months of waiting to secure the domain with the registry in Niger. After enabling a DNS server on akam.ne, he noticed hundreds of thousands of DNS requests hitting his server each day from locations around the globe. Apparently, MasterCard wasn’t the only organization that had fat-fingered a DNS entry to include “akam.ne,” but they were by far the largest.
Had he enabled an email server on his new domain akam.ne, Caturegli likely would have received wayward emails directed toward mastercard.com or other affected domains. If he’d abused his access, he probably could have obtained website encryption certificates (SSL/TLS certs) that were authorized to accept and relay web traffic for affected websites. He may even have been able to passively receive Microsoft Windows authentication credentials from employee computers at affected companies.
But the researcher said he didn’t attempt to do any of that. Instead, he alerted MasterCard that the domain was theirs if they wanted it, copying this author on his notifications. A few hours later, MasterCard acknowledged the mistake, but said there was never any real threat to the security of its operations.
“We have looked into the matter and there was not a risk to our systems,” a MasterCard spokesperson wrote. “This typo has now been corrected.”
Meanwhile, Caturegli received a request submitted through Bugcrowd, a program that offers financial rewards and recognition to security researchers who find flaws and work privately with the affected vendor to fix them. The message suggested his public disclosure of the MasterCard DNS error via a post on LinkedIn (after he’d secured the akam.ne domain) was not aligned with ethical security practices, and passed on a request from MasterCard to have the post removed.
MasterCard’s request to Caturegli, a.k.a. “Titon” on infosec.exchange.
Caturegli said while he does have an account on Bugcrowd, he has never submitted anything through the Bugcrowd program, and that he reported this issue directly to MasterCard.
“I did not disclose this issue through Bugcrowd,” Caturegli wrote in reply. “Before making any public disclosure, I ensured that the affected domain was registered to prevent exploitation, mitigating any risk to MasterCard or its customers. This action, which we took at our own expense, demonstrates our commitment to ethical security practices and responsible disclosure.”
Most organizations have at least two authoritative domain name servers, but some handle so many DNS requests that they need to spread the load over additional DNS server domains. In MasterCard’s case, that number is five, so it stands to reason that an attacker who managed to seize control over just one of those domains would only be able to see about one-fifth of the overall DNS requests coming in.
But Caturegli said the reality is that many Internet users are relying at least to some degree on public traffic forwarders or DNS resolvers like Cloudflare and Google.
“So all we need is for one of these resolvers to query our name server and cache the result,” Caturegli said. By setting their DNS server records with a long TTL or “Time To Live” — a setting that controls how long resolvers may cache a DNS answer before requesting a fresh copy — an attacker’s poisoned instructions for the target domain can be propagated by large cloud providers.
“With a long TTL, we may reroute a LOT more than just 1/5 of the traffic,” he said.
The researcher said he’d hoped that the credit card giant might thank him, or at least offer to cover the cost of buying the domain.
“We obviously disagree with this assessment,” Caturegli wrote in a follow-up post on LinkedIn regarding MasterCard’s public statement. “But we’ll let you judge — here are some of the DNS lookups we recorded before reporting the issue.”
Caturegli posted this screenshot of MasterCard domains that were potentially at risk from the misconfigured domain.
As the screenshot above shows, the misconfigured DNS server Caturegli found involved the MasterCard subdomain az.mastercard.com. It is not clear exactly how MasterCard uses this subdomain; however, the naming convention suggests it corresponds to production servers at Microsoft’s Azure cloud service. Caturegli said the domains all resolve to Internet addresses at Microsoft.
“Don’t be like Mastercard,” Caturegli concluded in his LinkedIn post. “Don’t dismiss risk, and don’t let your marketing team handle security disclosures.”
One final note: The domain akam.ne has been registered previously — in December 2016 by someone using the email address um-i-delo@yandex.ru. The Russian search giant Yandex reports this user account belongs to an “Ivan I.” from Moscow. Passive DNS records from DomainTools.com show that between 2016 and 2018 the domain was connected to an Internet server in Germany, and that the domain was left to expire in 2018.
This is interesting given a comment on Caturegli’s LinkedIn post from an ex-Cloudflare employee who linked to a report he co-authored on a similar typo domain apparently registered in 2017 for organizations that may have mistyped their AWS DNS server as “awsdns-06.ne” instead of “awsdns-06.net.” DomainTools reports that this typo domain also was registered to a Yandex user (playlotto@yandex.ru), and was hosted at the same German ISP — Team Internet (AS61969).
A financial firm registered in Canada has emerged as the payment processor for dozens of Russian cryptocurrency exchanges and websites hawking cybercrime services aimed at Russian-speaking customers, new research finds. Meanwhile, an investigation into the Vancouver street address used by this company shows it is home to dozens of foreign currency dealers, money transfer businesses, and cryptocurrency exchanges — none of which are physically located there.
Richard Sanders is a blockchain analyst and investigator who advises the law enforcement and intelligence community. Sanders spent most of 2023 in Ukraine, traveling with Ukrainian soldiers while mapping the shifting landscape of Russian crypto exchanges that are laundering money for narcotics networks operating in the region.
More recently, Sanders has focused on identifying how dozens of popular cybercrime services are getting paid by their customers, and how they are converting cryptocurrency revenues into cash. For the past several months, he’s been signing up for various cybercrime services, and then tracking where their customer funds go from there.
The 122 services targeted in Sanders’ research include some of the more prominent businesses advertising on the cybercrime forums today, such as:
- abuse-friendly or “bulletproof” hosting providers like anonvm[.]wtf and PQHosting;
- sites selling aged email, financial, or social media accounts, such as verif[.]work and kopeechka[.]store;
- anonymity or “proxy” providers like crazyrdp[.]com and rdp[.]monster;
- anonymous SMS services, including anonsim[.]net and smsboss[.]pro.
The site Verif dot work, which processes payments through Cryptomus, sells financial accounts, including debit and credit cards.
Sanders said he first encountered some of these services while investigating Kremlin-funded disinformation efforts in Ukraine, as they are all useful in assembling large-scale, anonymous social media campaigns.
According to Sanders, all 122 of the services he tested are processing transactions through a company called Cryptomus, which says it is a cryptocurrency payments platform based in Vancouver, British Columbia. Cryptomus’ website says its parent firm — Xeltox Enterprises Ltd. (formerly certa-pay[.]com) — is registered as a money service business (MSB) with the Financial Transactions and Reports Analysis Centre of Canada (FINTRAC).
Sanders said the payment data he gathered also shows that at least 56 cryptocurrency exchanges are currently using Cryptomus to process transactions, including financial entities with names like casher[.]su, grumbot[.]com, flymoney[.]biz, obama[.]ru and swop[.]is.
These platforms are built for Russian speakers, and they each advertise the ability to anonymously swap one form of cryptocurrency for another. They also allow the exchange of cryptocurrency for cash in accounts at some of Russia’s largest banks — nearly all of which are currently sanctioned by the United States and other western nations.
A machine-translated version of Flymoney, one of dozens of cryptocurrency exchanges apparently nested at Cryptomus.
An analysis of their technology infrastructure shows that all of these exchanges use Russian email providers, and most are directly hosted in Russia or by Russia-backed ISPs with infrastructure in Europe (e.g. Selectel, Netwarm UK, Beget, Timeweb and DDoS-Guard). The analysis also showed nearly all 56 exchanges used services from Cloudflare, a global content delivery network based in San Francisco.
“Purportedly, the purpose of these platforms is for companies to accept cryptocurrency payments in exchange for goods or services,” Sanders told KrebsOnSecurity. “Unfortunately, it is next to impossible to find any goods for sale with websites using Cryptomus, and the services appear to fall into one or two different categories: Facilitating transactions with sanctioned Russian banks, and platforms providing the infrastructure and means for cyber attacks.”
Cryptomus did not respond to multiple requests for comment.
The Cryptomus website and its FINTRAC listing say the company’s registered address is Suite 170, 422 Richards St. in Vancouver, BC. This address was the subject of an investigation published in July by CTV National News and the Investigative Journalism Foundation (IJF), which documented dozens of cases across Canada where multiple MSBs are incorporated at the same address, often without the knowledge or consent of the location’s actual occupant.
This building at 422 Richards St. in downtown Vancouver is the registered address for 90 money services businesses, including 10 that have had their registrations revoked. Image: theijf.org/msb-cluster-investigation.
Their inquiry found 422 Richards St. was listed as the registered address for at least 76 foreign currency dealers, eight MSBs, and six cryptocurrency exchanges. At that address is a three-story building that used to be a bank and now houses a massage therapy clinic and a co-working space. But they found none of the MSBs or currency dealers were paying for services at that co-working space.
The reporters found another collection of 97 MSBs clustered at an address for a commercial office suite in Ontario, even though there was no evidence these companies had ever arranged for any business services at that address.
Peter German, a former deputy commissioner for the Royal Canadian Mounted Police who authored two reports on money laundering in British Columbia, told the publications that such clustering goes against the spirit of Canada’s registration requirements for these businesses, which are considered high-risk for money laundering and terrorist financing.
“If you’re able to have 70 in one building, that’s just an abuse of the whole system,” German said.
Ten MSBs registered to 422 Richards St. had their registrations revoked. One company at 422 Richards St. whose registration was revoked this year had a director with a listed address in Russia, the publications reported. “Others appear to be directed by people who are also directors of companies in Cyprus and other high-risk jurisdictions for money laundering,” they wrote.
A review of FINTRAC’s registry (.CSV) shows many of the MSBs at 422 Richards St. are international money transfer or remittance services to countries like Malaysia, India and Nigeria. Some act as currency exchanges, while others appear to sell merchant accounts and online payment services. Still, KrebsOnSecurity could find no obvious connections between the 56 Russian cryptocurrency exchanges identified by Sanders and the dozens of payment companies that FINTRAC says share an address with the Cryptomus parent firm Xeltox Enterprises.
In August 2023, Binance and some of the largest cryptocurrency exchanges responded to sanctions against Russia by cutting off many Russian banks and restricting Russian customers to transactions in Rubles only. Sanders said prior to that change, most of the exchanges currently served by Cryptomus were handling customer funds with their own self-custodial cryptocurrency wallets.
By September 2023, Sanders said he found the exchanges he was tracking had all nested themselves like Matryoshka dolls at Cryptomus, which adds a layer of obfuscation to all transactions by generating a new cryptocurrency wallet for each order.
“They all simply moved to Cryptomus,” he said. “Cryptomus generates new wallets for each order, rendering ongoing attribution to require transactions with high fees each time.”
“Exchanges like Binance and OKX removing Sberbank and other sanctioned banks and offboarding Russian users did not remove the ability of Russians to transact in and out of cryptocurrency easily,” he continued. “In fact, it’s become easier, because the instant-swap exchanges do not even have Know Your Customer rules. The U.S. sanctions resulted in the majority of Russian instant exchanges switching from their self-custodial wallets to platforms, especially Cryptomus.”
Russian President Vladimir Putin in August signed a new law legalizing cryptocurrency mining and allowing the use of cryptocurrency for international payments. The Russian government’s embrace of cryptocurrency was a remarkable pivot: Bloomberg notes that as recently as January 2022, just weeks before Russia’s full-scale invasion of Ukraine, the central bank proposed a blanket ban on the use and creation of cryptocurrencies.
In a report on Russia’s cryptocurrency ambitions published in September, blockchain analysis firm Chainalysis said Russia’s move to integrate crypto into its financial system may improve its ability to bypass the U.S.-led financial system and to engage in non-dollar denominated trade.
“Although it can be hard to quantify the true impact of certain sanctions actions, the fact that Russian officials have singled out the effect of sanctions on Moscow’s ability to process cross-border trade suggests that the impact felt is great enough to incite urgency to legitimize and invest in alternative payment channels it once decried,” Chainalysis assessed.
Asked about its view of activity on Cryptomus, Chainalysis said Cryptomus has been used by criminals of all stripes for laundering money and/or the purchase of goods and services.
“We see threat actors engaged in ransomware, narcotics, darknet markets, fraud, cybercrime, sanctioned entities and jurisdictions, and hacktivism making deposits to Cryptomus for purchases but also laundering the services using Cryptomus payment API,” the company said in a statement.
It is unclear if Cryptomus and/or Xeltox Enterprises have any presence in Canada at all. A search in the United Kingdom’s Companies House registry for Xeltox’s former name — Certa Payments Ltd. — shows an entity by that name incorporated at a mail drop in London in December 2023.
The sole shareholder and director of that company is listed as a 25-year-old Ukrainian woman in the Czech Republic named Vira Krychka. Ms. Krychka was recently appointed the director of several other new U.K. firms, including an entity created in February 2024 called Globopay UAB Ltd, and another called WS Management and Advisory Corporation Ltd. Ms. Krychka did not respond to a request for comment.
WS Management and Advisory Corporation bills itself as the regulatory body that exclusively oversees licenses of cryptocurrencies in the jurisdiction of Western Sahara, a disputed territory in northwest Africa. Its website says the company assists applicants with bank setup and formation, online gaming licenses, and the creation and licensing of foreign exchange brokers. One of Certa Payments’ former websites — certa[.]website — also shared a server with 12 other domains, including rasd-state[.]ws, a website for the Central Reserve Authority of the Western Sahara.
The website crasadr dot com, the official website of the Central Reserve Authority of Western Sahara.
This business registry from the Czech Republic indicates Ms. Krychka works as a director at an advertising and marketing firm called Icon Tech SRO, which was previously named Blaven Technologies (Blaven’s website says it is an online payment service provider).
In August 2024, Icon Tech changed its name again to Mezhundarondnaya IBU SRO, which describes itself as an “experienced company in IT consulting” that is based in Armenia. The same registry says Ms. Krychka is somehow also a director at a Turkish investment venture. So much business acumen at such a young age!
For now, Canada remains an attractive location for cryptocurrency businesses to set up shop, at least on paper. The IJF and CTV News found that as of February 2024, there were just over 3,000 actively registered MSBs in Canada, 1,247 of which were located at the same building as at least one other MSB.
“That analysis does not include the roughly 2,700 MSBs whose registrations have lapsed, been revoked or otherwise stopped,” they observed. “If they are included, then a staggering 2,061 out of 5,705 total MSBs share a building with at least one other MSB.”
I've spent more than a decade now writing about how to make Have I Been Pwned (HIBP) fast. Really fast. Fast to the extent that sometimes, it was even too fast:
The response from each search was coming back so quickly that the user wasn’t sure if it was legitimately checking subsequent addresses they entered or if there was a glitch.
Over the years, the service has evolved to use emerging new techniques to not just make things fast, but make them scale more under load, increase availability and sometimes, even drive down cost. For example, 8 years ago now I started rolling the most important services to Azure Functions, "serverless" code that was no longer bound by logical machines and would just scale out to whatever volume of requests was thrown at it. And just last year, I turned on Cloudflare cache reserve to ensure that all cachable objects remained cached, even under conditions where they previously would have been evicted.
And now, the pièce de résistance, the coolest performance thing we've done to date (and it is now "we", thank you Stefán): just caching the whole lot at Cloudflare. Everything. Every search you do... almost. Let me explain, firstly by way of some background:
When you hit any of the services on HIBP, the first place the traffic goes from your browser is to one of Cloudflare's 330 "edge nodes":
As I sit here writing this on the Gold Coast on Australia's eastern seaboard, any request I make to HIBP hits that edge node on the far right of the Aussie continent which is just up the road in Brisbane. The capital city of our great state of Queensland is just a short jet ski away, about 80km as the crow flies. Before now, every single time I searched HIBP from home, my request bytes would travel up the wire to Brisbane and then take a giant 12,000km trip to Seattle where the Azure Function in the West US Azure data centre would query the database before sending the response 12,000km back west to Cloudflare's edge node, then the final 80km down to my Surfers Paradise home. But what if it didn't have to be that way? What if that data was already sitting on the Cloudflare edge node in Brisbane? And the one in Paris, and the one in, well, I'm not even sure where all those blue dots are, but what if it was everywhere? Several awesome things would happen:

- Searches would be way faster, because responses would come from the nearest edge node rather than the other side of the planet
- Availability would improve, because a cached result can still be served even when the origin is struggling
- Traffic to the origin (and with it, cost) would plummet, because most requests would never need to leave Cloudflare's network
In short, pushing data and processing "closer to the edge" benefits both our customers and ourselves. But how do you do that for 5 billion unique email addresses? (Note: As of today, HIBP reports over 14 billion breached accounts; the number of unique email addresses is lower as, on average, each breached address has appeared in multiple breaches.) To answer this question, let's recap on how the data is queried:

- By email address, via the search box on the front page of the website
- By email address, via the public API
- By SHA-1 hash prefix, via the enterprise k-anonymity API (described in a previous blog post)
Let's delve into that last point further because it's the secret sauce to how this whole caching model works. In order to provide subscribers of this service with complete anonymity over the email addresses being searched for, the only data passed to the API is the first six characters of the SHA-1 hash of the full email address. If this sounds odd, read the previous blog post on the k-anonymity model for full details. The important thing for now, though, is that it means there are a total of 16^6 different possible requests that can be made to the API, which is just over 16 million. Further, we can transform the first two use cases above into k-anonymity searches on the server side, as that simply involves hashing the email address and taking those first six characters.
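To make that concrete, here's a minimal sketch of the transformation (the trim/lower-case normalization is my assumption, not documented behaviour):

// Derive the six-character SHA-1 prefix that is all the API ever sees.
// Runs in browsers and Cloudflare Workers via the Web Crypto API.
async function hashPrefix(email) {
  const bytes = new TextEncoder().encode(email.trim().toLowerCase()); // normalization assumed
  const digest = await crypto.subtle.digest('SHA-1', bytes);
  return [...new Uint8Array(digest)]
    .map((b) => b.toString(16).padStart(2, '0'))
    .join('')
    .toUpperCase()
    .slice(0, 6); // one of 16^6 = 16,777,216 possible values
}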
In summary, this means we can boil the entire searchable database of email addresses down to the following: just over 16 million possible hash prefix requests, each with a corresponding response.
That's a large albeit finite list, and that's what we're now caching. So, here's what a search via email address looks like:

1. The full email address is sent to the unified search endpoint
2. Code on the edge hashes the address with SHA-1
3. The first six characters of the hash are taken as the lookup key
4. The cache is checked for that key and, if the result isn't there, it's pulled from the origin and cached
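As a minimal illustration of those steps (not HIBP's actual worker; the origin URL is a placeholder, and it reuses the hashPrefix helper sketched above):

// Hash on the edge, try the cache, fall back to the origin where necessary.
export default {
  async fetch(request, env, ctx) {
    const email = new URL(request.url).searchParams.get('email') ?? '';
    const prefix = await hashPrefix(email); // steps 2 and 3 from above

    // Step 4: check the edge cache, keyed by the (placeholder) origin URL
    const cacheKey = new Request(`https://origin.example.com/range/${prefix}`);
    const cache = caches.default;

    let response = await cache.match(cacheKey);
    if (!response) {
      // Cache miss: pull from the origin and store a copy at this edge node
      response = await fetch(cacheKey);
      response = new Response(response.body, response);
      response.headers.set('Cache-Control', 'public, max-age=31536000');
      ctx.waitUntil(cache.put(cacheKey, response.clone()));
    }
    return response;
  },
};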
K-anonymity searches obviously go straight to step four, skipping the first few steps as we already know the hash prefix. All of this happens in a Cloudflare worker, so it's "code on the edge" creating hashes, checking cache then retrieving from the origin where necessary. That code also takes care of handling parameters that transform queries, for example, filtering by domain or truncating the response. It's a beautiful, simple model that's all self-contained within a worker and a very simple origin API. But there's a catch - what happens when the data changes?
There are two events that can change cached data, one is simple and one is major:

1. Someone opts out of public searchability and their address must no longer be returned: simple, as it only affects a single hash prefix
2. A new data breach is loaded: major, as potentially millions of hash prefixes change and the whole cache must be flushed
The second point is kind of frustrating as we've built up this beautiful collection of data all sitting close to the consumer where it's super fast to query, and then we nuke it all and go from scratch. The problem is it's either that or we selectively purge what could be many millions of individual hash prefixes, which you can't do:
For Zones on Enterprise plan, you may purge up to 500 URLs in one API call.
And:
Cache-Tag, host, and prefix purging each have a rate limit of 30,000 purge API calls in every 24 hour period.
We're giving all this further thought, but it's a non-trivial problem and a full cache flush is both easy and (near) instantaneous.
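For a sense of what the easy option looks like, flushing an entire zone is a single call to Cloudflare's purge API (the zone ID and token below are placeholders):

// One call nukes everything, whereas selectively purging millions of
// prefixes would exhaust the API limits quoted above.
const resp = await fetch(
  'https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/purge_cache',
  {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer <API_TOKEN>',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ purge_everything: true }),
  },
);
console.log((await resp.json()).success); // true if the flush was accepted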
Enough words, let's get to some pictures! Here's a typical week of queries to the enterprise k-anonymity API:
This is a very predictable pattern, largely due to one particular subscriber regularly querying their entire customer base each day. (Sidenote: most of our enterprise level subscribers use callbacks such that we push updates to them via webhook when a new breach impacts their customers.) That's the total volume of inbound requests, but the really interesting bit is the requests that hit the origin (blue) versus those served directly by Cloudflare (orange):
Let's take the lowest blue data point towards the end of the graph as an example:
At that time, 96% of requests were served from Cloudflare's edge. Awesome! But look at it only a little bit later:
That's when I flushed cache for the Finsure breach, and 100% of traffic started being directed to the origin. (We're still seeing 14.24k hits via Cloudflare as, inevitably, some requests in that 1-hour block were to the same hash range and were served from cache.) It then took a whole 20 hours for the cache to repopulate to the extent that the hit:miss ratio returned to about 50:50:
Look back towards the start of the graph and you can see the same pattern from when I loaded the DemandScience breach. This all does pretty funky things to our origin API:
That last sudden increase is more than a 30x traffic increase in an instant! If we hadn't been careful about how we managed the origin infrastructure, we would have built a literal DDoS machine. Stefán will write later about how we manage the underlying database to ensure this doesn't happen, but even still, whilst we're dealing with the cyclical usage patterns seen in that first graph above, I know that the best time to load a breach is later in the Aussie afternoon when the traffic is a third of what it is first thing in the morning. This helps smooth out the rate of requests to the origin such that by the time the traffic is ramping up, more of the content can be returned directly from Cloudflare. You can see that in the graphs above; that big peaky block towards the end of the last graph is pretty steady, even though the inbound traffic in the first graph over the same period of time increases quite significantly. It's like we're trying to race the increasing inbound traffic by building ourselves up a buffer in cache.
Here's another angle to this whole thing: now more than ever, loading a data breach costs us money. For example, by the end of the graphs above, we were cruising along at a 50% cache hit ratio, which meant we were only paying for half as many of the Azure Function executions, egress bandwidth, and underlying SQL database as we would have been otherwise. Flushing cache and suddenly sending all the traffic to the origin doubles our cost. Waiting until we're back at a 90% cache hit ratio literally increases those costs 10x when we flush. If I were to be completely financially ruthless about it, I would need to either load fewer breaches or bulk them together such that a cache flush is only ejecting a small amount of data anyway, but clearly, that's not what I've been doing 😄
There's just one remaining fly in the ointment...
Of those three methods of querying email addresses, the first is a no-brainer: searches from the front page of the website hit a Cloudflare Worker where it validates the Turnstile token and returns a result. Easy. However, the second two models (the public and enterprise APIs) have the added burden of validating the API key against Azure API Management (APIM), and the only place that exists is in the West US origin service. What this means for those endpoints is that before we can return search results from a location that may be just a short jet ski ride away, we need to go all the way to the other side of the world to validate the key and ensure the request is within the rate limit. We do this in the lightest possible way with barely any data transiting the request to check the key, plus we do it in async with pulling the data back from the origin service if it isn't already in cache. In other words, we're as efficient as humanly possible, but we still cop a massive latency burden.
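In rough terms, that async pattern looks like this (a sketch with hypothetical helper names, not HIBP's actual code):

// Kick off the slow, US-based key validation and the data lookup together,
// rather than serially, so the round trip to APIM overlaps the data fetch.
async function handleApiRequest(request) {
  const [auth, data] = await Promise.all([
    validateKeyWithApim(request),   // round trip to the West US origin
    getFromCacheOrOrigin(request),  // usually answered at the local edge
  ]);
  if (!auth.ok) {
    return new Response('Unauthorized', { status: 401 });
  }
  return data;
}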
Doing API management at the origin is super frustrating, but there are really only two alternatives. The first is to distribute our APIM instance to other Azure data centres, and the problem with that is we need a Premium instance of the product. We presently run on a Basic instance, which means we're talking about a 19x increase in price just to unlock that ability. But that's just to go Premium; we then need at least one more instance somewhere else for this to make sense, which means we're talking about a 28x increase. And every region we add amplifies that even further. It's a financial non-starter.
The second option is for Cloudflare to build an API management product. This is the killer piece of this puzzle, as it would put all the checks and balances within the one edge node. It's a suggestion I've put forward on many occasions now, and who knows, maybe it's already in the works, but it's a suggestion I make out of a love of what the company does and a desire to go all-in on having them control the flow of our traffic. I did get a suggestion this week about rolling what is effectively a "poor man's API management" within workers, and it's a really cool suggestion, but it gets hard when people change plans or when we want to apply quotas to APIs rather than rate limits. So c'mon Cloudflare, let's make this happen!
Finally, just one more stat on how powerful serving content directly from the edge is: I shared this stat last month for Pwned Passwords which serves well over 99% of requests from Cloudflare's cache reserve:
There it is - we’ve now passed 10,000,000,000 requests to Pwned Password in 30 days 😮 This is made possible with @Cloudflare’s support, massively edge caching the data to make it super fast and highly available for everyone. pic.twitter.com/kw3C9gsHmB
— Troy Hunt (@troyhunt) October 5, 2024
That's about 3,900 requests per second, on average, non-stop for 30 days. It's obviously way more than that at peak; just a quick glance through the last month and it looks like about 17k requests per second in a one-minute period a few weeks ago:
But it doesn't matter how high it is, because I never even think about it. I set up the worker, I turned on cache reserve, and that's it 😎
I hope you've enjoyed this post, Stefán and I will be doing a live stream on this topic at 06:00 AEST Friday morning for this week's regular video update, and it'll be available for replay immediately after. It's also embedded here for convenience:
CloakQuest3r is a powerful Python tool meticulously crafted to uncover the true IP address of websites safeguarded by Cloudflare, a widely adopted web security and performance enhancement service. Its core mission is to accurately discern the actual IP address of web servers that are concealed behind Cloudflare's protective shield. Subdomain scanning is employed as a key technique in this pursuit. This tool is an invaluable resource for penetration testers, security professionals, and web administrators seeking to perform comprehensive security assessments and identify vulnerabilities that may be obscured by Cloudflare's security measures.
Key Features:
Real IP Detection: CloakQuest3r excels in the art of discovering the real IP address of web servers employing Cloudflare's services. This crucial information is paramount for conducting comprehensive penetration tests and ensuring the security of web assets.
Subdomain Scanning: Subdomain scanning is harnessed as a fundamental component in the process of finding the real IP address. It aids in the identification of the actual server responsible for hosting the website and its associated subdomains.
Threaded Scanning: To enhance efficiency and expedite the real IP detection process, CloakQuest3r utilizes threading. This feature enables scanning of a substantial list of subdomains without significantly extending the execution time.
Detailed Reporting: The tool provides comprehensive output, including the total number of subdomains scanned, the total number of subdomains found, and the time taken for the scan. Any real IP addresses unveiled during the process are also presented, facilitating in-depth analysis and penetration testing.
With CloakQuest3r, you can confidently evaluate website security, unveil hidden vulnerabilities, and secure your web assets by disclosing the true IP address concealed behind Cloudflare's protective layers.
- Still in the development phase; sometimes it can't detect the real IP.
- CloakQuest3r combines multiple indicators to uncover real IP addresses behind Cloudflare. While subdomain scanning is a part of the process, we do not assume that all subdomains' A records point to the target host. The tool is designed to provide valuable insights but may not work in every scenario. We welcome any specific suggestions for improvement.
1. False Negatives: CloakQuest3r may not always accurately identify the real IP address behind Cloudflare, particularly for websites with complex network configurations or strict security measures.
2. Dynamic Environments: Websites' infrastructure and configurations can change over time. The tool may not capture these changes, potentially leading to outdated information.
3. Subdomain Variation: While the tool scans subdomains, it doesn't guarantee that all subdomains' A records will point to the primary host. Some subdomains may also be protected by Cloudflare.
How to Use:
Run CloakQuest3r with a single command-line argument: the target domain you want to analyze.
git clone https://github.com/spyboy-productions/CloakQuest3r.git
cd CloakQuest3r
pip3 install -r requirements.txt
python cloakquest3r.py example.com
The tool will check if the website is using Cloudflare. If not, it will inform you that subdomain scanning is unnecessary.
If Cloudflare is detected, CloakQuest3r will scan for subdomains and identify their real IP addresses.
You will receive detailed output, including the number of subdomains scanned, the total number of subdomains found, and the time taken for the scan.
Any real IP addresses found will be displayed, allowing you to conduct further analysis and penetration testing.
CloakQuest3r simplifies the process of assessing website security by providing a clear, organized, and informative report. Use it to enhance your security assessments, identify potential vulnerabilities, and secure your web assets.
Run it online on replit.com : https://replit.com/@spyb0y/CloakQuest3r
Microsoft today issued security updates for more than 100 newly-discovered vulnerabilities in its Windows operating system and related software, including four flaws that are already being exploited. In addition, Apple recently released emergency updates to quash a pair of zero-day bugs in iOS.
Apple last week shipped emergency updates in iOS 17.0.3 and iPadOS 17.0.3 in response to active attacks. The patch fixes CVE-2023-42824, which attackers have been using in targeted attacks to elevate their access on a local device.
Apple said it also patched CVE-2023-5217, which is not listed as a zero-day bug. However, as Bleeping Computer pointed out, this flaw is caused by a weakness in the open-source “libvpx” video codec library, which was previously patched as a zero-day flaw by Google in the Chrome browser and by Microsoft in Edge, Teams, and Skype products. For anyone keeping count, this is the 17th zero-day flaw that Apple has patched so far this year.
Fortunately, the zero-days affecting Microsoft customers this month are somewhat less severe than usual, with the exception of CVE-2023-44487. This weakness is not specific to Windows but instead exists within the HTTP/2 protocol used by the World Wide Web: Attackers have figured out how to use a feature of HTTP/2 to massively increase the size of distributed denial-of-service (DDoS) attacks, and these monster attacks reportedly have been going on for several weeks now.
Amazon, Cloudflare and Google all released advisories today about how they’re addressing CVE-2023-44487 in their cloud environments. Google’s Damian Menscher wrote on Twitter/X that the exploit — dubbed a “rapid reset attack” — works by sending a request and then immediately cancelling it (a feature of HTTP/2). “This lets attackers skip waiting for responses, resulting in a more efficient attack,” Menscher explained.
Natalie Silva, lead security engineer at Immersive Labs, said this flaw’s impact on enterprise customers could be significant, and could lead to prolonged downtime.

“It is crucial for organizations to apply the latest patches and updates from their web server vendors to mitigate this vulnerability and protect against such attacks,” Silva said. “In this month’s Patch Tuesday release by Microsoft, they have released both an update to this vulnerability, as well as a temporary workaround should you not be able to patch immediately.”
Microsoft also patched zero-day bugs in Skype for Business (CVE-2023-41763) and WordPad (CVE-2023-36563). The latter vulnerability could expose NTLM hashes, which are used for authentication in Windows environments.
“It may or may not be a coincidence that Microsoft announced last month that WordPad is no longer being updated, and will be removed in a future version of Windows, although no specific timeline has yet been given,” said Adam Barnett, lead software engineer at Rapid7. “Unsurprisingly, Microsoft recommends Word as a replacement for WordPad.”
Other notable bugs addressed by Microsoft include CVE-2023-35349, a remote code execution weakness in the Message Queuing (MSMQ) service, a technology that allows applications across multiple servers or hosts to communicate with each other. This vulnerability has earned a CVSS severity score of 9.8 (10 is the worst possible). Happily, the MSMQ service is not enabled by default in Windows, although Immersive Labs notes that Microsoft Exchange Server can enable this service during installation.
Speaking of Exchange, Microsoft also patched CVE-2023-36778, a vulnerability in all current versions of Exchange Server that could allow attackers to run code of their choosing. Rapid7’s Barnett said successful exploitation requires that the attacker be on the same network as the Exchange Server host, and use valid credentials for an Exchange user in a PowerShell session.
For a more detailed breakdown on the updates released today, see the SANS Internet Storm Center roundup. If today’s updates cause any stability or usability issues in Windows, AskWoody.com will likely have the lowdown on that.
Please consider backing up your data and/or imaging your system before applying any updates. And feel free to sound off in the comments if you experience any difficulties as a result of these patches.
There's a "hidden" API on HIBP. Well, it's not "hidden" insofar as it's easily discoverable if you watch the network traffic from the client, but it's not meant to be called directly, rather only via the web app. It's called "unified search" and it looks just like this:
It's been there in one form or another since day 1 (so almost a decade now), and it serves a sole purpose: to perform searches from the home page. That is all - only from the home page. It's called asynchronously from the client without needing to post back the entire page and by design, it's super fast and super easy to use. Which is bad. Sometimes.
To understand why it's bad we need to go back in time all the way to when I first launched the API that was intended to be consumed programmatically by other people's services. That was easy, because it was basically just documenting the API that sat behind the home page of the website already, the predecessor to the one you see above. And then, unsurprisingly in retrospect, it started to be abused so I had to put a rate limit on it. Problem is, that was a very rudimentary IP-based rate limit and it could be circumvented by someone with enough IPs, so fast forward a bit further and I put auth on the API which required a nominal payment to access it. At the same time, that unified search endpoint was created and home page searches updated to use that rather than the publicly documented API. So, 2 APIs with 2 different purposes.
The primary objective for putting a price on the public API was to tackle abuse. And it did - it stopped it dead. By attaching a rate limit to a key that required a credit card to purchase it, abusive practices (namely enumerating large numbers of email addresses) disappeared. This wasn't just about putting a financial cost to queries, it was about putting an identity cost to them; people are reluctant to start doing nasty things with a key traceable back to their own payment card! Which is why they turned their attention to the non-authenticated, non-documented unified search API.
Let's look at a 3 day period of requests to that API earlier this year, keeping in mind this should only ever be requested organically by humans performing searches from the home page:
This is far from organic usage with requests peaking at 121.3k in just 5 minutes. Which poses an interesting question: how do you create an API that should only be consumed asynchronously from a web page and never programmatically via a script? You could chuck a CAPTCHA on the front page and require that be solved first but let's face it, that's not a pleasant user experience. Rate limit requests by IP? See the earlier problem with that. Block UA strings? Pointless, because they're easily randomised. Rate limit an ASN? It gets you part way there, but what happens when you get a genuine flood of traffic because the site has hit the mainstream news? It happens.
Over the years, I've played with all sorts of combinations of firewall rules based on parameters such as geolocations with incommensurate numbers of requests to their populations, JA3 fingerprints and, of course, the parameters mentioned above. Based on the chart above these obviously didn't catch all the abusive traffic, but they did catch a significant portion of it:
If you combine it with the previous graph, that's about a third of all the bad traffic in that period or in other words, two thirds of the bad traffic was still getting through. There had to be a better way, which brings us to Cloudflare's Turnstile:
With Turnstile, we adapt the actual challenge outcome to the individual visitor or browser. First, we run a series of small non-interactive JavaScript challenges gathering more signals about the visitor/browser environment. Those challenges include, proof-of-work, proof-of-space, probing for web APIs, and various other challenges for detecting browser-quirks and human behavior. As a result, we can fine-tune the difficulty of the challenge to the specific request and avoid ever showing a visual puzzle to a user.
"Avoid ever showing a visual puzzle to a user" is a polite way of saying they avoid the sucky UX of CAPTCHA. Instead, Turnstile offers the ability to issue a "non-interactive challenge" which implements the sorts of clever techniques mentioned above and as it relates to this blog post, that can be an invisible non-interactive challenge. This is one of 3 different widget types with the others being a visible non-interactive challenge and a non-intrusive interactive challenge. For my purposes on HIBP, I wanted a zero-friction implementation nobody saw, hence the invisible approach. Here's how it works:
Get it? Ok, let's break it down further as it relates to HIBP, starting with when the front page first loads and it embeds the Turnstile widget from Cloudflare:
<script src="https://challenges.cloudflare.com/turnstile/v0/api.js" async defer></script>
The widget takes responsibility for running the non-interactive challenge and returning a token. This needs to be persisted somewhere on the client side which brings us to embedding the widget:
<div ID="turnstileWidget" class="cf-turnstile" data-sitekey="0x4AAAAAAADY3UwkmqCvH8VR" data-callback="turnstileCompleted"></div>
Per the docs in that link, the main thing here is to have an element with the "cf-turnstile" class set on it. If you happen to go take a look at the HIBP HTML source right now, you'll see that element precisely as it appears in the code block above. However, check it out in your browser's dev tools so you can see how it renders in the DOM and it will look more like this:
Expand that DIV tag and you'll find a whole bunch more content set as a result of loading the widget, but that's not relevant right now. What's important is the data-token attribute because that's what's going to prove you're not a bot when you run the search. How you implement this from here is up to you, but what HIBP does is picks up the token and sets it in the "cf-turnstile-response" header then sends it along with the request when that unified search endpoint is called:
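Client side, that flow might look something like this sketch (the endpoint path and the fallback behaviour are illustrative, not HIBP's exact code):

// The widget invokes this via data-callback="turnstileCompleted" (see the
// markup above) once the non-interactive challenge has been solved.
let turnstileToken = null;

function turnstileCompleted(token) {
  turnstileToken = token;
}

// Send the token in the header the worker validates; the path is illustrative
async function search(account) {
  const response = await fetch(`/unifiedsearch/${encodeURIComponent(account)}`, {
    headers: { 'cf-turnstile-response': turnstileToken },
  });
  if (response.status === 401) {
    // Token missing, invalid or already consumed: fall back to a full page
    // post back that can run other controls (hypothetical fallback)
    document.forms[0].submit();
    return null;
  }
  return response.json();
}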
So, at this point we've issued a challenge, the browser has solved the challenge and received a token back, now that token has been sent along with the request for the actual resource the user wanted, in this case the unified search endpoint. The final step is to validate the token and for this I'm using a Cloudflare worker. I've written a lot about workers in the past so here's the short pitch: it's code that runs in each one of Cloudflare's 300+ edge nodes around the world and can inspect and modify requests and responses on the fly. I already had a worker to do some other processing on unified search requests, so I just added the following:
// Grab the Turnstile token the client set in the request header
const token = request.headers.get('cf-turnstile-response');
if (token === null) {
  return new Response('Missing Turnstile token', { status: 401 });
}

// Send the token, the visitor's IP and the secret key to Cloudflare's
// siteverify endpoint to confirm the challenge was genuinely solved
const ip = request.headers.get('CF-Connecting-IP');
let formData = new FormData();
formData.append('secret', '[secret key goes here]');
formData.append('response', token);
formData.append('remoteip', ip);

const turnstileUrl = 'https://challenges.cloudflare.com/turnstile/v0/siteverify';
const result = await fetch(turnstileUrl, {
  body: formData,
  method: 'POST',
});

// Reject the request if the token didn't validate (or was already consumed)
const outcome = await result.json();
if (!outcome.success) {
  return new Response('Invalid Turnstile token', { status: 401 });
}
That should be pretty self-explanatory, and you can find the docs for this on Cloudflare's server-side validation page which goes into more detail, but in essence, it does the following:

1. Grabs the Turnstile token from the request header and immediately returns a 401 if it's missing
2. Sends that token, the visitor's IP address and the secret key to Cloudflare's siteverify endpoint
3. Returns a 401 if the token doesn't validate; otherwise, the request continues on its way
And because this is all done in a Cloudflare worker, any of those 401 responses never even touch the origin. Not only do I not need to process the request in Azure, the person attempting to abuse my API gets a nice speedy response directly from an edge node near them 🙂
So, what does this mean for bots? If there's no token then they get booted out right away. If there's a token but it's not valid then they get booted out at the end. But can't they just take a previously generated token and use that? Well, yes, but only once:
If the same response is presented twice, the second and each subsequent request will generate an error stating that the response has already been consumed.
And remember, a real browser had to generate that token in the first place so it's not like you can just automate the process of token generation then throw it at the API above. (Sidenote: that server-side validation link includes how to handle idempotency, for example when retrying failed requests.) But what if a real human fails the verification? That's entirely up to you but in HIBP's case, that 401 response causes a fallback to a full page post back which then implements other controls, for example an interactive challenge.
Time for graphs and stats, starting with the one in the hero image of this page where we can see the number of times Turnstile was issued and how many times it was solved over the week prior to publishing this post:
That's a 91% hit rate of solved challenges which is great. That remaining 9% is either humans with a false positive or... bots getting rejected 😎
More graphs, this time how many requests to the unified search page were rejected by Turnstile:
That 990k number doesn't marry up with the 476k unsolved ones from before because they're 2 different things: the unsolved challenges are when the Turnstile widget is loaded but not solved (hopefully due to it being a bot rather than a false positive), whereas the 401 responses to the API is when a successful (and previously unused) Turnstile token isn't in the header. This could be because the token wasn't present, wasn't solved or had already been used. You get more of a sense of how many of these rejected requests were legit humans when you drill down into attributes like the JA3 fingerprints:
In other words, of those 990k failed requests, almost 40% of them were from the same 5 clients. Seems legit 🤔
And about a third were from clients with an identical UA string:
And so on and so forth. The point being that the number of actual legitimate requests from end users that were inconvenienced by Turnstile would be exceptionally small, almost certainly a very low single-digit percentage. I'll never know exactly because bots obviously attempt to emulate legit clients and sometimes legit clients look like bots and if we could easily solve this problem then we wouldn't need Turnstile in the first place! Anecdotally, that very small false positive number stacks up as people tend to complain pretty quickly when something isn't optimal, and I implemented this all the way back in March. Yep, 5 months ago, and I've waited this long to write about it just to be confident it's actually working. Over 100M Turnstile challenges later, I'm confident it is - I've not seen a single instance of abnormal traffic spikes to the unified search endpoint since rolling this out. What I did see initially though is a lot of this sort of thing:
By now it should be pretty obvious what's going on here, and it should be equally obvious that it didn't work out real well for them 😊
The bot problem is a hard one for those of us building services because we're continually torn in different directions. We want to build a slick UX for humans but an obtrusive one for bots. We want services to be easily consumable, but only in the ways we intend them to be... which might still include the good bots playing by the rules!
I don't know exactly what Cloudflare is doing in that challenge and I'll be honest, I don't even know what a "proof-of-space" is. But the point of using a service like this is that I don't need to know! What I do know is that Cloudflare sees about 20% of the internet's traffic and because of that, they're in an unrivalled position to look at a request and make a determination on its legitimacy.
If you're in my shoes, go and give Turnstile a go. And if you want to consume data from HIBP, go and check out the official API docs, the uh, unified search doesn't work real well for you any more 😎
What if I told you... that you could run a website from behind Cloudflare and only have 385 daily requests miss their cache and go through to the origin service?
No biggy, unless... that was out of a total of more than 166M requests in the same period:
Yep, we just hit "five nines" of cache hit ratio on Pwned Passwords — 99.999%. Actually, it was 99.9998% but we're at the point now where that's just splitting hairs, so let's talk about how we've managed to have only two requests in a million hit the origin, beginning with a bit of history:
Optimising Caching on Pwned Passwords (with Workers)- @troyhunt - https://t.co/KjBtCwmhmT pic.twitter.com/BSfJbWyxMy
— Cloudflare (@Cloudflare) August 9, 2018
Ah, memories 😊 Back then, Pwned Passwords was serving way fewer requests in a month than what we do in a day now and the cache hit ratio was somewhere around 92%. Put another way, instead of 2 in every million requests hitting the origin it was 85k. And we were happy with that! As the years progressed, the traffic grew and the caching model was optimised so our stats improved:
There it is - Pwned Passwords is now doing north of 2 *billion* requests a month, peaking at 91.59M in a day with a cache-hit ratio of 99.52%. All free, open source and out there for the community to do good with 😊 pic.twitter.com/DSJOjb2CxZ
— Troy Hunt (@troyhunt) May 24, 2022
And that's pretty much where we levelled out, at about the 99-and-a-bit percent mark. We were really happy with that as it was now only 5k requests per million hitting the origin. There was bound to be a number somewhere around that mark due to the transient nature of cache and eviction criteria inevitably meaning a Cloudflare edge node somewhere would need to reach back to the origin website and pull a new copy of the data. But what if Cloudflare never had to do that unless explicitly instructed to do so? I mean, what if it just stayed in their cache unless we actually changed the source file and told them to update their version? Welcome to Cloudflare Cache Reserve:
Ok, so I may have annotated the important bit but that's what it feels like - magic - because you just turn it on and... that's it. You still serve your content the same way, you still need the appropriate cache headers and you still have the same tiered caching as before, but now there's a "cache reserve" sitting between that and your origin. It's backed by R2 which is their persistent data store and you can keep your cached things there for as long as you want. However, per the earlier link, it's not free:
You pay based on how much you store for how long, how much you write and how much you read. Let's put that in real terms and just as a brief refresher (longer version here), remember that Pwned Passwords is essentially just 16^5 (just over 1 million) text files of about 30kb each for the SHA-1 hashes and a similar number for the NTLM ones (albeit slightly smaller file sizes). Here are the Cache Reserve usage stats for the last 9 days:
We can now do some pretty simple maths with that and working on the assumption of 9 days, here's what we get:
2 bucks a day 😲 But this has taken nearly 16M requests off my origin service over this period of time so I haven't paid for the Azure Function execution (which is cheap) nor the egress bandwidth (which is not cheap). But why are there only 16M read operations over 9 days when earlier we saw 167M requests to the API in a single day? Because if you scroll back up to the "insert magic here" diagram, Cache Reserve is only a fallback position and most requests (i.e. 99.52% of them) are still served from the edge caches.
Note also that there are nearly 1M write operations, and there are 2 reasons for this: firstly, Cache Reserve had only just been enabled, so the full set of hash prefix files was still being written into it for the first time; and secondly, whenever a source file changes (for example, as newly seen passwords are added), the updated version has to be written back to the reserve.
An untold number of businesses rely on Pwned Passwords as an integral part of their registration, login and password reset flows. Seriously, the number is "untold" because we have no idea who's actually using it, we just know the service got hit three and a quarter billion times in the last 30 days:
Giving consumers of the service confidence that it is not only highly resilient, but also massively fast is essential to adoption. In turn, more adoption helps drive better password practices, fewer account takeovers and more smiles all round 😊
As those remaining hash prefixes populate Cache Reserve, keep an eye on the "cf-cache-status" response header. If you ever see a value of "MISS" then congratulations, you're literally one in a million!
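For example (21BD1 is the sample range used in the Pwned Passwords API docs):

// Query a hash range and see whether Cloudflare answered it from cache
const response = await fetch('https://api.pwnedpasswords.com/range/21BD1');
console.log(response.headers.get('cf-cache-status')); // "HIT" almost always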
Full disclosure: Cloudflare provides services to HIBP for free and they helped in getting Cache Reserve up and running. However, they had no idea I was writing this blog post and reading it live in its entirety is the first anyone there has seen it. Surprise! 👋