
Severe Vulnerabilities in Cinterion Cellular Modems Pose Risks to Various Industries

Cybersecurity researchers have disclosed multiple security flaws in Cinterion cellular modems that could be potentially exploited by threat actors to access sensitive information and achieve code execution. "These vulnerabilities include critical flaws that permit remote code execution and unauthorized privilege escalation, posing substantial risks to integral communication networks and IoT

‘TunnelVision’ Attack Leaves Nearly All VPNs Vulnerable to Spying

TunnelVision is an attack developed by researchers that can expose VPN traffic to snooping or tampering.

Dropbox Discloses Breach of Digital Signature Service Affecting All Users

Cloud storage services provider Dropbox on Wednesday disclosed that Dropbox Sign (formerly HelloSign) was breached by unidentified threat actors, who accessed emails, usernames, and general account settings associated with all users of the digital signature product. The company, in a filing with the U.S. Securities and Exchange Commission (SEC), said it became aware of the "

The Dangerous Rise of GPS Attacks

Thousands of planes and ships are facing GPS jamming and spoofing. Experts warn these attacks could potentially impact critical infrastructure, communication networks, and more.

CrimsonEDR - Simulate The Behavior Of AV/EDR For Malware Development Training

By: Zion3R


CrimsonEDR is an open-source project engineered to identify specific malware patterns, offering a tool for honing skills in circumventing Endpoint Detection and Response (EDR). By leveraging diverse detection methods, it empowers users to deepen their understanding of security evasion tactics.


Features

  • Direct Syscall: Detects the usage of direct system calls, often employed by malware to bypass traditional API hooks.
  • NTDLL Unhooking: Identifies attempts to unhook functions within the NTDLL library, a common evasion technique.
  • AMSI Patch: Detects modifications to the Anti-Malware Scan Interface (AMSI) through byte-level analysis.
  • ETW Patch: Detects byte-level alterations to Event Tracing for Windows (ETW), commonly manipulated by malware to evade detection.
  • PE Stomping: Identifies instances of PE (Portable Executable) stomping.
  • Reflective PE Loading: Detects the reflective loading of PE files, a technique employed by malware to avoid static analysis.
  • Unbacked Thread Origin: Identifies threads originating from unbacked memory regions, often indicative of malicious activity.
  • Unbacked Thread Start Address: Detects threads with start addresses pointing to unbacked memory, a potential sign of code injection.
  • API Hooking: Places a hook on the NtWriteVirtualMemory function to monitor memory modifications.
  • Custom Pattern Search: Allows users to search for specific patterns provided in a JSON file, facilitating the identification of known malware signatures.

Installation

To get started with CrimsonEDR, follow these steps:

  1. Install the dependency: sudo apt-get install gcc-mingw-w64-x86-64
  2. Clone the repository: git clone https://github.com/Helixo32/CrimsonEDR
  3. Compile the project: cd CrimsonEDR; chmod +x compile.sh; ./compile.sh

⚠️ Warning

Windows Defender and other antivirus programs may flag the DLL as malicious because it contains the byte sequences used to verify whether AMSI has been patched. Whitelist the DLL or temporarily disable your antivirus when using CrimsonEDR to avoid interruptions.

Usage

To use CrimsonEDR, follow these steps:

  1. Make sure the ioc.json file is placed in the current directory from which the executable being monitored is launched. For example, if you launch your executable to monitor from C:\Users\admin\, the DLL will look for ioc.json in C:\Users\admin\ioc.json. Currently, ioc.json contains patterns related to msfvenom. You can easily add your own in the following format:
{
  "IOC": [
    ["0x03", "0x4c", "0x24", "0x08", "0x45", "0x39", "0xd1", "0x75"],
    ["0xf1", "0x4c", "0x03", "0x4c", "0x24", "0x08", "0x45", "0x39"],
    ["0x58", "0x44", "0x8b", "0x40", "0x24", "0x49", "0x01", "0xd0"],
    ["0x66", "0x41", "0x8b", "0x0c", "0x48", "0x44", "0x8b", "0x40"],
    ["0x8b", "0x0c", "0x48", "0x44", "0x8b", "0x40", "0x1c", "0x49"],
    ["0x01", "0xc1", "0x38", "0xe0", "0x75", "0xf1", "0x4c", "0x03"],
    ["0x24", "0x49", "0x01", "0xd0", "0x66", "0x41", "0x8b", "0x0c"],
    ["0xe8", "0xcc", "0x00", "0x00", "0x00", "0x41", "0x51", "0x41"]
  ]
}
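The idea behind the Custom Pattern Search feature can be sketched in a few lines of Python. This is a simplified illustration of the concept, not CrimsonEDR's actual implementation (which runs inside the injected DLL): it loads an ioc.json in the format shown above, converts each entry into a byte sequence, and scans a buffer for matches.

```python
import json
import os
import tempfile

def load_iocs(path):
    """Parse an ioc.json file into a list of byte-sequence patterns."""
    with open(path) as f:
        data = json.load(f)
    # Each IOC entry is a list of hex strings like "0x4c"; convert to bytes.
    return [bytes(int(h, 16) for h in pattern) for pattern in data["IOC"]]

def scan_buffer(buf, patterns):
    """Return (offset, hex pattern) for every IOC pattern found in the buffer."""
    hits = []
    for pattern in patterns:
        offset = buf.find(pattern)
        if offset != -1:
            hits.append((offset, pattern.hex()))
    return hits

# Demo: write a small ioc.json with one pattern and scan a synthetic buffer.
ioc = {"IOC": [["0x03", "0x4c", "0x24", "0x08"]]}
path = os.path.join(tempfile.mkdtemp(), "ioc.json")
with open(path, "w") as f:
    json.dump(ioc, f)

patterns = load_iocs(path)
buf = b"\x90\x90\x03\x4c\x24\x08\x90"
print(scan_buffer(buf, patterns))  # the pattern is found at offset 2
```

A real EDR would apply the same search to memory regions read out of the target process rather than a local buffer, but the matching logic is the same.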
  2. Execute CrimsonEDRPanel.exe with the following arguments:

    • -d <path_to_dll>: Specifies the path to the CrimsonEDR.dll file.

    • -p <process_id>: Specifies the Process ID (PID) of the target process where you want to inject the DLL.

For example:

.\CrimsonEDRPanel.exe -d C:\Temp\CrimsonEDR.dll -p 1234


How Attackers Can Own a Business Without Touching the Endpoint

Attackers are increasingly making use of “networkless” attack techniques targeting cloud apps and identities. Here’s how attackers can (and are) compromising organizations – without ever needing to touch the endpoint or conventional networked systems and services. Before getting into the details of the attack techniques being used, let’s discuss why

Widely-Used PuTTY SSH Client Found Vulnerable to Key Recovery Attack

The maintainers of the PuTTY Secure Shell (SSH) and Telnet client are alerting users of a critical vulnerability impacting versions from 0.68 through 0.80 that could be exploited to achieve full recovery of NIST P-521 (ecdsa-sha2-nistp521) private keys. The flaw has been assigned the CVE identifier CVE-2024-31497, with the discovery credited to researchers Fabian Bäumer and Marcus

Muddled Libra Shifts Focus to SaaS and Cloud for Extortion and Data Theft Attacks

The threat actor known as Muddled Libra has been observed actively targeting software-as-a-service (SaaS) applications and cloud service provider (CSP) environments in a bid to exfiltrate sensitive data. "Organizations often store a variety of data in SaaS applications and use services from CSPs," Palo Alto Networks Unit 42 said in a report published last week. "The threat

April’s Patch Tuesday Brings Record Number of Fixes

If only Patch Tuesdays came around infrequently — like total solar eclipse rare — instead of just creeping up on us each month like The Man in the Moon. Although to be fair, it would be tough for Microsoft to eclipse the number of vulnerabilities fixed in this month’s patch batch — a record 147 flaws in Windows and related software.

Yes, you read that right. Microsoft today released updates to address 147 security holes in Windows, Office, Azure, .NET Framework, Visual Studio, SQL Server, DNS Server, Windows Defender, Bitlocker, and Windows Secure Boot.

“This is the largest release from Microsoft this year and the largest since at least 2017,” said Dustin Childs, from Trend Micro’s Zero Day Initiative (ZDI). “As far as I can tell, it’s the largest Patch Tuesday release from Microsoft of all time.”

Tempering the sheer volume of this month’s patches is the middling severity of many of the bugs. Only three of April’s vulnerabilities earned Microsoft’s most-dire “critical” rating, meaning they can be abused by malware or malcontents to take remote control over unpatched systems with no help from users.

Most of the flaws that Microsoft deems “more likely to be exploited” this month are marked as “important,” which usually involve bugs that require a bit more user interaction (social engineering) but which nevertheless can result in system security bypass, compromise, and the theft of critical assets.

Ben McCarthy, lead cyber security engineer at Immersive Labs called attention to CVE-2024-20670, an Outlook for Windows spoofing vulnerability described as being easy to exploit. It involves convincing a user to click on a malicious link in an email, which can then steal the user’s password hash and authenticate as the user in another Microsoft service.

Another interesting bug McCarthy pointed to is CVE-2024-29063, which involves hard-coded credentials in Azure’s search backend infrastructure that could be gleaned by taking advantage of Azure AI search.

“This along with many other AI attacks in recent news shows a potential new attack surface that we are just learning how to mitigate against,” McCarthy said. “Microsoft has updated their backend and notified any customers who have been affected by the credential leakage.”

CVE-2024-29988 is a weakness that allows attackers to bypass Windows SmartScreen, a technology Microsoft designed to provide additional protections for end users against phishing and malware attacks. Childs said one of ZDI’s researchers found this vulnerability being exploited in the wild, although Microsoft doesn’t currently list CVE-2024-29988 as being exploited.

“I would treat this as in the wild until Microsoft clarifies,” Childs said. “The bug itself acts much like CVE-2024-21412 – a [zero-day threat from February] that bypassed the Mark of the Web feature and allows malware to execute on a target system. Threat actors are sending exploits in a zipped file to evade EDR/NDR detection and then using this bug (and others) to bypass Mark of the Web.”

Update, 7:46 p.m. ET: A previous version of this story said there were no zero-day vulnerabilities fixed this month. BleepingComputer reports that Microsoft has since confirmed that there are actually two zero-days. One is the flaw Childs just mentioned (CVE-2024-21412), and the other is CVE-2024-26234, described as a “proxy driver spoofing” weakness.

Satnam Narang at Tenable notes that this month’s release includes fixes for two dozen flaws in Windows Secure Boot, the majority of which are considered “Exploitation Less Likely” according to Microsoft.

“However, the last time Microsoft patched a flaw in Windows Secure Boot in May 2023 had a notable impact as it was exploited in the wild and linked to the BlackLotus UEFI bootkit, which was sold on dark web forums for $5,000,” Narang said. “BlackLotus can bypass functionality called secure boot, which is designed to block malware from being able to load when booting up. While none of these Secure Boot vulnerabilities addressed this month were exploited in the wild, they serve as a reminder that flaws in Secure Boot persist, and we could see more malicious activity related to Secure Boot in the future.”

For links to individual security advisories indexed by severity, check out ZDI’s blog and the Patch Tuesday post from the SANS Internet Storm Center. Please consider backing up your data or your drive before updating, and drop a note in the comments here if you experience any issues applying these fixes.

Adobe today released nine patches tackling at least two dozen vulnerabilities in a range of software products, including Adobe After Effects, Photoshop, Commerce, InDesign, Experience Manager, Media Encoder, Bridge, Illustrator, and Adobe Animate.

KrebsOnSecurity needs to correct the record on a point mentioned at the end of March’s “Fat Patch Tuesday” post, which looked at new AI capabilities built into Adobe Acrobat that are turned on by default. Adobe has since clarified that its apps won’t use AI to auto-scan your documents, as the original language in its FAQ suggested.

“In practice, no document scanning or analysis occurs unless a user actively engages with the AI features by agreeing to the terms, opening a document, and selecting the AI Assistant or generative summary buttons for that specific document,” Adobe said earlier this month.

The Not-so-True People-Search Network from China

It’s not unusual for the data brokers behind people-search websites to use pseudonyms in their day-to-day lives (you would, too). Some of these personal data purveyors even try to reinvent their online identities in a bid to hide their conflicts of interest. But it’s not every day you run across a US-focused people-search network based in China whose principal owners all appear to be completely fabricated identities.

Responding to a reader inquiry concerning the trustworthiness of a site called TruePeopleSearch[.]net, KrebsOnSecurity began poking around. The site offers to sell reports containing photos, police records, background checks, civil judgments, contact information “and much more!” According to LinkedIn and numerous profiles on websites that accept paid article submissions, the founder of TruePeopleSearch is Marilyn Gaskell from Phoenix, Ariz.

The saucy yet studious LinkedIn profile for Marilyn Gaskell.

Ms. Gaskell has been quoted in multiple “articles” about random subjects, such as this article at HRDailyAdvisor about the pros and cons of joining a company-led fantasy football team.

“Marilyn Gaskell, founder of TruePeopleSearch, agrees that not everyone in the office is likely to be a football fan and might feel intimidated by joining a company league or left out if they don’t join; however, her company looked for ways to make the activity more inclusive,” this paid story notes.

Also quoted in this article is Sally Stevens, who is cited as HR Manager at FastPeopleSearch[.]io.

Sally Stevens, the phantom HR Manager for FastPeopleSearch.

“Fantasy football provides one way for employees to set aside work matters for some time and have fun,” Stevens contributed. “Employees can set a special league for themselves and regularly check and compare their scores against one another.”

Imagine that: Two different people-search companies mentioned in the same story about fantasy football. What are the odds?

Both TruePeopleSearch and FastPeopleSearch allow users to search for reports by first and last name, but proceeding to order a report prompts the visitor to purchase the file from one of several established people-finder services, including BeenVerified, Intelius, and Spokeo.

DomainTools.com shows that both TruePeopleSearch and FastPeopleSearch appeared around 2020 and were registered through Alibaba Cloud, in Beijing, China. No other information is available about these domains in their registration records, although both domains appear to use email servers based in China.

Sally Stevens’ LinkedIn profile photo is identical to a stock image titled “beautiful girl” from Adobe.com. Ms. Stevens is also quoted in a paid blog post at ecogreenequipment.com, as is Alina Clark, co-founder and marketing director of CocoDoc, an online service for editing and managing PDF documents.

The profile photo for Alina Clark is a stock photo appearing on more than 100 websites.

Scouring multiple image search sites reveals Ms. Clark’s profile photo on LinkedIn is another stock image that is currently on more than 100 different websites, including Adobe.com. Cocodoc[.]com was registered in June 2020 via Alibaba Cloud Beijing in China.

The same Alina Clark and photo materialized in a paid article at the website Ceoblognation, which in 2021 included her at #11 in a piece called “30 Entrepreneurs Describe The Big Hairy Audacious Goals (BHAGs) for Their Business.” It’s also worth noting that Ms. Clark is currently listed as a “former Forbes Council member” at the media outlet Forbes.com.

Entrepreneur #6 is Stephen Curry, who is quoted as CEO of CocoSign[.]com, a website that claims to offer an “easier, quicker, safer eSignature solution for small and medium-sized businesses.” Incidentally, the same photo for Stephen Curry #6 is also used in this “article” for #22 Jake Smith, who is named as the owner of a different company.

Stephen Curry, aka Jake Smith, aka no such person.

Mr. Curry’s LinkedIn profile shows a young man seated at a table in front of a laptop, but an online image search shows this is another stock photo. Cocosign[.]com was registered in June 2020 via Alibaba Cloud Beijing. No ownership details are available in the domain registration records.

Listed at #13 in that 30 Entrepreneurs article is Eden Cheng, who is cited as co-founder of PeopleFinderFree[.]com. KrebsOnSecurity could not find a LinkedIn profile for Ms. Cheng, but a search on her profile image from that Entrepreneurs article shows the same photo for sale at Shutterstock and other stock photo sites.

DomainTools says PeopleFinderFree was registered through Alibaba Cloud, Beijing. Attempts to purchase reports through PeopleFinderFree produce a notice saying the full report is only available via Spokeo.com.

Lynda Fairly is Entrepreneur #24, and she is quoted as co-founder of Numlooker[.]com, a domain registered in April 2021 through Alibaba in China. Searches for people on Numlooker forward visitors to Spokeo.

The photo next to Ms. Fairly’s quote in Entrepreneurs matches that of a LinkedIn profile for Lynda Fairly. But a search on that photo shows this same portrait has been used by many other identities and names, including a woman from the United Kingdom who’s a cancer survivor and mother of five; a licensed marriage and family therapist in Canada; a software security engineer at Quora; a journalist on Twitter/X; and a marketing expert in Canada.

Cocofinder[.]com is a people-search service that launched in Sept. 2019, through Alibaba in China. Cocofinder lists its market officer as Harriet Chan, but Ms. Chan’s LinkedIn profile is just as sparse on work history as the other people-search owners mentioned already. An image search online shows that outside of LinkedIn, the profile photo for Ms. Chan has only ever appeared in articles at pay-to-play media sites, like this one from outbackteambuilding.com.

Perhaps because Cocodoc and Cocosign both sell software services, they are actually tied to a physical presence in the real world — in Singapore (15 Scotts Rd. #03-12 15, Singapore). But it’s difficult to discern much from this address alone.

Who’s behind all this people-search chicanery? A January 2024 review of various people-search services at the website techjury.com states that Cocofinder is a wholly-owned subsidiary of a Chinese company called Shenzhen Duiyun Technology Co.

“Though it only finds results from the United States, users can choose between four main search methods,” Techjury explains. Those include people search, phone, address and email lookup. This claim is supported by a Reddit post from three years ago, wherein the Reddit user “ProtectionAdvanced” named the same Chinese company.

Is Shenzhen Duiyun Technology Co. responsible for all these phony profiles? How many more fake companies and profiles are connected to this scheme? KrebsOnSecurity found other examples that didn’t appear directly tied to other fake executives listed here, but which nevertheless are registered through Alibaba and seek to drive traffic to Spokeo and other data brokers. For example, there’s the winsome Daniela Sawyer, founder of FindPeopleFast[.]net, whose profile is flogged in paid stories at entrepreneur.org.

Google currently turns up nothing else in a search for Shenzhen Duiyun Technology Co. Please feel free to sound off in the comments if you have any more information about this entity, such as how to contact it. Or reach out directly at krebsonsecurity @ gmail.com.

A mind map highlighting the key points of research in this story. Click to enlarge. Image: KrebsOnSecurity.com

ANALYSIS

It appears the purpose of this network is to conceal the location of people in China who are seeking to generate affiliate commissions when someone visits one of their sites and purchases a people-search report at Spokeo, for example. And it is clear that Spokeo and others have created incentives wherein anyone can effectively white-label their reports, and thereby make money brokering access to peoples’ personal information.

Spokeo’s Wikipedia page says the company was founded in 2006 by four graduates from Stanford University. Spokeo co-founder and current CEO Harrison Tang has not yet responded to requests for comment.

Intelius is owned by San Diego based PeopleConnect Inc., which also owns Classmates.com, USSearch, TruthFinder and Instant Checkmate. PeopleConnect Inc. in turn is owned by H.I.G. Capital, a $60 billion private equity firm. Requests for comment were sent to H.I.G. Capital. This story will be updated if they respond.

BeenVerified is owned by a New York City based holding company called The Lifetime Value Co., a marketing and advertising firm whose brands include PeopleLooker, NeighborWho, Ownerly, PeopleSmart, NumberGuru, and Bumper, a car history site.

Ross Cohen, chief operating officer at The Lifetime Value Co., said it’s likely the network of suspicious people-finder sites was set up by an affiliate. Cohen said Lifetime Value would investigate to determine if this particular affiliate was driving them any sign-ups.

All of the above people-search services operate similarly. When you find the person you’re looking for, you are put through a lengthy (often 10-20 minute) series of splash screens that require you to agree that these reports won’t be used for employment screening or in evaluating new tenant applications. Still more prompts ask if you are okay with seeing “potentially shocking” details about the subject of the report, including arrest histories and photos.

Only at the end of this process does the site disclose that viewing the report in question requires signing up for a monthly subscription, which is typically priced around $35. Exactly how and from where these major people-search websites are getting their consumer data — and customers — will be the subject of further reporting here.

The main reason these various people-search sites require you to affirm that you won’t use their reports for hiring or vetting potential tenants is that selling reports for those purposes would classify these firms as consumer reporting agencies (CRAs) and expose them to regulations under the Fair Credit Reporting Act (FCRA).

These data brokers do not want to be treated as CRAs, and for this reason their people search reports typically don’t include detailed credit histories, financial information, or full Social Security Numbers (Radaris reports include the first six digits of one’s SSN).

But in September 2023, the U.S. Federal Trade Commission found that TruthFinder and Instant Checkmate were trying to have it both ways. The FTC levied a $5.8 million penalty against the companies for allegedly acting as CRAs because they assembled and compiled information on consumers into background reports that were marketed and sold for employment and tenant screening purposes.

The FTC also found TruthFinder and Instant Checkmate deceived users about background report accuracy. The FTC alleges these companies made millions from their monthly subscriptions using push notifications and marketing emails that claimed that the subject of a background report had a criminal or arrest record, when the record was merely a traffic ticket.

The FTC said both companies deceived customers by providing “Remove” and “Flag as Inaccurate” buttons that did not work as advertised. Rather, the “Remove” button removed the disputed information only from the report as displayed to that customer; however, the same item of information remained visible to other customers who searched for the same person.

The FTC also said that when a customer flagged an item in the background report as inaccurate, the companies never took any steps to investigate those claims, to modify the reports, or to flag to other customers that the information had been disputed.

There are a growing number of online reputation management companies that offer to help customers remove their personal information from people-search sites and data broker databases. There are, no doubt, plenty of honest and well-meaning companies operating in this space, but it has been my experience that a great many people involved in that industry have a background in marketing or advertising — not privacy.

Also, some so-called data privacy companies may be wolves in sheep’s clothing. On March 14, KrebsOnSecurity published an abundance of evidence indicating that the CEO and founder of the data privacy company OneRep.com was responsible for launching dozens of people-search services over the years.

Finally, some of the more popular people-search websites are notorious for ignoring requests from consumers seeking to remove their information, regardless of which reputation or removal service you use. Some force you to create an account and provide more information before you can remove your data. Even then, the information you worked hard to remove may simply reappear a few months later.

This aptly describes countless complaints lodged against the data broker and people search giant Radaris. On March 8, KrebsOnSecurity profiled the co-founders of Radaris, two Russian brothers in Massachusetts who also operate multiple Russian-language dating services and affiliate programs.

The truth is that these people-search companies will continue to thrive unless and until Congress begins to realize it’s time for some consumer privacy and data protection laws that are relevant to life in the 21st century. Duke University adjunct professor Justin Sherman says virtually all state privacy laws exempt records that might be considered “public” or “government” documents, including voting registries, property filings, marriage certificates, motor vehicle records, criminal records, court documents, death records, professional licenses, bankruptcy filings, and more.

“Consumer privacy laws in California, Colorado, Connecticut, Delaware, Indiana, Iowa, Montana, Oregon, Tennessee, Texas, Utah, and Virginia all contain highly similar or completely identical carve-outs for ‘publicly available information’ or government records,” Sherman said.

How to Detect Signs of Identity Theft

When it comes to identity theft, trust your gut when something doesn’t feel right. Follow up. What you’re seeing could be a problem.  

A missing bill or a mysterious charge on your credit card could be the tip of an identity theft iceberg, one that can run deep if left unaddressed. Here, we’ll look at several signs of identity theft that likely need some investigation and the steps you can take to take charge of the situation.  

How does identity theft happen in the first place?  

Unfortunately, it can happen in several ways.   

In the physical world, it can happen simply because you lost your wallet or debit card. However, there are also cases where someone gets your information by going through your mail or trash for bills and statements. In other more extreme cases, theft can happen by someone successfully registering a change of address form in your name (although the U.S. Postal Service has security measures in place that make this difficult).   

In the digital world, that’s where the avenues of identity theft blow wide open. It could come by way of a data breach, a thief “skimming” credit card information from a point-of-sale terminal, or by a dedicated crook piecing together various bits of personal information that have been gathered from social media, phishing attacks, or malware designed to harvest information. Additionally, thieves may eavesdrop on public Wi-Fi and steal information from people who are shopping or banking online without the security of a VPN.  

Regardless of how crooks pull it off, identity theft is on the rise. According to the Federal Trade Commission (FTC), identity theft reports jumped from roughly 650,000 in 2019 to 1 million in 2023. Among the fraud cases where a dollar loss was reported, the FTC calls out the following top three contact methods for identity theft:  

  • Online ads that direct you to a scammer’s site designed to steal your information.  
  • Malicious websites and apps also steal information when you use them.  
  • Social media scams lure you into providing personal information, whether through posts or direct messages.  

However, phone calls, texts, and email remain the most preferred contact methods that fraudsters use, even if they are less successful in creating dollar losses than malicious websites, ads, and social media.  

What are some signs of identity theft?  

Identity thieves leave a trail. With your identity in hand, they can charge things to one or more of your existing accounts—and if they have enough information about you, they can even create entirely new accounts in your name. Either way, once an identity thief strikes, you’re probably going to notice that something is wrong. Possible signs include:  

  • You start getting mail for accounts that you never opened.   
  • Statements or bills stop showing up from your legitimate accounts.  
  • You receive authentication messages for accounts you don’t recognize via email, text, or phone.   
  • Debt collectors contact you about an account you have no knowledge of.  
  • Unauthorized transactions, however large or small, show up in your bank or credit card statements.  
  • You apply for credit and get unexpectedly denied.  
  • And in extreme cases, you discover that someone else has filed a tax return in your name.  

As you can see, the signs of possible identity theft can run anywhere from, “Well, that’s strange …” to “OH NO!” However, the good news is that there are several ways to check if someone is using your identity before it becomes a problem – or before it becomes a big problem that gets out of hand.   

Steps to take if you suspect that you’re the victim of identity theft  

The point is that if you suspect fraud, you need to act right away. With identity theft becoming increasingly commonplace, many businesses, banks, and organizations have fraud reporting mechanisms in place that can assist you should you have any concerns. With that in mind, here are some immediate steps you can take:  

1) Notify the companies and institutions involved 

Whether you spot a curious charge on your bank statement or you discover what looks like a fraudulent account when you get your free credit report, let the bank or business involved know you suspect fraud. With a visit to their website, you can track down the appropriate number to call and get the investigation process started.   

2) File a police report 

Some businesses will require you to file a local police report to acquire a case number to complete your claim. Even beyond a business making such a request, filing a report is still a good idea. Identity theft is still theft, and reporting it provides an official record of the incident. Should your case of identity theft lead to someone impersonating you or committing a crime in your name, filing a police report right away can help clear your name down the road. Be sure to save any evidence you have, like statements or documents that are associated with the theft. They can help clean up your record as well.  

3) Contact the Federal Trade Commission (FTC) 

The FTC’s identity theft website is a fantastic resource should you find yourself in need. Above and beyond simply reporting the theft, the FTC can provide you with a step-by-step recovery plan—and even walk you through the process if you create an account with them. Additionally, reporting theft to the FTC can prove helpful if debtors come knocking to collect on any bogus charges in your name. You can provide them with a copy of your FTC report and ask them to stop.  

4) Place a fraud alert and consider a credit freeze 

You can place a free one-year fraud alert with one of the major credit bureaus (Experian, TransUnion, Equifax), and they will notify the other two. A fraud alert will make it tougher for thieves to open accounts in your name, as it requires businesses to verify your identity before issuing new credit in your name.  

A credit freeze goes a step further. As the name implies, a freeze prohibits creditors from pulling your credit report, which is needed to approve credit. Such a freeze is in place until you lift it, and it will also apply to legitimate queries as well. Thus, if you intend to get a loan or new credit card while a freeze is in place, you’ll likely need to take extra measures to see that through. Contact each of the major credit bureaus (Experian, TransUnion, Equifax) to put a freeze in place or lift it when you’re ready.  

5) Dispute any discrepancies in your credit reports 

This can run the gamut from closing any false accounts set up in your name and removing bogus charges to correcting information in your credit report, such as phony addresses or contact information. With your FTC report, you can dispute these discrepancies and have the business correct the record. Be sure to ask for written confirmation and keep a record of all documents and conversations involved.   

6) Contact the IRS, if needed 

If you receive a notice from the IRS that someone used your identity to file a tax return in your name, follow the information provided by the IRS in the notice. From there, you can file an identity theft affidavit with the IRS. If the notice mentions that you were paid by an employer you don’t know, contact that employer as well and let them know of possible fraud—namely that someone has stolen your identity and that you don’t truly work for them.  

Also, be aware that the IRS has specific guidelines as to how and when they will contact you. As a rule, they will most likely contact you via physical mail delivered by the U.S. Postal Service. (They won’t call or apply harassing pressure tactics—only scammers do that.) Identity-based tax scams are a topic all their own, and for more on them, you can check out this article on tax scams and how to avoid them.  

7) Continue to monitor your credit report, invoices, and statements 

Another downside of identity theft is that it can mark the start of a long, drawn-out affair. One instance of theft can possibly lead to another, so even what may appear to be an isolated bad charge on your credit card calls for keeping an eye on your identity. Many of the tools you would use up to this point still apply, such as checking up on your credit reports, maintaining fraud alerts as needed, and reviewing your accounts closely.  

Preventing identity theft 

With all the time we spend online as we bank, shop, and simply surf, we create and share all kinds of personal information—information that can get collected and even stolen. The good news is that you can help prevent theft and fraud with online protection software, such as McAfee+ Ultimate. 

With McAfee+ Ultimate you can: 

  • Monitor your credit activity on all three major credit bureaus to stay on top of unauthorized use.​ 
  • Monitor the dark web for breaches involving your personal info and get notified if it’s found. 
  • Lock or freeze your credit file to help prevent accounts from being opened in your name. 
  • Remove your personal info from over 40 data broker sites collecting and selling it. 
  • Restore your identity with a licensed expert should the unexpected happen.​ 
  • Receive $1M identity theft and stolen funds coverage along with additional $25K ransomware coverage. 

In all, it’s our most comprehensive privacy, identity, and device protection plan, built for a time when we rely so heavily on the internet to go about our day, whether that’s work, play, or simply getting things done. 

Righting the wrongs of identity theft: deep breaths and an even keel  

Realizing that you’ve become a victim of identity theft carries plenty of emotion with it, which is understandable—the thief has stolen a part of you to get at your money, information, and even reputation. Once that initial rush of anger and surprise has passed, it’s time to get clinical and get busy. Think like a detective who’s building – and closing – a case. That’s exactly what you’re doing. Follow the steps, document each one, and build up your case file as you need. Staying cool, organized, and ready with an answer to any questions you’ll face in the process of restoring your identity will help you see things through.  

Once again, this is a good reminder that vigilance is the best defense for keeping identity theft from happening in the first place. While there’s no absolute, sure-fire protection against it, there are several things you can do to tip the odds in your favor. And at the top of the list is keeping consistent tabs on what’s happening across your credit reports and accounts.  

The post How to Detect Signs of Identity Theft appeared first on McAfee Blog.

Dorkish - Chrome Extension Tool For OSINT & Recon

By: Zion3R


During the reconnaissance phase or when doing OSINT, we often use Google dorking and Shodan, and that is the idea behind Dorkish.
Dorkish is a Chrome extension that facilitates custom dork creation for Google and Shodan using its builder, and it offers prebuilt dorks for efficient reconnaissance and OSINT engagements.


Installation And Setup

1- Clone the repository

git clone https://github.com/yousseflahouifi/dorkish.git

2- Go to chrome://extensions/ and enable Developer mode in the top right corner.
3- Click the Load unpacked button and select the dorkish folder.

Note: For Firefox users, you can find the extension here: https://addons.mozilla.org/en-US/firefox/addon/dorkish/

Features

Google dorking

  • Builder with keywords to filter your Google search results.
  • Prebuilt dorks for bug bounty programs.
  • Prebuilt dorks used during the reconnaissance phase in bug bounty.
  • Prebuilt dorks for exposed files and directories.
  • Prebuilt dorks for login and sign-up portals.
  • Prebuilt dorks for cybersecurity jobs.

Shodan dorking

  • Builder with filter keywords used in Shodan.
  • Variety of prebuilt dorks to find IoT devices, network infrastructure, cameras, ICS, databases, etc.
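Both builders boil down to joining filter:value pairs into a single query string. Here is a minimal sketch of that idea (illustrative only; these function names are not Dorkish's actual code):

```python
from urllib.parse import quote_plus

def build_dork(**filters):
    # Google and Shodan share the same filter:value query syntax,
    # e.g. site:example.com inurl:login  or  port:554 country:DE
    return " ".join(f"{key}:{value}" for key, value in filters.items())

# Compose a dork and hand it to the desired search engine,
# much like the extension does when you click Search.
google_q = build_dork(site="example.com", inurl="login")
shodan_q = build_dork(port=554, country="DE")

google_url = "https://www.google.com/search?q=" + quote_plus(google_q)
shodan_url = "https://www.shodan.io/search?query=" + quote_plus(shodan_q)
```

Note that values containing spaces need their own quotes (e.g. intitle:"index of"), which a real builder has to handle.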

Usage

Once you have found or built the dork you need, simply click it and click search. This will direct you to the desired search engine, Shodan or Google, with the specific dork you've entered. Then, you can explore and enjoy the results that match your query.

TODO

  • Add more useful dorks and categories
  • Fix some bugs
  • Add a search bar to search through the results
  • Might add some LLM models to build dorks

Notes

I built some dorks myself and gathered others from public resources, including:
  • https://github.com/lothos612/shodan
  • https://github.com/TakSec/google-dorks-bug-bounty

Warning

  • I am not responsible for any damage caused by using the tool


New Banking Trojan CHAVECLOAK Targets Brazilian Users via Phishing Tactics

Users in Brazil are the target of a new banking trojan known as CHAVECLOAK that's propagated via phishing emails bearing PDF attachments. "This intricate attack involves the PDF downloading a ZIP file and subsequently utilizing DLL side-loading techniques to execute the final malware," Fortinet FortiGuard Labs researcher Cara Lin said. The attack chain involves the use of

Signal Introduces Usernames, Allowing Users to Keep Their Phone Numbers Private

End-to-end encrypted (E2EE) messaging app Signal said it’s piloting a new feature that allows users to create unique usernames (not to be confused with profile names) and keep their phone numbers away from prying eyes. “If you use Signal, your phone number will no longer be visible to everyone you chat with by default,” Signal’s Randall Sarafa said. “People who have your number saved in their

Patchwork Using Romance Scam Lures to Infect Android Devices with VajraSpy Malware

The threat actor known as Patchwork likely used romance scam lures to trap victims in Pakistan and India, and infect their Android devices with a remote access trojan called VajraSpy. Slovak cybersecurity firm ESET said it uncovered 12 espionage apps, six of which were available for download from the official Google Play Store and were collectively downloaded more than 1,400 times between

AnyDesk Hacked: Popular Remote Desktop Software Mandates Password Reset

Remote desktop software maker AnyDesk disclosed on Friday that it suffered a cyber attack that led to a compromise of its production systems. The German company said the incident, which it discovered following a security audit, is not a ransomware attack and that it has notified relevant authorities. "We have revoked all security-related certificates and systems have been remediated or replaced

Fla. Man Charged in SIM-Swapping Spree is Key Suspect in Hacker Groups Oktapus, Scattered Spider

On Jan. 9, 2024, U.S. authorities arrested a 19-year-old Florida man charged with wire fraud, aggravated identity theft, and conspiring with others to use SIM-swapping to steal cryptocurrency. Sources close to the investigation tell KrebsOnSecurity the accused was a key member of a criminal hacking group blamed for a string of cyber intrusions at major U.S. technology companies during the summer of 2022.

A graphic depicting how 0ktapus leveraged one victim to attack another. Image credit: Amitai Cohen of Wiz.

Prosecutors say Noah Michael Urban of Palm Coast, Fla., stole at least $800,000 from at least five victims between August 2022 and March 2023. In each attack, the victims saw their email and financial accounts compromised after suffering an unauthorized SIM-swap, wherein attackers transferred each victim’s mobile phone number to a new device that they controlled.

The government says Urban went by the aliases “Sosa” and “King Bob,” among others. Multiple trusted sources told KrebsOnSecurity that Sosa/King Bob was a core member of a hacking group behind the 2022 breach at Twilio, a company that provides services for making and receiving text messages and phone calls. Twilio disclosed in Aug. 2022 that an intrusion had exposed a “limited number” of Twilio customer accounts through a sophisticated social engineering attack designed to steal employee credentials.

Shortly after that disclosure, the security firm Group-IB published a report linking the attackers behind the Twilio intrusion to separate breaches at more than 130 organizations, including LastPass, DoorDash, Mailchimp, and Plex. Multiple security firms soon assigned the hacking group the nickname “Scattered Spider.”

Group-IB dubbed the gang by a different name — 0ktapus — which was a nod to how the criminal group phished employees for credentials. The missives asked users to click a link and log in at a phishing page that mimicked their employer’s Okta authentication page. Those who submitted credentials were then prompted to provide the one-time password needed for multi-factor authentication.

A booking photo of Noah Michael Urban released by the Volusia County Sheriff.

0ktapus used newly-registered domains that often included the name of the targeted company, and sent text messages urging employees to click on links to these domains to view information about a pending change in their work schedule. The phishing sites used a Telegram instant message bot to forward any submitted credentials in real-time, allowing the attackers to use the phished username, password and one-time code to log in as that employee at the real employer website.

0ktapus often leveraged information or access gained in one breach to perpetrate another. As documented by Group-IB, the group pivoted from its access to Twilio to attack at least 163 of its customers. Among those was the encrypted messaging app Signal, which said the breach could have let attackers re-register the phone number on another device for about 1,900 users.

Also in August 2022, several employees at email delivery firm Mailchimp provided their remote access credentials to this phishing group. According to an Aug. 12 blog post, the attackers used their access to Mailchimp employee accounts to steal data from 214 customers involved in cryptocurrency and finance.

On August 25, 2022, the password manager service LastPass disclosed a breach in which attackers stole some source code and proprietary LastPass technical information, and weeks later LastPass said an investigation revealed no customer data or password vaults were accessed.

However, on November 30, 2022 LastPass disclosed a far more serious breach that the company said leveraged data stolen in the August breach. LastPass said criminal hackers had stolen encrypted copies of some password vaults, as well as other personal information.

In February 2023, LastPass disclosed that the intrusion involved a highly complex, targeted attack against a DevOps engineer who was one of only four LastPass employees with access to the corporate vault. In that incident, the attackers exploited a security vulnerability in a Plex media server that the employee was running on his home network, and succeeded in installing malicious software that stole passwords and other authentication credentials. The vulnerability exploited by the intruders was patched back in 2020, but the employee never updated his Plex software.

As it happens, Plex announced its own data breach one day before LastPass disclosed its initial August intrusion. On August 24, 2022, Plex’s security team urged users to reset their passwords, saying an intruder had accessed customer emails, usernames and encrypted passwords.

KING BOB’S GRAILS

A review of thousands of messages that Sosa and King Bob posted to several public forums and Discord servers over the past two years shows that the person behind these identities was mainly focused on two things: SIM-swapping, and trading in stolen, unreleased rap music recordings from popular artists.

Indeed, those messages show Sosa/King Bob was obsessed with finding new “grails,” the slang term used in some cybercrime discussion channels to describe recordings from popular artists that have never been officially released. It stands to reason that King Bob was SIM-swapping important people in the music industry to obtain these files, although there is little to support this conclusion from the public chat records available.

“I got the most music in the com,” King Bob bragged in a Discord server in November 2022. “I got thousands of grails.”

King Bob’s chats show he was particularly enamored of stealing the unreleased works of his favorite artists — Lil Uzi Vert, Playboi Carti, and Juice Wrld. When another Discord user asked if he has Eminem grails, King Bob said he was unsure.

“I have two folders,” King Bob explained. “One with Uzi, Carti, Juicewrld. And then I have ‘every other artist.’ Every other artist is unorganized as fuck and has thousands of random shit.”

King Bob’s posts on Discord show he quickly became a celebrity on Leaked[.]cx, one of the most active forums for trading, buying and selling unreleased music from popular artists. The more grails that users share with the Leaked[.]cx community, the more their status and access on the forum grows.

The last cache of Leaked dot cx indexed by archive.org on Jan. 11, 2024.

And King Bob shared a large number of his purloined tunes with this community. Still others he tried to sell. It’s unclear how many of those sales were ever consummated, but it is not unusual for a prized grail to sell for anywhere from $5,000 to $20,000.

In mid-January 2024, several Leaked[.]cx regulars began complaining that they hadn’t seen King Bob in a while and were really missing his grails. On or around Jan. 11, the same day the Justice Department unsealed the indictment against Urban, Leaked[.]cx started blocking people who were trying to visit the site from the United States.

Days later, frustrated Leaked[.]cx users speculated about what could be the cause of the blockage.

“Probs blocked as part of king bob investigation i think?,” wrote the user “Plsdontarrest.” “Doubt he only hacked US artists/ppl which is why it’s happening in multiple countries.”

FORESHADOWING

On Sept. 21, 2022, KrebsOnSecurity told the story of “Foreshadow,” the nickname chosen by a Florida teenager who was working for a SIM-swapping crew when he was abducted, beaten and held for a $200,000 ransom. A rival SIM-swapping group claimed that Foreshadow and his associates had robbed them of their fair share of the profits from a recent SIM-swap.

In a video released by his abductors on Telegram, a bloodied, battered Foreshadow was made to say they would kill him unless the ransom was paid.

As I wrote in that story, Foreshadow appears to have served as a “holder” — a term used to describe a low-level member of any SIM-swapping group who agrees to carry out the riskiest and least rewarding role of the crime: Physically keeping and managing the various mobile devices and SIM cards that are used in SIM-swapping scams.

KrebsOnSecurity has since learned that Foreshadow was a holder for a particularly active SIM-swapper who went by “Elijah,” which was another nickname that prosecutors say Urban used.

Shortly after Foreshadow’s hostage video began circulating on Telegram and Discord, multiple known actors in the SIM-swapping space told everyone in the channels to delete any previous messages with Foreshadow, claiming he was fully cooperating with the FBI.

This was not the first time Sosa and his crew were hit with violent attacks from rival SIM-swapping groups. In early 2022, a video surfaced on a popular cybercrime channel purporting to show attackers hurling a brick through a window at an address that matches the spacious and upscale home of Urban’s parents in Sanford, Fl.

“Brickings” are among the “violence-as-a-service” offerings broadly available on many cybercrime channels. SIM-swapping and adjacent cybercrime channels are replete with job offers for in-person assignments and tasks that can be found if one searches for posts titled, “If you live near,” or “IRL job” — short for “in real life” job.

A number of these classified ads are in service of performing brickings, where someone is hired to visit a specific address and toss a brick through the target’s window. Other typical IRL job offers involve tire slashings and even drive-by shootings.

THE COM

Sosa was known to be a top member of the broader cybercriminal community online known as “The Com,” wherein hackers boast loudly about high-profile exploits and hacks that almost invariably begin with social engineering — tricking people over the phone, email or SMS into giving away credentials that allow remote access to corporate internal networks.

Sosa also was active in a particularly destructive group of accomplished criminal SIM-swappers known as “Star Fraud.” Cyberscoop’s AJ Vicens reported last year that individuals within Star Fraud were likely involved in the high-profile Caesars Entertainment and MGM Resorts extortion attacks.

“ALPHV, an established ransomware-as-a-service operation thought to be based in Russia and linked to attacks on dozens of entities, claimed responsibility for Caesars and MGM attacks in a note posted to its website earlier this month,” Vicens wrote. “Experts had said the attacks were the work of a group tracked variously as UNC 3944 or Scattered Spider, which has been described as an affiliate working with ALPHV made up of people in the United States and Britain who excel at social engineering.”

In February 2023, KrebsOnSecurity published data taken from the Telegram channels for Star Fraud and two other SIM-swapping groups showing these crooks focused on SIM-swapping T-Mobile customers, and that they collectively claimed access to T-Mobile on 100 separate occasions over a 7-month period in 2022.

The SIM-swapping groups were able to switch targeted phone numbers to another device on demand because they constantly phished T-Mobile employees into giving up credentials to employee-only tools. In each of those cases the goal was the same: Phish T-Mobile employees for access to internal company tools, and then convert that access into a cybercrime service that could be hired to divert any T-Mobile user’s text messages and phone calls to another device.

Allison Nixon, chief research officer at the New York cybersecurity consultancy Unit 221B, said the increasing brazenness of many Com members is a function of how long it has taken federal authorities to go after guys like Sosa.

“These incidents show what happens when it takes too long for cybercriminals to get arrested,” Nixon said. “If governments fail to prioritize this source of threat, violence originating from the Internet will affect regular people.”

NO FIXED ADDRESS

The Daytona Beach News-Journal reports that Urban was arrested Jan. 9 and his trial is scheduled to begin in the trial term starting March 4 in Jacksonville. The publication said the judge overseeing Urban’s case denied bail because the defendant was a strong flight risk.

At Urban’s arraignment, it emerged that he had no fixed address and had been using an alias to stay at an Airbnb. The judge reportedly said that when a search warrant was executed at Urban’s residence, the defendant was downloading programs to delete computer files.

What’s more, the judge explained, despite telling authorities in May that he would not have any more contact with his co-conspirators and would not engage in cryptocurrency transactions, he did so anyway.

Urban entered a plea of not guilty. Urban’s court-appointed attorney said her client would have no comment at this time.

Prosecutors charged Urban with eight counts of wire fraud, one count of conspiracy to commit wire fraud, and five counts of aggravated identity theft. According to the government, if convicted Urban faces up to 20 years in federal prison on each wire fraud charge. He also faces a minimum mandatory penalty of two years in prison for the aggravated identity offenses, which will run consecutive to any other prison sentence imposed.

CISA Urges Manufacturers Eliminate Default Passwords to Thwart Cyber Threats

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) is urging manufacturers to get rid of default passwords on internet-exposed systems altogether, citing severe risks that could be exploited by malicious actors to gain initial access to, and move laterally within, organizations. In an alert published last week, the agency called out Iranian threat actors affiliated with

ICANN Launches Service to Help With WHOIS Lookups

More than five years after domain name registrars started redacting personal data from all public domain registration records, the non-profit organization overseeing the domain industry has introduced a centralized online service designed to make it easier for researchers, law enforcement and others to request the information directly from registrars.

In May 2018, the Internet Corporation for Assigned Names and Numbers (ICANN) — the nonprofit entity that manages the global domain name system — instructed all registrars to redact the customer’s name, address, phone number and email from WHOIS, the system for querying databases that store the registered users of domain names and blocks of Internet address ranges.

ICANN made the policy change in response to the General Data Protection Regulation (GDPR), a law enacted by the European Parliament that requires companies to gain affirmative consent for any personal information they collect on people within the European Union. In the meantime, registrars were to continue collecting the data but not publish it, and ICANN promised it would develop a system that facilitates access to this information.

At the end of November 2023, ICANN launched the Registration Data Request Service (RDRS), which is designed as a one-stop shop to submit registration data requests to participating registrars. This video from ICANN walks through how the system works.

Accredited registrars don’t have to participate, but ICANN is asking all registrars to join and says participants can opt out or stop using it at any time. ICANN contends that the use of a standardized request form makes it easier for the correct information and supporting documents to be provided to evaluate a request.

ICANN says the RDRS doesn’t guarantee access to requested registration data, and that all communication and data disclosure between the registrars and requestors takes place outside of the system. The service can’t be used to request WHOIS data tied to country-code top level domains (CCTLDs), such as those ending in .de (Germany) or .nz (New Zealand), for example.

The RDRS portal.

As Catalin Cimpanu writes for Risky Business News, currently investigators can file legal requests or abuse reports with each individual registrar, but the idea behind the RDRS is to create a place where requests from “verified” parties can be honored faster and with a higher degree of trust.

The registrar community generally views public WHOIS data as a nuisance issue for their domain customers and an unwelcome cost-center. Privacy advocates maintain that cybercriminals don’t provide their real information in registration records anyway, and that requiring WHOIS data to be public simply causes domain registrants to be pestered by spammers, scammers and stalkers.

Meanwhile, security experts argue that even in cases where online abusers provide intentionally misleading or false information in WHOIS records, that information is still extremely useful in mapping the extent of their malware, phishing and scamming operations. What’s more, the overwhelming majority of phishing is performed with the help of compromised domains, and the primary method for cleaning up those compromises is using WHOIS data to contact the victim and/or their hosting provider.

Anyone looking for copious examples of both need only to search this Web site for the term “WHOIS,” which yields dozens of stories and investigations that simply would not have been possible without the data available in the global WHOIS records.

KrebsOnSecurity remains doubtful that participating registrars will be any more likely to share WHOIS data with researchers just because the request comes through ICANN. But I look forward to being wrong on this one, and will certainly mention it in my reporting if the RDRS proves useful.

Regardless of whether the RDRS succeeds or fails, there is another European law that takes effect in 2024 which is likely to place additional pressure on registrars to respond to legitimate WHOIS data requests. The new Network and Information Security Directive (NIS2), which EU member states have until October 2024 to implement, requires registrars to keep much more accurate WHOIS records, and to respond within as little as 24 hours to WHOIS data requests tied everything from phishing, malware and spam to copyright and brand enforcement.

How to Automate the Hardest Parts of Employee Offboarding

According to recent research on employee offboarding, 70% of IT professionals say they’ve experienced the negative effects of incomplete IT offboarding, whether in the form of a security incident tied to an account that wasn't deprovisioned, a surprise bill for resources that aren’t in use anymore, or a missed handoff of a critical resource or account. This is despite an average of five hours

Signal Debunks Zero-Day Vulnerability Reports, Finds No Evidence

Encrypted messaging app Signal has pushed back against "viral reports" of an alleged zero-day flaw in its software, stating it found no evidence to support the claim. "After responsible investigation *we have no evidence that suggests this vulnerability is real* nor has any additional info been shared via our official reporting channels," it said in a series of messages posted on X (formerly

Israel's Failure to Stop the Hamas Attack Shows the Danger of Too Much Surveillance

Hundreds dead, thousands wounded—Hamas’ surprise attack on Israel shows the limits of even the most advanced and invasive surveillance dragnets as full-scale war erupts.

Signal Messenger Introduces PQXDH Quantum-Resistant Encryption

By: THN
Encrypted messaging app Signal has announced an update to the Signal Protocol to add support for quantum resistance by upgrading the Extended Triple Diffie-Hellman (X3DH) specification to Post-Quantum Extended Diffie-Hellman (PQXDH). "With this upgrade, we are adding a layer of protection against the threat of a quantum computer being built in the future that is powerful enough to break current

The Cheap Radio Hack That Disrupted Poland's Railway System

The sabotage of more than 20 trains in Poland by apparent supporters of Russia was carried out with a simple “radio-stop” command anyone could broadcast with $30 in equipment.

New Supply Chain Attack Hit Close to 100 Victims—and Clues Point to China

The hackers, who mostly targeted victims in Hong Kong, also hijacked Microsoft’s trust model to make their malware harder to detect.

Hackers Exploit Windows Policy Loophole to Forge Kernel-Mode Driver Signatures

By: THN
A Microsoft Windows policy loophole has been observed being exploited primarily by native Chinese-speaking threat actors to forge signatures on kernel-mode drivers. "Actors are leveraging multiple open-source tools that alter the signing date of kernel mode drivers to load malicious and unverified drivers signed with expired certificates," Cisco Talos said in an exhaustive two-part report shared

yaraQA - YARA Rule Analyzer To Improve Rule Quality And Performance

By: Zion3R


YARA rule Analyzer to improve rule quality and performance

Why?

YARA rules can be syntactically correct but still dysfunctional. yaraQA tries to find and report these issues to the author or maintainer of a YARA rule set.

The issues yaraQA tries to detect include:

  • rules that are syntactically correct but never match due to errors in the condition (e.g. a rule with one string but "2 of them" in the condition)
  • rules that use string and modifier combinations that are probably wrong (e.g. $ = "\\Debug\\" fullword)
  • performance issues caused by short atoms, repeating characters or loops (e.g. $ = "AA"; these can be excluded from the analysis using --ignore-performance)
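The first issue class is easy to demonstrate. The hypothetical rule below is syntactically valid YARA, yet its condition requires two string matches while only one string is defined, so it can never fire. The checker here is a simplified heuristic sketch of that test, not yaraQA's actual parser:

```python
import re

def condition_never_matches(rule_text):
    # Count defined strings ($name = ...) and compare against an
    # "N of them" condition; N > count means the rule can never match.
    strings = re.findall(r"^\s*\$\w*\s*=", rule_text, re.MULTILINE)
    match = re.search(r"(\d+)\s+of\s+them", rule_text)
    return bool(match) and int(match.group(1)) > len(strings)

rule = r'''
rule Demo_Rule_4_Condition_Never_Matches {
    strings:
        $s1 = "malicious_marker"
    condition:
        2 of them
}
'''
print(condition_never_matches(rule))  # True: "2 of them" with only one string
```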

I'm going to extend the test set over time. Each minor version will include new features or new tests.


Install requirements

pip install -r requirements.txt

Usage

usage: yaraQA.py [-h] [-f yara files [yara files ...]] [-d yara files [yara files ...]] [-o outfile] [-b baseline] [-l level]
[--ignore-performance] [--debug]

YARA RULE ANALYZER

optional arguments:
-h, --help show this help message and exit
-f yara files [yara files ...]
Path to input files (one or more YARA rules, separated by space)
-d yara files [yara files ...]
Path to input directory (YARA rules folders, separated by space)
-o outfile Output file that lists the issues (JSON, default: 'yaraQA-issues.json')
-b baseline Use a issues baseline (issues found and reviewed before) to filter issues
-l level Minium level to show (1=informational, 2=warning, 3=critical)
--ignore-performance Suppress performance-related rule issues
--debug Debug output

Try it out

python3 yaraQA.py -d ./test/

Suppress all performance issues and only show detection / logic issues.

python3 yaraQA.py -d ./test/ --ignore-performance

Suppress all issues of informational character.

python3 yaraQA.py -d ./test/ -l 2

Use a baseline to only see new issues (not the ones that you've already reviewed). The baseline file is an old JSON output of a reviewed state.

python3 yaraQA.py -d ./test/ -b yaraQA-reviewed-issues.json
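Conceptually, the baseline simply suppresses issues you've already reviewed. A rough sketch of the idea, assuming a (rule name, issue id) pair identifies an issue (yaraQA's actual matching logic may differ):

```python
def filter_new_issues(issues, baseline):
    # Suppress any issue whose (rule, id) pair already appears
    # in the previously reviewed baseline report.
    reviewed = {(item["rule"], item["id"]) for item in baseline}
    return [item for item in issues if (item["rule"], item["id"]) not in reviewed]

baseline = [{"rule": "Demo_Rule_4_Condition_Never_Matches", "id": "CE1"}]
issues = [
    {"rule": "Demo_Rule_4_Condition_Never_Matches", "id": "CE1"},
    {"rule": "Demo_Rule_5_Condition_Short_String_At_Pos", "id": "PA1"},
]
new_issues = filter_new_issues(issues, baseline)  # only the PA1 issue remains
```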

Example Rules with Issues

Example rules with issues can be found in the ./test folder.

Output

yaraQA writes the detected issues to a file named yaraQA-issues.json by default.

This listing shows an example of the output generated by yaraQA in JSON format:

[
{
"rule": "Demo_Rule_1_Fullword_PDB",
"id": "SM1",
"issue": "The rule uses a PDB string with the modifier 'wide'. PDB strings are always included as ASCII strings. The 'wide' keyword is unneeded.",
"element": {
"name": "$s1",
"value": "\\\\i386\\\\mimidrv.pdb",
"type": "text",
"modifiers": [
"ascii",
"wide",
"fullword"
]
},
"level": "info",
"type": "logic",
"recommendation": "Remove the 'wide' modifier"
},
{
"rule": "Demo_Rule_1_Fullword_PDB",
"id": "SM2",
"issue": "The rule uses a PDB string with the modifier 'fullword' but it starts with two backslashes and thus the modifier could lead to a dysfunctional rule.",
"element": {
"name": "$s1",
"value": "\\\\i386\\\\mimidrv.pdb",
"type": "text",
"modifiers": [
"ascii",
"wide",
"fullword"
]
},
"level": "warning",
"type": "logic",
"recommendation": "Remove the 'fullword' modifier"
},
{
"rule": "Demo_Rule_2_Short_Atom",
"id": "PA2",
"issue": "The rule contains a string that turns out to be a very short atom, which could cause a reduced performance of the complete rule set or increased memory usage.",
"element": {
"name": "$s1",
"value": "{ 01 02 03 }",
"type": "byte"
},
"level": "warning",
"type": "performance",
"recommendation": "Try to avoid using such short atoms, by e.g. adding a few more bytes to the beginning or the end (e.g. add a binary 0 in front or a space after the string). Every additional byte helps."
},
{
"rule": "Demo_Rule_3_Fullword_FilePath_Section",
"id": "SM3",
"issue": "The rule uses a string with the modifier 'fullword' but it starts and ends with two backslashes and thus the modifier could lead to a dysfunctional rule.",
"element": {
"name": "$s1",
"value": "\\\\ZombieBoy\\\\",
"type": "text",
"modifiers": [
"ascii",
"fullword"
]
},
"level": "warning",
"type": "logic",
"recommendation": "Remove the 'fullword' modifier"
},
{
"rule": "Demo_Rule_4_Condition_Never_Matches",
"id": "CE1",
"issue": "The rule uses a condition that will never match",
"element": {
"condition_segment": "2 of",
"num_of_strings": 1
},
"level": "error",
"type": "logic",
"recommendation": "Fix the condition"
},
{
"rule": "Demo_Rule_5_Condition_Short_String_At_Pos",
"id": "PA1",
"issue": "This rule looks for a short string at a particular position. A short string represents a short atom and could be rewritten to an expression using uint(x) at position.",
"element": {
"condition_segment": "$mz at 0",
"string": "$mz",
"value": "MZ"
},
"level": "warning",
"type": "performance",
"recommendation": ""
},
{
"rule": "Demo_Rule_5_Condition_Short_String_At_Pos",
"id": "PA2",
"issue": "The rule contains a string that turns out to be a very short atom, which could cause a reduced performance of the complete rule set or increased memory usage.",
"element": {
"name": "$mz",
"value": "MZ",
"type": "text",
"modifiers": [
"ascii"
]
},
"level": "warning",
"type": "performance",
"recommendation": "Try to avoid using such short atoms, by e.g. adding a few more bytes to the beginning or the end (e.g. add a binary 0 in front or a space after the string). Every additional byte helps."
},
{
"rule": "Demo_Rule_6_Condition_Short_Byte_At_Pos",
"id": "PA1",
"issue": "This rule looks for a short string at a particular position. A short string represents a short atom and could be rewritten to an expression using uint(x) at position.",
"element": {
"condition_segment": "$mz at 0",
"string": "$mz",
"value": "{ 4d 5a }"
},
"level": "warning",
"type": "performance",
"recommendation": ""
},
{
"rule": "Demo_Rule_6_Condition_Short_Byte_At_Pos",
"id": "PA2",
"issue": "The rule contains a string that turns out to be a very short atom, which could cause a reduced performance of the complete rule set or increased memory usage.",
"element": {
"name": "$mz",
"value": "{ 4d 5a }",
"type": "byte"
},
"level": "warning",
"type": "performance",
"recommendation": "Try to avoid using such short atoms, by e.g. adding a few more bytes to the beginning or the end (e.g. add a binary 0 in front or a space after the string). Every additional byte helps."
},
{
"rule": "Demo_Rule_6_Condition_Short_Byte_At_Pos",
"id": "SM3",
"issue": "The rule uses a string with the modifier 'fullword' but it starts and ends with two backslashes and thus the modifier could lead to a dysfunctional rule.",
"element": {
"name": "$s1",
"value": "\\\\Section\\\\in\\\\Path\\\\",
"type": "text",
"modifiers": [
"ascii",
"fullword"
]
},
"level": "warning",
"type": "logic",
"recommendation": "Remove the 'fullword' modifier"
}
]
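Because the output is plain JSON, it is easy to post-process. The sketch below filters issues by severity, roughly mirroring the `-l` option; the mapping of textual levels to the numeric 1/2/3 scale is an assumption based on the help text above, not taken from yaraQA's source.

```python
# Assumed mapping of yaraQA's textual levels to the numeric -l scale
# (1=informational, 2=warning, 3=critical/error).
LEVELS = {"info": 1, "warning": 2, "error": 3}

def issues_at_least(issues, min_level):
    """Keep only issues whose level is at or above min_level."""
    return [i for i in issues if LEVELS.get(i["level"], 0) >= min_level]

issues = [
    {"rule": "Demo_Rule_1_Fullword_PDB", "id": "SM1", "level": "info"},
    {"rule": "Demo_Rule_3_Fullword_FilePath_Section", "id": "SM3", "level": "warning"},
    {"rule": "Demo_Rule_4_Condition_Never_Matches", "id": "CE1", "level": "error"},
]
for issue in issues_at_least(issues, 2):
    print(f'{issue["rule"]} [{issue["id"]}] {issue["level"]}')
```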

Screenshots



Ask Fitis, the Bear: Real Crooks Sign Their Malware

Code-signing certificates are supposed to help authenticate the identity of software publishers, and provide cryptographic assurance that a signed piece of software has not been altered or tampered with. Both of these qualities make stolen or ill-gotten code-signing certificates attractive to cybercriminal groups, who prize their ability to add stealth and longevity to malicious software. This post is a deep dive on “Megatraffer,” a veteran Russian hacker who has practically cornered the underground market for malware focused code-signing certificates since 2015.

One of Megatraffer’s ads on an English-language cybercrime forum.

A review of Megatraffer’s posts on Russian crime forums shows this user began peddling individual stolen code-signing certs in 2015 on the Russian-language forum Exploit, and soon expanded to selling certificates for cryptographically signing applications and files designed to run in Microsoft Windows, Java, Adobe AIR, Mac and Microsoft Office.

Megatraffer explained that malware purveyors need a certificate because many antivirus products will be far more suspicious of unsigned software, and because signed files downloaded from the Internet don’t tend to get blocked by security features built into modern web browsers. Additionally, newer versions of Microsoft Windows will complain with a bright yellow or red alert message if users try to install a program that is not signed.

“Why do I need a certificate?” Megatraffer asked rhetorically in their Jan. 2016 sales thread on Exploit. “Antivirus software trusts signed programs more. For some types of software, a digital signature is mandatory.”

At the time, Megatraffer was selling unique code-signing certificates for $700 apiece, and charging more than twice that amount ($1,900) for an “extended validation” or EV code-signing cert, which is supposed to only come with additional identity vetting of the certificate holder. According to Megatraffer, EV certificates were a “must-have” if you wanted to sign malicious software or hardware drivers that would reliably work in newer Windows operating systems.

Part of Megatraffer’s ad. Image: Ke-la.com.

Megatraffer has continued to offer their code-signing services across more than a half-dozen other Russian-language cybercrime forums, mostly in the form of sporadically available EV and non-EV code-signing certificates from major vendors like Thawte and Comodo.

More recently, it appears Megatraffer has been working with ransomware groups to help improve the stealth of their malware. Shortly after Russia invaded Ukraine in February 2022, someone leaked several years of internal chat logs from the Conti ransomware gang, and those logs show Megatraffer was working with the group to help code-sign their malware between July and October 2020.

WHO IS MEGATRAFFER?

According to cyber intelligence firm Intel 471, Megatraffer has been active on more than a half-dozen crime forums from September 2009 to the present day. And on most of these identities, Megatraffer has used the email address 774748@gmail.com. That same email address also is tied to two forum accounts for a user with the handle “O.R.Z.”

Constella Intelligence, a company that tracks exposed databases, finds that 774748@gmail.com was used in connection with just a handful of passwords, but most frequently the password “featar24“. Pivoting off of that password reveals a handful of email addresses, including akafitis@gmail.com.

Intel 471 shows akafitis@gmail.com was used to register another O.R.Z. user account — this one on Verified[.]ru in 2008. Prior to that, akafitis@gmail.com was used as the email address for the account “Fitis,” which was active on Exploit between September 2006 and May 2007. Constella found the password “featar24” also was used in conjunction with the email address spampage@yandex.ru, which is tied to yet another O.R.Z. account on Carder[.]su from 2008.

The email address akafitis@gmail.com was used to create a Livejournal blog profile named Fitis that has a large bear as its avatar. In November 2009, Fitis wrote, “I am the perfect criminal. My fingerprints change beyond recognition every few days. At least my laptop is sure of it.”

Fitis’s Livejournal account. Image: Archive.org.

Fitis’s real-life identity was exposed in 2010 after two of the biggest sponsors of pharmaceutical spam went to war with each other, and large volumes of internal documents, emails and chat records seized from both spam empires were leaked to this author. That protracted and public conflict formed the backdrop of my 2014 book — “Spam Nation: The Inside Story of Organized Cybercrime, from Global Epidemic to Your Front Door.”

One of the leaked documents included a Microsoft Excel spreadsheet containing the real names, addresses, phone numbers, emails, street addresses and WebMoney addresses for dozens of top earners in Spamit — at the time the most successful pharmaceutical spam affiliate program in the Russian hacking scene and one that employed most of the top Russian botmasters.

That document shows Fitis was one of Spamit’s most prolific recruiters, bringing more than 75 affiliates to the Spamit program over several years prior to its implosion in 2010 (and earning commissions on any future sales from all 75 affiliates).

The document also says Fitis got paid using a WebMoney account that was created when its owner presented a valid Russian passport for a Konstantin Evgenievich Fetisov, born Nov. 16, 1982 and residing in Moscow. Russian motor vehicle records show two different vehicles are registered to this person at the same Moscow address.

The most interesting domain name registered to the email address spampage@yahoo.com, fittingly enough, is fitis[.]ru, which DomainTools.com says was registered in 2005 to a Konstantin E. Fetisov from Moscow.

The Wayback Machine at archive.org has a handful of mostly blank pages indexed for fitis[.]ru in its early years, but for a brief period in 2007 it appears this website was inadvertently exposing all of its file directories to the Internet.

One of the exposed files — Glavmed.html — is a general invitation to the infamous Glavmed pharmacy affiliate program, a now-defunct scheme that paid tens of millions of dollars to affiliates who advertised online pill shops mainly by hacking websites and manipulating search engine results. Glavmed was operated by the same Russian cybercriminals who ran the Spamit program.

A Google translated ad circa 2007 recruiting for the pharmacy affiliate program Glavmed, which told interested applicants to contact the ICQ number used by Fitis, a.k.a. MegaTraffer. Image: Archive.org.

Archive.org shows the fitis[.]ru webpage with the Glavmed invitation was continuously updated with new invite codes. In their message to would-be Glavmed affiliates, the program administrator asked applicants to contact them at the ICQ number 165540027, which Intel 471 found was an instant messenger address previously used by Fitis on Exploit.

The exposed files in the archived version of fitis[.]ru include source code for malicious software, lists of compromised websites used for pharmacy spam, and a handful of what are apparently personal files and photos. Among the photos is a 2007 image labeled merely “fitis.jpg,” which shows a bespectacled, bearded young man with a ponytail standing next to what appears to be a newly-married couple at a wedding ceremony.

Mr. Fetisov did not respond to requests for comment.

As a veteran organizer of affiliate programs, Fitis did not waste much time building a new moneymaking collective after Spamit closed up shop. New York City-based cyber intelligence firm Flashpoint found that Megatraffer’s ICQ was the contact number for Himba[.]ru, a cost-per-acquisition (CPA) program launched in 2012 that paid handsomely for completed application forms tied to a variety of financial instruments, including consumer credit cards, insurance policies, and loans.

“Megatraffer’s entrenched presence on cybercrime forums strongly suggests that malicious means are used to source at least a portion of traffic delivered to HIMBA’s advertisers,” Flashpoint observed in a threat report on the actor.

Intel 471 finds that Himba was an active affiliate program until around May 2019, when it stopped paying its associates.

Fitis’s Himba affiliate program, circa February 2014. Image: Archive.org.

Flashpoint notes that in September 2015, Megatraffer posted a job ad on Exploit seeking experienced coders to work on browser plugins, installers and “loaders” — basically remote access trojans (RATs) that establish communication between the attacker and a compromised system.

“The actor specified that he is looking for full-time, onsite help either in his Moscow or Kiev locations,” Flashpoint wrote.

Will Altanovo’s Maneuvering Continue to Delay .web?

Verisign Logo

The launch of the .web top-level domain is once again at risk of being delayed by baseless procedural maneuvering.

On May 2, the Internet Corporation for Assigned Names and Numbers (ICANN) Board of Directors posted a decision on the .web matter from its April 30 meeting, which found “that NDC (Nu Dotco LLC) did not violate the Guidebook or the Auction Rules” and directed ICANN “to continue processing NDC’s .web application,” clearing the way for the delegation of .web. ICANN later posted a preliminary report from this meeting showing that the Board vote on the .web decision was without objection.

Less than 24 hours later, however, Altanovo (formerly Afilias) – a losing bidder whose repeatedly rejected claims already have delayed the delegation of .web for more than six years – dusted off its playbook from 2018 by filing yet another ICANN Cooperative Engagement Process (CEP), beginning the cycle of another independent review of the Board’s decision, which last time cost millions of dollars and resulted in years of delay.

Under ICANN rules, a CEP is intended to be a non-binding process designed to efficiently resolve or narrow disputes before the initiation of an Independent Review Process (IRP). ICANN places further actions on hold while a CEP is pending. It’s an important and worthwhile aspect of the multistakeholder process…when used in good faith.

But that does not appear to be what is happening here. Altanovo and its backers initiated this repeat CEP despite the fact that it lost a fair, ICANN-sponsored auction; lost, in every important respect, the IRP; lost its application for reconsideration of the IRP (which it was sanctioned for filing, and which was determined to be frivolous by the IRP panel); and has now lost before the ICANN Board.

The Board’s decision expressly found that these disputes “have delayed the delegation of .web for more than six years” and already cost each of the parties, including ICANN, “millions of dollars in legal fees.”

Further delay appears to be the only goal of this second CEP – and any follow-on IRP – because no one could conclude in good faith that an IRP panel would find that the thorough process and decision on .web established in the Board’s resolutions and preliminary report violated ICANN’s bylaws. At the end of the day, all that will be accomplished by this second CEP and a second IRP is continued delay, and delay for delay’s sake amounts to an abuse of process that threatens to undermine the multistakeholder processes and the rights of NDC and Verisign.

ICANN will, no doubt, follow its processes for resolving the CEP and any further procedural maneuvers attempted by Altanovo. But, given Altanovo’s track record of losses, delays, and frivolous maneuvering since the 2016 .web auction, a point has been reached when equity demands that this abuse of process not be allowed to thwart NDC’s right, as determined by the Board, to move ahead on its .web application.

The post Will Altanovo’s Maneuvering Continue to Delay .web? appeared first on Verisign Blog.

CertVerify - A Scanner That Files With Compromised Or Untrusted Code Signing Certificates


CertVerify is a tool designed to detect executable files (exe, dll, sys) signed with untrusted or leaked code-signing certificates. Its purpose is to identify potentially malicious files signed with certificates that have been compromised, stolen, or issued by an untrusted source.

Why is this tool needed?

Executable files signed with compromised or untrusted code signing certificates can be used to distribute malware and other malicious software. Attackers can use these files to bypass security controls and to make their malware appear legitimate to victims. This tool helps to identify these files so that they can be removed or investigated further.

As a follow-up to my previous malware scanner project, I created this tool. A tool of this kind is also essential during security incident response.

Scope of use and limitations

  1. CertVerify cannot guarantee that every file it flags as suspicious is actually malicious. Files may be falsely identified as suspicious, and malicious files may go undetected by the scanner.

  2. The scanner only targets code-signing certificates that the public community has identified as malicious, including certificates extracted by malware analysis tools and services and other public sources. Many malware-signing certificates remain unverified, and it is impossible to obtain them all, so the tool can only detect a subset. For broader detection, you have to extract a certificate's serial number and fingerprint information yourself and add it to the signatures.

  3. The scope of this tool does not include extracting code-signing information for rootkits that have already taken hold in the kernel, such as fileless bootkits, or for files hidden with advanced techniques. In other words, this tool runs at the user level; comparable functions at the kernel level are performed more accurately by anti-rootkit tools or EDR. Please keep this in mind and focus on the ideas and principles. To implement this principle fully for the tool's purpose, you would need to develop a driver (sys) and run it in the kernel with NT\SYSTEM privileges.

  4. Nevertheless, if you want to run this tool during a Windows system intrusion incident and your focus is sys files, boot into safe mode, or another boot option that loads only the default system drivers, before running the tool. This can make the results a little more reliable.

  5. Alternatively, mount the Windows system disk on a Linux machine and run the tool in that environment. This could yield better results.

Features

  • File inspection based on leaked or untrusted certificate lists.
  • Scanning includes subdirectories.
  • Ability to define directories to exclude from scanning.
  • Supports multiprocessing for faster job execution.
  • Whitelisting based on certificate subject (e.g., Microsoft subject certificates are exempt from detection).
  • Option to skip inspection of unsigned files for faster scans.
  • Easy integration with SIEM systems such as Splunk by attaching scan_logs.
  • Easy-to-handle and customizable code and function structure.
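The core comparison step described above can be sketched as follows. The function name, whitelist/denylist structures, and return values are hypothetical, not CertVerify's actual API; extracting the certificate fields from a PE file (e.g. with an Authenticode parser) is assumed to happen elsewhere.

```python
# Hypothetical denylist of leaked/compromised certificate serial numbers
# (the serial below is taken from the example scan log, for illustration only).
DENYLIST_SERIALS = {"0e4418e2dede36dd2974c3443afb5ce5"}

# Subject-based whitelist, as described in the features list.
WHITELIST_SUBJECTS = {"Microsoft Corporation"}

def classify(subject_name, serial_number):
    """Classify a signed file as 'whitelisted', 'infected', or 'clean'.

    Illustrative sketch of the denylist comparison; not CertVerify's code.
    """
    if subject_name in WHITELIST_SUBJECTS:
        return "whitelisted"          # exempt subjects skip the denylist check
    if serial_number.lower() in DENYLIST_SERIALS:
        return "infected"             # signed with a known-bad certificate
    return "clean"

print(classify("Google LLC", "0E4418E2DEDE36DD2974C3443AFB5CE5"))
```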

And...

  • Please let me know if any changes are required or if additional features are needed.
  • If you find this helpful, please consider giving it a "star"
    to support further improvements.

v1.0.0

Scan result_log

datetime="2023-03-06 20:17:57",scan_id="87ea3e7b-dedc-4016-a43e-5c83f8d27c6e",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\chrome.exe",signature_hash="sha256",serial_number="0e4418e2dede36dd2974c3443afb5ce5",thumbprint="7d3d117664f121e592ef897973ef9c159150e3d736326e9cd2755f71e0febc0c",subject_name="Google LLC",issuer_name="DigiCert Trusted G4 Code Signing RSA4096 SHA384 2021 CA1",file_created_at="2023-03-03 23:20:41",file_modified_at="2022-04-14 06:17:04"
datetime="2023-03-06 20:17:58",scan_id="87ea3e7b-dedc-4016-a43e-5c83f8d27c6e",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\LineLauncher.exe",signature_hash="sha256",serial_number="0d424ae0be3a88ff604021ce1400f0dd",thumbprint="b3109006bc0ad98307915729e04403415c83e3292b614f26964c8d3571ecf5a9",subject_name="DigiCert Timestamp 2021",issuer_name="DigiCert SHA2 Assured ID Timestamping CA",file_created_at="2023-03-03 23:20:42",file_modified_at="2022-03-10 18:00:10"
datetime="2023-03-06 20:17:58",scan_id="87ea3e7b-dedc-4016-a43e-5c83f8d27c6e",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\LineUpdater.exe",signature_hash="sha256",serial_number="0d424ae0be3a88ff604021ce1400f0dd",thumbprint="b3109006bc0ad98307915729e04403415c83e3292b614f26964c8d3571ecf5a9",subject_name="DigiCert Timestamp 2021",issuer_name="DigiCert SHA2 Assured ID Timestamping CA",file_created_at="2023-03-03 23:20:42",file_modified_at="2022-04-06 10:06:28"
datetime="2023-03-06 20:17:59",scan_id="87ea3e7b-dedc-4016-a43e-5c83f8d27c6e",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\TWOD_Launcher.exe",signature_hash="sha256",serial_number="073637b724547cd847acfd28662a5e5b",thumbprint="281734d4592d1291d27190709cb510b07e22c405d5e0d6119b70e73589f98acf",subject_name="DigiCert Trusted G4 RSA4096 SHA256 TimeStamping CA",issuer_name="DigiCert Trusted Root G4",file_created_at="2023-03-03 23:20:42",file_modified_at="2022-04-07 09:14:08"
datetime="2023-03-06 20:18:00",scan_id="87ea3e7b-dedc-4016-a43e-5c83f8d27c6e",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\VBoxSup.sys",signature_hash="sha256",serial_number="2f451139512f34c8c528b90bca471f767b83c836",thumbprint="3aa166713331d894f240f0931955f123873659053c172c4b22facd5335a81346",subject_name="VirtualBox for Legacy Windows Only Timestamp Kludge 2014",issuer_name="VirtualBox for Legacy Windows Only Timestamp CA",file_created_at="2023-03-03 23:20:43",file_modified_at="2022-10-11 08:11:56"
datetime="2023-03-06 20:31:59",scan_id="f71277c5-ed4a-4243-8070-7e0e56b0e656",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\chrome.exe",signature_hash="sha256",serial_number="0e4418e2dede36dd2974c3443afb5ce5",thumbprint="7d3d117664f121e592ef897973ef9c159150e3d736326e9cd2755f71e0febc0c",subject_name="Google LLC",issuer_name="DigiCert Trusted G4 Code Signing RSA4096 SHA384 2021 CA1",file_created_at="2023-03-03 23:20:41",file_modified_at="2022-04-14 06:17:04"
datetime="2023-03-06 20:32:00",scan_id="f71277c5-ed4a-4243-8070-7e0e56b0e656",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\LineLauncher.exe",signature_hash="sha256",serial_number="0d424ae0be3a88ff604021ce1400f0dd",thumbprint="b3109006bc0ad98307915729e04403415c83e3292b614f26964c8d3571ecf5a9",subject_name="DigiCert Timestamp 2021",issuer_name="DigiCert SHA2 Assured ID Timestamping CA",file_created_at="2023-03-03 23:20:42",file_modified_at="2022-03-10 18:00:10"
datetime="2023-03-06 20:32:00",scan_id="f71277c5-ed4a-4243-8070-7e0e56b0e656",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\LineUpdater.exe",signature_hash="sha256",serial_number="0d424ae0be3a88ff604021ce1400f0dd",thumbprint="b3109006bc0ad98307915729e04403415c83e3292b614f26964c8d3571ecf5a9",subject_name="DigiCert Timestamp 2021",issuer_name="DigiCert SHA2 Assured ID Timestamping CA",file_created_at="2023-03-03 23:20:42",file_modified_at="2022-04-06 10:06:28"
datetime="2023-03-06 20:32:01",scan_id="f71277c5-ed4a-4243-8070-7e0e56b0e656",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\TWOD_Launcher.exe",signature_hash="sha256",serial_number="073637b724547cd847acfd28662a5e5b",thumbprint="281734d4592d1291d27190709cb510b07e22c405d5e0d6119b70e73589f98acf",subject_name="DigiCert Trusted G4 RSA4096 SHA256 TimeStamping CA",issuer_name="DigiCert Trusted Root G4",file_created_at="2023-03-03 23:20:42",file_modified_at="2022-04-07 09:14:08"
datetime="2023-03-06 20:32:02",scan_id="f71277c5-ed4a-4243-8070-7e0e56b0e656",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\VBoxSup.sys",signature_hash="sha256",serial_number="2f451139512f34c8c528b90bca471f767b83c836",thumbprint="3aa166713331d894f240f0931955f123873659053c172c4b22facd5335a81346",subject_name="VirtualBox for Legacy Windows Only Timestamp Kludge 2014",issuer_name="VirtualBox for Legacy Windows Only Timestamp CA",file_created_at="2023-03-03 23:20:43",file_modified_at="2022-10-11 08:11:56"
datetime="2023-03-06 20:33:45",scan_id="033976ae-46cb-4c2e-a357-734353f7e09a",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\chrome.exe",signature_hash="sha256",serial_number="0e4418e2dede36dd2974c3443afb5ce5",thumbprint="7d3d117664f121e592ef897973ef9c159150e3d736326e9cd2755f71e0febc0c",subject_name="Google LLC",issuer_name="DigiCert Trusted G4 Code Signing RSA4096 SHA384 2021 CA1",file_created_at="2023-03-03 23:20:41",file_modified_at="2022-04-14 06:17:04"
datetime="2023-03-06 20:33:45",scan_id="033976ae-46cb-4c2e-a357-734353f7e09a",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\LineLauncher.exe",signature_hash="sha256",serial_number="0d424ae0be3a88ff604021ce1400f0dd",thumbprint="b3109006bc0ad98307915729e04403415c83e3292b614f26964c8d3571ecf5a9",subject_name="DigiCert Timestamp 2021",issuer_name="DigiCert SHA2 Assured ID Timestamping CA",file_created_at="2023-03-03 23:20:42",file_modified_at="2022-03-10 18:00:10"
datetime="2023-03-06 20:33:45",scan_id="033976ae-46cb-4c2e-a357-734353f7e09a",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\LineUpdater.exe",signature_hash="sha256",serial_number="0d424ae0be3a88ff604021ce1400f0dd",thumbprint="b3109006bc0ad98307915729e04403415c83e3292b614f26964c8d3571ecf5a9",subject_name="DigiCert Timestamp 2021",issuer_name="DigiCert SHA2 Assured ID Timestamping CA",file_created_at="2023-03-03 23:20:42",file_modified_at="2022-04-06 10:06:28"
datetime="2023-03-06 20:33:46",scan_id="033976ae-46cb-4c2e-a357-734353f7e09a",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\TWOD_Launcher.exe",signature_hash="sha256",serial_number="073637b724547cd847acfd28662a5e5b",thumbprint="281734d4592d1291d27190709cb510b07e22c405d5e0d6119b70e73589f98acf",subject_name="DigiCert Trusted G4 RSA4096 SHA256 TimeStamping CA",issuer_name="DigiCert Trusted Root G4",file_created_at="2023-03-03 23:20:42",file_modified_at="2022-04-07 09:14:08"
datetime="2023-03-06 20:33:47",scan_id="033976ae-46cb-4c2e-a357-734353f7e09a",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\VBoxSup.sys",signature_hash="sha256",serial_number="2f451139512f34c8c528b90bca471f767b83c836",thumbprint="3aa166713331d894f240f0931955f123873659053c172c4b22facd5335a81346",subject_name="VirtualBox for Legacy Windows Only Timestamp Kludge 2014",issuer_name="VirtualBox for Legacy Windows Only Timestamp CA",file_created_at="2023-03-03 23:20:43",file_modified_at="2022-10-11 08:11:56"
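Because each log line is a flat sequence of key="value" pairs, it parses cleanly with a single regular expression, which makes SIEM ingestion straightforward. A minimal sketch (not part of CertVerify itself):

```python
import re

def parse_scan_log(line):
    """Parse one key="value" scan-log line into a dict.

    Illustrative sketch matching the scan_logs format shown above.
    """
    return dict(re.findall(r'(\w+)="([^"]*)"', line))

# Abbreviated example line from the log above.
line = ('datetime="2023-03-06 20:17:57",scan_id="87ea3e7b-dedc-4016-a43e-5c83f8d27c6e",'
        'subject_name="Google LLC",serial_number="0e4418e2dede36dd2974c3443afb5ce5"')
record = parse_scan_log(line)
print(record["subject_name"], record["serial_number"])
```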


Smart and Frictionless Zero Trust Access for the Workforce

Providing secure access and a frictionless user experience are typically competing initiatives, but they don’t have to be! Read on to learn why.

In our world today, context changes quickly. We work from home, coffee shops and the office. We use multiple devices to do work. And on the flip side, attackers are becoming increasingly savvy, getting around security controls, such as multi-factor authentication (MFA), to gain unauthorized access.

To quote Wendy Nather, Cisco’s head of Advisory CISOs, “Trust is neither binary nor permanent.” Therefore, security controls must constantly evaluate for change in trust, but without adding unnecessary friction for end-users.

It’s no surprise that the recently published Cybersecurity Readiness Index, a survey of 6,700 cybersecurity leaders from across the globe, revealed that more progress is needed to protect identity, networks and applications.

To address these challenges and to make zero trust access for the workforce easy and frictionless, Cisco Duo announced the general availability of Risk-Based Authentication and enhancements to our enterprise-ready Single Sign-On solution at Cisco Live EMEA 2023 earlier this week.

Risk-Based Authentication

Chart showing how Risk-Based Authentication starts by evaluating the risk signal analysis based on device trust, location, Wi-Fi fingerprint, and known attack patterns. Based on this, it decides what kind of authentication is required - including no authentication, Duo Push 2FA, Verified Duo Push, or a FIDO2 authenticator - before allowing (or blocking) access to corporate resources.

Risk-Based Authentication fulfills the zero trust philosophy of continuous trust verification by assessing the risk level for each access attempt in a manner that is frictionless to users. A higher level of authentication is required only when there is an increase in assessed risk. Duo dynamically detects risk and automatically steps up authentication with two key policies:

1. Risk-Based Factor Selection

The Risk-Based Factor Selection policy detects and analyzes authentication requests and adaptively enforces the most secure factors. It highlights risk while adapting its understanding of normal user behavior: it looks for known attack patterns and anomalies, then allows only the more secure authentication methods to gain access.

For example, Duo can detect if an organization or employee is being targeted by a push bombing attack, or if the authentication device and access device are in two different countries, and Duo responds by automatically elevating the authentication request to a more secure factor such as phishing-resistant FIDO2 security keys or Verified Duo Push.
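
Duo's actual risk model is proprietary, but the step-up behavior described above can be sketched as a simple policy function. All signal names, values, and thresholds below are hypothetical illustrations, not Duo's real implementation:

```python
# Hypothetical sketch of risk-based factor selection; the real product's
# signals and scoring are proprietary and far more sophisticated.
def required_factor(signals: dict) -> str:
    """Map the risk signals of one access attempt to an auth requirement."""
    if signals.get("known_attack_pattern"):
        # e.g. a push bombing campaign against this user was detected
        return "block"
    if signals.get("auth_country") != signals.get("access_country"):
        # authentication device and access device are in different countries:
        # step up to Verified Duo Push or a FIDO2 security key
        return "verified_push_or_fido2"
    if signals.get("trusted_device_session"):
        # remembered device with no anomalies: no extra friction
        return "none"
    return "push_2fa"  # default second factor

required_factor({"auth_country": "US", "access_country": "FR"})
```

The point of the structure is that friction is added only on the risky branches; the common low-risk case falls through to no prompt at all.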

Chart showing how Risk-Based Authentication, when picking up on known attack patterns, will either request a Verified Duo Push or Block access.

2. Risk-Based Remembered Devices

The Risk-Based Remembered Devices policy establishes a trusted device session (like a “remember this computer” checkbox, but without asking the user to check a box) during a successful authentication. Once the session is established, Duo looks for anomalous IP addresses or changes to the device throughout the lifetime of the trusted session and requires re-authentication only if it observes a change from historical baselines.

The policy also incorporates a Wi-Fi Fingerprint, provided by the Duo Device Health app, to ensure that IP address changes reflect actual changes in location and not normal usage scenarios such as a user establishing an organizational VPN (Virtual Private Network) session.

Chart showing how Risk-Based Authentication, when using location and wi-fi fingerprint to determine that risk levels are low, won't require authentication.

Duo uses anonymized Wi-Fi Fingerprint to reliably detect whether the access device is in the same location as it was for previous authentications by comparing the Wi-Fi networks that are “visible” to the access device. Further, Duo preserves user privacy and does not track user location or collect any private information. Wi-Fi Fingerprint only lets Duo know if a user has changed location.
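
One way such an anonymized comparison could work is to keep only one-way hashes of the visible networks and measure how much two scans overlap. This is a sketch of the general idea, not Duo's implementation (a production design would also salt hashes per user):

```python
import hashlib

def fingerprint(visible_networks):
    """Anonymize a Wi-Fi scan: keep only one-way hashes of the visible
    network identifiers, never the identifiers themselves."""
    return {hashlib.sha256(n.encode()).hexdigest() for n in visible_networks}

def same_location(fp_old, fp_new, threshold=0.5):
    """Compare two anonymized scans by set overlap (Jaccard index); high
    overlap suggests the device has not moved even though its IP changed,
    e.g. because the user brought up a VPN session."""
    if not fp_old or not fp_new:
        return False
    return len(fp_old & fp_new) / len(fp_old | fp_new) >= threshold

home = fingerprint(["net-a", "net-b", "net-c"])
vpn = fingerprint(["net-a", "net-b", "net-c"])   # same place, new IP
cafe = fingerprint(["net-x"])                    # genuinely new location

same_location(home, vpn)   # same networks visible: no re-authentication
same_location(home, cafe)  # location changed: re-authenticate
```

Note that the server only ever sees hashes, so it can tell *that* the location changed without learning *where* the user is.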

Single Sign-On

A typical organization uses over 250 applications. Single sign-on (SSO) solutions help employees access multiple applications with a single set of credentials and allow administrators to enforce granular policies for application access from a single console. Integrated with MFA or passwordless authentication, SSO serves as a critical access management tool for organizations that want to implement zero trust access to corporate applications.

Chart showing how Duo SSO integrates with SAML 2.0 and OIDC applications

Duo SSO is already popular among Duo’s customers. Now, we are adding two new capabilities that cater to modern enterprises:

1. Support for OpenID Connect (OIDC)

An increasing number of applications use OIDC for authentication. It is a modern authentication protocol that lets application and website developers authenticate users without storing and managing other people’s passwords, which is both difficult and risky. To date, Duo SSO has supported SAML web applications. Supporting OIDC allows us to protect more of the applications that our customers are adopting as we all move towards a mobile-first world and integrate stronger, more modern authentication methods.
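
For context, the heart of an OIDC login is an authorization request carrying the `openid` scope; the application never handles the password at all. A minimal sketch of building such a request (the issuer, client ID, and redirect URI below are placeholders, not real Duo SSO values):

```python
from urllib.parse import urlencode

def build_authorization_url(issuer, client_id, redirect_uri, state, nonce):
    """Start of the OIDC Authorization Code flow: the browser is redirected
    to the identity provider, which authenticates the user and returns a
    code the app later exchanges for an ID token."""
    params = {
        "response_type": "code",          # authorization code flow
        "scope": "openid profile email",  # "openid" is what makes this OIDC
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "state": state,                   # CSRF protection
        "nonce": nonce,                   # binds the ID token to this request
    }
    return f"{issuer}/authorize?{urlencode(params)}"

url = build_authorization_url("https://sso.example.com", "demo-app",
                              "https://app.example.com/callback",
                              state="xyzzy", nonce="n-0S6_WzA2Mj")
```

Because the app only ever receives tokens signed by the provider, a breach of the app leaks no passwords, which is exactly the risk reduction described above.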

2. On-Demand Password Resets

Password resets are expensive for organizations. It is estimated that 20-50% of IT helpdesk tickets are for password resets. And according to a report by Ponemon Institute, large enterprises experience an average loss of $5.2 million a year in user productivity due to password resets.

When logging into browser-based applications, Duo SSO already allows users to reset expired passwords within the same login workflow. We also heard from our customers that users want the option to reset passwords proactively. Now, Duo SSO offers users the convenience of resetting their Active Directory passwords before they expire. This capability further increases user productivity and reduces IT helpdesk tickets.

Screenshot of Duo's self-service password reset prompt

Risk-Based Authentication and enhancements to Duo SSO are available now to all paying customers based on their Duo Edition. If you are not yet a Duo customer, sign up for a free 30-day trial and try out these new capabilities today!


We’d love to hear what you think. Ask a Question, Comment Below, and Stay Connected with Cisco Secure on social!

Cisco Secure Social Channels

Instagram
Facebook
Twitter
LinkedIn

GitHub Breach: Hackers Stole Code-Signing Certificates for GitHub Desktop and Atom

GitHub on Monday disclosed that unknown threat actors managed to exfiltrate encrypted code signing certificates pertaining to some versions of GitHub Desktop for Mac and Atom apps. As a result, the company is taking the step of revoking the exposed certificates out of an abundance of caution. The following versions of GitHub Desktop for Mac have been invalidated: 3.0.2, 3.0.3, 3.0.4, 3.0.5, 3.0.6,

Still Using Passwords? Get Started with Phishing-Resistant, Passwordless Authentication Now!

Going beyond the hype, passwordless authentication is now a reality. Cisco Duo’s passwordless authentication is now generally available across all Duo Editions.

“Cisco Duo simplifies the passwordless journey for organizations that want to implement phishing-resistant authentication and adopt a zero trust security strategy.”
—Jack Poller, Senior Analyst, ESG

We received tremendous participation and feedback during our public preview, and we are now excited to bring this capability to our customers and prospects.

“Over the last few years, we have increased our password complexities and required 2FA wherever possible.  With this approach, employees had more password lock outs, password fatigue, and forgetting their longer passwords due to password rotations.  With Duo Passwordless, we are excited to introduce this feature to our employees to keep our password complexities in place and leverage different Biometric options whether that is using their mobile device, Windows Hello, or a provided FIDO security key. 

The Duo Push for passwordless authentication feature is simple and easy and introduces a more pleasant experience overall.  Using Duo’s device insight and application policies, we are able to leverage and verify the security of the mobile devices before the device is allowed to be used.  To top it off, Duo is connected to our SIEM and our InfoSec team is able to review detailed logs and setup alerts to be able to keep everything secure.”
—Vice President of IT, Banking and Financial Services Customer

As with any new technology, getting to a completely passwordless state will be a journey for many organizations. We see customers typically starting their passwordless journey with web-based applications that support modern authentication. To that effect, Duo’s passwordless authentication is enabled through Duo Single Sign-On (SSO) for federated applications. Customers can choose to integrate their existing SAML Identity provider such as Microsoft (ADFS, Azure), Okta or Ping Identity; or choose to use Duo SSO (Available across all Duo editions).

“Password management is a challenging proposition for many enterprises, especially in light of BYOD and ever increasing sophistication of phishing schemes. Cisco aims to simplify the process with its Duo passwordless authentication that offers out-of-box integrations with popular single sign-on solutions.”
—Will Townsend, Vice President & Principal Analyst, Networking & Security, Moor Insights & Strategy

Duo’s Passwordless Architecture

Duo Passwordless Architecture

Duo offers a flexible choice of passwordless authentication options to meet the needs of businesses and their use cases. This includes:

  1. FIDO2-compliant, phishing-resistant authentication using
    • Platform authenticators – TouchID, FaceID, Windows Hello, Android biometrics
    • Roaming authenticators – security keys (e.g. Yubico, Feitian)
  2. Strong authentication using Duo Mobile authenticator application

No matter which authentication option you choose, it is secure and inherently multi-factor. We are eliminating the weak knowledge factor (something you know – a password), which is shared during authentication and can be easily compromised. Instead, we rely on stronger factors: the inherence factor (something you are – biometrics) and the possession factor (something you have – a registered device). A user completes this authentication in a single gesture without having to remember a complex string of characters. This significantly improves the user experience and mitigates the risk of stolen credentials and man-in-the-middle (MiTM) attacks.

Phishing resistant passwordless authentication with FIDO2

Passwordless authentication using FIDO2

FIDO2 authentication is regarded as phishing-resistant authentication because it:

  1. Removes passwords or shared secrets from the login workflow. Attackers cannot intercept passwords or use stolen credentials available on the dark web.
  2. Creates a strong binding between the browser session and the device being used. Login is allowed only from the device authenticating to an application.
  3. Ensures that the credential (public/private key) exchange can only happen between the device and the registered service provider. This prevents login to fake or phishing websites.
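
Point 2 is enforceable because the authenticator signs over client data that includes the web origin the browser actually visited. A simplified slice of the relying party's check is sketched below; a complete WebAuthn verification also validates the signature, authenticator data, and counter, which this illustration omits:

```python
import base64
import json

def check_client_data(client_data_b64, expected_origin, expected_challenge):
    """The signed clientDataJSON carries the true origin, so an assertion
    collected on a phishing look-alike domain fails this check even if the
    user was tricked into completing the login gesture."""
    data = json.loads(base64.urlsafe_b64decode(client_data_b64))
    return (data.get("type") == "webauthn.get"
            and data.get("origin") == expected_origin
            and data.get("challenge") == expected_challenge)

# A genuine browser would produce clientDataJSON like this during login:
genuine = base64.urlsafe_b64encode(json.dumps({
    "type": "webauthn.get",
    "origin": "https://app.example.com",
    "challenge": "r4nd0m-server-challenge",
}).encode())

check_client_data(genuine, "https://app.example.com", "r4nd0m-server-challenge")
check_client_data(genuine, "https://phish.example.net", "r4nd0m-server-challenge")
```

The origin field is inserted by the browser, not the page, which is why a phishing site cannot forge it.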

Using Duo with FIDO2 authenticators enables organizations to enforce phishing-resistant MFA in their environment. It also complies with the Office of Management and Budget (OMB) guidance issued earlier this year in a memo titled “Moving the U.S. Government Towards Zero Trust Cybersecurity Principles”. The memo specifically requires agencies to use phishing-resistant authentication methods.

We understand that getting the IT infrastructure ready to support FIDO2 can be expensive and is typically a long-term project for organizations. In addition, deploying and managing 3rd party security keys creates IT overhead that some organizations are not able to undertake immediately.

Alternatively, using Duo Push for passwordless authentication is an easy, cost-effective way for many organizations to get started on the passwordless journey, without compromising on security.

Strong passwordless authentication using Duo Mobile

We have incorporated security into the login workflow to bind the browser session and the device being used. So, organizations get the same benefits of eliminating use of stolen credentials and mitigation of phishing attacks. To learn more about passwordless authentication with Duo Push, check out our post: Available Now! Passwordless Authentication Is Just a Tap Away.


Beyond passwordless: Thinking about Zero Trust Access and continuous verification

passwordless authentication

In addition to going passwordless, many organizations are looking to implement zero trust access in their IT environment. This environment typically is a mix of modern and legacy applications, meaning passwordless cannot be universally adopted. At least not until all applications can support modern authentication.

Additionally, organizations need to support a broad range of use cases to allow access from both managed and unmanaged (personal or 3rd party contractor) devices. And IT security teams need visibility into these devices and the ability to enforce compliance to meet the organization’s security policies such as ensuring that the operating system (OS) and web browser versions are up to date. The importance of verifying device posture at the time of authentication is emphasized in the guidance provided by OMB’s zero trust memorandum – “authorization systems should work to incorporate at least one device-level signal alongside identity information about the authenticated user.”

Duo can help organizations adopt a zero trust security model by enforcing strong user authentication across the board, either through passwordless authentication where applicable or through password + MFA where necessary, while providing a consistent user experience. Further, with capabilities such as device trust and granular adaptive policies, and with our vision for Continuous Trusted Access, organizations get a trusted security partner they can rely on for implementing zero trust access in their environment.

To learn more, check out the eBook – Passwordless: The Future of Authentication, which outlines a 5-step path to get started. And watch the passwordless product demo in this on-demand webinar .

Many of our customers have already begun their passwordless journey.  If you are looking to get started as well, sign-up for a free trial and reach out to our amazing representatives.



We’d love to hear what you think. Ask a Question, Comment Below, and Stay Connected with Cisco Secure on social!

Cisco Secure Social Channels

Instagram
Facebook
Twitter
LinkedIn

Battle with Bots Prompts Mass Purge of Amazon, Apple Employee Accounts on LinkedIn

On October 10, 2022, there were 576,562 LinkedIn accounts that listed their current employer as Apple Inc. The next day, half of those profiles no longer existed. A similarly dramatic drop in the number of LinkedIn profiles claiming employment at Amazon comes as LinkedIn is struggling to combat a significant uptick in the creation of fake employee accounts that pair AI-generated profile photos with text lifted from legitimate users.

Jay Pinho is a developer who is working on a product that tracks company data, including hiring. Pinho has been using LinkedIn to monitor daily employee headcounts at several dozen large organizations, and last week he noticed that two of them had far fewer people claiming to work for them than they did just 24 hours previously.

Pinho’s screenshot below shows the daily count of employees as displayed on Amazon’s LinkedIn homepage. Pinho said his scraper shows that the number of LinkedIn profiles claiming current roles at Amazon fell from roughly 1.25 million to 838,601 in just one day, a 33 percent drop:

The number of LinkedIn profiles claiming current positions at Amazon fell 33 percent overnight. Image: twitter.com/jaypinho
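
Pinho's percentage is easy to sanity-check from the counts quoted above:

```python
# Profiles listing current Amazon roles, per Pinho's scraper
before, after = 1_250_000, 838_601
drop = (before - after) / before
print(f"{drop:.0%} drop overnight")  # 33% drop overnight
```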

As stated above, the number of LinkedIn profiles that claimed to work at Apple fell by approximately 50 percent on Oct. 10, according to Pinho’s analysis:

Image: twitter.com/jaypinho

Neither Amazon nor Apple responded to requests for comment. LinkedIn declined to answer questions about the account purges, saying only that the company is constantly working to keep the platform free of fake accounts. In June, LinkedIn acknowledged it was seeing a rise in fraudulent activity happening on the platform.

KrebsOnSecurity hired Menlo Park, Calif.-based SignalHire to check Pinho’s numbers. SignalHire keeps track of active and former profiles on LinkedIn, and during the Oct 9-11 timeframe SignalHire said it saw somewhat smaller but still unprecedented drops in active profiles tied to Amazon and Apple.

“The drop in the percentage of 7-10 percent [of all profiles], as it happened [during] this time, is not something that happened before,” SignalHire’s Anastacia Brown told KrebsOnSecurity.

Brown said the normal daily variation in profile numbers for these companies is plus or minus one percent.

“That’s definitely the first huge drop that happened throughout the time we’ve collected the profiles,” she said.

In late September 2022, KrebsOnSecurity warned about the proliferation of fake LinkedIn profiles for Chief Information Security Officer (CISO) roles at some of the world’s largest corporations. A follow-up story on Oct. 5 showed how the phony profile problem has affected virtually all executive roles at corporations, and how these fake profiles are creating an identity crisis for the business networking site and the companies that rely on it to hire and screen prospective employees.

A day after that second story ran, KrebsOnSecurity heard from a recruiter who noticed the number of LinkedIn profiles that claimed virtually any role in network security had dropped seven percent overnight. LinkedIn declined to comment about that earlier account purge, saying only that, “We’re constantly working at taking down fake accounts.”

A “swarm” of LinkedIn AI-generated bot accounts flagged by a LinkedIn group administrator recently.

It’s unclear whether LinkedIn is responsible for this latest account purge, or if individually affected companies are starting to take action on their own. The timing, however, argues for the former, as the account purges for Apple and Amazon employees tracked by Pinho appeared to happen within the same 24-hour period.

It’s also unclear who or what is behind the recent proliferation of fake executive profiles on LinkedIn. Cybersecurity firm Mandiant (recently acquired by Google) told Bloomberg that hackers working for the North Korean government have been copying resumes and profiles from leading job listing platforms LinkedIn and Indeed, as part of an elaborate scheme to land jobs at cryptocurrency firms.

On this point, Pinho said he noticed an account purge in early September that targeted fake profiles tied to jobs at cryptocurrency exchange Binance. Up until Sept. 3, there were 7,846 profiles claiming current executive roles at Binance. The next day, that number stood at 6,102, a 23 percent drop (by some accounts that 6,102 head count is still wildly inflated).

Fake profiles also may be tied to so-called “pig butchering” scams, wherein people are lured by flirtatious strangers online into investing in cryptocurrency trading platforms that eventually seize any funds when victims try to cash out.

In addition, identity thieves have been known to masquerade on LinkedIn as job recruiters, collecting personal and financial information from people who fall for employment scams.

Nicholas Weaver, a researcher for the International Computer Science Institute at University of California, Berkeley, suggested another explanation for the recent glut of phony LinkedIn profiles: Someone may be setting up a mass network of accounts in order to more fully scrape profile information from the entire platform.

“Even with just a standard LinkedIn account, there’s a pretty good amount of profile information just in the default two-hop networks,” Weaver said. “We don’t know the purpose of these bots, but we know creating bots isn’t free and creating hundreds of thousands of bots would require a lot of resources.”

In response to last week’s story about the explosion of phony accounts on LinkedIn, the company said it was exploring new ways to protect members, such as expanding email domain verification. Under such a scheme, LinkedIn users would be able to publicly attest that their profile is accurate by verifying that they can respond to email at the domain associated with their current employer.

LinkedIn claims that its security systems detect and block approximately 96 percent of fake accounts. And despite the recent purges, LinkedIn may be telling the truth, Weaver said.

“There’s no way you can test for that,” he said. “Because technically, it may be that there were actually 100 million bots trying to sign up at LinkedIn as employees at Amazon.”

Weaver said the apparent mass account purge at LinkedIn underscores the size of the bot problem, and could present a “real and material change” for LinkedIn.

“It may mean the statistics they’ve been reporting about usage and active accounts are off by quite a bit,” Weaver said.

ICANN’s Accountability and Transparency – a Retrospective on the IANA Transition

Verisign Logo

As we passed five years since the Internet Assigned Numbers Authority transition took place, my co-authors and I paused to look back on this pivotal moment; to take stock of what we’ve learned and to re-examine some of the key events leading up to the transition and how careful planning ensured a successful transfer of IANA responsibilities from the United States Government to the Internet Corporation for Assigned Names and Numbers. I’ve excerpted the main themes from our work, which can be found in full on the Internet Governance Project blog.

In March 2014, the National Telecommunications and Information Administration, a division of the U.S. Department of Commerce, announced its intent to “transition key Internet domain name functions to the global multi-stakeholder community” and asked ICANN to “convene global stakeholders to develop a proposal to transition the current role played by NTIA in the coordination of the Internet’s domain name system.” This transition, as announced by NTIA, was a natural progression of ICANN’s multi-stakeholder evolution, and an outcome that was envisioned by its founders.

While there was general support for a transition to the global multi-stakeholder community, many in the ICANN community raised concerns about ICANN’s accountability, transparency and organizational readiness to “stand alone” without NTIA’s legacy supervision. In response, the ICANN community began a phase of intense engagement to ensure a successful transition with all necessary accountability and transparency structures and mechanisms in place.

As a result of this meticulous planning, we believe the IANA functions have been well-served by the transition and the new accountability structures designed and developed by the ICANN community to ensure the security, stability and resiliency of the internet’s unique identifiers.

But what does the future hold? While ICANN’s multi-stakeholder processes and accountability structures are functioning, even in the face of a global pandemic that interrupted our ability to gather and engage in person, they will require ongoing care to ensure they deliver on the original vision of private-sector-led management of the DNS.

The post ICANN’s Accountability and Transparency – a Retrospective on the IANA Transition appeared first on Verisign Blog.

FISSURE - Frequency Independent SDR-based Signal Understanding and Reverse Engineering


Frequency Independent SDR-based Signal Understanding and Reverse Engineering

FISSURE is an open-source RF and reverse engineering framework designed for all skill levels with hooks for signal detection and classification, protocol discovery, attack execution, IQ manipulation, vulnerability analysis, automation, and AI/ML. The framework was built to promote the rapid integration of software modules, radios, protocols, signal data, scripts, flow graphs, reference material, and third-party tools. FISSURE is a workflow enabler that keeps software in one location and allows teams to effortlessly get up to speed while sharing the same proven baseline configuration for specific Linux distributions.

The framework and tools included with FISSURE are designed to detect the presence of RF energy, understand the characteristics of a signal, collect and analyze samples, develop transmit and/or injection techniques, and craft custom payloads or messages. FISSURE contains a growing library of protocol and signal information to assist in identification, packet crafting, and fuzzing. Online archive capabilities exist to download signal files and build playlists to simulate traffic and test systems.

The friendly Python codebase and user interface allows beginners to quickly learn about popular tools and techniques involving RF and reverse engineering. Educators in cybersecurity and engineering can take advantage of the built-in material or utilize the framework to demonstrate their own real-world applications. Developers and researchers can use FISSURE for their daily tasks or to expose their cutting-edge solutions to a wider audience. As awareness and usage of FISSURE grows in the community, so will the extent of its capabilities and the breadth of the technology it encompasses.


Getting Started

Supported

There are two branches within FISSURE to make file navigation easier and reduce code redundancy. The Python2_maint-3.7 branch contains a codebase built around Python2, PyQt4, and GNU Radio 3.7, while the Python3_maint-3.8 branch is built around Python3, PyQt5, and GNU Radio 3.8.

Operating System FISSURE Branch
Ubuntu 18.04 (x64) Python2_maint-3.7
Ubuntu 18.04.5 (x64) Python2_maint-3.7
Ubuntu 18.04.6 (x64) Python2_maint-3.7
Ubuntu 20.04.1 (x64) Python3_maint-3.8
Ubuntu 20.04.4 (x64) Python3_maint-3.8

In-Progress (beta)

Operating System FISSURE Branch
Ubuntu 22.04 (x64) Python3_maint-3.8

Note: Certain software tools do not work for every OS. Refer to Software And Conflicts.

Installation

git clone https://github.com/ainfosec/fissure.git
cd FISSURE
git checkout Python2_maint-3.7   # or: git checkout Python3_maint-3.8
./install

This will automatically install PyQt software dependencies required to launch the installation GUIs if they are not found.

Next, select the option that best matches your operating system (should be detected automatically if your OS matches an option).


Python2_maint-3.7 Python3_maint-3.8

It is recommended to install FISSURE on a clean operating system to avoid existing conflicts. Select all the recommended checkboxes (Default button) to avoid errors while operating the various tools within FISSURE. There will be multiple prompts throughout the installation, mostly asking for elevated permissions and user names. If an item contains a "Verify" section at the end, the installer will run the command that follows and highlight the checkbox item green or red depending on whether any errors are produced by the command. Checked items without a "Verify" section will remain black following the installation.
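
The green/red "Verify" behavior boils down to running the listed command and checking its exit status. A minimal sketch of that logic (illustrative only, not FISSURE's actual installer code):

```python
import subprocess

def verify(command: str) -> bool:
    """Run a checkbox item's 'Verify' command; exit status 0 means the tool
    installed cleanly (green), anything non-zero means errors (red)."""
    result = subprocess.run(command, shell=True, capture_output=True)
    return result.returncode == 0

verify("true")   # exits 0 -> would be highlighted green
verify("false")  # exits 1 -> would be highlighted red
```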


Usage

fissure

Refer to the FISSURE Help menu for more details on usage.

Details

Components

  • Dashboard
  • Central Hub (HIPRFISR)
  • Target Signal Identification (TSI)
  • Protocol Discovery (PD)
  • Flow Graph & Script Executor (FGE)


Capabilities

Signal Detector



IQ Manipulation


Signal Lookup


Pattern Recognition


Attacks


Fuzzing


Signal Playlists


Image Gallery


Packet Crafting


Scapy Integration


CRC Calculator


Logging


Lessons

FISSURE comes with several helpful guides to become familiar with different technologies and techniques. Many include steps for using various tools that are integrated into FISSURE.

  • Lesson1: OpenBTS
  • Lesson2: Lua Dissectors
  • Lesson3: Sound eXchange
  • Lesson4: ESP Boards
  • Lesson5: Radiosonde Tracking
  • Lesson6: RFID
  • Lesson7: Data Types
  • Lesson8: Custom GNU Radio Blocks
  • Lesson9: TPMS
  • Lesson10: Ham Radio Exams
  • Lesson11: Wi-Fi Tools

Roadmap

  • Add more hardware types, RF protocols, signal parameters, analysis tools
  • Support more operating systems
  • Create a signal conditioner, feature extractor, and signal classifier with selectable AI/ML techniques
  • Develop class material around FISSURE (RF Attacks, Wi-Fi, GNU Radio, PyQt, etc.)
  • Transition the main FISSURE components to a generic sensor node deployment scheme

Contributing

Suggestions for improving FISSURE are strongly encouraged. Leave a comment in the Discussions page if you have any thoughts regarding the following:

  • New feature suggestions and design changes
  • Software tools with installation steps
  • New lessons or additional material for existing lessons
  • RF protocols of interest
  • More hardware and SDR types for integration
  • IQ analysis scripts in Python
  • Installation corrections and improvements

Contributions to improve FISSURE are crucial to expediting its development. Any contributions you make are greatly appreciated. If you wish to contribute through code development, please fork the repo and create a pull request:

  1. Fork the project
  2. Create your feature branch (git checkout -b feature/AmazingFeature)
  3. Commit your changes (git commit -m 'Add some AmazingFeature')
  4. Push to the branch (git push origin feature/AmazingFeature)
  5. Open a pull request

Creating Issues to bring attention to bugs is also welcomed.

Collaborating

Contact Assured Information Security, Inc. (AIS) Business Development to propose and formalize any FISSURE collaboration opportunities–whether that is through dedicating time towards integrating your software, having the talented people at AIS develop solutions for your technical challenges, or integrating FISSURE into other platforms/applications.

License

GPL-3.0

For license details, see LICENSE file.

Contact

Follow on Twitter: @FissureRF, @AinfoSec

Chris Poore - Assured Information Security, Inc. - poorec@ainfosec.com

Business Development - Assured Information Security, Inc. - bd@ainfosec.com

Acknowledgments

Special thanks to Dr. Samuel Mantravadi and Joseph Reith for their contributions to this project.



What Is Synthetic Identity Theft?

By: McAfee

It’s too bad cybercriminals don’t funnel their creativity into productive pursuits, because they’re constantly coming up with nefarious new ways to eke out money and information from unsuspecting people. One of their newest schemes is synthetic identity theft, a type of identity theft that can happen to anyone. Luckily, there are ways to lower the chance of it happening to you, and a few measures you can take to limit the harm if it does. Plus, when you’re able to identify the early signs, you can minimize the damage to your finances and your credit.

Here’s everything you need to know about synthetic identity theft in order to keep your and your family’s information safe. 

What Is Synthetic Identity Theft? 

Synthetic identity theft occurs when a cybercriminal steals a real Social Security Number (SSN) but fabricates the rest of the details that are associated with that SSN, such as the full name and birthdate. With this valid SSN, they’re able to create an entirely new identity and use it to take out loans, apply for credit cards, or even purchase a house.  

This form of identity theft is more difficult to detect than traditional identity theft. When a criminal steals someone’s entire identity – their name, birthdate, address, and SSN – there are more flags that could raise the alarm that something is amiss. Additionally, in some cases of synthetic identity theft, cybercriminals play the long game, building up excellent credit with their new fake identity for months or even years. Then, once they’ve squeezed as much as they can from that great credit, they rack up huge charges against it and flee. Only then, when creditors demand payment, does the rightful owner of the SSN find out their identity was compromised.

Synthetic identity theft can severely damage the credit or finances of the person to whom the SSN truly belongs. It most often occurs to people who don’t closely monitor their credit, such as children, people in jail, or the elderly, but it can happen to anyone. 

Signs Your Identity May Be Stolen 

The signs of synthetic identity theft are a bit different than the signs of regular identity theft. In traditional identity theft, you may receive bills to your address either with someone else’s name on them or for organizations with which you don’t have an account. However, in the case of synthetic identity theft, since the thief makes up an entirely new name and address, you’re unlikely to accidentally get their mail. 

The major red flag is if your credit score is drastically lower (or higher) than you remember it being. Did you know that you can request one free credit report per year from each major credit bureau? Get in the habit of ordering reports regularly to keep tabs on your credit and confirm that there are no new accounts that you didn’t create. 

How to Protect Your Identity 

Check out these tips on how to protect your identity online to hopefully prevent it from ever happening to you: 

  • Never share your SSN. There is a very short list of organizations that require your SSN: the IRS, your bank, the Registry of Motor Vehicles, and your work’s payroll department. If anyone else requests your SSN, it’s not rude to ask why they need it. In cases where you do have to share your SSN, never do so over electronic correspondence. Either visit the organization in person or call them from a private location that is clear of eavesdroppers. 
  • Set up credit locks. If you aren’t planning to file for a credit card or take out a loan anytime soon, consider locking your credit. This is a process where you reach out to the major credit bureaus and notify them to deny any new claims or requests made against your name or SSN. Locking your credit is a great preventive measure that can guard against many criminal scenarios. 
  • Keep an eye on the news. Cybersecurity breaches of major companies occur with more frequency than we’d all like to see. One way to protect your identity is to watch the headlines to keep tabs on recent breaches. If a company with which you have an account is affected, take action immediately. This includes changing your password to your account and diligently tracking your bank statements for any signs that you may have been affected. 

Identity Protection Provides Security, Peace of Mind 

McAfee Identity Protection is a comprehensive identity monitoring service that protects your identity and privacy from the fastest-growing financial crimes in America. McAfee can scan risky websites to see if your information was leaked in a recent breach. Additionally, with the new security freeze feature, you can deny access to your credit report, which stops fraudsters from opening new credit cards or bank or utility accounts in your name. Finally, if the worst does happen, McAfee Identity Protection offers up to $1 million in identity theft coverage and restoration. 

If you don’t do so already, commit to a routine of monitoring your credit and financial accounts. It only takes a few minutes every month. To fill in the gaps, trust McAfee! 

The post What Is Synthetic Identity Theft? appeared first on McAfee Blog.

How 1-Time Passcodes Became a Corporate Liability

Phishers are enjoying remarkable success using text messages to steal remote access credentials and one-time passcodes from employees at some of the world’s largest technology companies and customer support firms. A recent spate of SMS phishing attacks from one cybercriminal group has spawned a flurry of breach disclosures from affected companies, which are all struggling to combat the same lingering security threat: The ability of scammers to interact directly with employees through their mobile devices.

In mid-June 2022, a flood of SMS phishing messages began targeting employees at commercial staffing firms that provide customer support and outsourcing to thousands of companies. The missives asked users to click a link and log in at a phishing page that mimicked their employer’s Okta authentication page. Those who submitted credentials were then prompted to provide the one-time password needed for multi-factor authentication.

The phishers behind this scheme used newly-registered domains that often included the name of the target company, and sent text messages urging employees to click on links to these domains to view information about a pending change in their work schedule.

The phishing sites leveraged a Telegram instant message bot to forward any submitted credentials in real-time, allowing the attackers to use the phished username, password and one-time code to log in as that employee at the real employer website. But because of the way the bot was configured, it was possible for security researchers to capture the information being sent by victims to the public Telegram server.

This data trove was first reported by security researchers at Singapore-based Group-IB, which dubbed the campaign “0ktapus” for the attackers targeting organizations using identity management tools from Okta.com.

“This case is of interest because despite using low-skill methods it was able to compromise a large number of well-known organizations,” Group-IB wrote. “Furthermore, once the attackers compromised an organization they were quickly able to pivot and launch subsequent supply chain attacks, indicating that the attack was planned carefully in advance.”

It’s not clear how many of these phishing text messages were sent out, but the Telegram bot data reviewed by KrebsOnSecurity shows they generated nearly 10,000 replies over approximately two months of sporadic SMS phishing attacks targeting more than a hundred companies.

A great many responses came from those who were apparently wise to the scheme, as evidenced by the hundreds of hostile replies that included profanity or insults aimed at the phishers: The very first reply recorded in the Telegram bot data came from one such employee, who responded with the username “havefuninjail.”

Still, thousands replied with what appear to be legitimate credentials — many of them including one-time codes needed for multi-factor authentication. On July 20, the attackers turned their sights on internet infrastructure giant Cloudflare.com, and the intercepted credentials show at least three employees fell for the scam.

Image: Cloudflare.com

In a blog post earlier this month, Cloudflare said it detected the account takeovers and that no Cloudflare systems were compromised. Cloudflare said it does not rely on one-time passcodes as a second factor, so there was nothing to provide to the attackers. But Cloudflare said it wanted to call attention to the phishing attacks because they would probably work against most other companies.

“This was a sophisticated attack targeting employees and systems in such a way that we believe most organizations would be likely to be breached,” Cloudflare CEO Matthew Prince wrote. “On July 20, 2022, the Cloudflare Security team received reports of employees receiving legitimate-looking text messages pointing to what appeared to be a Cloudflare Okta login page. The messages began at 2022-07-20 22:50 UTC. Over the course of less than 1 minute, at least 76 employees received text messages on their personal and work phones. Some messages were also sent to the employees’ family members.”

On three separate occasions, the phishers targeted employees at Twilio.com, a San Francisco based company that provides services for making and receiving text messages and phone calls. It’s unclear how many Twilio employees received the SMS phishes, but the data suggest at least four Twilio employees responded to a spate of SMS phishing attempts on July 27, Aug. 2, and Aug. 7.

On that last date, Twilio disclosed that on Aug. 4 it became aware of unauthorized access to information related to a limited number of Twilio customer accounts through a sophisticated social engineering attack designed to steal employee credentials.

“This broad based attack against our employee base succeeded in fooling some employees into providing their credentials,” Twilio said. “The attackers then used the stolen credentials to gain access to some of our internal systems, where they were able to access certain customer data.”

That “certain customer data” included information on roughly 1,900 users of the secure messaging app Signal, which relied on Twilio to provide phone number verification services. In its disclosure on the incident, Signal said that with their access to Twilio’s internal tools the attackers were able to re-register those users’ phone numbers to another device.

On Aug. 25, food delivery service DoorDash disclosed that a “sophisticated phishing attack” on a third-party vendor allowed attackers to gain access to some of DoorDash’s internal company tools. DoorDash said intruders stole information on a “small percentage” of users that have since been notified. TechCrunch reported last week that the incident was linked to the same phishing campaign that targeted Twilio.

This phishing gang apparently had great success targeting employees of all the major mobile wireless providers, but most especially T-Mobile. Between July 10 and July 16, dozens of T-Mobile employees fell for the phishing messages and provided their remote access credentials.

“Credential theft continues to be an ongoing issue in our industry as wireless providers are constantly battling bad actors that are focused on finding new ways to pursue illegal activities like this,” T-Mobile said in a statement. “Our tools and teams worked as designed to quickly identify and respond to this large-scale smishing attack earlier this year that targeted many companies. We continue to work to prevent these types of attacks and will continue to evolve and improve our approach.”

This same group saw hundreds of responses from employees at some of the largest customer support and staffing firms, including Teleperformanceusa.com, Sitel.com and Sykes.com. Teleperformance did not respond to requests for comment. KrebsOnSecurity did hear from Christopher Knauer, global chief security officer at Sitel Group, the customer support giant that recently acquired Sykes. Knauer said the attacks leveraged newly-registered domains and asked employees to approve upcoming changes to their work schedules.

Image: Group-IB.

Knauer said the attackers set up the phishing domains just minutes in advance of spamming links to those domains in phony SMS alerts to targeted employees. He said such tactics largely sidestep automated alerts generated by companies that monitor brand names for signs of new phishing domains being registered.

“They were using the domains as soon as they became available,” Knauer said. “The alerting services don’t often let you know until 24 hours after a domain has been registered.”

On July 28 and again on Aug. 7, several employees at email delivery firm Mailchimp provided their remote access credentials to this phishing group. According to an Aug. 12 blog post, the attackers used their access to Mailchimp employee accounts to steal data from 214 customers involved in cryptocurrency and finance.

On Aug. 15, the hosting company DigitalOcean published a blog post saying it had severed ties with MailChimp after its Mailchimp account was compromised. DigitalOcean said the MailChimp incident resulted in a “very small number” of DigitalOcean customers experiencing attempted compromises of their accounts through password resets.

According to interviews with multiple companies hit by the group, the attackers are mostly interested in stealing access to cryptocurrency, and to companies that manage communications with people interested in cryptocurrency investing. In an Aug. 3 blog post from email and SMS marketing firm Klaviyo.com, the company’s CEO recounted how the phishers gained access to the company’s internal tools, and used that to download information on 38 crypto-related accounts.

A flow chart of the attacks by the SMS phishing group known as 0ktapus and ScatterSwine. Image: Amitai Cohen for Wiz.io. twitter.com/amitaico.

The ubiquity of mobile phones became a lifeline for many companies trying to manage their remote employees throughout the Coronavirus pandemic. But these same mobile devices are fast becoming a liability for organizations that use them for phishable forms of multi-factor authentication, such as one-time codes generated by a mobile app or delivered via SMS.

Because as we can see from the success of this phishing group, this type of data extraction is now being massively automated, and employee authentication compromises can quickly lead to security and privacy risks for the employer’s partners or for anyone in their supply chain.

Unfortunately, a great many companies still rely on SMS for employee multi-factor authentication. According to a report this year from Okta, 47 percent of workforce customers deploy SMS and voice factors for multi-factor authentication. That’s down from 53 percent that did so in 2018, Okta found.
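The weakness of these one-time codes is structural: whether delivered by SMS or generated by an app, a TOTP code is just a short HMAC-derived number that is valid for anyone who holds it during its time window, which is why a phisher relaying codes in real time can log in as the victim. A minimal sketch of the standard TOTP algorithm (RFC 6238, SHA-1 variant) makes this concrete:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password per RFC 6238 (HMAC-SHA-1)."""
    counter = unix_time // step                # same value for the whole 30s window
    msg = struct.pack(">Q", counter)           # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Nothing ties the code to the site it is typed into: anyone who learns it
# before the time step expires can replay it against the real login page.
```

A FIDO security key, by contrast, signs a challenge bound to the site's origin, so a credential captured on a look-alike domain is useless to the attacker.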

Some companies (like Knauer’s Sitel) have taken to requiring that all remote access to internal networks be managed through work-issued laptops and/or mobile devices, which are loaded with custom profiles that can’t be accessed through other devices.

Others are moving away from SMS and one-time code apps and toward requiring employees to use physical FIDO multi-factor authentication devices such as security keys, which can neutralize phishing attacks because any stolen credentials can’t be used unless the phishers also have physical access to the user’s security key or mobile device.

This came in handy for Twitter, which announced last year that it was moving all of its employees to using security keys, and/or biometric authentication via their mobile device. The phishers’ Telegram bot reported that on June 16, 2022, five employees at Twitter gave away their work credentials. In response to questions from KrebsOnSecurity, Twitter confirmed several employees were relieved of their employee usernames and passwords, but that its security key requirement prevented the phishers from abusing that information.

Twitter accelerated its plans to improve employee authentication following the July 2020 security incident, wherein several employees were phished and relieved of credentials for Twitter’s internal tools. In that intrusion, the attackers used Twitter’s tools to hijack accounts for some of the world’s most recognizable public figures, executives and celebrities — forcing those accounts to tweet out links to bitcoin scams.

“Security keys can differentiate legitimate sites from malicious ones and block phishing attempts that SMS 2FA or one-time password (OTP) verification codes would not,” Twitter said in an Oct. 2021 post about the change. “To deploy security keys internally at Twitter, we migrated from a variety of phishable 2FA methods to using security keys as our only supported 2FA method on internal systems.”

Update, 6:02 p.m. ET: Clarified that Cloudflare does not rely on TOTP (one-time multi-factor authentication codes) as a second factor for employee authentication.

Okta Hackers Behind Twilio and Cloudflare Attacks Hit Over 130 Organizations

The threat actor behind the attacks on Twilio and Cloudflare earlier this month has been linked to a broader phishing campaign aimed at 136 organizations that resulted in a cumulative compromise of 9,931 accounts. The activity has been codenamed 0ktapus by Group-IB because the initial goal of the attacks was to "obtain Okta identity credentials and two-factor authentication (2FA) codes from

dBmonster - Track WiFi Devices With Their Received Signal Strength


With dBmonster you can scan for nearby WiFi devices and track them through the signal strength (dBm) of the packets they send (sniffed with TShark). These dBm values are plotted to a graph with matplotlib. It can help you pinpoint the location of nearby WiFi devices (use a directional WiFi antenna for the best results) or find out how well your self-made antenna performs (antenna radiation patterns).
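Raw per-packet RSSI readings are noisy, so a plotted trace is easier to interpret after light smoothing. A small sketch (not part of dBmonster; the function name is illustrative) of averaging dBm samples before handing them to matplotlib:

```python
from collections import deque

def smooth_rssi(samples, window=5):
    """Sliding-window average over a stream of RSSI readings in dBm."""
    buf = deque(maxlen=window)
    out = []
    for dbm in samples:
        buf.append(dbm)
        out.append(sum(buf) / len(buf))  # mean of the last `window` readings
    return out

# e.g. pass smooth_rssi(readings) to matplotlib's plot() instead of the raw trace
```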


Features on Linux and MacOS

Feature Linux MacOS
Listing WiFi interfaces
Track & scan on 2.4GHz
Track & scan on 5GHz
Scanning for AP
Scanning for STA
Beep when device found

Installation

git clone https://github.com/90N45-d3v/dBmonster
cd dBmonster

# Install required tools (On MacOS without sudo)
sudo python requirements.py

# Start dBmonster
sudo python dBmonster.py

Has been successfully tested on...

Platform
WiFi Adapter
Kali Linux ALFA AWUS036NHA, DIY Bi-Quad WiFi Antenna
MacOS Monterey Internal card 802.11 a/b/g/n/ac (MBP 2019)
* should work on any MacOS or Debian based system and with every WiFi card that supports monitor-mode

Troubleshooting for MacOS

Normally, monitor-mode on the internal WiFi card can only be enabled on MacOS with Apple's airport utility. Somehow, Wireshark (here, TShark) can enable it too. Cool, but because of this workaround on top of the MacOS system, there are many issues running dBmonster on MacOS: after some time it may freeze, and/or you may have to stop dBmonster/TShark manually from the CLI using the ps command. If you want to run it anyway, here are some helpful tips:

Kill dBmonster, if you can't stop it over the GUI

Look if there are any processes, named dBmonster, tshark or python:

sudo ps -U root

Now kill them with the following command:

sudo kill <PID OF PROCESS>

Stop monitor-mode, if it's enabled after running dBmonster

sudo airport <WiFi INTERFACE NAME> sniff

Press control + c after a few seconds

* Please contact me on Twitter if you have any more problems

Working on...

  • Capture signal strength data for offline graphs
  • Generate graphs from normal wireshark.pcapng file
  • Generate multiple graphs in one coordinate system

Additional information

  • If the tracked WiFi device is out of range or doesn't send any packets, the graph stops plotting until there is new data, so don't panic ;)
  • dBmonster hasn't been tested on all systems... If you run into errors or something goes wrong, contact me.
  • If you used dBmonster on a platform or WiFi adapter that isn't listed, please open an issue (with platform and WiFi adapter information) and I will add your setup to the README.md


Nearly 1,900 Signal Messenger Accounts Potentially Compromised in Twilio Hack

Popular end-to-end encrypted messaging service Signal on Monday disclosed the cyberattack aimed at Twilio earlier this month may have exposed the phone numbers of roughly 1,900 users. "For about 1,900 users, an attacker could have attempted to re-register their number to another device or learned that their number was registered to Signal," the company said. "All users can rest assured that

GitHub Moves to Guard Open Source Against Supply Chain Attacks

The popular Microsoft-owned code repository plans to roll out code signing, which will help beef up the security of open source projects.

Kubeaudit - Tool To Audit Your Kubernetes Clusters Against Common Security Controls


kubeaudit is a command line tool and a Go package to audit Kubernetes clusters for various different security concerns, such as:

  • run as non-root
  • use a read-only root filesystem
  • drop scary capabilities, don't add new ones
  • don't run privileged
  • and more!

tldr. kubeaudit makes sure you deploy secure containers!

Package

To use kubeaudit as a Go package, see the package docs.

The rest of this README will focus on how to use kubeaudit as a command line tool.

Command Line Interface (CLI)

Installation

Brew

brew install kubeaudit

Download a binary

Kubeaudit has official releases that are blessed and stable: Official releases

DIY build

Master may have newer features than the stable releases. If you need a newer feature not yet included in a release, make sure you're using Go 1.17+ and run the following:

go get -v github.com/Shopify/kubeaudit

Start using kubeaudit with the Quick Start or view all the supported commands.

Kubectl Plugin

Prerequisite: kubectl v1.12.0 or later

With kubectl v1.12.0 introducing easy pluggability of external functions, kubeaudit can be invoked as kubectl audit by

  • running make plugin and having $GOPATH/bin available in your path.

or

  • renaming the binary to kubectl-audit and having it available in your path.

Docker

We also release a Docker image: shopify/kubeaudit. To run kubeaudit as a job in your cluster see Running kubeaudit in a cluster.

Quick Start

kubeaudit has three modes:

  1. Manifest mode
  2. Local mode
  3. Cluster mode

Manifest Mode

If a Kubernetes manifest file is provided using the -f/--manifest flag, kubeaudit will audit the manifest file.

Example command:

kubeaudit all -f "/path/to/manifest.yml"

Example output:

$ kubeaudit all -f "internal/test/fixtures/all_resources/deployment-apps-v1.yml"

---------------- Results for ---------------

apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
namespace: deployment-apps-v1

--------------------------------------------

-- [error] AppArmorAnnotationMissing
Message: AppArmor annotation missing. The annotation 'container.apparmor.security.beta.kubernetes.io/container' should be added.
Metadata:
Container: container
MissingAnnotation: container.apparmor.security.beta.kubernetes.io/container

-- [error] AutomountServiceAccountTokenTrueAndDefaultSA
Message: Default service account with token mounted. automountServiceAccountToken should be set to 'false' or a non-default service account should be used.

-- [error] CapabilityShouldDropAll
Message: Capability not set to ALL. Ideally, you should drop ALL capabilities and add the specific ones you need to the add list.
Metadata:
Container: container
Capability: AUDIT_WRITE
...

If no errors with a given minimum severity are found, the following is returned:

All checks completed. 0 high-risk vulnerabilities found

Autofix

Manifest mode also supports autofixing all security issues using the autofix command:

kubeaudit autofix -f "/path/to/manifest.yml"

To write the fixed manifest to a new file instead of modifying the source file, use the -o/--output flag.

kubeaudit autofix -f "/path/to/manifest.yml" -o "/path/to/fixed"

To fix a manifest based on custom rules specified on a kubeaudit config file, use the -k/--kconfig flag.

kubeaudit autofix -k "/path/to/kubeaudit-config.yml" -f "/path/to/manifest.yml" -o "/path/to/fixed"

Cluster Mode

Kubeaudit can detect if it is running within a container in a cluster. If so, it will try to audit all Kubernetes resources in that cluster:

kubeaudit all

Local Mode

Kubeaudit will try to connect to a cluster using the local kubeconfig file ($HOME/.kube/config). A different kubeconfig location can be specified using the --kubeconfig flag. To specify a context of the kubeconfig, use the -c/--context flag.

kubeaudit all --kubeconfig "/path/to/config" --context my_cluster

For more information on kubernetes config files, see https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/

Audit Results

Kubeaudit produces results with three levels of severity:

  • Error: A security issue or invalid kubernetes configuration
  • Warning: A best practice recommendation
  • Info: Informational, no action required. This includes results that are overridden

The minimum severity level can be set using the --minseverity/-m flag.

By default kubeaudit will output results in a human-readable way. If the output is intended to be further processed, it can be set to output JSON using the --format json flag. To output results as logs (the previous default) use --format logrus. Some output formats include colors to make results easier to read in a terminal. To disable colors (for example, if you are sending output to a text file), you can use the --no-color flag.
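The JSON output is convenient to post-process in scripts. A sketch of filtering results by minimum severity, mirroring what --minseverity does (the record fields below are illustrative, not kubeaudit's exact JSON schema):

```python
# Severity ranks matching kubeaudit's three result levels
SEVERITY = {"info": 0, "warning": 1, "error": 2}

def filter_min_severity(results, minimum="warning"):
    """Keep only audit results at or above the given severity level."""
    cutoff = SEVERITY[minimum]
    return [r for r in results if SEVERITY[r["level"]] >= cutoff]

# illustrative records only -- consult kubeaudit's actual JSON output for real field names
results = [
    {"AuditResultName": "AppArmorAnnotationMissing", "level": "error"},
    {"AuditResultName": "ImageTagMissing", "level": "warning"},
    {"AuditResultName": "SeccompProfileMissing", "level": "info"},
]
```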

If there are results of severity level error, kubeaudit will exit with exit code 2. This can be changed using the --exitcode/-e flag.

For all the ways kubeaudit can be customized, see Global Flags.

Commands

Command Description Documentation
all Runs all available auditors, or those specified using a kubeaudit config. docs
autofix Automatically fixes security issues. docs
version Prints the current kubeaudit version.

Auditors

Auditors can also be run individually.

Command Description Documentation
apparmor Finds containers running without AppArmor. docs
asat Finds pods using an automatically mounted default service account. docs
capabilities Finds containers that do not drop the recommended capabilities or add new ones. docs
deprecatedapis Finds any resource defined with a deprecated API version. docs
hostns Finds containers that have HostPID, HostIPC or HostNetwork enabled. docs
image Finds containers which do not use the desired version of an image (via the tag) or use an image without a tag. docs
limits Finds containers which exceed the specified CPU and memory limits or do not specify any. docs
mounts Finds containers that have sensitive host paths mounted. docs
netpols Finds namespaces that do not have a default-deny network policy. docs
nonroot Finds containers running as root. docs
privesc Finds containers that allow privilege escalation. docs
privileged Finds containers running as privileged. docs
rootfs Finds containers which do not have a read-only filesystem. docs
seccomp Finds containers running without Seccomp. docs

Global Flags

Short Long Description
--format The output format to use (one of "pretty", "logrus", "json") (default is "pretty")
--kubeconfig Path to local Kubernetes config file. Only used in local mode (default is $HOME/.kube/config)
-c --context The name of the kubeconfig context to use
-f --manifest Path to the yaml configuration to audit. Only used in manifest mode. You may use - to read from stdin.
-n --namespace Only audit resources in the specified namespace. Not currently supported in manifest mode.
-g --includegenerated Include generated resources in scan (such as Pods generated by deployments). If you would like kubeaudit to produce results for generated resources (for example if you have custom resources or want to catch orphaned resources where the owner resource no longer exists) you can use this flag.
-m --minseverity Set the lowest severity level to report (one of "error", "warning", "info") (default is "info")
-e --exitcode Exit code to use if there are results with severity of "error". Conventionally, 0 is used for success and all non-zero codes for an error. (default is 2)
--no-color Don't use colors in the output (default is false)

Configuration File

The kubeaudit config can be used for two things:

  1. Enabling only some auditors
  2. Specifying configuration for auditors

Any configuration that can be specified using flags for the individual auditors can be represented using the config.

The config has the following format:

enabledAuditors:
# Auditors are enabled by default if they are not explicitly set to "false"
apparmor: false
asat: false
capabilities: true
deprecatedapis: true
hostns: true
image: true
limits: true
mounts: true
netpols: true
nonroot: true
privesc: true
privileged: true
rootfs: true
seccomp: true
auditors:
capabilities:
# add capabilities needed to the add list, so kubeaudit won't report errors
allowAddList: ['AUDIT_WRITE', 'CHOWN']
deprecatedapis:
# If no versions are specified and the 'deprecatedapis' auditor is enabled, WARN
# results will be generated for the resources defined with a deprecated API.
currentVersion: '1.22'
targetedVersion: '1.25'
image:
# If no image is specified and the 'image' auditor is enabled, WARN results
# will be generated for containers which use an image without a tag
image: 'myimage:mytag'
limits:
# If no limits are specified and the 'limits' auditor is enabled, WARN results
# will be generated for containers which have no cpu or memory limits specified
cpu: '750m'
memory: '500Mi'

For more details about each auditor, including a description of the auditor-specific configuration in the config, see the Auditor Docs.

Note: The kubeaudit config is not the same as the kubeconfig file specified with the --kubeconfig flag, which refers to the Kubernetes config file (see Local Mode). Also note that only the all and autofix commands support using a kubeaudit config. It will not work with other commands.

Note: If flags are used in combination with the config file, flags will take precedence.

Override Errors

Security issues can be ignored for specific containers or pods by adding override labels. This means the auditor will produce info results instead of error results and the audit result name will have Allowed appended to it. The labels are documented in each auditor's documentation, but the general format for auditors that support overrides is as follows:

An override label consists of a key and a value.

The key is a combination of the override type (container or pod) and an override identifier which is unique to each auditor (see the docs for the specific auditor). The key can take one of two forms depending on the override type:

  1. Container overrides, which override the auditor for that specific container, are formatted as follows:
container.audit.kubernetes.io/[container name].[override identifier]
  2. Pod overrides, which override the auditor for all containers within the pod, are formatted as follows:
audit.kubernetes.io/pod.[override identifier]
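The two key formats above can be captured in a tiny helper (a sketch for illustration only; the override identifier is auditor-specific, so check the docs for the auditor you want to override):

```python
def override_label_key(identifier, container=None):
    """Build a kubeaudit override label key for a pod or a specific container."""
    if container is not None:
        # container override: applies only to the named container
        return f"container.audit.kubernetes.io/{container}.{identifier}"
    # pod override: applies to all containers within the pod
    return f"audit.kubernetes.io/pod.{identifier}"
```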

If the value is set to a non-empty string, it will be displayed in the info result as the OverrideReason:

$ kubeaudit asat -f "auditors/asat/fixtures/service-account-token-true-allowed.yml"

---------------- Results for ---------------

apiVersion: v1
kind: ReplicationController
metadata:
name: replicationcontroller
namespace: service-account-token-true-allowed

--------------------------------------------

-- [info] AutomountServiceAccountTokenTrueAndDefaultSAAllowed
Message: Audit result overridden: Default service account with token mounted. automountServiceAccountToken should be set to 'false' or a non-default service account should be used.
Metadata:
OverrideReason: SomeReason

As per Kubernetes spec, value must be 63 characters or less and must be empty or begin and end with an alphanumeric character ([a-z0-9A-Z]) with dashes (-), underscores (_), dots (.), and alphanumerics between.

Multiple override labels (for multiple auditors) can be added to the same resource.

See the specific auditor docs for the auditor you wish to override for examples.

To learn more about labels, see https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/

Contributing

If you'd like to fix a bug, contribute a feature or just correct a typo, please feel free to do so as long as you follow our Code of Conduct.

  1. Create your own fork!
  2. Get the source: go get github.com/Shopify/kubeaudit
  3. Go to the source: cd $GOPATH/src/github.com/Shopify/kubeaudit
  4. Add your fork as a remote: git remote add fork https://github.com/you-are-awesome/kubeaudit
  5. Create your feature branch: git checkout -b awesome-new-feature
  6. Install Kind
  7. Run the tests to see everything is working as expected: make test (to run tests without Kind: USE_KIND=false make test)
  8. Commit your changes: git commit -am 'Adds awesome feature'
  9. Push to the branch: git push fork
  10. Sign the Contributor License Agreement
  11. Submit a PR (All PRs must be labeled with
    (Bug fix),
    (New feature),
    (Documentation update), or
    (Breaking changes) )
  12. ???
  13. Profit

Note that if you didn't sign the CLA before opening your PR, you can re-run the check by adding a comment to the PR that says "I've signed the CLA!"



How Cisco Duo Is Simplifying Secure Access for Organizations Around the World

At Cisco Duo, we continually strive to enhance our products to make it easy for security practitioners to apply access policies based on the principles of zero trust. This blog highlights how Duo is achieving that goal by simplifying user and administrator experience and supporting data sovereignty requirements for customers around the world. Read on to get an overview of what we have been delivering to our customers in those areas in the past few months.

Simplifying Administrator and End-User Experience for Secure Access 

Duo strives to make secure access frictionless for employees while reducing the administrative burden on IT (Information Technology) and helpdesk teams. This is made possible thanks to the strong relationship between our customers and our user research team. The insights we gained helped us implement some exciting enhancements to Duo Single Sign-On (SSO) and Device Trust capabilities.

Duo SSO unifies identities across systems and reduces the number of credentials a user must remember and enter to gain access to resources. Active Directory (AD) is the most popular authentication source connected to Duo SSO, accounting for almost 80% of all setups. To make Duo’s integration with AD even easier to implement, we have introduced Duo SSO support for multiple Active Directory forests for organizations that have users in multiple domains. Additionally, we added the Expired Password Resets feature in Duo SSO. It provides an easy experience for users to quickly reset their expired Active Directory password, log into their application, and carry on with their day. Continuing the theme of self-service, we introduced a hosted device management portal, a highly requested feature from customers. Now administrators no longer need to host and manage the portal, and end users can log in with Duo SSO to manage their authentication devices (e.g., Touch ID, security keys, mobile phones) without needing to open IT helpdesk tickets.

We are also simplifying the administrator experience. We have made it easy for administrators to configure Duo SSO with Microsoft 365 using an out-of-the-box integration. Duo SSO layers Duo’s strong authentication and flexible policy engine on top of Microsoft 365 logins. Further, we have heard from many customers that they want to deliver a seamless, on-brand login experience for their workforce. To support this, we have made custom branding so simple that administrators can quickly customize their end-user authentication experience from the settings page in the Duo Admin Panel.

Device Trust is a critical capability required to enable secure access for the modern workforce from any location. We have made it easy for organizations to adopt device trust and distinguish between managed and unmanaged devices. Organizations can enforce a Trusted Endpoints policy to allow access only from managed devices for critical applications. We have eliminated the requirement to deploy and manage device certificates to enforce this policy; the Device Health application now checks the managed status of a device. This lowers administrative overhead while enabling organizations to achieve a better balance between security and usability. We have also added out-of-the-box integrations with device management solutions such as Active Directory domain-joined devices, Microsoft Intune, Jamf Pro and VMware Workspace ONE. For organizations that have deployed a solution that is not listed above, Duo provides a Device API that works with any enterprise device management system.

Supporting Global Data Sovereignty Requirements

To support our growing customer base around the world, Duo expanded its data center presence to Australia, Singapore, and Japan in September last year. And now Duo is thrilled to announce the launch of two new data centers in the UK and India. Both the new and existing data centers will allow customers to meet local requirements, all while maintaining ISO 27001 and SOC 2 compliance and a 99.999% service availability goal.

The launch of the new data centers is the backbone of Duo’s international expansion strategy. In the last two years, Duo has met key international growth milestones and completed the C5 attestation (Germany), AgID certification (Italy) and IRAP assessment (Australia) – all of which demonstrate that Duo meets the mandatory baseline standards for use by the public sector in the countries listed above. Check out this Privacy Data Sheet to learn more about Cisco Duo’s commitment to our customer’s data privacy and data sovereignty.

Cisco Duo Continues to Democratize Security 

That is a summary of what we have been up to here at Cisco Duo over the past few months. But we are not done yet! Stay tuned for more exciting announcements at RSA Conference 2022 next week. Visit our booth at RSAC 2022 and the World of Solutions at Cisco Live 2022.

In the meantime, check out this on-demand #CiscoChat panel discussion with real-world security practitioners on how they have implemented secure access best practices for hybrid work using Duo. And if you do not want to wait, sign up for a 30-day trial and experience how Duo can simplify secure access for your workforce.

 


We’d love to hear what you think. Ask a Question, Comment Below, and Stay Connected with Cisco Secure on social!

Cisco Secure Social Channels

Instagram
Facebook
Twitter
LinkedIn

Critical cryptographic Java security blunder patched – update now!

Either know the private key and use it scrupulously in your digital signature calculation.... or just send a bunch of zeros instead.

IRP Panel Sanctions Afilias, Clears the Way for ICANN to Decide .web Disputes

Verisign Logo

The .web Independent Review Process (IRP) Panel issued a Final Decision six months ago, in May 2021. Immediately thereafter, the claimant, Afilias Domains No. 3 Limited (now a shell entity known as AltaNovo Domains Limited), filed an application seeking reconsideration of the Final Decision under Rule 33 of the arbitration rules. Rule 33 allows for the clarification of an ambiguous ruling and allows the Panel the opportunity to supplement its decision if it inadvertently failed to consider a claim or defense, but specifically does not permit wholesale reconsideration of a final decision. The problem for Afilias’ application, as we said at the time, was that it sought exactly that.

The Panel ruled on Afilias’ application on Dec. 21, 2021. In this latest ruling, the Panel not only rejected Afilias’ application in its entirety, but went further and sanctioned Afilias for having filed it in the first place. Quoting from the ruling:

In the opinion of the Panel, under the guise of seeking an additional decision, the Application is seeking reconsideration of core elements of the Final Decision. Likewise, under the guise of seeking interpretation, the Application is requesting additional declarations and advisory opinions on a number of questions, some of which had not been discussed in the proceedings leading to the Final Decision.

In such circumstances, the Panel cannot escape the conclusion that the Application is “frivolous” in the sense of it “having no sound basis (as in fact or law).” This finding suffices to entitle [ICANN] to the cost shifting decision it is seeking…the Panel hereby unanimously…Grants [ICANN’s] request that the Panel shift liability for the legal fees incurred by [ICANN] in connection with the Application, fixes at US $236,884.39 the amount of the legal fees to be reimbursed to [ICANN] by [Afilias]…and orders [Afilias] to pay this amount to [ICANN] within thirty (30) days….

In light of the Panel’s finding that Afilias’ Rule 33 application was so improper and frivolous as to be sanctionable, a serious question arises about the motives in filing it. Reading the history of the .web proceedings, one possible motivation becomes clearer. The community will recall that, five years ago, Donuts (through its wholly-owned subsidiary Ruby Glen) failed in its bid to enjoin the .web auction when a federal court rejected false allegations that Nu Dot Co (NDC) had failed to disclose an ownership change. After the auction was conducted, Afilias then picked up the litigation baton from Donuts. Afilias’ IRP complaint demanded that the arbitration Panel nullify the auction results and award .web to itself, thereby bypassing ICANN completely. In the May 2021 Final Decision the IRP Panel gave an unsurprising but firm “no” to Afilias’ request to supplant ICANN’s role, and instead directed ICANN’s Board to review the complaints about the conduct of the .web contention set members and then make a determination on delegation.

A result of this five-year battle has been to prevent ICANN from passing judgment on the .web situation. These proceedings have unsuccessfully sought to have courts and arbitrators stand in the shoes of ICANN, rather than letting ICANN discharge its mandated duty to determine what, if anything, should be done in response to the allegations regarding the pre-auction conduct of the contention set. This conduct includes Afilias’ own wrongdoing in violating the pre-auction communications blackout imposed in the Auction Rules. That misconduct is set forth in a July 23, 2021 letter by NDC to ICANN, since published by ICANN, containing written proof of Afilias’ violation of the auction rules. In its Dec. 21 ruling, the Panel made it unmistakably clear that it is ICANN – not a judge or a panel of arbitrators – who must first review all allegations of misconduct by the contention set, including the powerful evidence indicating that it is Afilias’ .web application, not NDC’s, that should be disqualified.

If Afilias’ motivation has been to avoid ICANN’s scrutiny of its own pre-auction misconduct, especially after exiting the registry business when it appears that its only significant asset is the .web application itself, then what we should expect to see next is for Afilias/AltaNovo to manufacture another delaying attack on the Final Decision. Perhaps this is why its litigation counsel has already written ICANN threatening to continue litigation “in all available fora whether within or outside of the United States of America.…”

It is long past time to put an end to this five-year campaign, which has interfered with ICANN’s duty to decide on the delegation of .web, harming the interests of the broader internet community. The new ruling obliges ICANN to take a decisive step in that direction.

The post IRP Panel Sanctions Afilias, Clears the Way for ICANN to Decide .web Disputes appeared first on Verisign Blog.

Industry Insights: RDAP Becomes Internet Standard

Technical header image of code

This article originally appeared in The Domain Name Industry Brief (Volume 18, Issue 3)

Earlier this year, the Internet Engineering Task Force’s (IETF’s) Internet Engineering Steering Group (IESG) announced that several Proposed Standards related to the Registration Data Access Protocol (RDAP), including three that I co-authored, were being promoted to the prestigious designation of Internet Standard. Initially accepted as proposed standards six years ago, RFC 7480, RFC 7481, RFC 9082 and RFC 9083 now comprise the new Standard 95. RDAP allows users to access domain registration data and could one day replace its predecessor, the WHOIS protocol. RDAP is designed to address some widely recognized deficiencies in the WHOIS protocol and can help improve the registration data chain of custody.

In the discussion that follows, I’ll look back at the registry data model, given the evolution from WHOIS to the RDAP protocol, and examine how the RDAP protocol can help improve upon the more traditional, WHOIS-based registry models.

Registration Data Directory Services Evolution, Part 1: The WHOIS Protocol

In 1998, Network Solutions was responsible for providing both consumer-facing registrar and back-end registry functions for the legacy .com, .net and .org generic top-level domains (gTLDs). Network Solutions collected information from domain name registrants, used that information to process domain name registration requests, and published both collected data and data derived from processing registration requests (such as expiration dates and status values) in a public-facing directory service known as WHOIS.

From Network Solutions’ perspective as the registry, the chain of custody for domain name registration data involved only two parties: the registrant (or their agent) and Network Solutions. With the introduction of a Shared Registration System (SRS) in 1999, multiple registrars began to compete for domain name registration business by using the registry services operated by Network Solutions. The introduction of additional registrars and the separation of registry and registrar functions added parties to the chain of custody of domain name registration data. Information flowed from the registrant, to the registrar, and then to the registry, typically crossing multiple networks and jurisdictions, as depicted in Figure 1.

Flowchart of registration process. Information flowed from the registrant, to the registrar, and then to the registry.
Figure 1. Flow of information in early data registration process.

Registration Data Directory Services Evolution, Part 2: The RDAP Protocol

Over time, new gTLDs and new registries came into existence, new WHOIS services (with different output formats) were launched, and countries adopted new laws and regulations focused on protecting the personal information associated with domain name registration data. As time progressed, it became clear that WHOIS lacked several needed features, such as:

  • Standardized command structures
  • Output and error structures
  • Support for internationalization and localization
  • User identification
  • Authentication and access control

The IETF made multiple attempts to add features to WHOIS to address some of these issues, but none of them was widely adopted. A possible replacement protocol, the Internet Registry Information Service (IRIS), was standardized in 2005, but it too saw little uptake. Something else was needed, and the IETF went back to work to produce what became known as RDAP.

RDAP was specified in a series of five IETF Proposed Standard RFC documents, including the following, all of which were published in March 2015:

  • RFC 7480, HTTP Usage in the Registration Data Access Protocol (RDAP)
  • RFC 7481, Security Services for the Registration Data Access Protocol (RDAP)
  • RFC 7482, Registration Data Access Protocol (RDAP) Query Format
  • RFC 7483, JSON Responses for the Registration Data Access Protocol (RDAP)
  • RFC 7484, Finding the Authoritative Registration Data (RDAP) Service

Only when RDAP was standardized did we start to see broad deployment of a possible WHOIS successor by domain name registries, domain name registrars and address registries.

The broad deployment of RDAP led to RFCs 7480 and 7481 becoming Internet Standard RFCs (part of Internet Standard 95) without modification in March 2021. As operators of registration data directory services implemented and deployed RDAP, they found places in the other specifications where minor corrections and clarifications were needed without changing the protocol itself. RFC 7482 was updated to become Internet Standard RFC 9082, which was published in June 2021. RFC 7483 was updated to become Internet Standard RFC 9083, which was also published in June 2021. All were added to Standard 95. As of the writing of this article, RFC 7484 is in the process of being reviewed and updated for elevation to Internet Standard status.

RDAP Advantages

Operators of registration data directory services who implemented RDAP can take advantage of key features not available in the WHOIS protocol. I’ve highlighted some of these important features in the table below.

RDAP features and their benefits:

  • Standard, well-understood, and widely available HTTP transport: Relatively easy to implement, deploy and operate using common web service tools, infrastructure and applications.
  • Securable via HTTPS: Helps provide confidentiality for RDAP queries and responses, reducing the amount of information that is disclosed to monitors.
  • Structured output in JavaScript Object Notation (JSON): JSON is well-understood and tool friendly, which makes it easier for clients to parse and format responses from all servers without the need for software that’s customized for different service providers.
  • Easily extensible: Designed to support the addition of new features without breaking existing implementations. This makes it easier to address future functional needs with less risk of implementation incompatibility.
  • Internationalized output, with full support for Unicode character sets: Allows implementations to provide human-readable inputs and outputs that are represented in a language appropriate to the local operating environment.
  • Referral capability, leveraging HTTP constructs: Provides information to software clients that allows the client to retrieve additional information from other RDAP servers. This can be used to hide complexity from human users.
  • Support for standardized authentication: RDAP can take full advantage of all of the client identification, authentication and authorization methods that are available to web services. This means that RDAP can be used to provide the basic framework for differentiated access to registration data based on attributes associated with the user and the user’s query.
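The HTTP transport and structured JSON output can be illustrated with a short sketch. Per RFC 9082, a domain lookup is an HTTPS GET on `<base>/domain/<name>`, and the server returns RFC 9083 JSON. This sketch works offline against a trimmed sample response rather than making a live request; the Verisign .com base URL shown is illustrative (real base URLs are discovered via the IANA bootstrap registry per RFC 7484).

```python
import json

# Per RFC 9082, a domain lookup is an HTTP GET on <base>/domain/<name>.
def rdap_domain_url(base: str, domain: str) -> str:
    return f"{base.rstrip('/')}/domain/{domain}"

url = rdap_domain_url("https://rdap.verisign.com/com/v1", "example.com")

# A trimmed RDAP response (RFC 9083 JSON), as a server might return it:
sample = json.loads("""
{
  "objectClassName": "domain",
  "ldhName": "EXAMPLE.COM",
  "status": ["client delete prohibited"],
  "events": [{"eventAction": "expiration", "eventDate": "2025-08-13T04:00:00Z"}]
}
""")

# Structured JSON means clients need no per-registry screen-scraping:
expiry = next(e["eventDate"] for e in sample["events"]
              if e["eventAction"] == "expiration")
```

Contrast this with WHOIS, where each operator's free-text output format required its own parser.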

Verisign and RDAP

Verisign’s RDAP service, which was originally launched as an experimental implementation several years before gaining widespread adoption, allows users to look up records in the registry database for all registered .com, .net, .name, .cc and .tv domain names. It also supports Internationalized Domain Names (IDNs).

We at Verisign were pleased not only to see the IETF recognize the importance of RDAP by elevating it to an Internet Standard, but also that the protocol became a requirement for ICANN-accredited registrars and registries as of August 2019. Widespread implementation of the RDAP protocol makes registration data more secure, stable and resilient, and we are hopeful that the community will evolve the prescribed implementation of RDAP such that the full power of this rich protocol will be deployed.

You can learn more in the RDAP Help section of the Verisign website, and access helpful documents such as the RDAP technical implementation guide and the RDAP response profile.

The post Industry Insights: RDAP Becomes Internet Standard appeared first on Verisign Blog.

Afilias’ Rule Violations Continue to Delay .WEB

Verisign Logo

As I noted on May 26, the final decision issued on May 20 in the Independent Review Process (IRP) brought by Afilias against the Internet Corporation for Assigned Names and Numbers (ICANN) rejected Afilias’ petition to nullify the results of the public auction for .WEB, and it further rejected Afilias’ demand that it be awarded .WEB (at a price substantially lower than the winning bid). Instead, as we urged, the IRP Panel determined that the ICANN Board should move forward with reviewing the objections made about .WEB, and make a decision on delegation thereafter.

Afilias and its counsel both issued press releases claiming victory in an attempt to put a positive spin on the decision. In contrast to this public position, Afilias then quickly filed a 68-page application asking the Panel to reverse its decision. This application is, however, not permitted by the arbitration rules – which expressly prohibit such requests for “do overs.”

In addition to Afilias’ facially improper application, there is an even more serious instance of rule-breaking now described in a July 23 letter from Nu Dot Co (NDC) to ICANN. This letter sets out in considerable detail how Afilias engaged in prohibited conduct during the blackout period immediately before the .WEB auction in 2016, in violation of the auction rules. The letter shows how this rule violation is more than just a technicality; it was part of a broader scheme to rig the auction. The attachments to the letter shed light on how, during the blackout period, Afilias offered NDC money to stop ICANN’s public auction in favor of a private process – which would in turn deny the broader internet community the benefit of the proceeds of a public auction.

Afilias’ latest application to reverse the Panel’s decision, like its pre-auction misconduct five years ago, has only led to unnecessary delay of the delegation of .WEB. It is long past time for this multi-year campaign to come to an end. The Panel’s unanimous ruling makes clear that it strongly agrees.

The post Afilias’ Rule Violations Continue to Delay .WEB appeared first on Verisign Blog.

IRP Panel Dismisses Afilias’ Claims to Reverse .WEB Auction and Award .WEB to Afilias

Verisign Logo

On Thursday, May 20, a final decision was issued in the Independent Review Process (IRP) brought by Afilias against the Internet Corporation for Assigned Names and Numbers (ICANN), rejecting Afilias’ petition to nullify the results of the July 27, 2016 public auction for the .WEB new generic top level domain (gTLD) and to award .WEB to Afilias at a substantially lower, non-competitive price. Nu Dotco, LLC (NDC) submitted the highest bid at the auction and was declared the winner, over Afilias’ lower, losing bid. Despite Afilias’ repeated objections to participation by NDC or Verisign in the IRP, the Panel ordered that NDC and Verisign could participate in the IRP in a limited way each as amicus curiae.

Consistent with NDC, Verisign and ICANN’s position in the IRP, the Order dismisses “the Claimant’s [Afilias’] request that Respondent [ICANN] be ordered by the Panel to disqualify NDC’s bid for .WEB, proceed with contracting the Registry Agreement for .WEB with the Claimant in accordance with the New gTLD Program Rules, and specify the bid price to be paid by the Claimant.” Contrary to Afilias’ position, all objections to the auction are referred to ICANN for determination. This includes Afilias’ objections as well as objections by NDC that Afilias violated the auction rules by engaging in secret discussions during the Blackout Period under the Program Rules.

The Order Dismisses All of Afilias’ Claims of Violations by NDC or Verisign

Afilias’ claims for relief were based on its allegation that NDC violated the New gTLD Program Rules by entering into an agreement with Verisign, under which Verisign provided funds for NDC’s participation in the .WEB auction in exchange for NDC’s commitment, if it prevailed at the auction and entered into a registry agreement with ICANN, to assign its .WEB registry agreement to Verisign upon ICANN’s consent to the assignment. As the Panel determined, the relief requested by Afilias far exceeded the scope of proper IRP relief provided for in ICANN’s Bylaws, which limit an IRP to a determination whether or not ICANN has exceeded its mission or otherwise failed to comply with its Articles of Incorporation and Bylaws.

Issued two and a half years after Afilias initiated its IRP, the Panel’s decision unequivocally rejects Afilias’ attempt to misuse the IRP to rule on claims of NDC or Verisign misconduct or obtain the .WEB gTLD for itself despite its losing bid. The Panel held that it is for ICANN, which has the requisite knowledge, expertise, and experience, to determine whether NDC’s contract with Verisign violated ICANN’s Program Rules. The Panel further determined that it would be improper for the Panel to dictate what should be the consequences of an alleged violation of the rules contained in the gTLD Applicant Guidebook, if any took place. The Panel therefore denied Afilias’ requests for a binding declaration that ICANN must disqualify NDC’s bid for violating the Guidebook rules and award .WEB to Afilias.

Despite pursuing its claims in the IRP for over two years — at a cost of millions of dollars — Afilias failed to offer any evidence to support its allegations that NDC improperly failed to update its application and/or assigned its application to Verisign. Instead, the evidence was to the contrary. Indeed, Afilias failed to offer testimony from a single Afilias witness during the hearing on the merits, including witnesses with direct knowledge of relevant events and industry practices. It is apparent that Afilias failed to call as witnesses any of its own officers or employees because, testifying under penalty of perjury, they would have been forced to contradict the false allegations advanced by Afilias during the IRP. By contrast, ICANN, NDC and Verisign each supported their respective positions appropriately by calling witnesses to testify and be subject to cross-examination by the three-arbitrator panel and Afilias, under oath, with respect to the facts refuting Afilias’ claims.

Afilias also argued in the IRP that ICANN is a competition regulator and that ICANN’s commitment, contained in its Bylaws, to promote competition required ICANN to disqualify NDC’s bid for the .WEB gTLD because NDC’s contract with Verisign may lead to Verisign’s operation of .WEB. The Panel rejected Afilias’ claim, agreeing with ICANN and Verisign that “ICANN does not have the power, authority, or expertise to act as a competition regulator by challenging or policing anticompetitive transactions or conduct.” The Panel found ICANN’s evidence “compelling” that it fulfills its mission to promote competition through the expansion of the domain name space and facilitation of innovative approaches to the delivery of domain name registry services, not by acting as an antitrust regulator. The Panel quoted Afilias’ own statements to this effect, which were made outside of the IRP proceedings when Afilias had different interests.

Although the Panel rejected Afilias’ Guidebook and competition claims, it did find that the manner in which ICANN addressed complaints about the .WEB matter did not meet all of the commitments in its Bylaws. But even so, Afilias was awarded only a small portion of the legal fees it hoped to recover from ICANN.

Moving Forward with .WEB

It is now up to ICANN to move forward expeditiously to determine, consistent with its Bylaws, the validity of any objections under the New gTLD Program Rules in connection with the .WEB auction, including NDC and Verisign’s position that Afilias should be disqualified from making any further objections to NDC’s application.

As Verisign and NDC pointed out in 2016, the evidence during the IRP establishes that collusive conduct by Afilias in connection with the auction violated the Guidebook. The Guidebook and Auction Rules both prohibit applicants within a contention set from discussing “bidding strategies” or “settlement” during a designated Blackout Period in advance of an auction. Violation of the Blackout Period is a “serious violation” of ICANN’s rules and may result in forfeiture of an applicant’s application. The evidence adduced in the IRP proves that Afilias committed such violations and should be disqualified. On July 22, just four days before the public ICANN auction for .WEB, Afilias contacted NDC, following Afilias’ discussions with other applicants, to try to negotiate a private auction if ICANN would delay the public auction. Afilias knew the Blackout Period was in effect, but nonetheless violated it in an attempt to persuade NDC to participate in a private auction. Under the settlement Afilias proposed, Afilias would make millions of dollars even if it lost the auction, rather than auction proceeds being used for the internet community through the investment of such proceeds by ICANN as determined by the community.

All of the issues raised during the IRP were the subject of extensive briefing, evidentiary submissions and live testimony during the hearing on the merits, providing ICANN with a substantial record on which to render a determination with respect to .WEB and proceed forward with delegation of the new gTLD. Verisign stands ready to assist ICANN in any way we can to quickly resolve this matter so that domain names within the .WEB gTLD can finally be made available to businesses and consumers.

As a final observation: Afilias no longer operates a registry business, and has neither the platform, organization, nor necessary consents from ICANN, to support one. Inconsistent with Afilias’ claims in the IRP, Afilias transferred its entire registry business to Donuts during the pendency of the IRP. Although long in the works, the sale was not disclosed by Afilias either before or during the IRP hearings, nor, remarkably, did Afilias produce any company witness for examination who might have disclosed the sale to the panel of arbitrators or others. Based on a necessary public disclosure of the Donuts sale after the hearings and before entry of the Panel’s Order, the Panel included in its final Order a determination that it is for ICANN to determine whether the Afilias’ sale is itself a basis for a denial of Afilias’ claims with respect to .WEB.

Verisign’s analysis of the Independent Review Process decision regarding the awarding of the .web top level domain.

The post IRP Panel Dismisses Afilias’ Claims to Reverse .WEB Auction and Award .WEB to Afilias appeared first on Verisign Blog.

Verisign Support for AAPI Communities and COVID Relief in India

Verisign Logo

At Verisign we have a commitment to making a positive and lasting impact on the global internet community, and on the communities in which we live and work.

This commitment guided our initial efforts to help these communities respond to and recover from the effects of the COVID-19 pandemic, over a year ago. And at the end of 2020, our sense of partnership with our local communities helped shape our efforts to alleviate COVID-related food insecurity in the areas where we have our most substantial footprint. This same sense of community is reflected in our partnership with Virginia Ready, which aims to help individuals in our home State of Virginia access training and certification to pivot to new careers in the technology sector.

We also believe that our team is one of our most important assets. We particularly value the diverse origins of our people; we have colleagues from all over the world who, in turn, are closely connected to their own communities both in the United States and elsewhere. A significant proportion of our staff are of either Asian American and Pacific Islander (AAPI) or South Asian origin, and today we are pleased to announce two charitable contributions, via our Verisign Cares program, directly related to these two communities.

First, Verisign is pleased to associate ourselves with the Stand with Asian Americans initiative, launched by AAPI business leaders in response to recent and upsetting episodes of aggression toward their community. Verisign supports this initiative and the pledge for which it stands, and has made a substantial contribution to the initiative’s partner, the Asian Pacific Fund, to help uplift the AAPI community.

Second, and after consultation with our staff, we have directed significant charitable contributions to organizations helping to fight the worsening wave of COVID-19 in India. Through Direct Relief we will be helping to provide oxygen and other medical equipment to hospitals, while through GiveIndia we will be supporting families in India impacted by COVID-19.

The ‘extended Verisign family’ of our employees, and their families and their communities, means a tremendous amount to us – it is only thanks to our talented and dedicated people that we are able to continue to fulfill our mission of enabling the world to connect online with reliability and confidence, anytime, anywhere.

Verisign expands its community support initiatives, with contributions to COVID-19 relief in India and to the Stand with Asian Americans initiative.

The post Verisign Support for AAPI Communities and COVID Relief in India appeared first on Verisign Blog.

Information Protection for the Domain Name System: Encryption and Minimization

This is the final post in a multi-part series on cryptography and the Domain Name System (DNS).

In previous posts in this series, I’ve discussed a number of applications of cryptography to the DNS, many of them related to the Domain Name System Security Extensions (DNSSEC).

In this final blog post, I’ll turn attention to another application that may appear at first to be the most natural, though as it turns out, may not always be the most necessary: DNS encryption. (I’ve also written about DNS encryption as well as minimization in a separate post on DNS information protection.)

DNS Encryption

In 2014, the Internet Engineering Task Force (IETF) chartered the DNS PRIVate Exchange (dprive) working group to start work on encrypting DNS queries and responses exchanged between clients and resolvers.

That work resulted in RFC 7858, published in 2016, which describes how to run the DNS protocol over the Transport Layer Security (TLS) protocol, also known as DNS over TLS, or DoT.

DNS encryption between clients and resolvers has since gained further momentum, with multiple browsers and resolvers supporting DNS over Hypertext Transfer Protocol Secure (HTTPS), or DoH, with the formation of the Encrypted DNS Deployment Initiative, and with further enhancements such as oblivious DoH.

The dprive working group turned its attention to the resolver-to-authoritative exchange during its rechartering in 2018. And in October of last year, ICANN’s Office of the CTO published its strategy recommendations for the ICANN-managed Root Server (IMRS, i.e., the L-Root Server), an effort motivated in part by concern about potential “confidentiality attacks” on the resolver-to-root connection.

From a cryptographer’s perspective the prospect of adding encryption to the DNS protocol is naturally quite interesting. But this perspective isn’t the only one that matters, as I’ve observed numerous times in previous posts.

Balancing Cryptographic and Operational Considerations

A common theme in this series on cryptography and the DNS has been the question of whether the benefits of a technology are sufficient to justify its cost and complexity.

This question came up not only in my review of two newer cryptographic advances, but also in my remarks on the motivation for two established tools for providing evidence that a domain name doesn’t exist.

Recall that the two tools — the Next Secure (NSEC) and Next Secure 3 (NSEC3) records — were developed because a simpler approach didn’t have an acceptable risk / benefit tradeoff. In the simpler approach, to provide a relying party assurance that a domain name doesn’t exist, a name server would return a response, signed with its private key, “<name> doesn’t exist.”

From a cryptographic perspective, the simpler approach would meet its goal: a relying party could then validate the response with the corresponding public key. However, the approach would introduce new operational risks, because the name server would now have to perform online cryptographic operations.

The name server would not only have to protect its private key from compromise, but would also have to protect the cryptographic operations from overuse by attackers. That could open another avenue for denial-of-service attacks that could prevent the name server from responding to legitimate requests.

The designers of DNSSEC mitigated these operational risks by developing NSEC and NSEC3, which gave the option of moving the private key and the cryptographic operations offline, into the name server’s provisioning system. This better solution struck a balance between cryptography and operations. The theme is now returning to view through the recent efforts around DNS encryption.

Like the simpler initial approach for authentication, DNS encryption may meet its goal from a cryptographic perspective. But the operational perspective is important as well. As designers again consider where and how to deploy private keys and cryptographic operations across the DNS ecosystem, alternatives with a better balance are a desirable goal.

Minimization Techniques

In addition to encryption, there has been research into other, possibly lower-risk alternatives that can be used in place of or in addition to encryption at various levels of the DNS.

We call these techniques collectively minimization techniques.

Qname Minimization

In “textbook” DNS resolution, a resolver sends the same full domain name to a root server, a top-level domain (TLD) server, a second-level domain (SLD) server, and any other server in the chain of referrals, until it ultimately receives an authoritative answer to a DNS query.

This is the way that DNS resolution has been practiced for decades, and it’s also one of the reasons for the recent interest in protecting information on the resolver-to-authoritative exchange: The full domain name is more information than all but the last name server needs to know.

One such minimization technique, known as qname minimization, was identified by Verisign researchers in 2011 and documented in RFC 7816 in 2016. (In 2015, Verisign announced a royalty-free license to its qname minimization patents.)

With qname minimization, instead of sending the full domain name to each name server, the resolver sends only as much as the name server needs either to answer the query or to refer the resolver to a name server at the next level. This follows the principle of minimum disclosure: the resolver sends only as much information as the name server needs to “do its job.” As Matt Thomas described in his recent blog post on the topic, nearly half of all .com and .net queries received by Verisign’s TLD servers were in a minimized form as of August 2020.
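
The label-by-label behavior described above can be sketched in a few lines. This is an illustrative toy, assuming simple dot-separated names; a real resolver implementing RFC 7816 also chooses query types and handles empty non-terminals, CNAMEs, and other edge cases that this sketch ignores.

```python
# Toy sketch of qname minimization: build the successively longer query
# names a minimizing resolver sends at each referral level.
def minimized_qnames(full_name: str) -> list:
    labels = full_name.rstrip(".").split(".")
    # Start from the rightmost label (the TLD) and add one label per level.
    return [".".join(labels[-i:]) + "." for i in range(1, len(labels) + 1)]

print(minimized_qnames("www.example.com"))
# -> ['com.', 'example.com.', 'www.example.com.']
# "com." goes to a root server, "example.com." to a .com server, and
# only the final query reveals the full name.
```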

Additional Minimization Techniques

Other techniques that are part of this new chapter in DNS protocol evolution include NXDOMAIN cut processing [RFC 8020] and aggressive DNSSEC caching [RFC 8198]. Both leverage information present in the DNS to reduce the amount and sensitivity of DNS information exchanged with authoritative name servers. In aggressive DNSSEC caching, for example, the resolver analyzes NSEC and NSEC3 range proofs obtained in response to previous queries to determine on its own whether a domain name doesn’t exist. This means that the resolver doesn’t always have to ask the authoritative server system about a domain name it hasn’t seen before.
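
The caching behavior just described can be illustrated with a toy range check. This sketch assumes validated NSEC ranges are already in the cache, uses plain lexicographic comparison rather than the DNSSEC canonical ordering of RFC 4034, and uses hypothetical names.

```python
# Toy sketch of aggressive DNSSEC caching (RFC 8198): reuse cached,
# validated NSEC ranges to answer non-existence locally.
cached_nsec_ranges = [
    ("alpha.example.", "delta.example."),  # proves nothing exists in between
    ("golf.example.", "kilo.example."),
]

def covered_by_cache(qname: str) -> bool:
    """True if a cached range already proves qname does not exist."""
    return any(lo < qname < hi for lo, hi in cached_nsec_ranges)

assert covered_by_cache("bravo.example.")     # answered from cache: NXDOMAIN
assert not covered_by_cache("echo.example.")  # not covered: query upstream
```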

All of these techniques, as well as additional minimization alternatives I haven’t mentioned, have one important common characteristic: they only change how the resolver operates during the resolver-authoritative exchange. They have no impact on the authoritative name server or on other parties during the exchange itself. They thereby mitigate disclosure risk while also minimizing operational risk.

The resolver’s exchanges with authoritative name servers, prior to minimization, were already relatively less sensitive because they represented aggregate interests of the resolver’s many clients¹. Minimization techniques lower the sensitivity even further at the root and TLD levels: the resolver sends only its aggregate interests in TLDs to root servers, and only its interests in SLDs to TLD servers. The resolver still sends the aggregate interests in full domain names at the SLD level and below², and may also include certain client-related information at these levels, such as the client-subnet extension. The lower levels therefore may have different protection objectives than the upper levels.

Conclusion

Minimization techniques and encryption together give DNS designers additional tools for protecting DNS information — tools that when deployed carefully can balance between cryptographic and operational perspectives.

These tools complement those I’ve described in previous posts in this series. Some have already been deployed at scale, such as DNSSEC with its NSEC and NSEC3 non-existence proofs. Others are at various earlier stages, like NSEC5 and tokenized queries, and still others contemplate “post-quantum” scenarios and how to address them. (And there are yet other tools that I haven’t covered in this series, such as authenticated resolution and adaptive resolution.)

Modern cryptography is just about as old as the DNS. Both have matured since their introduction in the late 1970s and early 1980s respectively. Both bring fundamental capabilities to our connected world. Both continue to evolve to support new applications and to meet new security objectives. While they’ve often moved forward separately, as this blog series has shown, there are also opportunities for them to advance together. I look forward to sharing more insights from Verisign’s research in future blog posts.

Read the complete six blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization

1. This argument obviously holds more weight for large resolvers than for small ones — and doesn’t apply for the less common case of individual clients running their own resolvers. However, small resolvers and individual clients seeking additional protection retain the option of sending sensitive queries through a large, trusted resolver, or through a privacy-enhancing proxy. The focus in our discussion is primarily on large resolvers.

2. In namespaces where domain names are registered at the SLD level, i.e., under an effective TLD, the statements in this post about “root and TLD” and “SLD level and below” should be “root through effective TLD” and “below effective TLD level.” For simplicity, I’ve placed the “zone cut” between TLD and SLD in this post.

Minimization techniques and encryption together give DNS designers additional tools for protecting DNS information — tools that when deployed carefully can balance between cryptographic and operational perspectives.

The post Information Protection for the Domain Name System: Encryption and Minimization appeared first on Verisign Blog.

Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys

This is the fifth in a multi-part series on cryptography and the Domain Name System (DNS).

In my last article, I described efforts underway to standardize new cryptographic algorithms that are designed to be less vulnerable to potential future advances in quantum computing. I also reviewed operational challenges to be considered when adding new algorithms to the DNS Security Extensions (DNSSEC).

In this post, I’ll look at hash-based signatures, a family of post-quantum algorithms that could be a good match for DNSSEC from the perspective of infrastructure stability.

I’ll also describe Verisign Labs research into a new concept called synthesized zone signing keys that could mitigate the impact of the large signature size for hash-based signatures, while still maintaining this family’s protections against quantum computing.

(Caveat: The concepts reviewed in this post are part of Verisign’s long-term research program and do not necessarily represent Verisign’s plans or positions on new products or services. Concepts developed in our research program may be subject to U.S. and/or international patents and/or patent applications.)

A Stable Algorithm Rollover

The DNS community’s root key signing key (KSK) rollover illustrates how complicated a change to DNSSEC infrastructure can be. Although successfully accomplished, this change was delayed by ICANN to ensure that enough resolvers had the public key required to validate signatures generated with the new root KSK private key.

Now imagine the complications if the DNS community also had to ensure that enough resolvers not only had a new key but also had a brand-new algorithm.

Imagine further what might happen if a weakness in this new algorithm were to be found after it was deployed. While there are procedures for emergency key rollovers, emergency algorithm rollovers would be more complicated, and perhaps controversial as well if a clear successor algorithm were not available.

I’m not suggesting that any of the post-quantum algorithms that might be standardized by NIST will be found to have a weakness. But confidence in cryptographic algorithms can be gained and lost over many years, sometimes decades.

From the perspective of infrastructure stability, therefore, it may make sense for DNSSEC to have a backup post-quantum algorithm built in from the start — one for which cryptographers already have significant confidence and experience. This algorithm might not be as efficient as other candidates, but there is less of a chance that it would ever need to be changed. This means that the more efficient candidates could be deployed in DNSSEC with the confidence that they have a stable fallback. It’s also important to keep in mind that the prospect of quantum computing is not the only reason system developers need to be considering new algorithms from time to time. As public-key cryptography pioneer Martin Hellman wisely cautioned, new classical (non-quantum) attacks could also emerge, whether or not a quantum computer is realized.

Hash-Based Signatures

The 1970s were a foundational time for public-key cryptography, producing not only the RSA algorithm and the Diffie-Hellman algorithm (which also provided the basic model for elliptic curve cryptography), but also hash-based signatures, invented in 1979 by another public-key cryptography founder, Ralph Merkle.

Hash-based signatures are interesting because their security depends only on the security of an underlying hash function.

It turns out that hash functions, as a concept, hold up very well against quantum computing advances — much better than currently established public-key algorithms do.

This means that Merkle’s hash-based signatures, now more than 40 years old, can rightly be considered the oldest post-quantum digital signature algorithm.

If it turns out that an individual hash function doesn’t hold up — whether against a quantum computer or a classical computer — then the hash function itself can be replaced, as cryptographers have been doing for years. That will likely be easier than changing to an entirely different post-quantum algorithm, especially one that involves very different concepts.

The conceptual stability of hash-based signatures is a reason that interoperable specifications are already being developed for variants of Merkle’s original algorithm. Two approaches are described in RFC 8391, “XMSS: eXtended Merkle Signature Scheme” and RFC 8554, “Leighton-Micali Hash-Based Signatures.” Another approach, SPHINCS+, is an alternate in NIST’s post-quantum project.

Figure 1. Conventional DNSSEC signatures. DNS records are signed with the ZSK private key, and are thereby “chained” to the ZSK public key. The digital signatures may be hash-based signatures.

Hash-based signatures can potentially be applied to any part of the DNSSEC trust chain. For example, in Figure 1, the DNS record sets can be signed with a zone signing key (ZSK) that employs a hash-based signature algorithm.

The main challenge with hash-based signatures is that the signature size is large, on the order of tens or even hundreds of thousands of bits. This is perhaps why they haven’t seen significant adoption in security protocols over the past four decades.

Synthesizing ZSKs with Merkle Trees

Verisign Labs has been exploring how to mitigate the size impact of hash-based signatures on DNSSEC, while still basing security on hash functions only in the interest of stable post-quantum protections.

One of the ideas we’ve come up with uses another of Merkle’s foundational contributions: Merkle trees.

Merkle trees authenticate multiple records by hashing them together in a tree structure. The records are the “leaves” of the tree. Pairs of leaves are hashed together to form a branch, then pairs of branches are hashed together to form a larger branch, and so on. The hash of the largest branches is the tree’s “root.” (This is a data-structure root, unrelated to the DNS root.)

Each individual leaf of a Merkle tree can be authenticated by retracing the “path” from the leaf to the root. The path consists of the hashes of each of the adjacent branches encountered along the way.

Authentication paths can be much shorter than typical hash-based signatures. For instance, with a tree depth of 20 and a 256-bit hash value, the authentication path for a leaf would only be 5,120 bits long, yet a single tree could authenticate more than a million leaves.
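
The path arithmetic above (path length = tree depth × hash size) can be made concrete with a small sketch. This is an illustrative implementation only, not a standardized construction; the leaf-pairing and odd-node padding choices are assumptions of the sketch.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Return the list of tree levels, from leaf hashes up to the root."""
    level = [h(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node if odd
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def auth_path(levels, index):
    """Sibling hashes needed to retrace a leaf's path to the root."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append(level[index ^ 1])      # the paired node at this level
        index //= 2
    return path

def verify(leaf, index, path, root):
    """Recompute the path from the leaf and check it reaches the root."""
    node = h(leaf)
    for sibling in path:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

records = [f"record-{i}".encode() for i in range(8)]
levels = build_tree(records)
root = levels[-1][0]
path = auth_path(levels, 5)
assert verify(records[5], 5, path, root)
assert len(path) == 3                      # depth 3 authenticates 8 leaves
```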

Figure 2. DNSSEC signatures following the synthesized ZSK approach proposed here. DNS records are hashed together into a Merkle tree. The root of the Merkle tree is published as the ZSK, and the authentication path through the Merkle tree is the record’s signature.

Returning to the example above, suppose that instead of signing each DNS record set with a hash-based signature, each record set were considered a leaf of a Merkle tree. Suppose further that the root of this tree were to be published as the ZSK public key (see Figure 2). The authentication path to the leaf could then serve as the record set’s signature.

The validation logic at a resolver would be the same as in ordinary DNSSEC:

  • The resolver would obtain the ZSK public key from a DNSKEY record set signed by the KSK.
  • The resolver would then validate the signature on the record set of interest with the ZSK public key.

The only difference on the resolver’s side would be that signature validation would involve retracing the authentication path to the ZSK public key, rather than a conventional signature validation operation.

The ZSK public key produced by the Merkle tree approach would be a “synthesized” public key, in that it is obtained from the records being signed. This is noteworthy from a cryptographer’s perspective, because the public key wouldn’t have a corresponding private key, yet the DNS records would still, in effect, be “signed by the ZSK!”

Additional Design Considerations

In this type of DNSSEC implementation, the Merkle tree approach only applies to the ZSK level. Hash-based signatures would still be applied at the KSK level, although their overhead would now be “amortized” across all records in the zone.

In addition, each new ZSK would need to be signed “on demand,” rather than in advance, as in current operational practice.

This leads to tradeoffs, such as how many changes to accumulate before constructing and publishing a new tree. With fewer changes, the tree will be available sooner; with more changes, the tree will be larger, so the per-record overhead of the signatures at the KSK level will be lower.

Conclusion

My last few posts have discussed cryptographic techniques that could potentially be applied to the DNS in the long term — or that might not even be applied at all. In my next post, I’ll return to more conventional subjects, and explain how Verisign sees cryptography fitting into the DNS today, as well as some important non-cryptographic techniques that are part of our vision for a secure, stable and resilient DNS.

Research into concepts such as hash-based signatures and synthesized zone signing keys indicates that these techniques have the potential to keep the Domain Name System (DNS) secure for the long term if added into the Domain Name System Security Extensions (DNSSEC).

The post Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys appeared first on Verisign Blog.

Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon

This is the fourth in a multi-part series on cryptography and the Domain Name System (DNS).

One of the “key” questions cryptographers have been asking for the past decade or more is what to do about the potential future development of a large-scale quantum computer.

If theory holds, a quantum computer could break established public-key algorithms including RSA and elliptic curve cryptography (ECC), building on Peter Shor’s groundbreaking result from 1994.

This prospect has motivated research into new so-called “post-quantum” algorithms that are less vulnerable to quantum computing advances. These algorithms, once standardized, may well be added into the Domain Name System Security Extensions (DNSSEC) — thus also adding another dimension to a cryptographer’s perspective on the DNS.

(Caveat: Once again, the concepts I’m discussing in this post are topics we’re studying in our long-term research program as we evaluate potential future applications of technology. They do not necessarily represent Verisign’s plans or position on possible new products or services.)

Post-Quantum Algorithms

The National Institute of Standards and Technology (NIST) started a Post-Quantum Cryptography project in 2016 to “specify one or more additional unclassified, publicly disclosed digital signature, public-key encryption, and key-establishment algorithms that are capable of protecting sensitive government information well into the foreseeable future, including after the advent of quantum computers.”

Security protocols that NIST is targeting for these algorithms, according to its 2019 status report (Section 2.2.1), include: “Transport Layer Security (TLS), Secure Shell (SSH), Internet Key Exchange (IKE), Internet Protocol Security (IPsec), and Domain Name System Security Extensions (DNSSEC).”

The project is now in its third round, with seven finalists, including three digital signature algorithms, and eight alternates.

NIST’s project timeline anticipates that the draft standards for the new post-quantum algorithms will be available between 2022 and 2024.

It will likely take several additional years for standards bodies such as the Internet Engineering Task Force (IETF) to incorporate the new algorithms into security protocols. Broad deployments of the upgraded protocols will likely take several years more.

Post-quantum algorithms can therefore be considered a long-term issue, not a near-term one. However, as with other long-term research, it’s appropriate to draw attention to factors that need to be taken into account well ahead of time.

DNSSEC Operational Considerations

The three candidate digital signature algorithms in NIST’s third round have one common characteristic: all of them have a key size or signature size (or both) that is much larger than for current algorithms.

Key and signature sizes are important operational considerations for DNSSEC because most of the DNS traffic exchanged with authoritative name servers is sent and received via the User Datagram Protocol (UDP), which has a limited response size.

Response size concerns were evident during the expansion of the root zone signing key (ZSK) from 1024-bit to 2048-bit RSA in 2016, and in the rollover of the root key signing key (KSK) in 2018. In the latter case, although the signature and key sizes didn’t change, total response size was still an issue because responses during the rollover sometimes carried as many as four keys rather than the usual two.

Thanks to careful design and implementation, response sizes during these transitions generally stayed within typical UDP limits. Equally important, response sizes also appeared to have stayed within the Maximum Transmission Unit (MTU) of most networks involved, thereby also avoiding the risk of packet fragmentation. (You can check how well your network handles various DNSSEC response sizes with this tool developed by Verisign Labs.)

Modeling Assumptions

The larger sizes associated with certain post-quantum algorithms do not appear to be a significant issue either for TLS, according to one benchmarking study, or for public-key infrastructures, according to another report. However, a recently published study of post-quantum algorithms and DNSSEC observes that “DNSSEC is particularly challenging to transition” to the new algorithms.

Verisign Labs offers the following observations about DNSSEC-related queries that may help researchers to model DNSSEC impact:

A typical resolver that implements both DNSSEC validation and qname minimization will send a combination of queries to Verisign’s root and top-level domain (TLD) servers.

Because the resolver is a validating resolver, these queries will all have the “DNSSEC OK” bit set, indicating that the resolver wants the DNSSEC signatures on the records.

The content of typical responses by Verisign’s root and TLD servers to these queries are given in Table 1 below. (In the table, <SLD>.<TLD> are the final two labels of a domain name of interest, including the TLD and the second-level domain (SLD); record types involved include A, Name Server (NS), and DNSKEY.)

| Name Server | Resolver Query Scenario | Typical Response Content from Verisign’s Servers |
| --- | --- | --- |
| Root | DNSKEY record set for root zone | DNSKEY record set including root KSK RSA-2048 public key and root ZSK RSA-2048 public key; root KSK RSA-2048 signature on DNSKEY record set |
| Root | A or NS record set for <TLD> — when <TLD> exists | NS referral to <TLD> name server; DS record set for <TLD> zone; root ZSK RSA-2048 signature on DS record set |
| Root | A or NS record set for <TLD> — when <TLD> doesn’t exist | Up to two NSEC records for non-existence of <TLD>; root ZSK RSA-2048 signatures on NSEC records |
| .com / .net | DNSKEY record set for <TLD> zone | DNSKEY record set including <TLD> KSK RSA-2048 public key and <TLD> ZSK RSA-1280 public key; <TLD> KSK RSA-2048 signature on DNSKEY record set |
| .com / .net | A or NS record set for <SLD>.<TLD> — when <SLD>.<TLD> exists | NS referral to <SLD>.<TLD> name server; DS record set for <SLD>.<TLD> zone (if <SLD>.<TLD> supports DNSSEC); <TLD> ZSK RSA-1280 signature on DS record set (if present) |
| .com / .net | A or NS record set for <SLD>.<TLD> — when <SLD>.<TLD> doesn’t exist | Up to three NSEC3 records for non-existence of <SLD>.<TLD>; <TLD> ZSK RSA-1280 signatures on NSEC3 records |

Table 1. Combination of queries that may be sent to Verisign’s root and TLD servers by a typical resolver that implements both DNSSEC validation and qname minimization, and content of associated responses.


For an A or NS query, the typical response, when the domain of interest exists, includes a referral to another name server. If the domain supports DNSSEC, the response also includes a set of Delegation Signer (DS) records providing the hashes of each of the referred zone’s KSKs — the next link in the DNSSEC trust chain. When the domain of interest doesn’t exist, the response includes one or more Next Secure (NSEC) or Next Secure 3 (NSEC3) records.

Researchers can estimate the effect of post-quantum algorithms on response size by replacing the sizes of the various RSA keys and signatures with those for their post-quantum counterparts. As discussed above, it is important to keep in mind that the number of keys returned may be larger during key rollovers.

Most of the queries from qname-minimizing, validating resolvers to the root and TLD name servers will be for A or NS records (the choice depends on the implementation of qname minimization, and has recently trended toward A). The signature size for a post-quantum algorithm, which affects all DNSSEC-related responses, will therefore generally have a much larger impact on average response size than will the key size, which affects only the DNSKEY responses.
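
As a rough illustration of the substitution described above, here is a back-of-the-envelope comparison of the cryptographic payload of a root DNSKEY response under different algorithms. The byte counts are approximate published sizes, the post-quantum entries (Falcon-512 and Dilithium2) are examples chosen for illustration rather than a recommendation, and DNS record framing overhead is ignored.

```python
# Back-of-the-envelope comparison: replace RSA sizes with post-quantum
# ones in a root DNSKEY response (two public keys plus one signature).
sizes = {
    "RSA-2048":   {"pubkey": 256,  "sig": 256},   # current root keys
    "Falcon-512": {"pubkey": 897,  "sig": 666},   # illustrative PQ example
    "Dilithium2": {"pubkey": 1312, "sig": 2420},  # illustrative PQ example
}

def dnskey_crypto_bytes(alg: str) -> int:
    """Crypto payload of a DNSKEY response: KSK pubkey + ZSK pubkey + KSK signature."""
    s = sizes[alg]
    return 2 * s["pubkey"] + s["sig"]

for alg in sizes:
    print(f"{alg}: {dnskey_crypto_bytes(alg)} bytes")
# Only the RSA case stays under the commonly recommended 1232-byte EDNS
# buffer size; the post-quantum examples here would not.
```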

Conclusion

Post-quantum algorithms are among the newest developments in cryptography. They add another dimension to a cryptographer’s perspective on the DNS because of the possibility that these algorithms, or other variants, may be added to DNSSEC in the long term.

In my next post, I’ll make the case for why the oldest post-quantum algorithm, hash-based signatures, could be a particularly good match for DNSSEC. I’ll also share the results of some research at Verisign Labs into how the large signature sizes of hash-based signatures could potentially be overcome.

Post-quantum algorithms are among the newest developments in cryptography. When standardized, they could eventually be added into the Domain Name System Security Extensions (DNSSEC) to help keep the DNS secure for the long term.

The post Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon appeared first on Verisign Blog.

Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries

This is the third in a multi-part blog series on cryptography and the Domain Name System (DNS).

In my last post, I looked at what happens when a DNS query yields a “negative” response – i.e., when a domain name doesn’t exist. I then examined two cryptographic approaches to handling negative responses: NSEC and NSEC3. In this post, I will examine a third approach, NSEC5, and a related concept that protects client information, tokenized queries.

The concepts I discuss below are topics we’ve studied in our long-term research program as we evaluate new technologies. They do not necessarily represent Verisign’s plans or position on a new product or service. Concepts developed in our research program may be subject to U.S. and international patents and patent applications.

NSEC5

NSEC5 is a result of research by cryptographers at Boston University and the Weizmann Institute. In this approach, which is still in an experimental stage, the range endpoints are the outputs of a verifiable random function (VRF), a cryptographic primitive that has been gaining interest in recent years. NSEC5 is documented in an Internet Draft (currently expired) and in several research papers.

A VRF is like a hash function but with two important differences:

  1. In addition to a message input, a VRF has a second input, a private key. (As in public-key cryptography, there’s also a corresponding public key.) No one can compute the outputs without the private key, hence the “random.”
  2. A VRF has two outputs: a token and a proof. (I’ve adopted the term “token” for alignment with the research that I describe next. NSEC5 itself simply uses “hash.”) Anyone can check that the token is correct given the proof and the public key, hence the “verifiable.”

So, it’s not only hard for an adversary to reverse the VRF – which is also a property the hash function has – but it’s also hard for the adversary to compute the VRF in the forward direction, thus preventing dictionary attacks. And yet a relying party can still confirm that the VRF output for a given input is correct, because of the proof.
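
The two properties can be demonstrated with a toy VRF in the RSA full-domain-hash style. Everything here is illustrative: the primes are far too small for real security, and a standardized VRF construction with proper parameters and padding would be used in practice.

```python
import hashlib

# Toy RSA parameters (far too small for security; illustration only).
p, q = 1000003, 1000033
n = p * q
e = 65537                          # public key: (n, e)
d = pow(e, -1, (p - 1) * (q - 1))  # private key

def _hash_to_int(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def vrf_prove(name: bytes):
    """Private-key holder computes the (token, proof) pair for a name."""
    h = _hash_to_int(name)
    proof = pow(h, d, n)           # an RSA "signature" on the hashed name
    token = hashlib.sha256(str(proof).encode()).hexdigest()
    return token, proof

def vrf_verify(name: bytes, token: str, proof: int) -> bool:
    """Anyone holding only the public key (n, e) can check the token."""
    if pow(proof, e, n) != _hash_to_int(name):
        return False
    return token == hashlib.sha256(str(proof).encode()).hexdigest()

token, proof = vrf_prove(b"nonexistent.example.")
assert vrf_verify(b"nonexistent.example.", token, proof)  # "verifiable"
assert not vrf_verify(b"other.example.", token, proof)    # bound to its input
```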

How does this work in practice? As in NSEC and NSEC3, range statements are prepared in advance and signed with the zone signing key (ZSK). With NSEC5, however, the range endpoints are two consecutive tokens.

When a domain name doesn’t exist, the name server applies the VRF to the domain name to obtain a token and a proof. The name server then returns a range statement where the token falls within the range, as well as the proof, as shown in the figure below. Note that the token values are for illustration only.

Figure 1. An example of an NSEC5 proof of non-existence based on a verifiable random function.

Because the range statement reveals only tokenized versions of other domain names in a zone, an adversary who doesn’t know the private key doesn’t learn any new existing domain names from the response. Indeed, to find out which domain name corresponds to one of the tokenized endpoints, the adversary would need access to the VRF itself to see if a candidate domain name has a matching hash value, which would involve an online dictionary attack. This significantly reduces disclosure risk.

The name server needs a copy of the zone’s NSEC5 private key so that it can generate proofs for non-existent domain names. The ZSK itself can stay in the provisioning system. As the designers of NSEC5 have pointed out, if the NSEC5 private key does happen to be compromised, this only makes it possible to do a dictionary attack offline – not to generate signatures on new range statements, or on new positive responses.

NSEC5 is interesting from a cryptographer’s perspective because it uses a less common cryptographic technique, a VRF, to achieve a design goal that was at best partially met by previous approaches. As with other new technologies, DNS operators will need to consider whether NSEC5’s benefits are sufficient to justify its cost and complexity. Verisign doesn’t have any plans to implement NSEC5, as we consider NSEC and NSEC3 adequate for the name servers we currently operate. However, we will continue to track NSEC5 and related developments as part of our long-term research program.

Tokenized Queries

A few years before NSEC5 was published, Verisign Labs had started some research on an opposite application of tokenization to the DNS, to protect a client’s information from disclosure.

In our approach, instead of asking the resolver “What is <name>’s IP address,” the client would ask “What is token 3141…’s IP address,” where 3141… is the tokenization of <name>.

(More precisely, the client would specify both the token and the parent zone that the token relates to, e.g., the TLD of the domain name. Only the portion of the domain name below the parent would be obscured, just as in NSEC5. I’ve omitted the zone information for simplicity in this discussion.)

Suppose now that the domain name corresponding to token 3141… does exist. Then the resolver would respond with the domain name’s IP address as usual, as shown in the next figure.

Figure 2. Tokenized queries.

In this case, the resolver would know that the domain name associated with the token does exist, because it would have a mapping between the token and the DNS record, i.e., the IP address. Thus, the resolver would effectively “know” the domain name as well for practical purposes. (We’ve developed another approach that can protect both the domain name and the DNS record from disclosure to the resolver in this case, but that’s perhaps a topic for another post.)

Now, consider a domain name that doesn’t exist and suppose that its token is 2718… .

In this case, the resolver would respond that the domain name doesn’t exist, as usual, as shown below.

Figure 3. Non-existence with tokenized queries.

But because the domain name is tokenized and no other information about the domain name is returned, the resolver would only learn the token 2718… (and the parent zone), not the actual domain name that the client is interested in.

The resolver could potentially know that the name doesn’t exist via a range statement from the parent zone, as in NSEC5.

How does the client tokenize the domain name, if it doesn’t have the private key for the VRF? The name server would offer a public interface to the tokenization function. This can be done in what cryptographers call an “oblivious” VRF protocol, where the name server doesn’t see the actual domain name during the protocol, yet the client still gets the token.
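As an illustration of the oblivious-evaluation idea (a classic RSA blinding sketch, not the specific protocol from our research), the client can blind the hashed name so the server applies its private key without ever seeing the name:

```python
import hashlib
import math
import secrets

# Toy RSA key for illustration (real deployments use proper key generation).
P, Q = 2**127 - 1, 2**89 - 1
N, E = P * Q, 65537
D = pow(E, -1, (P - 1) * (Q - 1))

def h(msg: bytes) -> int:
    """Hash the domain name into Z_N (simplified full-domain hash)."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % N

# --- client side: blind the hashed domain name with a random factor ---
def blind(msg: bytes):
    while True:
        r = secrets.randbelow(N - 2) + 2
        if math.gcd(r, N) == 1:
            break
    return (h(msg) * pow(r, E, N)) % N, r

# --- server side: apply the private key without learning the name ---
def evaluate(blinded: int) -> int:
    return pow(blinded, D, N)

# --- client side: strip the blinding factor to recover the token ---
def unblind(evaluated: int, r: int) -> str:
    proof = (evaluated * pow(r, -1, N)) % N  # equals h(msg)**D mod N
    return hashlib.sha256(proof.to_bytes(32, "big")).hexdigest()
```

The server sees only the blinded value, yet the client ends up with the same deterministic token the server would have computed directly.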

To keep the resolver itself from using this interface to do an online dictionary attack that matches candidate domain names with tokens, the name server could rate-limit access, or restrict it only to authorized requesters.

Additional details on this technology may be found in U.S. Patent 9,202,079B2, entitled “Privacy preserving data querying,” and related patents.

It’s interesting from a cryptographer’s perspective that there’s a way for a client to find out whether a DNS record exists, without necessarily revealing the domain name of interest. However, as before, the benefits of this new technology will be weighed against its operational cost and complexity and compared to other approaches. Because this technique focuses on client-to-resolver interactions, it’s already one step removed from the name servers that Verisign currently operates, so it is not as relevant to our business today as it might have been when we started the research. This one will stay under our long-term tracking as well.

Conclusion

The examples I’ve shared in these last two blog posts make it clear that cryptography has the potential to bring interesting new capabilities to the DNS. While the particular examples I’ve shared here do not meet the criteria for our product roadmap, researching advances in cryptography and other techniques remains important because new events can sometimes change the calculus. That point will become even more evident in my next post, where I’ll consider the kinds of cryptography that may be needed in the event that one or more of today’s algorithms is compromised, possibly through the introduction of a quantum computer.

Read the complete six blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization

The post Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries appeared first on Verisign Blog.

Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3

This is the second in a multi-part blog series on cryptography and the Domain Name System (DNS).

In my previous post, I described the first broad scale deployment of cryptography in the DNS, known as the Domain Name System Security Extensions (DNSSEC). I described how a name server can enable a requester to validate the correctness of a “positive” response to a query — when a queried domain name exists — by adding a digital signature to the DNS response returned.

The designers of DNSSEC, as well as academic researchers, have separately considered how to handle “negative” responses – when the domain name doesn’t exist. In this case, as I’ll explain, simply returning a signed “does not exist” message is not the best design. This makes the non-existence case interesting from a cryptographer’s perspective as well.

Initial Attempts

Consider a domain name like example.arpa that doesn’t exist.

If it did exist, then as I described in my previous post, the second-level domain (SLD) server for example.arpa would return a response signed by example.arpa’s zone signing key (ZSK).

So a first try for the case that the domain name doesn’t exist is for the SLD server to return the response “example.arpa doesn’t exist,” signed by example.arpa’s ZSK.

However, if example.arpa doesn’t exist, then example.arpa won’t have either an SLD server or a ZSK to sign with. So, this approach won’t work.

A second try is for the parent name server — the .arpa top-level domain (TLD) server in the example — to return the response “example.arpa doesn’t exist,” signed by the parent’s ZSK.

This could work if the .arpa DNS server knows the ZSK for .arpa. However, for security and performance reasons, the design preference for DNSSEC has been to keep private keys offline, within the zone’s provisioning system.

The provisioning system can precompute statements about domain names that do exist — but not about every possible individual domain name that doesn’t exist. So, this won’t work either, at least not for the servers that keep their private keys offline.

The third try is the design that DNSSEC settled on. The parent name server returns a “range statement,” previously signed with the ZSK, that states that there are no domain names in an ordered sequence between two “endpoints” where the endpoints depend on domain names that do exist. The range statements can therefore be signed offline, and yet the name server can still choose an appropriate signed response to return, based on the (non-existent) domain name in the query.

The DNS community has considered several approaches to constructing range statements, and they have varying cryptographic properties. Below I’ve described two such approaches. For simplicity, I’ve focused just on the basics in the discussion that follows. The astute reader will recognize that there are many more details involved both in the specification and the implementation of these techniques.

NSEC

The first approach, called NSEC, involves no additional cryptography beyond the DNSSEC signature on the range statement. In NSEC, the endpoints are actual domain names that exist. NSEC stands for “Next Secure,” referring to the fact that the second endpoint in the range is the “next” existing domain name following the first endpoint.

The NSEC resource record is documented in one of the original DNSSEC specifications, RFC 4034, which was co-authored by Verisign.

The .arpa zone implements NSEC. When the .arpa server receives the request “What is the IP address of example.arpa,” it returns the response “There are no names between e164.arpa and home.arpa.” This exchange is shown in the figure below and is analyzed in the associated DNSviz graph. (The response is accurate as of the writing of this post; it could be different in the future if names were added to or removed from the .arpa zone.)

NSEC has a side effect: responses immediately reveal unqueried domain names in the zone. Depending on the sensitivity of the zone, this may be undesirable from the perspective of the minimum disclosure principle.

Figure 1. An example of an NSEC proof of non-existence (as of the writing of this post).

NSEC3

A second approach, called NSEC3, reduces the disclosure risk somewhat by defining the endpoints as hashes of existing domain names. (NSEC3 is documented in RFC 5155, which was also co-authored by Verisign.)

An example of NSEC3 can be seen with example.name, another domain that doesn’t exist. Here, the .name TLD server returns a range statement saying “There are no domain names with hashes between 5SU9… and 5T48…”. Because the hash of example.name is “5SVV…”, the response implies that example.name doesn’t exist.

This statement is shown in the figure below and in another DNSviz graph. (As above, the actual response could change if the .name zone changes.)

Figure 2. An example of an NSEC3 proof of non-existence based on a hash function (as of the writing of this post).
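The NSEC3 hash itself, per RFC 5155, is an iterated, salted SHA-1 over the domain name in DNS wire format, with the result encoded in Base32 using the “extended hex” alphabet. A compact sketch:

```python
import base64
import hashlib

# Map standard Base32 output to the "extended hex" alphabet NSEC3 uses.
_B32_TO_B32HEX = str.maketrans(
    "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567", "0123456789ABCDEFGHIJKLMNOPQRSTUV"
)

def nsec3_hash(name: str, salt: bytes, iterations: int) -> str:
    # Domain name in canonical (lowercase) DNS wire format:
    # length-prefixed labels terminated by the root's zero byte.
    wire = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.lower().rstrip(".").split(".")
    ) + b"\x00"
    digest = hashlib.sha1(wire + salt).digest()
    for _ in range(iterations):  # "iterations" counts *additional* hashes
        digest = hashlib.sha1(digest + salt).digest()
    return base64.b32encode(digest).decode("ascii").translate(_B32_TO_B32HEX)
```

With the RFC 5155 example parameters (salt AABBCCDD, 12 extra iterations), hashing the apex name example yields 0p9mhaveqvm6t7vbl5lop2u3t2rp3tom, matching the sample zone in that document.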

To find out which domain name corresponds to one of the hashed endpoints, an adversary would have to do a trial-and-error or “dictionary” attack across multiple guesses of domain names, to see if any has a matching hash value. Such a search could be performed “offline,” i.e., without further interaction with the name server, which is why the disclosure risk is only somewhat reduced.

NSEC and NSEC3 are mutually exclusive. Nearly all TLDs, including all TLDs operated by Verisign, implement NSEC3. In addition to .arpa, the root zone also implements NSEC.

In my next post, I’ll describe NSEC5, an approach still in the experimental stage, that replaces the hash function in NSEC3 with a verifiable random function (VRF) to protect against offline dictionary attacks. I’ll also share some research Verisign Labs has done on a complementary approach that helps protect a client’s queries for non-existent domain names from disclosure.

The post Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3 appeared first on Verisign Blog.

The Domain Name System: A Cryptographer’s Perspective


This is the first in a multi-part blog series on cryptography and the Domain Name System (DNS).

As one of the earliest protocols in the internet, the DNS emerged in an era in which today’s global network was still an experiment. Security was not a primary consideration then, and the design of the DNS, like other parts of the internet of the day, did not have cryptography built in.

Today, cryptography is part of almost every protocol, including the DNS. And from a cryptographer’s perspective, as I described in my talk at last year’s International Cryptographic Module Conference (ICMC20), there’s so much more to the story than just encryption.

Where It All Began: DNSSEC

The first broad-scale deployment of cryptography in the DNS was not for confidentiality but for data integrity, through the Domain Name System Security Extensions (DNSSEC), introduced in 2005.

The story begins with an exchange that happens millions of times a second around the world: a client asks a DNS resolver a query like “What is example.com’s Internet Protocol (IP) address?” The resolver in this case answers: “example.com’s IP address is 93.184.216.34”. (This is the correct answer.)

If the resolver doesn’t already know the answer to the request, then the process to find the answer goes something like this:

  • With qname minimization, when the resolver receives this request, it starts by asking a related question to one of the DNS’s 13 root servers, such as the A and J root servers operated by Verisign: “Where is the name server for the .com top-level domain (TLD)?”
  • The root server refers the resolver to the .com TLD server.
  • The resolver asks the TLD server, “Where is the name server for the example.com second-level domain (SLD)?”
  • The TLD server then refers the resolver to the example.com server.
  • Finally, the resolver asks the SLD server, “What is example.com’s IP address?” and receives an answer: “93.184.216.34”.
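The referral chain above can be modeled with a small simulation (the server names and the mapping are invented for illustration; a real resolver deals with NS records, glue, caching, and qname minimization):

```python
# Each simulated server maps a query name to either a referral (pointing
# at another server) or a final answer.
SERVERS = {
    "root":        {"example.com.": ("referral", "com-tld")},
    "com-tld":     {"example.com.": ("referral", "example-sld")},
    "example-sld": {"example.com.": ("answer", "93.184.216.34")},
}

def resolve(qname: str):
    """Follow referrals from the root until some server answers."""
    server, trace = "root", []
    while True:
        kind, value = SERVERS[server][qname]
        trace.append(server)
        if kind == "answer":
            return value, trace
        server = value  # referral: ask the next server down the hierarchy
```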

Digital Signatures

But how does the resolver know that the answer it ultimately receives is correct? The process defined by DNSSEC follows the same “delegation” model from root to TLD to SLD as I’ve described above.

Indeed, DNSSEC provides a way for the resolver to check that the answer is correct by validating a chain of digital signatures, one at each level of the DNS hierarchy (or technically, at each “zone” in the delegation process). These digital signatures are generated using public key cryptography, a well-understood process based on key pairs, one public and one private.

In a typical DNSSEC deployment, there are two active public keys per zone: a Key Signing Key (KSK) public key and a Zone Signing Key (ZSK) public key. (The reason for having two keys is so that one key can be changed locally, without the other key being changed.)

The responses returned to the resolver include digital signatures generated by either the corresponding KSK private key or the corresponding ZSK private key.

Using mathematical operations, the resolver checks all the digital signatures it receives in association with a given query. If they are valid, the resolver returns the “Digital Signature Validated” indicator to the client that initiated the query.

Trust Chains

Figure 1: A Simplified View of the DNSSEC Chain.

A convenient way to visualize the collection of digital signatures is as a “trust chain” from a “trust anchor” to the DNS record of interest, as shown in the figure above. The chain includes “chain links” at each level of the DNS hierarchy. Here’s how the “chain links” work:

The root KSK public key is the “trust anchor.” This key is widely distributed in resolvers so that they can independently authenticate digital signatures on records in the root zone, and thus authenticate everything else in the chain.

The root zone chain links consist of three parts:

  1. The root KSK public key is published as a DNS record in the root zone. It must match the trust anchor.
  2. The root ZSK public key is also published as a DNS record. It is signed by the root KSK private key, thus linking the two keys together.
  3. The hash of the TLD KSK public key is published as a DNS record. It is signed by the root ZSK private key, further extending the chain.

The TLD zone chain links also consist of three parts:

  1. The TLD KSK public key is published as a DNS record; its hash must match the hash published in the root zone.
  2. The TLD ZSK public key is published as a DNS record, which is signed by the TLD KSK private key.
  3. The hash of the SLD KSK public key is published as a DNS record. It is signed by the TLD ZSK private key.

The SLD zone chain links once more consist of three parts:

  1. The SLD KSK public key is published as a DNS record. Its hash, as expected, must match the hash published in the TLD zone.
  2. The SLD ZSK public key is published as a DNS record signed by the SLD KSK private key.
  3. A set of DNS records – the ultimate response to the query – is signed by the SLD ZSK private key.

A resolver (or anyone else) can thereby verify the signature on any set of DNS records given the chain of public keys leading up to the trust anchor.
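The hash-matching links above can be sketched as a simple chain check. Signature validation is deliberately omitted: real DNSSEC publishes DS digests with specific algorithms and wire formats and verifies RRSIGs over each record set, whereas this toy models only the digest comparisons:

```python
import hashlib

def digest(ksk_pubkey: bytes) -> str:
    """Stand-in for the DS-style digest of a child zone's KSK public key."""
    return hashlib.sha256(ksk_pubkey).hexdigest()

def verify_chain(trust_anchor: bytes, zones: list) -> bool:
    """zones is ordered root -> TLD -> SLD; each entry holds the zone's
    published KSK and the digest it publishes for its child's KSK."""
    if zones[0]["ksk"] != trust_anchor:  # the anchor must match the root KSK
        return False
    for parent, child in zip(zones, zones[1:]):
        # The digest published at the parent must match the hash of the
        # KSK the child actually publishes.
        if parent["ds_of_child"] != digest(child["ksk"]):
            return False
    return True
```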

Note that this is a simplified view, and there are other details in practice. For instance, the various KSK public keys are also signed by their own private KSK, but I’ve omitted these signatures for brevity. The DNSViz tool provides a very nice interactive interface for browsing DNSSEC trust chains in detail, including the trust chain for example.com discussed here.

Updating the Root KSK Public Key

The effort to update the root KSK public key, the aforementioned “trust anchor,” was one of the most challenging and successful projects undertaken by the DNS community over the past couple of years. This initiative – the so-called “root KSK rollover” – was challenging because there was no easy way to determine whether resolvers had actually been updated to use the latest root KSK; remember that cryptography and security were added on to the DNS rather than built in. Many resolvers needed to be updated, each independently managed.

The research paper “Roll, Roll, Roll your Root: A Comprehensive Analysis of the First Ever DNSSEC Root KSK Rollover” details the process of updating the root KSK. The paper, co-authored by Verisign researchers and external colleagues, received the distinguished paper award at the 2019 Internet Measurement Conference.

Final Thoughts

I’ve focused here on how a resolver validates correctness when the response to a query has a “positive” answer — i.e., when the DNS record exists. Checking correctness when the answer doesn’t exist gets even more interesting from a cryptographer’s perspective. I’ll cover this topic in my next post.

The post The Domain Name System: A Cryptographer’s Perspective appeared first on Verisign Blog.

Chromium’s Reduction of Root DNS Traffic


As we begin a new year, it is important to look back and reflect on our accomplishments and how we can continue to improve. One significant positive the DNS community can take from 2020 is the receptiveness and responsiveness of the Chromium team in addressing the large volume of DNS queries being sent to the root server system.

In a previous blog post, we quantified that upwards of 45.80% of total DNS traffic to the root servers was, at the time, the result of Chromium intranet redirection detection tests. Since then, the Chromium team has redesigned its code to disable the redirection test on Android systems and introduced a multi-state DNS interception policy that supports disabling the redirection test for desktop browsers. This functionality was released in mid-November 2020 for Android systems in Chromium 87 and, quickly thereafter, the root servers experienced a rapid decline in DNS queries.

The figure below highlights the significant decline of query volume to the root server system immediately after the Chromium 87 release. Prior to the software release, the root server system saw peaks of ~143 billion queries per day. Traffic volumes have since decreased to ~84 billion queries a day. This represents more than a 41% reduction of total query volume.

Note: Some data from root operators was not available at the time of this publication.

This type of broad root system measurement is facilitated by ICANN’s Root Server System Advisory Committee standards document RSSAC002, which establishes a set of baseline metrics for the root server system. These root server metrics are readily available to the public for analysis, monitoring, and research. These metrics represent another milestone the DNS community could appreciate and continue to support and refine going forward.

As rightly noted in ICANN’s Root Name Service Strategy and Implementation publication, the root server system currently “faces growing volumes of traffic” not only from legitimate users but also from misconfigurations, misuse, and malicious actors, and “the costs incurred by the operators of the root server system continue to climb to mitigate these attacks using the traditional approach.”

As we reflect on how Chromium’s large impact to root server traffic was identified and then resolved, we as a DNS community could consider how outreach and engagement should be incorporated into a traditional approach of addressing DNS security, stability, and resiliency. All too often, technologists solve problems by introducing additional layers of technology abstractions and disregarding simpler solutions, such as outreach and engagement.

We believe our efforts show how such outreach and community engagement can have a significant impact, both on the parties directly involved and on the broader community. Chromium’s actions will directly aid and ease the operational costs of mitigating attacks at the root. Reducing the root server system load by 41% – with potential further reductions depending on future Chromium deployment decisions – frees computing and network resources, lightening the operational costs incurred to mitigate attacks.

In pursuit of maintaining a responsible and common-sense root hygiene regimen, Verisign will continue to analyze root telemetry data and engage with entities such as Chromium to highlight operational concerns, just as Verisign has done in the past to address name collision problems. We’ll be sharing more information on this shortly.

This piece was co-authored by Matt Thomas and Duane Wessels, Distinguished Engineers at Verisign.

The post Chromium’s Reduction of Root DNS Traffic appeared first on Verisign Blog.

Meeting the Evolving Challenges of COVID-19


The COVID-19 pandemic, when it struck earlier this year, ushered in an immediate period of adjustment for all of us. And just as the challenges posed by COVID-19 in 2020 have been truly unprecedented, Verisign’s mission – enabling the world to connect online with reliability and confidence, anytime, anywhere – has never been more relevant. We are grateful for the continued dedication of our workforce, which enables us to provide the building blocks people need for remote working and learning, and simply for keeping in contact with each other.

At Verisign we took early action to adopt a COVID-19 work posture to protect our people, their families, and our operations. This involved the majority of our employees working from home, and implementing new cleaning and health safety protocols to protect those employees and contractors for whom on-site presence was essential to maintain key functions.

Our steps to address the pandemic did not stop there. On March 25 we announced a series of measures to help the communities where we live and work, and the broader DNS community in which we operate. This included, under our Verisign Cares program, making contributions to organizations supporting key workers, first responders and medical personnel, and doubling the company’s matching program for employee giving so that employee donations to support the COVID-19 response could have a greater impact.

Today, while vaccines may offer signs of long term hope, the pandemic has plunged many families into economic hardship and has had a dramatic effect on food insecurity in the U.S., with an estimated 50 million people affected. With this hardship in mind, we have this week made contributions totaling $275,000 to food banks in the areas where we have our most substantial footprint: the Washington DC-Maryland-Virginia region; Delaware; and the canton of Fribourg, in Switzerland. This will help local families put food on their tables during what will be a difficult winter for many.

The pandemic has also had a disproportionate, and potentially permanent, impact on certain sectors of the economy. So today Verisign is embarking on a partnership with Virginia Ready, which helps people affected by COVID-19 access training and certification for in-demand jobs in sectors such as technology. We are making an initial contribution of $250,000 to Virginia Ready, and will look to establish further partnerships of this kind across the country in 2021.

As people around the world gather online to address the global challenges posed by COVID-19, we want to share some of the steps we have taken so far to support the communities we serve, while keeping our critical internet infrastructure running smoothly.

The post Meeting the Evolving Challenges of COVID-19 appeared first on Verisign Blog.
