A 22-year-old Oregon man has been arrested on suspicion of operating “Rapper Bot,” a massive botnet used to power a service for launching distributed denial-of-service (DDoS) attacks against targets — including a March 2025 DDoS that knocked Twitter/X offline. The Justice Department asserts the suspect and an unidentified co-conspirator rented out the botnet to online extortionists, and tried to stay off the radar of law enforcement by ensuring that their botnet was never pointed at KrebsOnSecurity.
The control panel for the Rapper Bot botnet greets users with the message “Welcome to the Ball Pit, Now with refrigerator support,” an apparent reference to a handful of IoT-enabled refrigerators that were enslaved in their DDoS botnet.
On August 6, 2025, federal agents arrested Ethan J. Foltz of Springfield, Ore. on suspicion of operating Rapper Bot, a globally dispersed collection of tens of thousands of hacked Internet of Things (IoT) devices.
The complaint against Foltz explains the attacks usually clocked in at more than two terabits of junk data per second (a terabit is one trillion bits of data), which is more than enough traffic to cause serious problems for all but the most well-defended targets. The government says Rapper Bot consistently launched attacks that were “hundreds of times larger than the expected capacity of a typical server located in a data center,” and that some of its biggest attacks exceeded six terabits per second.
Indeed, Rapper Bot was reportedly responsible for the March 10, 2025 attack that caused intermittent outages on Twitter/X. The government says Rapper Bot’s most lucrative and frequent customers were involved in extorting online businesses — including numerous gambling operations based in China.
The criminal complaint was written by Elliott Peterson, an investigator with the Defense Criminal Investigative Service (DCIS), the criminal investigative division of the Department of Defense (DoD) Office of Inspector General. The complaint notes the DCIS got involved because several Internet addresses maintained by the DoD were the target of Rapper Bot attacks.
Peterson said he tracked Rapper Bot to Foltz after a subpoena to an ISP in Arizona that was hosting one of the botnet’s control servers showed the account was paid for via PayPal. More legal process to PayPal revealed Foltz’s Gmail account and previously used IP addresses. A subpoena to Google showed the defendant searched security blogs constantly for news about Rapper Bot, and for updates about competing DDoS-for-hire botnets.
According to the complaint, after a search warrant was served on his residence, the defendant admitted to building and operating Rapper Bot, sharing the profits 50/50 with a person he claimed to know only by the hacker handle “Slaykings.” Foltz also shared with investigators the logs from his Telegram chats, wherein Foltz and Slaykings discussed how best to stay off the radar of law enforcement investigators while their competitors were getting busted.
Specifically, the two hackers chatted about a May 20 attack against KrebsOnSecurity.com that clocked in at more than 6.3 terabits of data per second. The brief attack was notable because at the time it was the largest DDoS that Google had ever mitigated (KrebsOnSecurity sits behind the protection of Project Shield, a free DDoS defense service that Google provides to websites offering news, human rights, and election-related content).
The May 2025 DDoS was launched by an IoT botnet called Aisuru, which I discovered was operated by a 21-year-old man in Brazil named Kaike Southier Leite. This individual was more commonly known online as “Forky,” and Forky told me he wasn’t afraid of me or U.S. federal investigators. Nevertheless, the complaint against Foltz notes that Forky’s botnet seemed to diminish in size and firepower at the same time that Rapper Bot’s infection numbers were on the upswing.
“Both FOLTZ and Slaykings were very dismissive of attention seeking activities, the most extreme of which, in their view, was to launch DDoS attacks against the website of the prominent cyber security journalist Brian Krebs,” Peterson wrote in the criminal complaint.
“You see, they’ll get themselves [expletive],” Slaykings wrote in response to Foltz’s comments about Forky and Aisuru bringing too much heat on themselves.
“Prob cuz [redacted] hit krebs,” Foltz wrote in reply.
“Going against Krebs isn’t a good move,” Slaykings concurred. “It isn’t about being a [expletive] or afraid, you just get a lot of problems for zero money. Childish, but good. Let them die.”
“Ye, it’s good tho, they will die,” Foltz replied.
The government states that just prior to Foltz’s arrest, Rapper Bot had enslaved an estimated 65,000 devices globally. That may sound like a lot, but the complaint notes the defendants weren’t interested in making headlines for building the world’s largest or most powerful botnet.
Quite the contrary: The complaint asserts that the accused took care to maintain their botnet in a “Goldilocks” size — ensuring that “the number of devices afforded powerful attacks while still being manageable to control and, in the hopes of Foltz and his partners, small enough to not be detected.”
The complaint states that several days later, Foltz and Slaykings returned to discussing what they expected to befall their rival group, with Slaykings stating, “Krebs is very revenge. He won’t stop until they are [expletive] to the bone.”
“Surprised they have any bots left,” Foltz answered.
“Krebs is not the one you want to have on your back. Not because he is scary or something, just because he will not give up UNTIL you are [expletive] [expletive]. Proved it with Mirai and many other cases.”
[Unknown expletives aside, that may well be the highest compliment I’ve ever been paid by a cybercriminal. I might even have part of that quote made into a t-shirt or mug or something. It’s also nice that they didn’t let any of their customers attack my site — even if only out of a paranoid sense of self-preservation.]
Foltz admitted to wiping the user and attack logs for the botnet approximately once a week, so investigators were unable to tally the total number of attacks, customers and targets of this vast crime machine. But the data that was still available showed that from April 2025 to early August, Rapper Bot conducted over 370,000 attacks, targeting 18,000 unique victims across 1,000 networks, with the bulk of victims residing in China, Japan, the United States, Ireland and Hong Kong (in that order).
According to the government, Rapper Bot borrows much of its code from fBot, a DDoS malware strain also known as Satori. In 2020, authorities in Northern Ireland charged a then 20-year-old man named Aaron “Vamp” Sterritt with operating fBot with a co-conspirator. U.S. prosecutors are still seeking Sterritt’s extradition to the United States. fBot is itself a variation of the Mirai IoT botnet that has ravaged the Internet with DDoS attacks since its source code was leaked back in 2016.
The complaint says Foltz and his partner did not allow most customers to launch attacks that were more than 60 seconds in duration — another way they tried to keep public attention to the botnet at a minimum. However, the government says the proprietors also had special arrangements with certain high-paying clients that allowed much larger and longer attacks.
The accused and his alleged partner made light of this blog post about the fallout from one of their botnet attacks.
Most people who have never been on the receiving end of a monster DDoS attack have no idea of the cost and disruption that such sieges can bring. The DCIS’s Peterson wrote that he was able to test the botnet’s capabilities while interviewing Foltz, and found that “if this had been a server upon which I was running a website, using services such as load balancers, and paying for both outgoing and incoming data, at estimated industry average rates the attack (2+ Terabits per second times 30 seconds) might have cost the victim anywhere from $500 to $10,000.”
“DDoS attacks at this scale often expose victims to devastating financial impact, and a potential alternative, network engineering solutions that mitigate the expected attacks such as overprovisioning, i.e. increasing potential Internet capacity, or DDoS defense technologies, can themselves be prohibitively expensive,” the complaint continues. “This ‘rock and a hard place’ reality for many victims can leave them acutely exposed to extortion demands – ‘pay X dollars and the DDoS attacks stop’.”
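The back-of-the-envelope math behind Peterson’s estimate is easy to reproduce. Here is a minimal sketch in Python: the attack size and duration come from the complaint, but the per-gigabyte transfer rates are purely illustrative assumptions, chosen only to show how a 30-second flood can plausibly land in that $500-to-$10,000 range.

```python
# Back-of-the-envelope estimate of what absorbing one Rapper Bot test attack
# might cost a metered hosting customer. Attack size and duration are taken
# from the complaint (2+ Tbps for roughly 30 seconds); everything else below
# is an illustrative assumption, not a figure from the case.

attack_tbps = 2.0                                  # terabits of junk traffic per second
duration_s = 30                                    # seconds

total_gigabits = attack_tbps * 1000 * duration_s   # 1 terabit = 1,000 gigabits
total_gb = total_gigabits / 8                      # 8 bits per byte -> ~7,500 GB delivered

# The complaint notes victims may pay for both incoming and outgoing data,
# so assume the flood is billed twice over.
billed_gb = total_gb * 2

# Hypothetical per-GB transfer rates spanning cheap and premium providers.
for label, rate in [("low", 0.05), ("high", 0.65)]:
    print(f"{label}-cost scenario: ~${billed_gb * rate:,.0f} "
          f"for {billed_gb:,.0f} GB at ${rate:.2f}/GB")
```

At those assumed rates the sketch works out to roughly $750 to $9,750 for a single 30-second flood, before counting the cost of load balancers, mitigation services or lost business.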
The Telegram chat records show that the day before Peterson and other federal agents raided Foltz’s residence, Foltz allegedly told his partner he’d found 32,000 new devices that were vulnerable to a previously unknown exploit.
Foltz and Slaykings discussing the discovery of an IoT vulnerability that will give them 32,000 new devices.
Shortly before the search warrant was served on his residence, Foltz allegedly told his partner that “Once again we have the biggest botnet in the community.” The following day, Foltz told his partner that it was going to be a great day — the biggest so far in terms of income generated by Rapper Bot.
“I sat next to Foltz while the messages poured in — promises of $800, then $1,000, the proceeds ticking up as the day went on,” Peterson wrote. “Noticing a change in Foltz’ behavior and concerned that Foltz was making changes to the botnet configuration in real time, Slaykings asked him ‘What’s up?’ Foltz deftly typed out some quick responses. Reassured by Foltz’ answer, Slaykings responded, ‘Ok, I’m the paranoid one.’”
The case is being prosecuted by Assistant U.S. Attorney Adam Alexander in the District of Alaska (at least some of the devices found to be infected with Rapper Bot were located there, and it is where Peterson is stationed). Foltz faces one count of aiding and abetting computer intrusions. If convicted, he faces a maximum penalty of 10 years in prison, although a federal judge is unlikely to impose anywhere near that sentence for a first-time conviction.
Microsoft today released updates to fix more than 100 security flaws in its Windows operating systems and other software. At least 13 of the bugs received Microsoft’s most-dire “critical” rating, meaning they could be abused by malware or malcontents to gain remote access to a Windows system with little or no help from users.
August’s patch batch from Redmond includes an update for CVE-2025-53786, a vulnerability that allows an attacker to pivot from a compromised Microsoft Exchange Server directly into an organization’s cloud environment, potentially gaining control over Exchange Online and other connected Microsoft Office 365 services. Microsoft first warned about this bug on Aug. 6, saying it affects Exchange Server 2016 and Exchange Server 2019, as well as its flagship Exchange Server Subscription Edition.
Ben McCarthy, lead cyber security engineer at Immersive, said a rough search reveals approximately 29,000 Exchange servers exposed to the internet that are vulnerable to this issue, with many of them likely to have even older vulnerabilities.
McCarthy said the fix for CVE-2025-53786 requires more than just installing a patch, such as following Microsoft’s manual instructions for creating a dedicated service to oversee and lock down the hybrid connection.
“In effect, this vulnerability turns a significant on-premise Exchange breach into a full-blown, difficult-to-detect cloud compromise with effectively living off the land techniques which are always harder to detect for defensive teams,” McCarthy said.
CVE-2025-53779 is a weakness in the Windows Kerberos authentication system that allows an unauthenticated attacker to gain domain administrator privileges. Microsoft credits the discovery of the flaw to Akamai researcher Yuval Gordon, who dubbed it “BadSuccessor” in a May 2025 blog post. The attack exploits a weakness in “delegated Managed Service Account” or dMSA — a feature that was introduced in Windows Server 2025.
Some of the critical flaws addressed this month with the highest severity (between 9.0 and 9.9 CVSS scores) include a remote code execution bug in the Windows GDI+ component that handles graphics rendering (CVE-2025-53766) and CVE-2025-50165, another graphics rendering weakness. Another critical patch involves CVE-2025-53733, a vulnerability in Microsoft Word that can be exploited without user interaction and triggered through the Preview Pane.
One final critical bug tackled this month deserves attention: CVE-2025-53778, a bug in Windows NTLM, a core function of how Windows systems handle network authentication. According to Microsoft, the flaw could allow an attacker with low-level network access and basic user privileges to exploit NTLM and elevate to SYSTEM-level access — the highest level of privilege in Windows. Microsoft rates the exploitation of this bug as “more likely,” although there is no evidence the vulnerability is being exploited at the moment.
Feel free to holler in the comments if you experience problems installing any of these updates. As ever, the SANS Internet Storm Center has its useful breakdown of the Microsoft patches indexed by severity and CVSS score, and AskWoody.com is keeping an eye out for Windows patches that may cause problems for enterprises and end users.
Windows 10 users out there likely have noticed by now that Microsoft really wants you to upgrade to Windows 11. The reason is that after the Patch Tuesday on October 14, 2025, Microsoft will stop shipping free security updates for Windows 10 computers. The trouble is, many PCs running Windows 10 do not meet the hardware specifications required to install Windows 11 (or they do, but just barely).
If the experience with Windows XP is any indicator, many of these older computers will wind up in landfills or else will be left running in an unpatched state. But if your Windows 10 PC doesn’t have the hardware chops to run Windows 11 and you’d still like to get some use out of it safely, consider installing a newbie-friendly version of Linux, like Linux Mint.
Like most modern Linux versions, Mint will run on anything with a 64-bit CPU that has at least 2GB of memory, although 4GB is recommended. In other words, it will run on almost any computer produced in the last decade.
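If you want to confirm that an aging machine clears that bar before bothering with a download, the quick Python sketch below (assuming Python 3 is installed on the Windows box) reads the CPU architecture and installed RAM using the standard library and the Windows GlobalMemoryStatusEx call; the 2 GB and 4 GB thresholds simply mirror Mint’s published guidance.

```python
# Minimal check of whether a Windows 10 PC meets Linux Mint's baseline
# (64-bit CPU, 2 GB RAM minimum, 4 GB recommended). Windows-only sketch.
import ctypes
import platform

class MEMORYSTATUSEX(ctypes.Structure):
    _fields_ = [("dwLength", ctypes.c_ulong),
                ("dwMemoryLoad", ctypes.c_ulong),
                ("ullTotalPhys", ctypes.c_ulonglong),
                ("ullAvailPhys", ctypes.c_ulonglong),
                ("ullTotalPageFile", ctypes.c_ulonglong),
                ("ullAvailPageFile", ctypes.c_ulonglong),
                ("ullTotalVirtual", ctypes.c_ulonglong),
                ("ullAvailVirtual", ctypes.c_ulonglong),
                ("ullAvailExtendedVirtual", ctypes.c_ulonglong)]

def total_ram_gb() -> float:
    """Return installed physical memory in gigabytes (Windows only)."""
    status = MEMORYSTATUSEX()
    status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
    ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))
    return status.ullTotalPhys / (1024 ** 3)

if __name__ == "__main__":
    is_64bit = platform.machine().lower().endswith("64")
    ram_gb = total_ram_gb()
    print(f"64-bit CPU: {is_64bit}, RAM: {ram_gb:.1f} GB")
    if is_64bit and ram_gb >= 4:
        print("Meets Mint's recommended spec.")
    elif is_64bit and ram_gb >= 2:
        print("Meets the minimum; expect a lighter desktop environment to feel snappier.")
    else:
        print("Below Mint's stated requirements.")
```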
There are many versions of Linux available, but Linux Mint offers what is likely the most intuitive interface for regular Windows users, and it is largely configurable without ever having to fuss with a text-only command-line prompt. Mint and other flavors of Linux come with LibreOffice, an open source suite that includes applications similar to Microsoft Office and can open, edit and save documents as Microsoft Office files.
If you’d prefer to give Linux a test drive before installing it on a Windows PC, you can always just download it to a removable USB drive. From there, reboot the computer (with the removable drive plugged in) and select the option at startup to run the operating system from the external USB drive. If you don’t see an option for that after restarting, try restarting again and hitting the F8 button, which should open a list of bootable drives. Here’s a fairly thorough tutorial that walks through exactly how to do all this.
And if this is your first time trying out Linux, relax and have fun: The nice thing about a “live” version of Linux (as it’s called when the operating system is run from a removable drive such as a CD or a USB stick) is that none of your changes persist after a reboot. Even if you somehow manage to break something, a restart will return the system back to its original state.
On July 22, 2025, the European police agency Europol said a long-running investigation led by the French Police resulted in the arrest of a 38-year-old administrator of XSS, a Russian-language cybercrime forum with more than 50,000 members. The action has triggered an ongoing frenzy of speculation and panic among XSS denizens about the identity of the unnamed suspect, but the consensus is that he is a pivotal figure in the crime forum scene who goes by the hacker handle “Toha.” Here’s a deep dive on what’s knowable about Toha, and a short stab at who got nabbed.
An unnamed 38-year-old man was arrested in Kiev in July on suspicion of administering the cybercrime forum XSS. Image: ssu.gov.ua.
Europol did not name the accused, but published partially obscured photos of him from the raid on his residence in Kiev. The police agency said the suspect acted as a trusted third party on XSS, arbitrating disputes between criminals and guaranteeing the security of transactions. A statement from Ukraine’s SBU security service said XSS counted among its members many cybercriminals from various ransomware groups, including REvil, LockBit, Conti, and Qilin.
Since the Europol announcement, the XSS forum resurfaced at a new address on the deep web (reachable only via the anonymity network Tor). But from reviewing the recent posts, there appears to be little consensus among longtime members about the identity of the now-detained XSS administrator.
The most frequent comment regarding the arrest was a message of solidarity and support for Toha, the handle chosen by the longtime administrator of XSS and several other major Russian forums. Toha’s accounts on other forums have been silent since the raid.
Europol said the suspect has enjoyed a nearly 20-year career in cybercrime, which roughly lines up with Toha’s history. In 2005, Toha was a founding member of the Russian-speaking forum Hack-All. That is, until it got massively hacked a few months after its debut. In 2006, Toha rebranded the forum to exploit[.]in, which would go on to draw tens of thousands of members, including an eventual Who’s-Who of wanted cybercriminals.
Toha announced in 2018 that he was selling the Exploit forum, prompting rampant speculation on the forums that the buyer was secretly a Russian or Ukrainian government entity or front person. However, those suspicions were unsupported by evidence, and Toha vehemently denied the forum had been given over to authorities.
One of the oldest Russian-language cybercrime forums was DaMaGeLaB, which operated from 2004 to 2017, when its administrator “Ar3s” was arrested. In 2018, a partial backup of the DaMaGeLaB forum was reincarnated as xss[.]is, with Toha as its stated administrator.
Clues about Toha’s early presence on the Internet — from ~2004 to 2010 — are available in the archives of Intel 471, a cyber intelligence firm that tracks forum activity. Intel 471 shows Toha used the same email address across multiple forum accounts, including at Exploit, Antichat, Carder[.]su and inattack[.]ru.
DomainTools.com finds Toha’s email address — toschka2003@yandex.ru — was used to register at least a dozen domain names — most of them from the mid- to late 2000s. Apart from exploit[.]in and a domain called ixyq[.]com, the other domains registered to that email address end in .ua, the top-level domain for Ukraine (e.g. deleted.org[.]ua, lj.com[.]ua, and blogspot.org[.]ua).
A 2008 snapshot of a domain registered to toschka2003@yandex.ru and to Anton Medvedovsky in Kiev. Note the message at the bottom left, “Protected by Exploit.in.” Image: archive.org.
Nearly all of the domains registered to toschka2003@yandex.ru contain the name Anton Medvedovskiy in the registration records, except for the aforementioned ixyq[.]com, which is registered to the name Yuriy Avdeev in Moscow.
This Avdeev surname came up in a lengthy conversation with Lockbitsupp, the leader of the rapacious and destructive ransomware affiliate group Lockbit. The conversation took place in February 2024, when Lockbitsupp asked for help identifying Toha’s real-life identity.
In early 2024, the leader of the Lockbit ransomware group — Lockbitsupp — asked for help investigating the identity of the XSS administrator Toha, who he claimed was a Russian man named Anton Avdeev.
Lockbitsupp didn’t share why he wanted Toha’s details, but he maintained that Toha’s real name was Anton Avdeev. I declined to help Lockbitsupp in whatever revenge he was planning on Toha, but his question made me curious to look deeper.
It appears Lockbitsupp’s query was based on a now-deleted Twitter post from 2022, when a user by the name “3xp0rt” asserted that Toha was a Russian man named Anton Viktorovich Avdeev, born October 27, 1983.
Searching the web for Toha’s email address toschka2003@yandex.ru reveals a 2010 sales thread on the forum bmwclub.ru where a user named Honeypo was selling a 2007 BMW X5. The ad listed the contact person as Anton Avdeev and gave the contact phone number 9588693.
A search on the phone number 9588693 in the breach tracking service Constella Intelligence finds plenty of official Russian government records with this number, date of birth and the name Anton Viktorovich Avdeev. For example, hacked Russian government records show this person has a Russian tax ID and SIN (Social Security number), and that they were flagged for traffic violations by Moscow police on several occasions: in 2004, 2006, 2009, and 2014.
Astute readers may have noticed by now that the ages of Mr. Avdeev (41) and the XSS admin arrested in July (38) are a bit off. This would seem to suggest that the person arrested is someone other than Mr. Avdeev, who did not respond to requests for comment.
For further insight on this question, KrebsOnSecurity sought comments from Sergeii Vovnenko, a former cybercriminal from Ukraine who now works at the security startup paranoidlab.com. I reached out to Vovnenko because for several years beginning around 2010 he was the owner and operator of thesecure[.]biz, an encrypted “Jabber” instant messaging server that Europol said was operated by the suspect arrested in Kiev. Thesecure[.]biz grew quite popular among many of the top Russian-speaking cybercriminals because it scrupulously kept few records of its users’ activity, and its administrator was always a trusted member of the community.
The reason I know this historic tidbit is that in 2013, Vovnenko — using the hacker nicknames “Fly,” and “Flycracker” — hatched a plan to have a gram of heroin purchased off of the Silk Road darknet market and shipped to our home in Northern Virginia. The scheme was to spoof a call from one of our neighbors to the local police, saying this guy Krebs down the street was a druggie who was having narcotics delivered to his home.
I happened to be lurking on Flycracker’s private cybercrime forum when his heroin-framing plan was carried out, and called the police myself before the smack eventually arrived in the U.S. Mail. Vovnenko was later arrested for unrelated cybercrime activities, extradited to the United States, convicted, and deported after a 16-month stay in the U.S. prison system [on several occasions, he has expressed heartfelt apologies for the incident, and we have since buried the hatchet].
Vovnenko said he purchased a device for cloning credit cards from Toha in 2009, and that Toha shipped the item from Russia. Vovnenko explained that he (Flycracker) was the owner and operator of thesecure[.]biz from 2010 until his arrest in 2014.
Vovnenko believes thesecure[.]biz was stolen while he was in jail, either by Toha or by an XSS administrator who went by the nicknames N0klos and Sonic.
“When I was in jail, [the] admin of xss.is stole that domain, or probably N0klos bought XSS from Toha or vice versa,” Vovnenko said of the Jabber domain. “Nobody from [the forums] spoke with me after my jailtime, so I can only guess what really happened.”
N0klos was the owner and administrator of an early Russian-language cybercrime forum known as Darklife[.]ws. However, N0klos also appears to be a lifelong Russian resident, and in any case seems to have vanished from Russian cybercrime forums several years ago.
Asked whether he believes Toha was the XSS administrator who was arrested in July in Ukraine, Vovnenko maintained that Toha is Russian, and that “the French cops took the wrong guy.”
So who did the Ukrainian police arrest in response to the investigation by the French authorities? It seems plausible that the BMW ad invoking Toha’s email address and the name and phone number of a Russian citizen was simply misdirection on Toha’s part — intended to confuse and throw off investigators. Perhaps this even explains the Avdeev surname surfacing in the registration records from one of Toha’s domains.
But sometimes the simplest answer is the correct one. “Toha” is a common Slavic nickname for someone with the first name “Anton,” and that matches the name in the registration records for more than a dozen domains tied to Toha’s toschka2003@yandex.ru email address: Anton Medvedovskiy.
Constella Intelligence finds there is an Anton Gannadievich Medvedovskiy living in Kiev who will be 38 years old in December. This individual owns the email address itsmail@i.ua, as well as an Airbnb account featuring a profile photo of a man with roughly the same hairline as the suspect in the blurred photos released by the Ukrainian police. Mr. Medvedovskiy did not respond to a request for comment.
My take on the takedown is that the Ukrainian authorities likely arrested Medvedovskiy. Toha shared on DaMaGeLaB in 2005 that he had recently finished the 11th grade and was studying at a university — a time when Medvedovskiy would have been around 18 years old. On Dec. 11, 2006, fellow Exploit members wished Toha a happy birthday. Records exposed in a 2022 hack at the Ukrainian public services portal diia.gov.ua show that Mr. Medvedovskiy’s birthday is Dec. 11, 1987.
The law enforcement action and the resulting confusion about the detainee’s identity have thrown the Russian cybercrime forum scene into disarray in recent weeks, with lengthy and heated arguments about XSS’s future spooling out across the forums.
XSS relaunched on a new Tor address shortly after the authorities plastered their seizure notice on the forum’s homepage, but all of the trusted moderators from the old forum were dismissed without explanation. Existing members saw their forum account balances drop to zero, and were asked to plunk down a deposit to register at the new forum. The new XSS “admin” said they were in contact with the previous owners and that the changes were to help rebuild security and trust within the community.
However, the new admin’s assurances appear to have done little to assuage the worst fears of the forum’s erstwhile members, most of whom seem to be keeping their distance from the relaunched site for now.
Indeed, if there is one common understanding amid all of these discussions about the seizure of XSS, it is that Ukrainian and French authorities now have several years worth of private messages between XSS forum users, as well as contact rosters and other user data linked to the seized Jabber server.
“The myth of the ‘trusted person’ is shattered,” the user “GordonBellford” cautioned on Aug. 3 in an Exploit forum thread about the XSS admin arrest. “The forum is run by strangers. They got everything. Two years of Jabber server logs. Full backup and forum database.”
GordonBellford continued:
And the scariest thing is: this data array is not just an archive. It is material for analysis that has ALREADY BEEN DONE. With the help of modern tools, they see everything:
Graphs of your contacts and activity.
Relationships between nicknames, emails, password hashes and Jabber ID.
Timestamps, IP addresses and digital fingerprints.
Your unique writing style, phraseology, punctuation, consistency of grammatical errors, and even typical typos that will link your accounts on different platforms.

They are not looking for a needle in a haystack. They simply sifted the haystack through the AI sieve and got ready-made dossiers.
KrebsOnSecurity recently heard from a reader whose boss’s email account got phished and was used to trick one of the company’s customers into sending a large payment to scammers. An investigation into the attacker’s infrastructure points to a long-running Nigerian cybercrime ring that is actively targeting established companies in the transportation and aviation industries.
Image: Shutterstock, Mr. Teerapon Tiuekhom.
A reader who works in the transportation industry sent a tip about a recent successful phishing campaign that tricked an executive at the company into entering their credentials at a fake Microsoft 365 login page. From there, the attackers quickly mined the executive’s inbox for past communications about invoices, copying and modifying some of those messages with new invoice demands that were sent to some of the company’s customers and partners.
Speaking on condition of anonymity, the reader said the resulting phishing emails to customers came from a newly registered domain name that was remarkably similar to their employer’s domain, and that at least one of their customers fell for the ruse and paid a phony invoice. They said the attackers had spun up a look-alike domain just a few hours after the executive’s inbox credentials were phished, and that the scam resulted in a customer suffering a six-figure financial loss.
The reader also shared that the email address in the registration records for the imposter domain — roomservice801@gmail.com — is tied to many such phishing domains. Indeed, a search on this email address at DomainTools.com finds it is associated with at least 240 domains registered in 2024 or 2025. Virtually all of them mimic legitimate domains for companies in the aerospace and transportation industries worldwide.
An Internet search for this email address reveals a humorous blog post from 2020 on the Russian forum hackware[.]ru, which found roomservice801@gmail.com was tied to a phishing attack that used the lure of phony invoices to trick the recipient into logging in at a fake Microsoft login page. We’ll come back to this research in a moment.
DomainTools shows that some of the early domains registered to roomservice801@gmail.com in 2016 include other useful information. For example, the WHOIS records for alhhomaidhicentre[.]biz reference the technical contact of “Justy John” and the email address justyjohn50@yahoo.com.
A search at DomainTools found justyjohn50@yahoo.com has been registering one-off phishing domains since at least 2012. At this point, I was convinced that some security company surely had already published an analysis of this particular threat group, but I didn’t yet have enough information to draw any solid conclusions.
DomainTools says the Justy John email address is tied to more than two dozen domains registered since 2012, but we can find hundreds more phishing domains and related email addresses simply by pivoting on details in the registration records for these Justy John domains. For example, the street address used by the Justy John domain axisupdate[.]net — 7902 Pelleaux Road in Knoxville, TN — also appears in the registration records for accountauthenticate[.]com, acctlogin[.]biz, and loginaccount[.]biz, all of which at one point included the email address rsmith60646@gmail.com.
That Rsmith Gmail address is connected to the 2012 phishing domain alibala[.]biz (one character off of the Chinese e-commerce giant alibaba.com, with a different top-level domain of .biz). A search in DomainTools on the phone number in those domain records — 1.7736491613 — reveals even more phishing domains as well as the Nigerian phone number “2348062918302” and the email address michsmith59@gmail.com.
DomainTools shows michsmith59@gmail.com appears in the registration records for the domain seltrock[.]com, which was used in the phishing attack documented in the 2020 Russian blog post mentioned earlier. At this point, we are just two steps away from identifying the threat actor group.
The same Nigerian phone number shows up in dozens of domain registrations that reference the email address sebastinekelly69@gmail.com, including 26i3[.]net, costamere[.]com, danagruop[.]us, and dividrilling[.]com. A Web search on any of those domains finds they were indexed in an “indicator of compromise” list on GitHub maintained by Palo Alto Networks’ Unit 42 research team.
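The registration-record pivoting described above is tedious but mechanical, and it is easy to automate once historical WHOIS data has been exported. The sketch below illustrates the general clustering idea in Python using made-up records; the field names, sample domains and values are hypothetical, and this is not the DomainTools API.

```python
# Sketch of registration-record pivoting: group domains by any shared
# registrant detail (email, phone, street address) to expand a seed set
# of suspect domains into a larger cluster. Records below are made up;
# in practice they would come from historical WHOIS exports.
from collections import defaultdict

records = [
    {"domain": "example-phish-1.biz", "email": "a@example.com",
     "phone": "1.5550000001", "address": "100 Example St"},
    {"domain": "example-phish-2.net", "email": "b@example.com",
     "phone": "1.5550000001", "address": "200 Sample Ave"},
    {"domain": "example-phish-3.com", "email": "b@example.com",
     "phone": "1.5550000002", "address": "100 Example St"},
]

def build_index(records):
    """Map each (field, value) pair to the set of domains that share it."""
    index = defaultdict(set)
    for rec in records:
        for field in ("email", "phone", "address"):
            index[(field, rec[field])].add(rec["domain"])
    return index

def pivot(seed_domains, records, max_rounds=5):
    """Repeatedly pull in any domain sharing a registrant detail with the cluster."""
    index = build_index(records)
    cluster = set(seed_domains)
    for _ in range(max_rounds):
        before = len(cluster)
        for rec in records:
            if rec["domain"] in cluster:
                for field in ("email", "phone", "address"):
                    cluster |= index[(field, rec[field])]
        if len(cluster) == before:   # no new domains found; stop expanding
            break
    return cluster

print(sorted(pivot({"example-phish-1.biz"}, records)))
# -> all three example domains, linked via the shared phone number and address
```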
According to Unit 42, the domains are the handiwork of a vast cybercrime group based in Nigeria that it dubbed “SilverTerrier” back in 2014. In an October 2021 report, Palo Alto said SilverTerrier excels at so-called “business e-mail compromise” or BEC scams, which target legitimate business email accounts through social engineering or computer intrusion activities. BEC criminals use that access to initiate or redirect the transfer of business funds for personal gain.
Palo Alto says SilverTerrier encompasses hundreds of BEC fraudsters, some of whom have been arrested in various international law enforcement operations by Interpol. In 2022, Interpol and the Nigeria Police Force arrested 11 alleged SilverTerrier members, including a prominent SilverTerrier leader who’d been flaunting his wealth on social media for years. Unfortunately, the lure of easy money, endemic poverty and corruption, and low barriers to entry for cybercrime in Nigeria conspire to provide a constant stream of new recruits.
BEC scams were the 7th most reported crime tracked by the FBI’s Internet Crime Complaint Center (IC3) in 2024, generating more than 21,000 complaints. However, BEC scams were the second most costly form of cybercrime reported to the feds last year, with nearly $2.8 billion in claimed losses. In its 2025 Fraud and Control Survey Report, the Association for Financial Professionals found 63 percent of organizations experienced a BEC last year.
Poking at some of the email addresses that spool out from this research reveals a number of Facebook accounts for people residing in Nigeria or in the United Arab Emirates, many of whom do not appear to have tried to mask their real-life identities. Palo Alto’s Unit 42 researchers reached a similar conclusion, noting that although a small subset of these crooks went to great lengths to conceal their identities, it was usually simple to learn their identities on social media accounts and the major messaging services.
Palo Alto said BEC actors have become far more organized over time, and that while it remains easy to find actors working as a group, the practice of using one phone number, email address or alias to register malicious infrastructure in support of multiple actors has made it far more time consuming (but not impossible) for cybersecurity and law enforcement organizations to sort out which actors committed specific crimes.
“We continue to find that SilverTerrier actors, regardless of geographical location, are often connected through only a few degrees of separation on social media platforms,” the researchers wrote.
Palo Alto has published a useful list of recommendations that organizations can adopt to minimize the incidence and impact of BEC attacks. Many of those tips are prophylactic, such as conducting regular employee security training and reviewing network security policies.
But one recommendation — getting familiar with a process known as the “financial fraud kill chain” or FFKC — bears specific mention because it offers the single best hope for BEC victims who are seeking to claw back payments made to fraudsters, and yet far too many victims don’t know it exists until it is too late.
Image: ic3.gov.
As explained in this FBI primer, the International Financial Fraud Kill Chain is a partnership between federal law enforcement and financial entities whose purpose is to freeze fraudulent funds wired by victims. According to the FBI, viable victim complaints filed with ic3.gov promptly after a fraudulent transfer (generally less than 72 hours) will be automatically triaged by the Financial Crimes Enforcement Network (FinCEN).
The FBI noted in its IC3 annual report (PDF) that the FFKC had a 66 percent success rate in 2024. Viable ic3.gov complaints involve losses of at least $50,000, and include all records from the victim or victim bank, as well as a completed FFKC form (provided by FinCEN) containing victim information, recipient information, bank names, account numbers, location, SWIFT, and any additional information.
Security researchers recently revealed that the personal information of millions of people who applied for jobs at McDonald’s was exposed after they guessed the password (“123456”) for the fast food chain’s account at Paradox.ai, a company that makes artificial intelligence based hiring chatbots used by many Fortune 500 firms. Paradox.ai said the security oversight was an isolated incident that did not affect its other customers, but recent security breaches involving its employees in Vietnam tell a more nuanced story.
A screenshot of the paradox.ai homepage showing its AI hiring chatbot “Olivia” interacting with potential hires.
Earlier this month, security researchers Ian Carroll and Sam Curry wrote about simple methods they found to access the backend of the AI chatbot platform on McHire.com, the McDonald’s website that many of its franchisees use to screen job applicants. As first reported by Wired, the researchers discovered that the weak password used by Paradox exposed 64 million records, including applicants’ names, email addresses and phone numbers.
Paradox.ai acknowledged the researchers’ findings but said the company’s other client instances were not affected, and that no sensitive information — such as Social Security numbers — was exposed.
“We are confident, based on our records, this test account was not accessed by any third party other than the security researchers,” the company wrote in a July 9 blog post. “It had not been logged into since 2019 and frankly, should have been decommissioned. We want to be very clear that while the researchers may have briefly had access to the system containing all chat interactions (NOT job applications), they only viewed and downloaded five chats in total that had candidate information within. Again, at no point was any data leaked online or made public.”
However, a review of stolen password data gathered by multiple breach-tracking services shows that at the end of June 2025, a Paradox.ai administrator in Vietnam suffered a malware compromise on their device that stole usernames and passwords for a variety of internal and third-party online services. The results were not pretty.
The password data from the Paradox.ai developer was stolen by a malware strain known as “Nexus Stealer,” a form grabber and password stealer that is sold on cybercrime forums. The information snarfed by stealers like Nexus is often recovered and indexed by data leak aggregator services like Intelligence X, which reports that the malware on the Paradox.ai developer’s device exposed hundreds of mostly poor and recycled passwords (using the same base password but slightly different characters at the end).
Those purloined credentials show the developer in question at one point used the same seven-digit password to log in to Paradox.ai accounts for a number of Fortune 500 firms listed as customers on the company’s website, including Aramark, Lockheed Martin, Lowe’s, and Pepsi.
Seven-character passwords, particularly those consisting entirely of numerals, are highly vulnerable to “brute-force” attacks that can try a large number of possible password combinations in quick succession. According to a much-referenced password strength guide maintained by Hive Systems, modern password-cracking systems can work out a seven-digit password more or less instantly.
Image: hivesystems.com.
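The arithmetic behind “more or less instantly” is simple: an all-numeric seven-character password offers only 10^7 possibilities. The short Python sketch below runs the numbers; the 10-billion-guesses-per-second rate is an assumed figure for illustration, since real cracking speeds depend heavily on the hashing algorithm and hardware involved.

```python
# Keyspace and rough crack-time arithmetic for short passwords. The
# guesses-per-second rate is an assumed illustrative figure; real speeds
# depend heavily on the hash algorithm and the attacker's hardware.

GUESSES_PER_SECOND = 10_000_000_000   # assumed: 10 billion guesses/sec offline

def human(seconds: float) -> str:
    """Render a duration in a readable unit."""
    for unit, factor in (("years", 31_557_600), ("days", 86_400),
                         ("hours", 3_600), ("minutes", 60)):
        if seconds >= factor:
            return f"{seconds / factor:,.1f} {unit}"
    return f"{seconds:.4f} seconds"

scenarios = {
    "7 digits (like the Paradox password)": 10 ** 7,
    "7 lowercase letters":                  26 ** 7,
    "7 mixed-case letters and digits":      62 ** 7,
    "12 mixed-case letters and digits":     62 ** 12,
}

for label, combos in scenarios.items():
    print(f"{label:38s} {combos:>26,} combinations, ~{human(combos / GUESSES_PER_SECOND)}")
```

At that assumed rate, the entire seven-digit keyspace falls in about a millisecond, while a 12-character mixed-case password would hold out for roughly ten thousand years.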
In response to questions from KrebsOnSecurity, Paradox.ai confirmed that the password data was recently stolen by a malware infection on the personal device of a longtime Paradox developer based in Vietnam, and said the company was made aware of the compromise shortly after it happened. Paradox maintains that few of the exposed passwords were still valid, and that a majority of them were present on the employee’s personal device only because he had migrated the contents of a password manager from an old computer.
Paradox also pointed out that since 2020 it has required single sign-on (SSO) authentication that enforces multi-factor authentication for its partners. Still, a review of the exposed passwords shows they included the Vietnamese administrator’s credentials to the company’s SSO platform — paradoxai.okta.com. The password for that account ended in 202506 — possibly a reference to the month of June 2025 — and the digital cookie left behind after a successful Okta login with those credentials says it was valid until December 2025.
Also exposed were the administrator’s credentials and authentication cookies for an account at Atlassian, a platform made for software development and project management. The expiration date for that authentication token likewise was December 2025.
Infostealer infections are among the leading causes of data breaches and ransomware attacks today, and they result in the theft of stored passwords and any credentials the victim types into a browser. Most infostealer malware also will siphon authentication cookies stored on the victim’s device, and depending on how those tokens are configured thieves may be able to use them to bypass login prompts and/or multi-factor authentication.
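One way incident responders triage a stealer log is simply to check which of the siphoned cookies are still valid, since a live session token may let a thief skip the login and any MFA prompt entirely. Below is a minimal sketch that assumes the cookies were dumped in the common Netscape cookies.txt layout (seven tab-separated fields ending in expiry, name and value); the file name is hypothetical.

```python
# Triage a dump of browser cookies siphoned by an infostealer: flag entries
# that have not yet expired. Assumes the common Netscape cookies.txt layout:
# domain, include_subdomains, path, secure, expiry_epoch, name, value.
import time
from datetime import datetime, timezone

def still_valid(path: str):
    now = time.time()
    live = []
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            fields = line.split("\t")
            if len(fields) != 7:
                continue
            domain, _, _, _, expiry, name, _ = fields
            try:
                expiry = int(expiry)
            except ValueError:
                continue
            if expiry == 0 or expiry > now:   # 0 = session cookie, no fixed expiry
                expires = ("session" if expiry == 0 else
                           datetime.fromtimestamp(expiry, tz=timezone.utc).date().isoformat())
                live.append((domain, name, expires))
    return live

# Hypothetical file name, for illustration only.
for domain, name, expires in still_valid("stealer_dump_cookies.txt"):
    print(f"{domain:30s} {name:25s} valid until {expires}")
```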
Quite often these infostealer infections will open a backdoor on the victim’s device that allows attackers to access the infected machine remotely. Indeed, it appears that remote access to the Paradox administrator’s compromised device was offered for sale recently.
In February 2019, Paradox.ai announced it had successfully completed audits for two fairly comprehensive security standards (ISO 27001 and SOC 2 Type II). Meanwhile, the company’s security disclosure this month says the test account with the atrocious 123456 username and password was last accessed in 2019, but was somehow missed in its annual penetration tests. So how did it manage to pass such stringent security audits with these practices in place?
Paradox.ai told KrebsOnSecurity that at the time of the 2019 audit, the company’s various contractors were not held to the same security standards the company practices internally. Paradox emphasized that this has changed, and that it has updated its security and password requirements multiple times since then.
It is unclear how the Paradox developer in Vietnam infected his computer with malware, but a closer review finds a Windows device for another Paradox.ai employee from Vietnam was compromised by similar data-stealing malware at the end of 2024 (that compromise included the victim’s GitHub credentials). In the case of both employees, the stolen credential data includes Web browser logs that indicate the victims repeatedly downloaded pirated movies and television shows, which are often bundled with malware disguised as a video codec needed to view the pirated content.
Marko Elez, a 25-year-old employee at Elon Musk’s Department of Government Efficiency (DOGE), has been granted access to sensitive databases at the U.S. Social Security Administration, the Treasury and Justice departments, and the Department of Homeland Security. So it should fill all Americans with a deep sense of confidence to learn that Mr. Elez over the weekend inadvertently published a private key that allowed anyone to interact directly with more than four dozen large language models (LLMs) developed by Musk’s artificial intelligence company xAI.
Image: Shutterstock, @sdx15.
On July 13, Mr. Elez committed a code script to GitHub called “agent.py” that included a private application programming interface (API) key for xAI. The inclusion of the private key was first flagged by GitGuardian, a company that specializes in detecting and remediating exposed secrets in public and proprietary environments. GitGuardian’s systems constantly scan GitHub and other code repositories for exposed API keys, and fire off automated alerts to affected users.
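At a basic level, scanners like GitGuardian’s work by matching newly committed text against known key formats and entropy heuristics. The minimal Python sketch below illustrates that pattern-matching idea; the regular expressions are simplified, the sample snippet and key are made up, and none of this reflects GitGuardian’s actual detection engine.

```python
# Minimal sketch of how a secret scanner flags exposed API keys in code:
# match committed text against simplified token patterns plus a crude
# entropy heuristic. Patterns here are illustrative and incomplete.
import math
import re
from collections import Counter

# Simplified, illustrative patterns (real scanners track hundreds of formats).
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token":      re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "generic_api_key":   re.compile(r"(?i)\b(?:api|secret)_?key\s*[=:]\s*['\"]([^'\"]{16,})['\"]"),
}

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character; random keys score high, prose scores low."""
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

def scan(text: str):
    findings = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            token = match.group(1) if match.groups() else match.group(0)
            findings.append((name, token, round(shannon_entropy(token), 2)))
    return findings

# Hypothetical snippet resembling a committed script with a hard-coded key.
sample = 'API_KEY = "f9Qw7Zr2Lk8Tn4Xp1Vb6Hs3Jd5Mg0Yc"  # oops'
for name, token, entropy in scan(sample):
    print(f"{name}: {token[:6]}... (entropy/char ~ {entropy})")
```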
Philippe Caturegli, “chief hacking officer” at the security consultancy Seralys, said the exposed API key allowed access to at least 52 different LLMs used by xAI. The most recent LLM in the list was called “grok-4-0709” and was created on July 9, 2025.
Grok, the generative AI chatbot developed by xAI and integrated into Twitter/X, relies on these and other LLMs (a query to Grok before publication shows Grok currently uses Grok-3, which was launched in February 2025). Earlier today, xAI announced that the Department of Defense will begin using Grok as part of a contract worth up to $200 million. The contract award came less than a week after Grok began spewing antisemitic rants and invoking Adolf Hitler.
Mr. Elez did not respond to a request for comment. The code repository containing the private xAI key was removed shortly after Caturegli notified Elez via email. However, Caturegli said the exposed API key still works and has not yet been revoked.
“If a developer can’t keep an API key private, it raises questions about how they’re handling far more sensitive government information behind closed doors,” Caturegli told KrebsOnSecurity.
Prior to joining DOGE, Marko Elez worked for a number of Musk’s companies. His DOGE career began at the Department of the Treasury, and a legal battle over DOGE’s access to Treasury databases showed Elez was sending unencrypted personal information in violation of the agency’s policies.
While still at Treasury, Elez resigned after The Wall Street Journal linked him to social media posts that advocated racism and eugenics. When Vice President J.D. Vance lobbied for Elez to be rehired, President Trump agreed and Musk reinstated him.
Since his re-hiring as a DOGE employee, Elez has been granted access to databases at one federal agency after another. TechCrunch reported in February 2025 that he was working at the Social Security Administration. In March, Business Insider found Elez was part of a DOGE detachment assigned to the Department of Labor.
Marko Elez, in a photo from a social media profile.
In April, The New York Times reported that Elez held positions at the U.S. Customs and Border Protection and the Immigration and Customs Enforcement (ICE) bureaus, as well as the Department of Homeland Security. The Washington Post later reported that Elez, while serving as a DOGE advisor at the Department of Justice, had gained access to the Executive Office for Immigration Review’s Courts and Appeals System (EACS).
Elez is not the first DOGE worker to publish internal API keys for xAI: In May, KrebsOnSecurity detailed how another DOGE employee leaked a private xAI key on GitHub for two months, exposing LLMs that were custom made for working with internal data from Musk’s companies, including SpaceX, Tesla and Twitter/X.
Caturegli said it’s difficult to trust someone with access to confidential government systems when they can’t even manage the basics of operational security.
“One leak is a mistake,” he said. “But when the same type of sensitive key gets exposed again and again, it’s not just bad luck, it’s a sign of deeper negligence and a broken security culture.”
Authorities in the United Kingdom this week arrested four people aged 17 to 20 in connection with recent data theft and extortion attacks against the retailers Marks & Spencer and Harrods, and the British food retailer Co-op Group. The breaches have been linked to a prolific but loosely-affiliated cybercrime group dubbed “Scattered Spider,” whose other recent victims include multiple airlines.
The U.K.’s National Crime Agency (NCA) declined to verify the names of those arrested, saying only that they included two males aged 19, another aged 17, and a 20-year-old female.
Scattered Spider is the name given to an English-speaking cybercrime group known for using social engineering tactics to break into companies and steal data for ransom, often impersonating employees or contractors to deceive IT help desks into granting access. The FBI warned last month that Scattered Spider had recently shifted to targeting companies in the retail and airline sectors.
KrebsOnSecurity has learned the identities of two of the suspects. Multiple sources close to the investigation said those arrested include Owen David Flowers, a U.K. man alleged to have been involved in the cyber intrusion and ransomware attack that shut down several MGM Casino properties in September 2023. Those same sources said the woman arrested is or recently was in a relationship with Flowers.
Sources told KrebsOnSecurity that Flowers, who allegedly went by the hacker handles “bo764,” “Holy,” and “Nazi,” was the group member who anonymously gave interviews to the media in the days after the MGM hack. His real name was omitted from a September 2024 story about the group because he was not yet charged in that incident.
The bigger fish arrested this week is 19-year-old Thalha Jubair, a U.K. man whose alleged exploits under various monikers have been well-documented in stories on this site. Jubair is believed to have used the nickname “Earth2Star,” which corresponds to a founding member of the cybercrime-focused Telegram channel “Star Fraud Chat.”
In 2023, KrebsOnSecurity published an investigation into the work of three different SIM-swapping groups that phished credentials from T-Mobile employees and used that access to offer a service whereby any T-Mobile phone number could be swapped to a new device. Star Chat was by far the most active and consequential of the three SIM-swapping groups, who collectively broke into T-Mobile’s network more than 100 times in the second half of 2022.
Jubair allegedly used the handles “Earth2Star” and “Star Ace,” and was a core member of a prolific SIM-swapping group operating in 2022. Star Ace posted this image to the Star Fraud chat channel on Telegram, and it lists various prices for SIM-swaps.
Sources tell KrebsOnSecurity that Jubair also was a core member of the LAPSUS$ cybercrime group that broke into dozens of technology companies in 2022, stealing source code and other internal data from tech giants including Microsoft, Nvidia, Okta, Rockstar Games, Samsung, T-Mobile, and Uber.
In April 2022, KrebsOnSecurity published internal chat records from LAPSUS$, and those chats indicated Jubair was using the nicknames Amtrak and Asyntax. At one point in the chats, Amtrak told the LAPSUS$ group leader not to share T-Mobile’s logo in images sent to the group because he’d been previously busted for SIM-swapping and his parents would suspect he was back at it again.
As shown in those chats, the leader of LAPSUS$ eventually decided to betray Amtrak by posting his real name, phone number, and other hacker handles into a public chat room on Telegram.
In March 2022, the leader of the LAPSUS$ data extortion group exposed Thalha Jubair’s name and hacker handles in a public chat room on Telegram.
That story about the leaked LAPSUS$ chats connected Amtrak/Asyntax/Jubair to the identity “Everlynn,” the founder of a cybercriminal service that sold fraudulent “emergency data requests” targeting the major social media and email providers. In such schemes, the hackers compromise email accounts tied to police departments and government agencies, and then send unauthorized demands for subscriber data while claiming the information being requested can’t wait for a court order because it relates to an urgent matter of life and death.
The roster of the now-defunct “Infinity Recursion” hacking team, from which some members of LAPSUS$ hail.
Sources say Jubair also used the nickname “Operator,” and that until recently he was the administrator of the Doxbin, a long-running and highly toxic online community that is used to “dox” or post deeply personal information on people. In May 2024, several popular cybercrime channels on Telegram ridiculed Operator after it was revealed that he’d staged his own kidnapping in a botched plan to throw off law enforcement investigators.
In November 2024, U.S. authorities charged five men aged 20 to 25 in connection with the Scattered Spider group, which has long relied on recruiting minors to carry out its most risky activities. Indeed, many of the group’s core members were recruited from online gaming platforms like Roblox and Minecraft in their early teens, and have been perfecting their social engineering tactics for years.
“There is a clear pattern that some of the most depraved threat actors first joined cybercrime gangs at an exceptionally young age,” said Allison Nixon, chief research officer at the New York based security firm Unit 221B. “Cybercriminals arrested at 15 or younger need serious intervention and monitoring to prevent a years long massive escalation.”
In May 2025, the U.S. government sanctioned a Chinese national for operating a cloud provider linked to the majority of virtual currency investment scam websites reported to the FBI. But a new report finds the accused continues to operate a slew of established accounts at American tech companies — including Facebook, GitHub, PayPal and Twitter/X.
On May 29, the U.S. Department of the Treasury announced economic sanctions against Funnull Technology Inc., a Philippines-based company alleged to provide infrastructure for hundreds of thousands of websites involved in virtual currency investment scams known as “pig butchering.” In January 2025, KrebsOnSecurity detailed how Funnull was designed as a content delivery network that catered to foreign cybercriminals seeking to route their traffic through U.S.-based cloud providers.
The Treasury also sanctioned Funnull’s alleged operator, a 40-year-old Chinese national named Liu “Steve” Lizhi. The government says Funnull directly facilitated financial schemes resulting in more than $200 million in financial losses by Americans, and that the company’s operations were linked to the majority of pig butchering scams reported to the FBI.
It is generally illegal for U.S. companies or individuals to transact with people sanctioned by the Treasury. However, as Mr. Lizhi’s case makes clear, just because someone is sanctioned doesn’t necessarily mean big tech companies are going to suspend their online accounts.
The government says Lizhi was born November 13, 1984, and used the nicknames “XXL4” and “Nice Lizhi.” Nevertheless, Steve Liu’s 17-year-old account on LinkedIn (in the name “Liulizhi”) had hundreds of followers (Lizhi’s LinkedIn profile helpfully confirms his birthday) until quite recently: The account was deleted this morning, just hours after KrebsOnSecurity sought comment from LinkedIn.
Mr. Lizhi’s LinkedIn account was suspended sometime in the last 24 hours, after KrebsOnSecurity sought comment from LinkedIn.
In an emailed response, a LinkedIn spokesperson said the company’s “Prohibited countries policy” states that LinkedIn “does not sell, license, support or otherwise make available its Premium accounts or other paid products and services to individuals and companies sanctioned by the U.S. government.” LinkedIn declined to say whether the profile in question was a premium or free account.
Mr. Lizhi also maintains a working PayPal account under the name Liu Lizhi and username “@nicelizhi,” another nickname listed in the Treasury sanctions. A 15-year-old Twitter/X account named “Lizhi” that links to Mr. Lizhi’s personal domain remains active, although it has few followers and hasn’t posted in years.
These accounts and many others were flagged by the security firm Silent Push, which has been tracking Funnull’s operations for the past year and calling out U.S. cloud providers like Amazon and Microsoft for failing to more quickly sever ties with the company.
Liu Lizhi’s PayPal account.
In a report released today, Silent Push found Lizhi still operates numerous Facebook accounts and groups, including a private Facebook account under the name Liu Lizhi. Another Facebook account clearly connected to Lizhi is a tourism page for Ganzhou, China called “EnjoyGanzhou” that was named in the Treasury Department sanctions.
“This guy is the technical administrator for the infrastructure that is hosting a majority of scams targeting people in the United States, and hundreds of millions have been lost based on the websites he’s been hosting,” said Zach Edwards, senior threat researcher at Silent Push. “It’s crazy that the vast majority of big tech companies haven’t done anything to cut ties with this guy.”
The FBI says it received nearly 150,000 complaints last year involving digital assets and $9.3 billion in losses — a 66 percent increase from the previous year. Investment scams were the top crypto-related crimes reported, with $5.8 billion in losses.
In a statement, a Meta spokesperson said the company continuously takes steps to meet its legal obligations, but that sanctions laws are complex and varied. They explained that sanctions are often targeted in nature and don’t always prohibit people from having a presence on its platform. Nevertheless, Meta confirmed it had removed the account, unpublished Pages, and removed Groups and events associated with the user for violating its policies.
Attempts to reach Mr. Lizhi via his primary email addresses at Hotmail and Gmail bounced as undeliverable. Likewise, his 14-year-old YouTube channel appears to have been taken down recently.
However, anyone interested in viewing or using Mr. Lizhi’s 146 computer code repositories will have no problem finding GitHub accounts for him, including one registered under the NiceLizhi and XXL4 nicknames mentioned in the Treasury sanctions.
One of multiple GitHub profiles used by Liu “Steve” Lizhi, who uses the nickname XXL4 (a moniker listed in the Treasury sanctions for Mr. Lizhi).
Mr. Lizhi also operates a GitHub page for an open source e-commerce platform called NexaMerchant, which advertises itself as a payment gateway working with numerous American financial institutions. Interestingly, this profile’s “followers” page shows several other accounts that appear to be Mr. Lizhi’s. All of the account’s followers are tagged as “suspended,” even though that suspended message does not display when one visits those individual profiles.
In response to questions, GitHub said it has a process in place to identify when users and customers are Specially Designated Nationals or other denied or blocked parties, but that it locks those accounts instead of removing them. According to its policy, GitHub takes care that users and customers aren’t impacted beyond what is required by law.
All of the follower accounts for the XXL4 GitHub account appear to be Mr. Lizhi’s, and have been suspended by GitHub, but their code is still accessible.
“This includes keeping public repositories, including those for open source projects, available and accessible to support personal communications involving developers in sanctioned regions,” the policy states. “This also means GitHub will advocate for developers in sanctioned regions to enjoy greater access to the platform and full access to the global open source community.”
Edwards said it’s great that GitHub has a process for handling sanctioned accounts, but that the process doesn’t seem to communicate risk in a transparent way, noting that the only indicator on the locked accounts is the message, “This repository has been archived by the owner. It is now read-only.”
“It’s an odd message that doesn’t communicate, ‘This is a sanctioned entity, don’t fork this code or use it in a production environment’,” Edwards said.
Mark Rasch is a former federal cybercrime prosecutor who now serves as counsel for the New York City-based security consulting firm Unit 221B. Rasch said when Treasury’s Office of Foreign Assets Control (OFAC) sanctions a person or entity, it then becomes illegal for businesses or organizations to transact with the sanctioned party.
Rasch said financial institutions have very mature systems for severing accounts tied to people who become subject to OFAC sanctions, but that tech companies may be far less proactive — particularly with free accounts.
“Banks have established ways of checking [U.S. government sanctions lists] for sanctioned entities, but tech companies don’t necessarily do a good job with that, especially for services that you can just click and sign up for,” Rasch said. “It’s potentially a risk and liability for the tech companies involved, but only to the extent OFAC is willing to enforce it.”
Liu Lizhi operates numerous Facebook accounts and groups, including this one for an entity specified in the OFAC sanctions: The “Enjoy Ganzhou” tourism page for Ganzhou, China. Image: Silent Push.
In July 2024, Funnull purchased the domain polyfill[.]io, the longtime home of a legitimate open source project that allowed websites to ensure that devices using legacy browsers could still render content in newer formats. After the Polyfill domain changed hands, at least 384,000 websites were caught in a supply-chain attack that redirected visitors to malicious sites. According to the Treasury, Funnull used the code to redirect people to scam websites and online gambling sites, some of which were linked to Chinese criminal money laundering operations.
The U.S. government says Funnull provides domain names for websites on its purchased IP addresses, using domain generation algorithms (DGAs) — programs that generate large numbers of similar but unique names for websites — and that it sells web design templates to cybercriminals.
“These services not only make it easier for cybercriminals to impersonate trusted brands when creating scam websites, but also allow them to quickly change to different domain names and IP addresses when legitimate providers attempt to take the websites down,” reads a Treasury statement.
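For readers who want a concrete sense of how a DGA works, here is a minimal, purely illustrative sketch in Python. It is not Funnull’s code; the seed, the hashing scheme and the .top TLD are all invented for the example. The core idea is simply that anyone who knows the seed can regenerate the same throwaway hostnames for any given day.

```python
import hashlib
from datetime import date, timedelta

def generate_domains(seed: str, day: date, count: int = 5, tld: str = ".top") -> list[str]:
    """Toy DGA: derive disposable hostnames from a shared seed plus a date,
    so that both the operator and the scam pages can compute the same list."""
    domains = []
    for i in range(count):
        material = f"{seed}|{day.isoformat()}|{i}".encode()
        digest = hashlib.sha256(material).hexdigest()
        domains.append(digest[:12] + tld)  # e.g. "a3f9c0d2b1e4.top"
    return domains

if __name__ == "__main__":
    today = date(2025, 6, 1)
    print(generate_domains("hypothetical-seed", today))
    # The next day, the very same seed yields an entirely different set of names.
    print(generate_domains("hypothetical-seed", today + timedelta(days=1)))
```

Because the operator and the scam sites can independently compute the day’s list, burning any individual domain buys defenders only a short reprieve.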
Meanwhile, Funnull appears to be morphing nearly all aspects of its business in the wake of the sanctions, Edwards said.
“Whereas before they might have used 60 DGA domains to hide and bounce their traffic, we’re seeing far more now,” he said. “They’re trying to make their infrastructure harder to track and more complicated, so for now they’re not going away but more just changing what they’re doing. And a lot more organizations should be holding their feet to the fire.”
Update, 2:48 PM ET: Added response from Meta, which confirmed it has closed the accounts and groups connected to Mr. Lizhi.
Update, July 7, 6:56 p.m. ET: In a written statement, PayPal said it continually works to combat and prevent the illicit use of its services.
“We devote significant resources globally to financial crime compliance, and we proactively refer cases to and assist law enforcement officials around the world in their efforts to identify, investigate and stop illegal activity,” the statement reads.
Agents with the Federal Bureau of Investigation (FBI) briefed Capitol Hill staff recently on hardening the security of their mobile devices, after a contacts list stolen from the personal phone of White House Chief of Staff Susie Wiles was reportedly used to fuel a series of text messages and phone calls impersonating her to U.S. lawmakers. But in a letter this week to the FBI, one of the Senate’s most tech-savvy lawmakers says the feds aren’t doing enough to recommend more appropriate security protections that are already built into most consumer mobile devices.
A screenshot of the first page from Sen. Wyden’s letter to FBI Director Kash Patel.
On May 29, The Wall Street Journal reported that federal authorities were investigating a clandestine effort to impersonate Ms. Wiles via text messages and in phone calls that may have used AI to spoof her voice. According to The Journal, Wiles told associates her cellphone contacts were hacked, giving the impersonator access to the private phone numbers of some of the country’s most influential people.
The execution of this phishing and impersonation campaign — whatever its goals may have been — suggested the attackers were financially motivated, and not particularly sophisticated.
“It became clear to some of the lawmakers that the requests were suspicious when the impersonator began asking questions about Trump that Wiles should have known the answers to—and in one case, when the impersonator asked for a cash transfer, some of the people said,” the Journal wrote. “In many cases, the impersonator’s grammar was broken and the messages were more formal than the way Wiles typically communicates, people who have received the messages said. The calls and text messages also didn’t come from Wiles’s phone number.”
Sophisticated or not, the impersonation campaign was soon punctuated by the murder of Minnesota House of Representatives Speaker Emerita Melissa Hortman and her husband, and the shooting of Minnesota State Senator John Hoffman and his wife. So when FBI agents offered in mid-June to brief U.S. Senate staff on mobile threats, more than 140 staffers took them up on that invitation (a remarkably high number considering that no food was offered at the event).
But according to Sen. Ron Wyden (D-Ore.), the advice the FBI provided to Senate staffers was largely limited to remedial tips, such as not clicking on suspicious links or attachments, not using public Wi-Fi networks, turning off Bluetooth, keeping phone software up to date, and rebooting regularly.
“This is insufficient to protect Senate employees and other high-value targets against foreign spies using advanced cyber tools,” Wyden wrote in a letter sent today to FBI Director Kash Patel. “Well-funded foreign intelligence agencies do not have to rely on phishing messages and malicious attachments to infect unsuspecting victims with spyware. Cyber mercenary companies sell their government customers advanced ‘zero-click’ capabilities to deliver spyware that do not require any action by the victim.”
Wyden stressed that to help counter sophisticated attacks, the FBI should be encouraging lawmakers and their staff to enable anti-spyware defenses that are built into Apple’s iOS and Google’s Android phone software.
These include Apple’s Lockdown Mode, which is designed for users who are worried they may be subject to targeted attacks. Lockdown Mode restricts non-essential iOS features to reduce the device’s overall attack surface. Google Android devices carry a similar feature called Advanced Protection Mode.
Wyden also urged the FBI to update its training to recommend a number of other steps that people can take to make their mobile devices less trackable, including the use of ad blockers to guard against malicious advertisements, disabling ad tracking IDs in mobile devices, and opting out of commercial data brokers (the suspect charged in the Minnesota shootings reportedly used multiple people-search services to find the home addresses of his targets).
The senator’s letter notes that while the FBI has recommended all of the above precautions in various advisories issued over the years, the advice the agency is giving now to the nation’s leaders needs to be more comprehensive, actionable and urgent.
“In spite of the seriousness of the threat, the FBI has yet to provide effective defensive guidance,” Wyden said.
Nicholas Weaver is a researcher with the International Computer Science Institute, a nonprofit in Berkeley, Calif. Weaver said Lockdown Mode or Advanced Protection will mitigate many vulnerabilities, and should be the default setting for all members of Congress and their staff.
“Lawmakers are at exceptional risk and need to be exceptionally protected,” Weaver said. “Their computers should be locked down and well administered, etc. And the same applies to staffers.”
Weaver noted that Apple’s Lockdown Mode has a track record of blocking zero-day attacks on iOS applications; in September 2023, Citizen Lab documented how Lockdown Mode foiled a zero-click flaw capable of installing spyware on iOS devices without any interaction from the victim.
Earlier this month, Citizen Lab researchers documented a zero-click attack used to infect the iOS devices of two journalists with Paragon’s Graphite spyware. The vulnerability could be exploited merely by sending the target a booby-trapped media file delivered via iMessage. Apple also recently updated its advisory for the zero-click flaw (CVE-2025-43200), noting that it was mitigated as of iOS 18.3.1, which was released in February 2025.
Apple has not commented on whether CVE-2025-43200 could be exploited on devices with Lockdown Mode turned on. But HelpNetSecurity observed that at the same time Apple addressed CVE-2025-43200 back in February, the company fixed another vulnerability flagged by Citizen Lab researcher Bill Marczak: CVE-2025-24200, which Apple said was used in an extremely sophisticated physical attack against specific targeted individuals that allowed attackers to disable USB Restricted Mode on a locked device.
In other words, the flaw could apparently be exploited only if the attacker had physical access to the targeted vulnerable device. And as the old infosec industry adage goes, if an adversary has physical access to your device, it’s most likely not your device anymore.
I can’t speak to Google’s Advanced Protection Mode personally, because I don’t use Google or Android devices. But I have had Apple’s Lockdown Mode enabled on all of my Apple devices since it was first made available in September 2022. I can only think of a single occasion when one of my apps failed to work properly with Lockdown Mode turned on, and in that case I was able to add a temporary exception for that app in Lockdown Mode’s settings.
My main gripe with Lockdown Mode was captured in a March 2025 column by TechCrunch’s Lorenzo Franceschi-Bicchierai, who wrote about its penchant for periodically sending mystifying notifications that someone has been blocked from contacting you, even though nothing then prevents you from contacting that person directly. This has happened to me at least twice, and in both cases the person in question was already an approved contact, and said they had not attempted to reach out.
Although it would be nice if Apple’s Lockdown Mode sent fewer, less alarming and more informative alerts, the occasional baffling warning message is hardly enough to make me turn it off.
Late last year, security researchers made a startling discovery: Kremlin-backed disinformation campaigns were bypassing moderation on social media platforms by leveraging the same malicious advertising technology that powers a sprawling ecosystem of online hucksters and website hackers. A new report on the fallout from that investigation finds this dark ad tech industry is far more resilient and incestuous than previously known.
Image: Infoblox.
In November 2024, researchers at the security firm Qurium published an investigation into “Doppelganger,” a disinformation network that promotes pro-Russian narratives and infiltrates Europe’s media landscape by pushing fake news through a network of cloned websites.
Doppelganger campaigns use specialized links that bounce the visitor’s browser through a long series of domains before the fake news content is served. Qurium found Doppelganger relies on a sophisticated “domain cloaking” service, a technology that allows websites to present different content to search engines compared to what regular visitors see. The use of cloaking services helps the disinformation sites remain online longer than they otherwise would, while ensuring that only the targeted audience gets to view the intended content.
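To make the cloaking concept concrete, here is a bare-bones, hypothetical sketch in Python: the server inspects the visitor’s User-Agent and shows anything that looks like a crawler or scanner a bland page, while bouncing everyone else onward (the “next-hop.example” address is a placeholder for the first stop in a redirect chain). Real cloaking services are far more elaborate, fingerprinting IP reputation, headers, TLS details and geolocation, but the basic trick is the same.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Crude stand-ins for "this visitor looks like a crawler or security scanner."
CRAWLER_HINTS = ("googlebot", "bingbot", "curl", "python-requests")

class CloakingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "").lower()
        if any(hint in ua for hint in CRAWLER_HINTS):
            # Crawlers and scanners see a harmless page.
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b"<html><body>Nothing to see here.</body></html>")
        else:
            # Ordinary visitors are bounced onward, the first hop in a redirect chain.
            self.send_response(302)
            self.send_header("Location", "https://next-hop.example/landing")
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), CloakingHandler).serve_forever()
```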
Qurium discovered that Doppelganger’s cloaking service also promoted online dating sites, and shared much of the same infrastructure with VexTrio, which is thought to be the oldest malicious traffic distribution system (TDS) in existence. While TDSs are commonly used by legitimate advertising networks to manage traffic from disparate sources and to track who or what is behind each click, VexTrio’s TDS largely manages web traffic from victims of phishing, malware, and social engineering scams.
Digging deeper, Qurium noticed Doppelganger’s cloaking service used an Internet provider in Switzerland as the first entry point in a chain of domain redirections. They also noticed the same infrastructure hosted a pair of co-branded affiliate marketing services that were driving traffic to sketchy adult dating sites: LosPollos[.]com and TacoLoco[.]co.
The LosPollos ad network incorporates many elements and references from the hit series “Breaking Bad,” mirroring the fictional “Los Pollos Hermanos” restaurant chain that served as a money laundering operation for a violent methamphetamine cartel.
The LosPollos advertising network invokes characters and themes from the hit show Breaking Bad. The logo for LosPollos (upper left) is the image of Gustavo Fring, the fictional chicken restaurant chain owner in the show.
Affiliates who sign up with LosPollos are given JavaScript-heavy “smartlinks” that drive traffic into the VexTrio TDS, which in turn distributes the traffic among a variety of advertising partners, including dating services, sweepstakes offers, bait-and-switch mobile apps, financial scams and malware download sites.
LosPollos affiliates typically stitch these smartlinks into WordPress websites that have been hacked via known vulnerabilities, and those affiliates will earn a small commission each time an Internet user referred by any of their hacked sites falls for one of these lures.
The Los Pollos advertising network promoting itself on LinkedIn.
According to Qurium, TacoLoco is a traffic monetization network that uses deceptive tactics to trick Internet users into enabling “push notifications,” a cross-platform browser standard that allows websites to show pop-up messages which appear outside of the browser. For example, on Microsoft Windows systems these notifications typically show up in the bottom right corner of the screen — just above the system clock.
In the case of VexTrio and TacoLoco, the notification approval requests themselves are deceptive — disguised as “CAPTCHA” challenges designed to distinguish automated bot traffic from real visitors. For years, VexTrio and its partners have successfully tricked countless users into enabling these site notifications, which are then used to continuously pepper the victim’s device with a variety of phony virus alerts and misleading pop-up messages.
Examples of VexTrio landing pages that lead users to accept push notifications on their device.
According to a December 2024 annual report from GoDaddy, nearly 40 percent of compromised websites in 2024 redirected visitors to VexTrio via LosPollos smartlinks.
On November 14, 2024, Qurium published research to support its findings that LosPollos and TacoLoco were services operated by Adspro Group, a company registered in the Czech Republic and Russia, and that Adspro runs its infrastructure at the Swiss hosting providers C41 and Teknology SA.
Qurium noted the LosPollos and TacoLoco sites state that their content is copyrighted by ByteCore AG and SkyForge Digital AG, both Swiss firms that are run by the owner of Teknology SA, Giulio Vitorrio Leonardo Cerutti. Further investigation revealed LosPollos and TacoLoco were apps developed by a company called Holacode, which lists Cerutti as its CEO.
The apps marketed by Holacode include numerous VPN services, as well as one called Spamshield that claims to stop unwanted push notifications. But in January, Infoblox said it tested the app on its own mobile devices and found that Spamshield merely hides the user’s notifications, then stops hiding them after 24 hours and demands payment. Spamshield subsequently changed its developer name from Holacode to ApLabz, although Infoblox noted that the terms of service for several of the rebranded ApLabz apps still referenced Holacode.
Incredibly, Cerutti threatened to sue me for defamation before I’d even uttered his name or sent him a request for comment (Cerutti sent the unsolicited legal threat back in January after his company and my name were merely tagged in an Infoblox post on LinkedIn about VexTrio).
Asked to comment on the findings by Qurium and Infoblox, Cerutti vehemently denied being associated with VexTrio. Cerutti asserted that his companies all strictly adhere to the regulations of the countries in which they operate, and that they have been completely transparent about all of their operations.
“We are a group operating in the advertising and marketing space, with an affiliate network program,” Cerutti responded. “I am not [going] to say we are perfect, but I strongly declare we have no connection with VexTrio at all.”
“Unfortunately, as a big player in this space we also get to deal with plenty of publisher fraud, sketchy traffic, fake clicks, bots, hacked, listed and resold publisher accounts, etc, etc.,” Cerutti continued. “We bleed lots of money to such malpractices and conduct regular internal screenings and audits in a constant battle to remove bad traffic sources. It is also a highly competitive space, where some upstarts will often play dirty against more established mainstream players like us.”
Working with Qurium, researchers at the security firm Infoblox released details about VexTrio’s infrastructure to their industry partners. Just four days after Qurium published its findings, LosPollos announced it was suspending its push monetization service. Less than a month later, Adspro had rebranded to Aimed Global.
A mind map illustrating some of the key findings and connections in the Infoblox and Qurium investigations. Click to enlarge.
In March 2025, researchers at GoDaddy chronicled how DollyWay — a malware strain that has consistently redirected victims to VexTrio throughout its eight years of activity — suddenly stopped doing that on November 20, 2024. Virtually overnight, DollyWay and several other malware families that had previously used VexTrio began pushing their traffic through another TDS called Help TDS.
Digging further into historical DNS records and the unique code scripts used by the Help TDS, Infoblox determined it has long enjoyed an exclusive relationship with VexTrio (at least until LosPollos ended its push monetization service in November).
In a report released today, Infoblox said an exhaustive analysis of the JavaScript code, website lures, smartlinks and DNS patterns used by VexTrio and Help TDS linked them with at least four other TDS operators (not counting TacoLoco). Those four entities — Partners House, BroPush, RichAds and RexPush — are all Russia-based push monetization programs that pay affiliates to drive signups for a variety of schemes, but mostly online dating services.
“As Los Pollos push monetization ended, we’ve seen an increase in fake CAPTCHAs that drive user acceptance of push notifications, particularly from Partners House,” the Infoblox report reads. “The relationship of these commercial entities remains a mystery; while they are certainly long-time partners redirecting traffic to one another, and they all have a Russian nexus, there is no overt common ownership.”
Renee Burton, vice president of threat intelligence at Infoblox, said the security industry generally treats the deceptive methods used by VexTrio and other malicious TDSs as a kind of legally grey area that is mostly associated with less dangerous security threats, such as adware and scareware.
But Burton argues that this view is myopic, and helps perpetuate a dark adtech industry that also pushes plenty of straight-up malware, noting that hundreds of thousands of compromised websites around the world every year redirect victims to the tangled web of VexTrio and VexTrio-affiliate TDSs.
“These TDSs are a nefarious threat, because they’re the ones you can connect to the delivery of things like information stealers and scams that cost consumers billions of dollars a year,” Burton said. “From a larger strategic perspective, my takeaway is that Russian organized crime has control of malicious adtech, and these are just some of the many groups involved.”
As KrebsOnSecurity warned way back in 2020, it’s a good idea to be very sparing in approving notifications when browsing the Web. In many cases these notifications are benign, but as we’ve seen there are numerous dodgy firms that are paying site owners to install their notification scripts, and then reselling that communications pathway to scammers and online hucksters.
If you’d like to prevent sites from ever presenting notification requests, all of the major browser makers let you do this — either across the board or on a per-website basis. While it is true that blocking notifications entirely can break the functionality of some websites, doing this for any devices you manage on behalf of your less tech-savvy friends or family members might end up saving everyone a lot of headache down the road.
To modify site notification settings in Mozilla Firefox, navigate to Settings, Privacy & Security, Permissions, and click the “Settings” tab next to “Notifications.” That page will display any notifications already permitted and allow you to edit or delete any entries. Tick the box next to “Block new requests asking to allow notifications” to stop them altogether.
In Google Chrome, click the icon with the three dots to the right of the address bar, scroll all the way down to Settings, Privacy and Security, Site Settings, and Notifications. Select the “Don’t allow sites to send notifications” button if you want to banish notification requests forever.
In Apple’s Safari browser, go to Settings, Websites, and click on Notifications in the sidebar. Uncheck the option to “allow websites to ask for permission to send notifications” if you wish to turn off notification requests entirely.
Microsoft today released security updates to fix at least 67 vulnerabilities in its Windows operating systems and software. Redmond warns that one of the flaws is already under active attack, and that software blueprints showing how to exploit a pervasive Windows bug patched this month are now public.
The sole zero-day flaw this month is CVE-2025-33053, a remote code execution flaw in the Windows implementation of WebDAV — an HTTP extension that lets users remotely manage files and directories on a server. While WebDAV isn’t enabled by default in Windows, its presence in legacy or specialized systems still makes it a relevant target, said Seth Hoyt, senior security engineer at Automox.
Adam Barnett, lead software engineer at Rapid7, said Microsoft’s advisory for CVE-2025-33053 does not mention that the Windows implementation of WebDAV has been listed as deprecated since November 2023, which in practical terms means that the WebClient service no longer starts by default.
“The advisory also has attack complexity as low, which means that exploitation does not require preparation of the target environment in any way that is beyond the attacker’s control,” Barnett said. “Exploitation relies on the user clicking a malicious link. It’s not clear how an asset would be immediately vulnerable if the service isn’t running, but all versions of Windows receive a patch, including those released since the deprecation of WebClient, like Server 2025 and Windows 11 24H2.”
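Since the practical exposure turns on whether the WebClient service is actually present and running, administrators may want to check that directly. Below is a minimal sketch, assuming a Windows host where the WebDAV client runs as the standard “WebClient” service; it simply shells out to the built-in sc utility and reports the service state.

```python
import subprocess

def webclient_state() -> str:
    """Report the state of the Windows WebClient (WebDAV) service via 'sc query'."""
    try:
        result = subprocess.run(["sc", "query", "WebClient"],
                                capture_output=True, text=True)
    except FileNotFoundError:
        return "The 'sc' utility was not found (this check only works on Windows)."
    if result.returncode != 0:
        return "WebClient service not found; the WebDAV client is likely not installed."
    for line in result.stdout.splitlines():
        if "STATE" in line:
            return line.strip()  # e.g. "STATE : 1  STOPPED" or "STATE : 4  RUNNING"
    return "Service found, but no STATE line was reported."

if __name__ == "__main__":
    print(webclient_state())
```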
Microsoft warns that an “elevation of privilege” vulnerability in the Windows Server Message Block (SMB) client (CVE-2025-33073) is likely to be exploited, given that proof-of-concept code for this bug is now public. CVE-2025-33073 has a CVSS risk score of 8.8 (out of 10), and exploitation of the flaw leads to the attacker gaining “SYSTEM” level control over a vulnerable PC.
“What makes this especially dangerous is that no further user interaction is required after the initial connection—something attackers can often trigger without the user realizing it,” said Alex Vovk, co-founder and CEO of Action1. “Given the high privilege level and ease of exploitation, this flaw poses a significant risk to Windows environments. The scope of affected systems is extensive, as SMB is a core Windows protocol used for file and printer sharing and inter-process communication.”
Beyond these highlights, 10 of the vulnerabilities fixed this month were rated “critical” by Microsoft, including eight remote code execution flaws.
Notably absent from this month’s patch batch is a fix for a newly discovered weakness in Windows Server 2025 that allows attackers to act with the privileges of any user in Active Directory. The bug, dubbed “BadSuccessor,” was publicly disclosed by researchers at Akamai on May 21, and several public proof-of-concepts are now available. Tenable’s Satnam Narang said organizations that have at least one Windows Server 2025 domain controller should review permissions for principals and limit those permissions as much as possible.
Adobe has released updates for Acrobat Reader and six other products addressing at least 259 vulnerabilities, most of them in an update for Experience Manager. Mozilla Firefox and Google Chrome both recently released security updates that require a restart of the browser to take effect. The latest Chrome update fixes two zero-day flaws in the browser (CVE-2025-5419 and CVE-2025-4664).
For a detailed breakdown on the individual security updates released by Microsoft today, check out the Patch Tuesday roundup from the SANS Internet Storm Center. Action1 has a breakdown of patches from Microsoft and a raft of other software vendors releasing fixes this month. As always, please back up your system and/or data before patching, and feel free to drop a note in the comments if you run into any problems applying these updates.
Image: Mark Rademaker, via Shutterstock.
Ukraine has seen nearly one-fifth of its Internet space come under Russian control or sold to Internet address brokers since February 2022, a new study finds. The analysis indicates large chunks of Ukrainian Internet address space are now in the hands of shadowy proxy and anonymity services that are nested at some of America’s largest Internet service providers (ISPs).
The findings come in a report examining how the Russian invasion has affected Ukraine’s domestic supply of Internet Protocol Version 4 (IPv4) addresses. Researchers at Kentik, a company that measures the performance of Internet networks, found that while a majority of ISPs in Ukraine haven’t changed their infrastructure much since the war began in 2022, others have resorted to selling swathes of their valuable IPv4 address space just to keep the lights on.
For example, Ukraine’s incumbent ISP Ukrtelecom is now routing just 29 percent of the IPv4 address ranges that the company controlled at the start of the war, Kentik found. Although much of that former IP space remains dormant, Ukrtelecom told Kentik’s Doug Madory they were forced to sell many of their address blocks “to secure financial stability and continue delivering essential services.”
“Leasing out a portion of our IPv4 resources allowed us to mitigate some of the extraordinary challenges we have been facing since the full-scale invasion began,” Ukrtelecom told Madory.
Madory found much of the IPv4 space previously allocated to Ukrtelecom is now scattered to more than 100 providers globally, particularly at three large American ISPs — Amazon (AS16509), AT&T (AS7018), and Cogent (AS174).
Another Ukrainian Internet provider, LVS (AS43310), was routing approximately 6,000 IPv4 addresses across the nation in 2022. Kentik learned that by November 2022, much of that address space had been parceled out to over a dozen different locations, with the bulk of it being announced at AT&T.
IP address space routed over time by Ukrainian provider LVS (AS43310), showing a large chunk now being routed by AT&T (AS7018). Image: Kentik.
Ditto for the Ukrainian ISP TVCOM, which currently routes nearly 15,000 fewer IPv4 addresses than it did at the start of the war. Madory said most of those addresses have been scattered to 37 other networks outside of Eastern Europe, including Amazon, AT&T, and Microsoft.
The Ukrainian ISP Trinity (AS43554) went offline in early March 2022 during the bloody siege of Mariupol, but its address space eventually began showing up in more than 50 different networks worldwide. Madory found more than 1,000 of Trinity’s IPv4 addresses suddenly appeared on AT&T’s network.
Why are all these former Ukrainian IP addresses being routed by U.S.-based networks like AT&T? According to spur.us, a company that tracks VPN and proxy services, nearly all of the address ranges identified by Kentik now map to commercial proxy services that allow customers to anonymously route their Internet traffic through someone else’s computer.
From a website’s perspective, the traffic from a proxy network user appears to originate from the rented IP address, not from the proxy service customer. These services can be used for several business purposes, such as price comparisons, sales intelligence, web crawlers and content-scraping bots. However, proxy services also are massively abused for hiding cybercrime activity because they can make it difficult to trace malicious traffic to its original source.
IPv4 address ranges are always in high demand, which means they are also quite valuable. There are now multiple companies that will pay ISPs to lease out their unwanted or unused IPv4 address space. Madory said these IPv4 brokers will pay between $100 and $500 per month to lease a block of 256 IPv4 addresses, and very often the entities most willing to pay those rental rates are proxy and VPN providers.
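To put those rental rates in perspective, here is the back-of-the-envelope math, using only the figures Madory cited:

```python
BLOCK_SIZE = 256                # a /24 network holds 256 IPv4 addresses
MONTHLY_RATES = (100, 500)      # USD per /24 per month, per Madory

for rate in MONTHLY_RATES:
    per_ip = rate / BLOCK_SIZE
    print(f"${rate}/month for a /24 works out to about ${per_ip:.2f} per address")
# Output: roughly $0.39 to $1.95 per IPv4 address per month.
```

At those prices, even a modest pile of idle address space becomes a meaningful revenue stream for a cash-strapped ISP.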
A cursory review of all Internet address blocks currently routed through AT&T — as seen in public records maintained by the Internet backbone provider Hurricane Electric — shows a preponderance of country flags other than the United States, including networks originating in Hungary, Lithuania, Moldova, Mauritius, Palestine, Seychelles, Slovenia, and Ukraine.
AT&T’s IPv4 address space seems to be routing a great deal of proxy traffic, including a large number of IP address ranges that were until recently routed by ISPs in Ukraine.
Asked about the apparent high incidence of proxy services routing foreign address blocks through AT&T, the telecommunications giant said it recently changed its policy about originating routes for network blocks that are not owned and managed by AT&T. That new policy, spelled out in a February 2025 update to AT&T’s terms of service, gives those customers until Sept. 1, 2025 to originate their own IP space from their own autonomous system number (ASN), a unique number assigned to each ISP (AT&T’s is AS7018).
“To ensure our customers receive the best quality of service, we changed our terms for dedicated internet in February 2025,” an AT&T spokesperson said in an emailed reply. “We no longer permit static routes with IP addresses that we have not provided. We have been in the process of identifying and notifying affected customers that they have 90 days to transition to Border Gateway Protocol routing using their own autonomous system number.”
Ironically, the co-mingling of Ukrainian IP address space with proxy providers has resulted in many of these addresses being used in cyberattacks against Ukraine and other enemies of Russia. Earlier this month, the European Union sanctioned Stark Industries Solutions Inc., an ISP that surfaced two weeks before the Russian invasion and quickly became the source of large-scale DDoS attacks and spear-phishing attempts by Russian state-sponsored hacking groups. A deep dive into Stark’s considerable address space showed some of it was sourced from Ukrainian ISPs, and most of it was connected to Russia-based proxy and anonymity services.
According to Spur, the proxy service IPRoyal is the current beneficiary of IP address blocks from several Ukrainian ISPs profiled in Kentik’s report. Customers can choose proxies by specifying the city and country they would like to proxy their traffic through. Image: Trend Micro.
Spur’s Chief Technology Officer Riley Kilmer said AT&T’s policy change will likely force many proxy services to migrate to other U.S. providers that have less stringent policies.
“AT&T is the first one of the big ISPs that seems to be actually doing something about this,” Kilmer said. “We track several services that explicitly sell AT&T IP addresses, and it will be very interesting to see what happens to those services come September.”
Still, Kilmer said, there are several other large U.S. ISPs that continue to make it easy for proxy services to bring their own IP addresses and host them in ranges that give the appearance of residential customers. For example, Kentik’s report identified former Ukrainian IP ranges showing up as proxy services routed by Cogent Communications (AS174), a tier-one Internet backbone provider based in Washington, D.C.
Kilmer said Cogent has become an attractive home base for proxy services because it is relatively easy to get Cogent to route an address block.
“In fairness, they transit a lot of traffic,” Kilmer said of Cogent. “But there’s a reason a lot of this proxy stuff shows up as Cogent: Because it’s super easy to get something routed there.”
Cogent declined a request to comment on Kentik’s findings.
Authorities in Pakistan have arrested 21 individuals accused of operating “Heartsender,” a once-popular spam and malware dissemination service that operated for more than a decade. The main clientele for Heartsender were organized crime groups that tried to trick victim companies into making payments to a third party, and its alleged proprietors were publicly identified by KrebsOnSecurity in 2021 after they inadvertently infected their computers with malware.
Some of the core developers and sellers of Heartsender posing at a work outing in 2021. WeCodeSolutions boss Rameez Shahzad (in sunglasses) is in the center of this group photo, which was posted by employee Burhan Ul Haq, pictured just to the right of Shahzad.
A report from the Pakistani media outlet Dawn states that authorities there arrested 21 people alleged to have operated Heartsender, a spam delivery service whose homepage openly advertised phishing kits targeting users of various Internet companies, including Microsoft 365, Yahoo, AOL, Intuit, iCloud and ID.me. Pakistan’s National Cyber Crime Investigation Agency (NCCIA) reportedly conducted raids in Lahore’s Bahria Town and Multan on May 15 and 16.
The NCCIA told reporters the group’s tools were connected to more than $50 million in losses in the United States alone, with European authorities investigating 63 additional cases.
“This wasn’t just a scam operation – it was essentially a cybercrime university that empowered fraudsters globally,” NCCIA Director Abdul Ghaffar said at a press briefing.
In January 2025, the FBI and the Dutch Police seized the technical infrastructure for the cybercrime service, which was marketed under the brands Heartsender, Fudpage and Fudtools (and many other “fud” variations). The “fud” bit stands for “Fully Un-Detectable,” and it refers to cybercrime resources that will evade detection by security tools like antivirus software or anti-spam appliances.
The FBI says transnational organized crime groups that purchased these services primarily used them to run business email compromise (BEC) schemes, wherein the cybercrime actors tricked victim companies into making payments to a third party.
Dawn reported that those arrested included Rameez Shahzad, the alleged ringleader of the Heartsender cybercrime business, which most recently operated under the Pakistani front company WeCodeSolutions. Mr. Shahzad was named and pictured in a 2021 KrebsOnSecurity story about a series of remarkable operational security mistakes that exposed the group’s identities, including Facebook pages showing employees posing for group photos and socializing at work-related outings.
Prior to folding their operations behind WeCodeSolutions, Shahzad and others arrested this month operated as a web hosting group calling itself The Manipulaters. KrebsOnSecurity first wrote about The Manipulaters in May 2015, mainly because their ads at the time were blanketing a number of popular cybercrime forums, and because they were fairly open and brazen about what they were doing — even who they were in real life.
Sometime in 2019, The Manipulaters failed to renew their core domain name — manipulaters[.]com — the same one tied to so many of the company’s business operations. That domain was quickly scooped up by Scylla Intel, a cyber intelligence firm that specializes in connecting cybercriminals to their real-life identities. Soon after, Scylla started receiving large amounts of email correspondence intended for the group’s owners.
In 2024, DomainTools.com found the web-hosted version of Heartsender leaked an extraordinary amount of user information to unauthenticated users, including customer credentials and email records from Heartsender employees. DomainTools says the malware infections on Manipulaters PCs exposed “vast swaths of account-related data along with an outline of the group’s membership, operations, and position in the broader underground economy.”
Shahzad allegedly used the alias “Saim Raza,” an identity which has contacted KrebsOnSecurity multiple times over the past decade with demands to remove stories published about the group. The Saim Raza identity most recently contacted this author in November 2024, asserting they had quit the cybercrime industry and turned over a new leaf after a brush with the Pakistani police.
The arrested suspects include Rameez Shahzad, Muhammad Aslam (Rameez’s father), Atif Hussain, Muhammad Umar Irshad, Yasir Ali, Syed Saim Ali Shah, Muhammad Nowsherwan, Burhanul Haq, Adnan Munawar, Abdul Moiz, Hussnain Haider, Bilal Ahmad, Dilbar Hussain, Muhammad Adeel Akram, Awais Rasool, Usama Farooq, Usama Mehmood and Hamad Nawaz.
The U.S. government today unsealed criminal charges against 16 individuals accused of operating and selling DanaBot, a prolific strain of information-stealing malware that has been sold on Russian cybercrime forums since 2018. The FBI says a newer version of DanaBot was used for espionage, and that many of the defendants exposed their real-life identities after accidentally infecting their own systems with the malware.
DanaBot’s features, as promoted on its support site. Image: welivesecurity.com.
Initially spotted in May 2018 by researchers at the email security firm Proofpoint, DanaBot is a malware-as-a-service platform that specializes in credential theft and banking fraud.
Today, the U.S. Department of Justice unsealed a criminal complaint and indictment from 2022, which said the FBI identified at least 40 affiliates who were paying between $3,000 and $4,000 a month for access to the information stealer platform.
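Those subscription numbers give a rough sense of the scale of the business. The quick calculation below simply multiplies the FBI’s affiliate count by the advertised monthly price range; it is an illustration, not a figure from the charging documents.

```python
AFFILIATES = 40                    # "at least 40," according to the FBI
MONTHLY_PRICE = (3_000, 4_000)     # USD per affiliate per month

low, high = (AFFILIATES * price for price in MONTHLY_PRICE)
print(f"Rough gross affiliate revenue: ${low:,} to ${high:,} per month")
# -> $120,000 to $160,000 per month, before any other income streams.
```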
The government says the malware infected more than 300,000 systems globally, causing estimated losses of more than $50 million. The ringleaders of the DanaBot conspiracy are named as Aleksandr Stepanov, 39, a.k.a. “JimmBee,” and Artem Aleksandrovich Kalinkin, 34, a.k.a. “Onix”, both of Novosibirsk, Russia. Kalinkin is an IT engineer for the Russian state-owned energy giant Gazprom. His Facebook profile name is “Maffiozi.”
According to the FBI, there were at least two major versions of DanaBot; the first was sold between 2018 and June 2020, when the malware stopped being offered on Russian cybercrime forums. The government alleges that the second version of DanaBot — emerging in January 2021 — was provided to co-conspirators for use in targeting military, diplomatic and non-governmental organization computers in several countries, including the United States, Belarus, the United Kingdom, Germany, and Russia.
“Unindicted co-conspirators would use the Espionage Variant to compromise computers around the world and steal sensitive diplomatic communications, credentials, and other data from these targeted victims,” reads a grand jury indictment dated Sept. 20, 2022. “This stolen data included financial transactions by diplomatic staff, correspondence concerning day-to-day diplomatic activity, as well as summaries of a particular country’s interactions with the United States.”
The indictment says the FBI in 2022 seized servers used by the DanaBot authors to control their malware, as well as the servers that stored stolen victim data. The government said the server data also show numerous instances in which the DanaBot defendants infected their own PCs, resulting in their credential data being uploaded to stolen data repositories that were seized by the feds.
“In some cases, such self-infections appeared to be deliberately done in order to test, analyze, or improve the malware,” the criminal complaint reads. “In other cases, the infections seemed to be inadvertent – one of the hazards of committing cybercrime is that criminals will sometimes infect themselves with their own malware by mistake.”
Image: welivesecurity.com
A statement from the DOJ says that as part of today’s operation, agents with the Defense Criminal Investigative Service (DCIS) seized the DanaBot control servers, including dozens of virtual servers hosted in the United States. The government says it is now working with industry partners to notify DanaBot victims and help remediate infections. The statement credits a number of security firms with providing assistance to the government, including ESET, Flashpoint, Google, Intel 471, Lumen, PayPal, Proofpoint, Team CYMRU, and ZScaler.
It’s not unheard of for financially-oriented malicious software to be repurposed for espionage. A variant of the ZeuS Trojan, which was used in countless online banking attacks against companies in the United States and Europe between 2007 and at least 2015, was for a time diverted to espionage tasks by its author.
As detailed in this 2015 story, the author of the ZeuS trojan created a custom version of the malware to serve purely as a spying machine, which scoured infected systems in Ukraine for specific keywords in emails and documents that would likely only be found in classified documents.
The public charging of the 16 DanaBot defendants comes a day after Microsoft joined a slew of tech companies in disrupting the IT infrastructure for another malware-as-a-service offering — Lumma Stealer, which is likewise offered to affiliates under tiered subscription prices ranging from $250 to $1,000 per month. Separately, Microsoft filed a civil lawsuit to seize control over 2,300 domain names used by Lumma Stealer and its affiliates.
Further reading:
Danabot: Analyzing a Fallen Empire
ZScaler blog: DanaBot Launches DDoS Attack Against the Ukrainian Ministry of Defense
Flashpoint: Operation Endgame DanaBot Malware
Team CYMRU: Inside DanaBot’s Infrastructure: In Support of Operation Endgame II
March 2022 criminal complaint v. Artem Aleksandrovich Kalinkin
September 2022 grand jury indictment naming the 16 defendants
KrebsOnSecurity last week was hit by a near-record distributed denial-of-service (DDoS) attack that clocked in at more than 6.3 terabits of data per second (a terabit is one trillion bits of data). The brief attack appears to have been a test run for a massive new Internet of Things (IoT) botnet capable of launching crippling digital assaults that few web destinations can withstand. Read on for more about the botnet, the attack, and the apparent creator of this global menace.
For reference, the 6.3 Tbps attack last week was ten times the size of the assault launched against this site in 2016 by the Mirai IoT botnet, which held KrebsOnSecurity offline for nearly four days. The 2016 assault was so large that Akamai – which was providing pro-bono DDoS protection for KrebsOnSecurity at the time — asked me to leave their service because the attack was causing problems for their paying customers.
Since the Mirai attack, KrebsOnSecurity.com has been behind the protection of Project Shield, a free DDoS defense service that Google provides to websites offering news, human rights, and election-related content. Google Security Engineer Damian Menscher told KrebsOnSecurity the May 12 attack was the largest Google has ever handled. In terms of sheer size, it is second only to a very similar attack that Cloudflare mitigated and wrote about in April.
After comparing notes with Cloudflare, Menscher said the botnet that launched both attacks bears the fingerprints of Aisuru, a digital siege machine that first surfaced less than a year ago. Menscher said the attack on KrebsOnSecurity lasted less than a minute, hurling large UDP data packets at random ports at a rate of approximately 585 million data packets per second.
“It was the type of attack normally designed to overwhelm network links,” Menscher said, referring to the throughput connections between and among various Internet service providers (ISPs). “For most companies, this size of attack would kill them.”
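Those two figures also say something about the packets themselves. Dividing the attack’s throughput by its packet rate, as in the short calculation below, shows why Menscher described the traffic as large UDP packets:

```python
ATTACK_BPS = 6.3e12            # 6.3 terabits per second
PACKETS_PER_SECOND = 585e6     # roughly 585 million packets per second

bits_per_packet = ATTACK_BPS / PACKETS_PER_SECOND
bytes_per_packet = bits_per_packet / 8
print(f"Average packet size: ~{bits_per_packet:,.0f} bits (~{bytes_per_packet:,.0f} bytes)")
# -> about 10,769 bits, or roughly 1,346 bytes per packet.
```

An average of roughly 1,350 bytes per packet is close to a full-size Ethernet payload, which is consistent with an attack built to saturate network links rather than exhaust server resources.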
The Aisuru botnet comprises a globally-dispersed collection of hacked IoT devices, including routers, digital video recorders and other systems that are commandeered via default passwords or software vulnerabilities. As documented by researchers at QiAnXin XLab, the botnet was first identified in an August 2024 attack on a large gaming platform.
Aisuru reportedly went quiet after that exposure, only to reappear in November with even more firepower and software exploits. In a January 2025 report, XLab found the new and improved Aisuru (a.k.a. “Airashi”) had incorporated a previously unknown zero-day vulnerability in Cambium Networks cnPilot routers.
The people behind the Aisuru botnet have been peddling access to their DDoS machine in public Telegram chat channels that are closely monitored by multiple security firms. In August 2024, the botnet was rented out in subscription tiers ranging from $150 per day to $600 per week, offering attacks of up to two terabits per second.
“You may not attack any measurement walls, healthcare facilities, schools or government sites,” read a notice posted on Telegram by the Aisuru botnet owners in August 2024.
Interested parties were told to contact the Telegram handle “@yfork” to purchase a subscription. The account @yfork previously used the nickname “Forky,” an identity that has been posting to public DDoS-focused Telegram channels since 2021.
According to the FBI, Forky’s DDoS-for-hire domains have been seized in multiple law enforcement operations over the years. Last year, Forky said on Telegram he was selling the domain stresser[.]best, which saw its servers seized by the FBI in 2022 as part of an ongoing international law enforcement effort aimed at diminishing the supply of and demand for DDoS-for-hire services.
“The operator of this service, who calls himself ‘Forky,’ operates a Telegram channel to advertise features and communicate with current and prospective DDoS customers,” reads an FBI seizure warrant (PDF) issued for stresser[.]best. The FBI warrant stated that on the same day the seizures were announced, Forky posted a link to a story on this blog that detailed the domain seizure operation, adding the comment, “We are buying our new domains right now.”
A screenshot from the FBI’s seizure warrant for Forky’s DDoS-for-hire domains shows Forky announcing the resurrection of their service at new domains.
Approximately ten hours later, Forky posted again, including a screenshot of the stresser[.]best user dashboard, instructing customers to use their saved passwords for the old website on the new one.
A review of Forky’s posts to public Telegram channels — as indexed by the cyber intelligence firms Unit 221B and Flashpoint — reveals a 21-year-old individual who claims to reside in Brazil [full disclosure: Flashpoint is currently an advertiser on this blog].
Since late 2022, Forky’s posts have frequently promoted a DDoS mitigation company and ISP that he operates called botshield[.]io. The Botshield website is connected to a business entity registered in the United Kingdom called Botshield LTD, which lists a 21-year-old woman from Sao Paulo, Brazil as the director. Internet routing records indicate Botshield (AS213613) currently controls several hundred Internet addresses that were allocated to the company earlier this year.
Domaintools.com reports that botshield[.]io was registered in July 2022 to a Kaike Southier Leite in Sao Paulo. A LinkedIn profile by the same name says this individual is a network specialist from Brazil who works in “the planning and implementation of robust network infrastructures, with a focus on security, DDoS mitigation, colocation and cloud server services.”
Image: Jaclyn Vernace / Shutterstock.com.
In his posts to public Telegram chat channels, Forky has hardly attempted to conceal his whereabouts or identity. In countless chat conversations indexed by Unit 221B, Forky could be seen talking about everyday life in Brazil, often remarking on the extremely low or high prices in Brazil for a range of goods, from computer and networking gear to narcotics and food.
Reached via Telegram, Forky claimed he was “not involved in this type of illegal actions for years now,” and that the project had been taken over by other unspecified developers. Forky initially told KrebsOnSecurity he had been out of the botnet scene for years, only to concede this wasn’t true when presented with public posts on Telegram from late last year that clearly showed otherwise.
Forky denied being involved in the attack on KrebsOnSecurity, but acknowledged that he helped to develop and market the Aisuru botnet. Forky claims he is now merely a staff member for the Aisuru botnet team, and that he stopped running the botnet roughly two months ago after starting a family. Forky also said the woman named as director of Botshield is related to him.
Forky offered equivocal, evasive responses to a number of questions about the Aisuru botnet and his business endeavors. But on one point he was crystal clear:
“I have zero fear about you, the FBI, or Interpol,” Forky said, asserting that he is now almost entirely focused on his hosting business, Botshield.
Forky declined to discuss the makeup of his ISP’s clientele, or to clarify whether Botshield was more of a hosting provider or a DDoS mitigation firm. However, Forky has posted on Telegram about Botshield successfully mitigating large DDoS attacks launched against other DDoS-for-hire services.
DomainTools finds the same Sao Paulo street address in the registration records for botshield[.]io was used to register several other domains, including cant-mitigate[.]us. The email address in the WHOIS records for that domain is forkcontato@gmail.com, which DomainTools says was used to register the domain for the now-defunct DDoS-for-hire service stresser[.]us, one of the domains seized in the FBI’s 2023 crackdown.
On May 8, 2023, the U.S. Department of Justice announced the seizure of stresser[.]us, along with a dozen other domains offering DDoS services. The DOJ said ten of the 13 domains were reincarnations of services that were seized during a prior sweep in December, which targeted 48 top stresser services (also known as “booters”).
Forky claimed he could find out who attacked my site with Aisuru. But when pressed a day later on the question, Forky said he’d come up empty-handed.
“I tried to ask around, all the big guys are not retarded enough to attack you,” Forky explained in an interview on Telegram. “I didn’t have anything to do with it. But you are welcome to write the story and try to put the blame on me.”
The 6.3 Tbps attack last week caused no visible disruption to this site, in part because it was so brief — lasting approximately 45 seconds. DDoS attacks of such magnitude and brevity typically are produced when botnet operators wish to test or demonstrate their firepower for the benefit of potential buyers. Indeed, Google’s Menscher said it is likely that both the May 12 attack and the slightly larger 6.5 Tbps attack against Cloudflare last month were simply tests of the same botnet’s capabilities.
In many ways, the threat posed by the Aisuru/Airashi botnet is reminiscent of Mirai, an innovative IoT malware strain that emerged in the summer of 2016 and successfully out-competed virtually all other IoT malware strains in existence at the time.
As first revealed by KrebsOnSecurity in January 2017, the Mirai authors were two U.S. men who co-ran a DDoS mitigation service — even as they were selling far more lucrative DDoS-for-hire services using the most powerful botnet on the planet.
Less than a week after the Mirai botnet was used in a days-long DDoS against KrebsOnSecurity, the Mirai authors published the source code to their botnet so that they would not be the only ones in possession of it in the event of their arrest by federal investigators.
Ironically, the leaking of the Mirai source is precisely what led to the eventual unmasking and arrest of the Mirai authors, who went on to serve probation sentences that required them to consult with FBI investigators on DDoS investigations. But that leak also rapidly led to the creation of dozens of Mirai botnet clones, many of which were harnessed to fuel their own powerful DDoS-for-hire services.
Menscher told KrebsOnSecurity that as counterintuitive as it may sound, the Internet as a whole would probably be better off if the source code for Aisuru became public knowledge. After all, he said, the people behind Aisuru are in constant competition with other IoT botnet operators who are all striving to commandeer a finite number of vulnerable IoT devices globally.
Such a development would almost certainly cause a proliferation of Aisuru botnet clones, he said, but at least then the overall firepower from each individual botnet would be greatly diminished — or at least within range of the mitigation capabilities of most DDoS protection providers.
Barring a source code leak, Menscher said, it would be nice if someone published the full list of software exploits being used by the Aisuru operators to grow their botnet so quickly.
“Part of the reason Mirai was so dangerous was that it effectively took out competing botnets,” he said. “This attack somehow managed to compromise all these boxes that nobody else knows about. Ideally, we’d want to see that fragmented out, so that no [individual botnet operator] controls too much.”
In what experts are calling a novel legal outcome, the 22-year-old former administrator of the cybercrime community Breachforums will forfeit nearly $700,000 to settle a civil lawsuit from a health insurance company whose customer data was posted for sale on the forum in 2023. Conor Brian Fitzpatrick, a.k.a. “Pompompurin,” is slated for resentencing next month after pleading guilty to access device fraud and possession of child sexual abuse material (CSAM).
A redacted screenshot of the Breachforums sales thread. Image: Ke-la.com.
On January 18, 2023, denizens of Breachforums posted for sale tens of thousands of records — including Social Security numbers, dates of birth, addresses, and phone numbers — stolen from Nonstop Health, an insurance provider based in Concord, Calif.
Class-action attorneys sued Nonstop Health, which added Fitzpatrick as a third-party defendant to the civil litigation in November 2023, several months after he was arrested by the FBI and criminally charged with access device fraud and CSAM possession. In January 2025, Nonstop agreed to pay $1.5 million to settle the class action.
Jill Fertel is a former prosecutor who runs the cyber litigation practice at Cipriani & Werner, the law firm that represented Nonstop Health. Fertel told KrebsOnSecurity this is the first and only case where a cybercriminal or anyone related to the security incident was actually named in civil litigation.
“Civil plaintiffs are not at all likely to see money seized from threat actors involved in the incident to be made available to people impacted by the breach,” Fertel said. “The best we could do was make this money available to the class, but it’s still incumbent on the members of the class who are impacted to make that claim.”
Mark Rasch is a former federal prosecutor who now represents Unit 221B, a cybersecurity firm based in New York City. Rasch said he doesn’t doubt that the civil settlement involving Fitzpatrick’s criminal activity is a novel legal development.
“It is rare in these civil cases that you know the threat actor involved in the breach, and it’s also rare that you catch them with sufficient resources to be able to pay a claim,” Rasch said.
Despite admitting to possessing more than 600 CSAM images and personally operating Breachforums, Fitzpatrick was sentenced in January 2024 to time served and 20 years of supervised release. Federal prosecutors objected, arguing that his punishment failed to adequately reflect the seriousness of his crimes or serve as a deterrent.
An excerpt from a pre-sentencing report for Fitzpatrick indicates he had more than 600 CSAM images on his devices.
Indeed, the same month he was sentenced Fitzpatrick was rearrested (PDF) for violating the terms of his release, which forbade him from using a computer that didn’t have court-required monitoring software installed.
Federal prosecutors said Fitzpatrick went on Discord following his guilty plea and professed innocence to the very crimes to which he’d pleaded guilty, stating that his plea deal was “so BS” and that he had “wanted to fight it.” The feds said Fitzpatrick also joked with his friends about selling data to foreign governments, exhorting one user to “become a foreign asset to china or russia,” and to “sell government secrets.”
In January 2025, a federal appeals court agreed with the government’s assessment, vacating Fitzpatrick’s sentence and ordering him to be resentenced on June 3, 2025.
Fitzpatrick launched BreachForums in March 2022 to replace RaidForums, a similarly popular crime forum that was infiltrated and shut down by the FBI the previous month. As administrator, his alter ego Pompompurin served as the middleman, personally reviewing all databases for sale on the forum and offering an escrow service to those interested in buying stolen data.
A yearbook photo of Fitzpatrick unearthed by the Yonkers Times.
The new site quickly attracted more than 300,000 users, and facilitated the sale of databases stolen from hundreds of hacking victims, including some of the largest consumer data breaches in recent history. In May 2024, a reincarnation of Breachforums was seized by the FBI and international partners. Still more relaunches of the forum occurred after that, with the most recent disruption last month.
As KrebsOnSecurity reported last year in The Dark Nexus Between Harm Groups and The Com, it is increasingly common for federal investigators to find CSAM material when searching devices seized from cybercriminal suspects. While the mere possession of CSAM is a serious federal crime, not all of those caught with CSAM are necessarily creators or distributors of it. Fertel said some cybercriminal communities have been known to require new entrants to share CSAM material as a way of proving that they are not a federal investigator.
“If you’re going to the darkest corners of Internet, that’s how you prove you’re not law enforcement,” Fertel said. “Law enforcement would never share that material. It would be criminal for me as a prosecutor, if I obtained and possessed those types of images.”
Further reading: The settlement between Fitzpatrick and Nonstop (PDF).
██████╗ ███████╗ ██████╗ █████╗ ███████╗██╗ ██╗███████╗
██╔══██╗██╔════╝██╔════╝ ██╔══██╗██╔════╝██║ ██║██╔════╝
██████╔╝█████╗ ██║ ███╗███████║███████╗██║ ██║███████╗
██╔═══╝ ██╔══╝ ██║ ██║██╔══██║╚════██║██║ ██║╚════██║
██║ ███████╗╚██████╔╝██║ ██║███████║╚██████╔╝███████║
╚═╝ ╚══════╝ ╚═════╝ ╚═╝ ╚═╝╚══════╝ ╚═════╝ ╚══════╝
P E N T E S T A R S E N A L
A comprehensive web application security testing toolkit that combines 10 powerful penetration testing features into one tool.
- Identifies potential security misconfigurations
- JWT Token Inspector: Detects common JWT vulnerabilities
- Parameter Pollution Finder: Detects server-side parameter handling issues
- CORS Misconfiguration Scanner: Detects credential exposure risks
- Upload Bypass Tester: Identifies dangerous file type handling
- Exposed .git Directory Finder: Tests for sensitive information disclosure (a minimal sketch of this check appears after this list)
- SSRF (Server Side Request Forgery) Detector: Includes cloud metadata endpoint tests
- Blind SQL Injection Time Delay Detector: Identifies injectable parameters
- Local File Inclusion (LFI) Mapper: Supports various encoding bypasses
- Web Application Firewall (WAF) Fingerprinter
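As a rough illustration of how the exposed-.git check above typically works (a hypothetical sketch, not the toolkit's actual code), a scanner can simply request /.git/HEAD and look for the telltale "ref:" prefix. Only run such checks against targets you are authorized to test.

```python
# Hypothetical illustration of an exposed .git directory check; not code from
# pegasus-pentest-arsenal. Only use against targets you are authorized to test.
import requests

def git_dir_exposed(base_url: str, timeout: float = 5.0) -> bool:
    """Return True if base_url appears to serve its .git/HEAD file."""
    url = base_url.rstrip("/") + "/.git/HEAD"
    try:
        resp = requests.get(url, timeout=timeout, allow_redirects=False)
    except requests.RequestException:
        return False
    # A readable .git/HEAD normally begins with "ref: refs/heads/..."
    return resp.status_code == 200 and resp.text.lstrip().startswith("ref:")

if __name__ == "__main__":
    print(git_dir_exposed("https://example.com"))
```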
# Clone the repository and enter it
git clone https://github.com/sobri3195/pegasus-pentest-arsenal.git
cd pegasus-pentest-arsenal

# Create and activate a virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate

# Install dependencies and launch the toolkit
pip install -r requirements.txt
python pegasus_pentest.py
This project is licensed under the MIT License - see the LICENSE file for details.
This tool is provided for educational and authorized testing purposes only. Users are responsible for obtaining proper authorization before testing any target. The authors are not responsible for any misuse or damage caused by this tool.
A Texas firm recently charged with conspiring to distribute synthetic opioids in the United States is at the center of a vast network of companies in the U.S. and Pakistan whose employees are accused of using online ads to scam westerners seeking help with trademarks, book writing, mobile app development and logo designs, a new investigation reveals.
In an indictment (PDF) unsealed last month, the U.S. Department of Justice said Dallas-based eWorldTrade “operated an online business-to-business marketplace that facilitated the distribution of synthetic opioids such as isotonitazene and carfentanyl, both significantly more potent than fentanyl.”
Launched in 2017, eWorldTrade[.]com now features a seizure notice from the DOJ. eWorldTrade operated as a wholesale seller of consumer goods, including clothes, machinery, chemicals, automobiles and appliances. The DOJ’s indictment includes no additional details about eWorldTrade’s business, origins or other activity, and at first glance the website might appear to be a legitimate e-commerce platform that also just happened to sell some restricted chemicals.
A screenshot of the eWorldTrade homepage on March 25, 2025. Image: archive.org.
However, an investigation into the company’s founders reveals they are connected to a sprawling network of websites that have a history of extortionate scams involving trademark registration, book publishing, exam preparation, and the design of logos, mobile applications and websites.
Records from the U.S. Patent and Trademark Office (USPTO) show the eWorldTrade mark is owned by an Azneem Bilwani in Karachi (this name also is in the registration records for the now-seized eWorldTrade domain). Mr. Bilwani is perhaps better known as the director of the Pakistan-based IT provider Abtach Ltd., which has been singled out by the USPTO and Google for operating trademark registration scams (the main offices for eWorldtrade and Abtach share the same address in Pakistan).
In November 2021, the USPTO accused Abtach of perpetrating “an egregious scheme to deceive and defraud applicants for federal trademark registrations by improperly altering official USPTO correspondence, overcharging application filing fees, misappropriating the USPTO’s trademarks, and impersonating the USPTO.”
Abtach offered trademark registration at suspiciously low prices compared to legitimate costs, which typically exceed $1,500, and claimed it could register a trademark in 24 hours. Abtach reportedly rebranded to Intersys Limited after the USPTO banned Abtach from filing any more trademark applications.
In a note published to its LinkedIn profile, Intersys Ltd. asserted last year that certain scam firms in Karachi were impersonating the company.
Many of Abtach’s employees are former associates of a similar company in Pakistan called Axact that was targeted by Pakistani authorities in a 2015 fraud investigation. Axact came under law enforcement scrutiny after The New York Times ran a front-page story about the company’s most lucrative scam business: Hundreds of sites peddling fake college degrees and diplomas.
People who purchased fake certifications were subsequently blackmailed by Axact employees posing as government officials, who would demand additional payments under threats of prosecution or imprisonment for having bought fraudulent “unauthorized” academic degrees. This practice created a continuous cycle of extortion, internally referred to as “upselling.”
“Axact took money from at least 215,000 people in 197 countries — one-third of them from the United States,” The Times reported. “Sales agents wielded threats and false promises and impersonated government officials, earning the company at least $89 million in its final year of operation.”
Dozens of top Axact employees were arrested, jailed, held for months, tried and sentenced to seven years for various fraud violations. But a 2019 research brief on Axact’s diploma mills found none of those convicted had started their prison sentence, and that several had fled Pakistan and never returned.
“In October 2016, a Pakistan district judge acquitted 24 Axact officials at trial due to ‘not enough evidence’ and then later admitted he had accepted a bribe (of $35,209) from Axact,” reads a history (PDF) published by the American Association of Collegiate Registrars and Admissions Officers.
In 2021, Pakistan’s Federal Investigation Agency (FIA) charged Bilwani and nearly four dozen others — many of them Abtach employees — with running an elaborate trademark scam. The authorities called it “the biggest money laundering case in the history of Pakistan,” and named a number of businesses based in Texas that allegedly helped move the proceeds of cybercrime.
A page from the March 2021 FIA report alleging that Digitonics Labs and Abtach employees conspired to extort and defraud consumers.
The FIA said the defendants operated a large number of websites offering low-cost trademark services to customers, before then “ignoring them after getting the funds and later demanding more funds from clients/victims in the name of up-sale (extortion).” The Pakistani law enforcement agency said that about 75 percent of customers received fake or fabricated trademarks as a result of the scams.
The FIA found Abtach operates in conjunction with a Karachi firm called Digitonics Labs, which earned a monthly revenue of around $2.5 million through the “extortion of international clients in the name of up-selling, the sale of fake/fabricated USPTO certificates, and the maintaining of phishing websites.”
According to the Pakistani authorities, the accused also ran countless scams involving ebook publication and logo creation, wherein customers are subjected to advance-fee fraud and extortion — with the scammers demanding more money for supposed “copyright release” and threatening to release the trademark.
Also charged by the FIA was Junaid Mansoor, the owner of Digitonics Labs in Karachi. Mansoor’s U.K.-registered company Maple Solutions Direct Limited has run at least 700 ads for logo design websites since 2015, the Google Ads Transparency page reports. The company has approximately 88 ads running on Google as of today.
Junaid Mansoor. Source: youtube/@Olevels․com School.
Mr. Mansoor is actively involved with and promoting a Quran study business called quranmasteronline[.]com, which was founded by Junaid’s brother Qasim Mansoor (Qasim is also named in the FIA criminal investigation). The Google ads promoting quranmasteronline[.]com were paid for by the same account advertising a number of scam websites selling logo and web design services.
Junaid Mansoor did not respond to requests for comment. An address in Teaneck, New Jersey where Mr. Mansoor previously lived is listed as an official address of exporthub[.]com, a Pakistan-based e-commerce website that appears remarkably similar to eWorldTrade (Exporthub says its offices are in Texas). Interestingly, a search in Google for this domain shows ExportHub currently features multiple listings for fentanyl citrate from suppliers in China and elsewhere.
The CEO of Digitonics Labs is Muhammad Burhan Mirza, a former Axact official who was arrested by the FIA as part of its money laundering and trademark fraud investigation in 2021. In 2023, prosecutors in Pakistan charged Mirza, Mansoor and 14 other Digitonics employees with fraud, impersonating government officials, phishing, cheating and extortion. Mirza’s LinkedIn profile says he currently runs an educational technology/life coach enterprise called TheCoach360, which purports to help young kids “achieve financial independence.”
Reached via LinkedIn, Mr. Mirza denied having anything to do with eWorldTrade or any of its sister companies in Texas.
“Moreover, I have no knowledge as to the companies you have mentioned,” said Mr. Mirza, who did not respond to follow-up questions.
The current disposition of the FIA’s fraud case against the defendants is unclear. The investigation was marred early on by allegations of corruption and bribery. In 2021, Pakistani authorities alleged Bilwani paid a six-figure bribe to FIA investigators. Meanwhile, attorneys for Mr. Bilwani have argued that although their client did pay a bribe, the payment was solicited by government officials. Mr. Bilwani did not respond to requests for comment.
KrebsOnSecurity has learned that the people and entities at the center of the FIA investigations have built a significant presence in the United States, with a strong concentration in Texas. The Texas businesses promote websites that sell logo and web design, ghostwriting, and academic cheating services. Many of these entities have recently been sued for fraud and breach of contract by angry former customers, who claimed the companies relentlessly upsold them while failing to produce the work as promised.
For example, the FIA complaints named Retrocube LLC and 360 Digital Marketing LLC, two entities that share a street address with eWorldTrade: 1910 Pacific Avenue, Suite 8025, Dallas, Texas. Also incorporated at that Pacific Avenue address is abtach[.]ae, a web design and marketing firm based in Dubai; and intersyslimited[.]com, the new name of Abtach after they were banned by the USPTO. Other businesses registered at this address market services for logo design, mobile app development, and ghostwriting.
A list published in 2021 by Pakistan’s FIA of different front companies allegedly involved in scamming people who are looking for help with trademarks, ghostwriting, logos and web design.
360 Digital Marketing’s website 360digimarketing[.]com is owned by an Abtach front company called Abtech LTD. Meanwhile, business records show 360 Digi Marketing LTD is a U.K. company whose officers include former Abtach director Bilwani; Muhammad Saad Iqbal, formerly Abtach, now CEO of Intersys Ltd; Niaz Ahmed, a former Abtach associate; and Muhammad Salman Yousuf, formerly a vice president at Axact, Abtach, and Digitonics Labs.
Google’s Ads Transparency Center finds 360 Digital Marketing LLC ran at least 500 ads promoting various websites selling ghostwriting services. Another entity tied to Junaid Mansoor — a company called Octa Group Technologies AU — has run approximately 300 Google ads for book publishing services, promoting confusingly named websites like amazonlistinghub[.]com and barnesnoblepublishing[.]co.
360 Digital Marketing LLC ran approximately 500 ads for scam ghostwriting sites.
Rameez Moiz is a Texas resident and former Abtach product manager who has represented 360 Digital Marketing LLC and RetroCube. Moiz told KrebsOnSecurity he stopped working for 360 Digital Marketing in the summer of 2023. Mr. Moiz did not respond to follow-up questions, but an Upwork profile for him states that as of April 2025 he is employed by Dallas-based Vertical Minds LLC.
In April 2025, California resident Melinda Will sued the Texas firm Majestic Ghostwriting — which is doing business as ghostwritingsquad[.]com — alleging they scammed her out of $100,000 after she hired them to help write her book. Google’s ad transparency page shows Moiz’s employer Vertical Minds LLC paid to run approximately 55 ads for ghostwritingsquad[.]com and related sites.
Ms. Will’s lawsuit is just one of more than two dozen complaints over the past four years wherein plaintiffs sued one of this group’s web design, wiki editing or ghostwriting services. In 2021, a New Jersey man sued Octagroup Technologies, alleging they ripped him off when he paid a total of more than $26,000 for the design and marketing of a web-based mapping service.
The plaintiff in that case did not respond to requests for comment, but his complaint alleges Octagroup and a myriad other companies it contracted with produced minimal work product despite subjecting him to relentless upselling. That case was decided in favor of the plaintiff because the defendants never contested the matter in court.
In 2023, 360 Digital Marketing LLC and Retrocube LLC were sued by a woman who said they scammed her out of $40,000 over a book she wanted help writing. That lawsuit helpfully showed an image of the office front door at 1910 Pacific Ave Suite 8025, which featured the logos of 360 Digital Marketing, Retrocube, and eWorldTrade.
The front door at 1910 Pacific Avenue, Suite 8025, Dallas, Texas.
The lawsuit was filed pro se by Leigh Riley, a 64-year-old career IT professional who paid 360 Digital Marketing to have a company called Talented Ghostwriter co-author and promote a series of books she’d outlined on spirituality and healing.
“The main reason I hired them was because I didn’t understand what I call the formula for writing a book, and I know there’s a lot of marketing that goes into publishing,” Riley explained in an interview. “I know nothing about that stuff, and these guys were convincing that they could handle all aspects of it. Until I discovered they couldn’t write a damn sentence in English properly.”
Riley’s well-documented lawsuit (not linked here because it features a great deal of personal information) includes screenshots of conversations with the ghostwriting team, which was constantly assigning her to new writers and editors, and ghosting her on scheduled conference calls about progress on the project. Riley said she ended up writing most of the book herself because the work they produced was unusable.
“Finally after months of promising the books were printed and on their way, they show up at my doorstep with the wrong title on the book,” Riley said. When she demanded her money back, she said the people helping her with the website to promote the book locked her out of the site.
A conversation snippet from Leigh Riley’s lawsuit against Talented Ghostwriter, aka 360 Digital Marketing LLC. “Other companies once they have you money they don’t even respond or do anything,” the ghostwriting team manager explained.
Riley decided to sue, naming 360 Digital Marketing LLC and Retrocube LLC, among others. The companies offered to settle the matter for $20,000, which she accepted. “I didn’t have money to hire a lawyer, and I figured it was time to cut my losses,” she said.
Riley said she could have saved herself a great deal of headache by doing some basic research on Talented Ghostwriter, whose website claims the company is based in Los Angeles. According to the California Secretary of State, however, there is no registered entity by that name. Rather, the address claimed by talentedghostwriter[.]com is a vacant office building with a “space available” sign in the window.
California resident Walter Horsting discovered something similar when he sued 360 Digital Marketing in small claims court last year, after hiring a company called Vox Ghostwriting to help write, edit and promote a spy novel he’d been working on. Horsting said he paid Vox $3,300 to ghostwrite a 280-page book, and was upsold an Amazon marketing and publishing package for $7,500.
In an interview, Horsting said the prose that Vox Ghostwriting produced was “juvenile at best,” forcing him to rewrite and edit the work himself, and to partner with a graphical artist to produce illustrations. Horsting said that when it came time to begin marketing the novel, Vox Ghostwriting tried to further upsell him on marketing packages, while dodging scheduled meetings with no follow-up.
“They have a money back guarantee, and when they wouldn’t refund my money I said I’m taking you to court,” Horsting recounted. “I tried to serve them in Los Angeles but found no such office exists. I talked to a salon next door and they said someone else had recently shown up desperately looking for where the ghostwriting company went, and it appears there are a trail of corpses on this. I finally tracked down where they are in Texas.”
It was the same office that Ms. Riley served her lawsuit against. Horsting said he has a court hearing scheduled later this month, but he’s under no illusions that winning the case means he’ll be able to collect.
“At this point, I’m doing it out of pride more than actually expecting anything to come to good fortune for me,” he said.
The following mind map was helpful in piecing together key events, individuals and connections mentioned above. It’s important to note that this graphic only scratches the surface of the operations tied to this group. For example, in Case 2 we can see mention of academic cheating services, wherein people can be hired to take online proctored exams on one’s behalf. Those who hire these services soon find themselves subject to impersonation and blackmail attempts for larger and larger sums of money, with the threat of publicly exposing their unethical academic cheating activity.
A “mind map” illustrating the connections between and among entities referenced in this story. Click to enlarge.
KrebsOnSecurity reviewed the Google Ad Transparency links for nearly 500 different websites tied to this network of ghostwriting, logo, app and web development businesses. Those website names were then fed into spyfu.com, a competitive intelligence company that tracks the reach and performance of advertising keywords. Spyfu estimates that between April 2023 and April 2025, those websites spent more than $10 million on Google ads.
Reached for comment, Google said in a written statement that it is constantly policing its ad network for bad actors, pointing to an ads safety report (PDF) showing Google blocked or removed 5.1 billion bad ads last year — including more than 500 million ads related to trademarks.
“Our policy against Enabling Dishonest Behavior prohibits products or services that help users mislead others, including ads for paper-writing or exam-taking services,” the statement reads. “When we identify ads or advertisers that violate our policies, we take action, including by suspending advertiser accounts, disapproving ads, and restricting ads to specific domains when appropriate.”
Google did not respond to specific questions about the advertising entities mentioned in this story, saying only that “we are actively investigating this matter and addressing any policy violations, including suspending advertiser accounts when appropriate.”
From reviewing the ad accounts that have been promoting these scam websites, it appears Google has very recently acted to remove a large number of the offending ads. Prior to my notifying Google about the extent of this ad network on April 28, the Google Ad Transparency network listed over 500 ads for 360 Digital Marketing; as of this publication, that number had dwindled to 10.
On April 30, Google announced that starting this month its ads transparency page will display the payment profile name as the payer name for verified advertisers, if that name differs from their verified advertiser name. Searchengineland.com writes the changes are aimed at increasing accountability in digital advertising.
This spreadsheet lists the domain names, advertiser names, and Google Ad Transparency links for more than 350 entities offering ghostwriting, publishing, web design and academic cheating services.
KrebsOnSecurity would like to thank the anonymous security researcher NatInfoSec for their assistance in this investigation.
For further reading on Abtach and its myriad companies in all of the above-mentioned verticals (ghostwriting, logo design, etc.), see this Wikiwand entry.
A Model Context Protocol (MCP) server implementation that integrates with Firecrawl for web scraping capabilities.
Big thanks to @vrknetha, @cawstudios for the initial implementation!
You can also play around with our MCP Server on MCP.so's playground. Thanks to MCP.so for hosting and @gstarwd for integrating our server.
env FIRECRAWL_API_KEY=fc-YOUR_API_KEY npx -y firecrawl-mcp
npm install -g firecrawl-mcp
Configuring Cursor 🖥️
Note: Requires Cursor version 0.45.6+
For the most up-to-date configuration instructions, please refer to the official Cursor documentation on configuring MCP servers: Cursor MCP Server Configuration Guide
To configure Firecrawl MCP in Cursor v0.45.6
env FIRECRAWL_API_KEY=your-api-key npx -y firecrawl-mcp
To configure Firecrawl MCP in Cursor v0.48.6
json { "mcpServers": { "firecrawl-mcp": { "command": "npx", "args": ["-y", "firecrawl-mcp"], "env": { "FIRECRAWL_API_KEY": "YOUR-API-KEY" } } } }
If you are using Windows and are running into issues, try
cmd /c "set FIRECRAWL_API_KEY=your-api-key && npx -y firecrawl-mcp"
Replace your-api-key with your Firecrawl API key. If you don't have one yet, you can create an account and get it from https://www.firecrawl.dev/app/api-keys
After adding, refresh the MCP server list to see the new tools. The Composer Agent will automatically use Firecrawl MCP when appropriate, but you can explicitly request it by describing your web scraping needs. Access the Composer via Command+L (Mac), select "Agent" next to the submit button, and enter your query.
Add this to your ./codeium/windsurf/model_config.json:
{
"mcpServers": {
"mcp-server-firecrawl": {
"command": "npx",
"args": ["-y", "firecrawl-mcp"],
"env": {
"FIRECRAWL_API_KEY": "YOUR_API_KEY"
}
}
}
}
To install Firecrawl for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install @mendableai/mcp-server-firecrawl --client claude
FIRECRAWL_API_KEY: Your Firecrawl API key (required when using the cloud API)
FIRECRAWL_API_URL (Optional): Custom API endpoint for self-hosted instances, e.g. https://firecrawl.your-domain.com
FIRECRAWL_RETRY_MAX_ATTEMPTS: Maximum number of retry attempts (default: 3)
FIRECRAWL_RETRY_INITIAL_DELAY: Initial delay in milliseconds before first retry (default: 1000)
FIRECRAWL_RETRY_MAX_DELAY: Maximum delay in milliseconds between retries (default: 10000)
FIRECRAWL_RETRY_BACKOFF_FACTOR: Exponential backoff multiplier (default: 2)
FIRECRAWL_CREDIT_WARNING_THRESHOLD: Credit usage warning threshold (default: 1000)
FIRECRAWL_CREDIT_CRITICAL_THRESHOLD: Credit usage critical threshold (default: 100)

For cloud API usage with custom retry and credit monitoring:
# Required for cloud API
export FIRECRAWL_API_KEY=your-api-key
# Optional retry configuration
export FIRECRAWL_RETRY_MAX_ATTEMPTS=5 # Increase max retry attempts
export FIRECRAWL_RETRY_INITIAL_DELAY=2000 # Start with 2s delay
export FIRECRAWL_RETRY_MAX_DELAY=30000 # Maximum 30s delay
export FIRECRAWL_RETRY_BACKOFF_FACTOR=3 # More aggressive backoff
# Optional credit monitoring
export FIRECRAWL_CREDIT_WARNING_THRESHOLD=2000 # Warning at 2000 credits
export FIRECRAWL_CREDIT_CRITICAL_THRESHOLD=500 # Critical at 500 credits
For self-hosted instance:
# Required for self-hosted
export FIRECRAWL_API_URL=https://firecrawl.your-domain.com
# Optional authentication for self-hosted
export FIRECRAWL_API_KEY=your-api-key # If your instance requires auth
# Custom retry configuration
export FIRECRAWL_RETRY_MAX_ATTEMPTS=10
export FIRECRAWL_RETRY_INITIAL_DELAY=500 # Start with faster retries
Add this to your claude_desktop_config.json:
{
"mcpServers": {
"mcp-server-firecrawl": {
"command": "npx",
"args": ["-y", "firecrawl-mcp"],
"env": {
"FIRECRAWL_API_KEY": "YOUR_API_KEY_HERE",
"FIRECRAWL_RETRY_MAX_ATTEMPTS": "5",
"FIRECRAWL_RETRY_INITIAL_DELAY": "2000",
"FIRECRAWL_RETRY_MAX_DELAY": "30000",
"FIRECRAWL_RETRY_BACKOFF_FACTOR": "3",
"FIRECRAWL_CREDIT_WARNING_THRESHOLD": "2000",
"FIRECRAWL_CREDIT_CRITICAL_THRESHOLD": "500"
}
}
}
}
The server includes several configurable parameters that can be set via environment variables. Here are the default values if not configured:
const CONFIG = {
retry: {
maxAttempts: 3, // Number of retry attempts for rate-limited requests
initialDelay: 1000, // Initial delay before first retry (in milliseconds)
maxDelay: 10000, // Maximum delay between retries (in milliseconds)
backoffFactor: 2, // Multiplier for exponential backoff
},
credit: {
warningThreshold: 1000, // Warn when credit usage reaches this level
criticalThreshold: 100, // Critical alert when credit usage reaches this level
},
};
These configurations control:

Retry Behavior
- Automatically retries failed requests due to rate limits
- Example: with the default settings (initial delay 1,000 ms, backoff factor 2, maximum delay 10,000 ms), retries will be attempted after roughly 1 second, 2 seconds, and 4 seconds (see the sketch below)

Credit Usage Monitoring
- Warns when credit usage reaches the configured warning threshold, and raises a critical alert at the critical threshold
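A minimal sketch (not the server's actual code) of the retry schedule implied by those defaults:

```python
# Sketch of the exponential backoff schedule implied by the documented defaults
# (maxAttempts=3, initialDelay=1000 ms, backoffFactor=2, maxDelay=10000 ms).
# Illustrative only; the MCP server implements this internally.
def retry_delays(max_attempts: int = 3,
                 initial_delay_ms: int = 1000,
                 backoff_factor: int = 2,
                 max_delay_ms: int = 10000) -> list[int]:
    delays = []
    delay = initial_delay_ms
    for _ in range(max_attempts):
        delays.append(min(delay, max_delay_ms))
        delay *= backoff_factor
    return delays

print(retry_delays())  # [1000, 2000, 4000] -> retries after ~1 s, ~2 s and ~4 s
```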
The server utilizes Firecrawl's built-in rate limiting and batch processing capabilities:
firecrawl_scrape: Scrape content from a single URL with advanced options.
{
"name": "firecrawl_scrape",
"arguments": {
"url": "https://example.com",
"formats": ["markdown"],
"onlyMainContent": true,
"waitFor": 1000,
"timeout": 30000,
"mobile": false,
"includeTags": ["article", "main"],
"excludeTags": ["nav", "footer"],
"skipTlsVerification": false
}
}
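For reference, the same scrape options can be exercised outside the MCP server by calling Firecrawl's hosted REST API directly. Treat the endpoint path and accepted fields below as assumptions inferred from the options shown above rather than authoritative documentation:

```python
# Hedged sketch: calling the hosted Firecrawl scrape endpoint directly with the same
# options as the MCP tool call above. The endpoint path and field names are assumptions
# based on this README; consult the official Firecrawl API docs before relying on them.
import os
import requests

api_key = os.environ["FIRECRAWL_API_KEY"]

payload = {
    "url": "https://example.com",
    "formats": ["markdown"],
    "onlyMainContent": True,
    "timeout": 30000,
}

resp = requests.post(
    "https://api.firecrawl.dev/v1/scrape",  # assumed endpoint
    json=payload,
    headers={"Authorization": f"Bearer {api_key}"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json())
```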
firecrawl_batch_scrape: Scrape multiple URLs efficiently with built-in rate limiting and parallel processing.
{
"name": "firecrawl_batch_scrape",
"arguments": {
"urls": ["https://example1.com", "https://example2.com"],
"options": {
"formats": ["markdown"],
"onlyMainContent": true
}
}
}
Response includes operation ID for status checking:
{
"content": [
{
"type": "text",
"text": "Batch operation queued with ID: batch_1. Use firecrawl_check_batch_status to check progress."
}
],
"isError": false
}
firecrawl_check_batch_status: Check the status of a batch operation.
{
"name": "firecrawl_check_batch_status",
"arguments": {
"id": "batch_1"
}
}
firecrawl_search: Search the web and optionally extract content from search results.
{
"name": "firecrawl_search",
"arguments": {
"query": "your search query",
"limit": 5,
"lang": "en",
"country": "us",
"scrapeOptions": {
"formats": ["markdown"],
"onlyMainContent": true
}
}
}
firecrawl_crawl: Start an asynchronous crawl with advanced options.
{
"name": "firecrawl_crawl",
"arguments": {
"url": "https://example.com",
"maxDepth": 2,
"limit": 100,
"allowExternalLinks": false,
"deduplicateSimilarURLs": true
}
}
firecrawl_extract: Extract structured information from web pages using LLM capabilities. Supports both cloud AI and self-hosted LLM extraction.
{
"name": "firecrawl_extract",
"arguments": {
"urls": ["https://example.com/page1", "https://example.com/page2"],
"prompt": "Extract product information including name, price, and description",
"systemPrompt": "You are a helpful assistant that extracts product information",
"schema": {
"type": "object",
"properties": {
"name": { "type": "string" },
"price": { "type": "number" },
"description": { "type": "string" }
},
"required": ["name", "price"]
},
"allowExternalLinks": false,
"enableWebSearch": false,
"includeSubdomains": false
}
}
Example response:
{
"content": [
{
"type": "text",
"text": {
"name": "Example Product",
"price": 99.99,
"description": "This is an example product description"
}
}
],
"isError": false
}
urls: Array of URLs to extract information from
prompt: Custom prompt for the LLM extraction
systemPrompt: System prompt to guide the LLM
schema: JSON schema for structured data extraction
allowExternalLinks: Allow extraction from external links
enableWebSearch: Enable web search for additional context
includeSubdomains: Include subdomains in extraction

When using a self-hosted instance, the extraction will use your configured LLM. For cloud API, it uses Firecrawl's managed LLM service.
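Because the extraction result is shaped by the JSON schema you supply, it can be worth validating the returned object on the client side. The following is an illustrative sketch using the third-party jsonschema package; it is glue code around the tool's output, not part of the MCP server:

```python
# Illustrative client-side validation of a firecrawl_extract result against the
# requested schema, using the third-party "jsonschema" package. Not part of the server.
from jsonschema import ValidationError, validate

schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "price": {"type": "number"},
        "description": {"type": "string"},
    },
    "required": ["name", "price"],
}

extracted = {
    "name": "Example Product",
    "price": 99.99,
    "description": "This is an example product description",
}

try:
    validate(instance=extracted, schema=schema)
    print("Extraction matches the requested schema.")
except ValidationError as err:
    print(f"Schema mismatch: {err.message}")
```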
firecrawl_deep_research: Conduct deep web research on a query using intelligent crawling, search, and LLM analysis.
{
"name": "firecrawl_deep_research",
"arguments": {
"query": "how does carbon capture technology work?",
"maxDepth": 3,
"timeLimit": 120,
"maxUrls": 50
}
}
Arguments:
Returns:
firecrawl_generate_llmstxt: Generate a standardized llms.txt (and optionally llms-full.txt) file for a given domain. This file defines how large language models should interact with the site.
{
"name": "firecrawl_generate_llmstxt",
"arguments": {
"url": "https://example.com",
"maxUrls": 20,
"showFullText": true
}
}
Arguments:
Returns:
The server includes comprehensive logging:
Example log messages:
[INFO] Firecrawl MCP Server initialized successfully
[INFO] Starting scrape for URL: https://example.com
[INFO] Batch operation queued with ID: batch_1
[WARNING] Credit usage has reached warning threshold
[ERROR] Rate limit exceeded, retrying in 2s...
The server provides robust error handling:
Example error response:
{
"content": [
{
"type": "text",
"text": "Error: Rate limit exceeded. Retrying in 2 seconds..."
}
],
"isError": true
}
# Install dependencies
npm install
# Build
npm run build
# Run tests
npm test
MIT License - see LICENSE file for details
Real-time face swap and video deepfake with a single click and only a single image.
This deepfake software is designed to be a productive tool for the AI-generated media industry. It can assist artists in animating custom characters, creating engaging content, and even using models for clothing design.
We are aware of the potential for unethical applications and are committed to preventative measures. A built-in check prevents the program from processing inappropriate media (nudity, graphic content, sensitive material like war footage, etc.). We will continue to develop this project responsibly, adhering to the law and ethics. We may shut down the project or add watermarks if legally required.
Ethical Use: Users are expected to use this software responsibly and legally. If using a real person's face, obtain their consent and clearly label any output as a deepfake when sharing online.
Content Restrictions: The software includes built-in checks to prevent processing inappropriate media, such as nudity, graphic content, or sensitive material.
Legal Compliance: We adhere to all relevant laws and ethical guidelines. If legally required, we may shut down the project or add watermarks to the output.
User Responsibility: We are not responsible for end-user actions. Users must ensure their use of the software aligns with ethical standards and legal requirements.
By using this software, you agree to these terms and commit to using it in a manner that respects the rights and dignity of others.
Users are expected to use this software responsibly and legally. If using a real person's face, obtain their consent and clearly label any output as a deepfake when sharing online. We are not responsible for end-user actions.
1. Select a face
2. Select which camera to use
3. Press live!
Retain your original mouth for accurate movement using Mouth Mask
Use different faces on multiple subjects simultaneously
Watch movies with any face in real-time
Run Live shows and performances
Create Your Most Viral Meme Yet
Created using Many Faces feature in Deep-Live-Cam
Surprise people on Omegle
Please be aware that the installation requires technical skills and is not for beginners. Consider downloading the prebuilt version.
git clone https://github.com/hacksider/Deep-Live-Cam.git
cd Deep-Live-Cam
**3. Download the Models**

1. [GFPGANv1.4](https://huggingface.co/hacksider/deep-live-cam/resolve/main/GFPGANv1.4.pth)
2. [inswapper_128_fp16.onnx](https://huggingface.co/hacksider/deep-live-cam/resolve/main/inswapper_128_fp16.onnx)

Place these files in the "**models**" folder.

**4. Install Dependencies**

We highly recommend using a `venv` to avoid issues.

For Windows:

python -m venv venv
venv\Scripts\activate
pip install -r requirements.txt
**For macOS:**

Apple Silicon (M1/M2/M3) requires specific setup:

# Install Python 3.10 (specific version is important)
brew install python@3.10
# Install tkinter package (required for the GUI)
brew install python-tk@3.10
# Create and activate virtual environment with Python 3.10
python3.10 -m venv venv
source venv/bin/activate
# Install dependencies
pip install -r requirements.txt
**In case something goes wrong and you need to reinstall the virtual environment**

# Deactivate the virtual environment
deactivate
rm -rf venv
# Reinstall the virtual environment
python -m venv venv
source venv/bin/activate
# install the dependencies again
pip install -r requirements.txt
**Run:**

If you don't have a GPU, you can run Deep-Live-Cam using `python run.py`. Note that initial execution will download models (~300MB).

### GPU Acceleration

**CUDA Execution Provider (Nvidia)**

1. Install [CUDA Toolkit 11.8.0](https://developer.nvidia.com/cuda-11-8-0-download-archive)
2. Install dependencies:

pip uninstall onnxruntime onnxruntime-gpu
pip install onnxruntime-gpu==1.16.3

3. Usage:

python run.py --execution-provider cuda
**CoreML Execution Provider (Apple Silicon)**

Apple Silicon (M1/M2/M3) specific installation:

1. Make sure you've completed the macOS setup above using Python 3.10.
2. Install dependencies:

pip uninstall onnxruntime onnxruntime-silicon
pip install onnxruntime-silicon==1.13.1

3. Usage (important: specify Python 3.10):

python3.10 run.py --execution-provider coreml
**Important Notes for macOS:**

- You **must** use Python 3.10, not newer versions like 3.11 or 3.13
- Always run with the `python3.10` command, not just `python`, if you have multiple Python versions installed
- If you get an error about `_tkinter` missing, reinstall the tkinter package: `brew reinstall python-tk@3.10`
- If you get model loading errors, check that your models are in the correct folder
- If you encounter conflicts with other Python versions, consider uninstalling them:

```bash
# List all installed Python versions
brew list | grep python

# Uninstall conflicting versions if needed
brew uninstall --ignore-dependencies python@3.11 python@3.13

# Keep only Python 3.10
brew cleanup
```

**CoreML Execution Provider (Apple Legacy)**

1. Install dependencies:

pip uninstall onnxruntime onnxruntime-coreml
pip install onnxruntime-coreml==1.13.1

2. Usage:

python run.py --execution-provider coreml
**DirectML Execution Provider (Windows)**

1. Install dependencies:

pip uninstall onnxruntime onnxruntime-directml
pip install onnxruntime-directml==1.15.1

2. Usage:

python run.py --execution-provider directml
**OpenVINO™ Execution Provider (Intel)**

1. Install dependencies:

pip uninstall onnxruntime onnxruntime-openvino
pip install onnxruntime-openvino==1.15.0

2. Usage:

python run.py --execution-provider openvino
1. Image/Video Mode

python run.py

2. Webcam Mode

python run.py

Check out these helpful guides to get the most out of Deep-Live-Cam:
Visit our official blog for more tips and tutorials.
options:
-h, --help show this help message and exit
-s SOURCE_PATH, --source SOURCE_PATH select a source image
-t TARGET_PATH, --target TARGET_PATH select a target image or video
-o OUTPUT_PATH, --output OUTPUT_PATH select output file or directory
--frame-processor FRAME_PROCESSOR [FRAME_PROCESSOR ...] frame processors (choices: face_swapper, face_enhancer, ...)
--keep-fps keep original fps
--keep-audio keep original audio
--keep-frames keep temporary frames
--many-faces process every face
--map-faces map source target faces
--mouth-mask mask the mouth region
--video-encoder {libx264,libx265,libvpx-vp9} adjust output video encoder
--video-quality [0-51] adjust output video quality
--live-mirror the live camera display as you see it in the front-facing camera frame
--live-resizable the live camera frame is resizable
--max-memory MAX_MEMORY maximum amount of RAM in GB
--execution-provider {cpu} [{cpu} ...] available execution provider (choices: cpu, ...)
--execution-threads EXECUTION_THREADS number of execution threads
-v, --version show program's version number and exit
Looking for a CLI mode? Using the -s/--source argument will make the program run in CLI mode.
We are always open to criticism and are ready to improve; that's why we didn't cherry-pick anything.
🐫 CAMEL is an open-source community dedicated to finding the scaling laws of agents. We believe that studying these agents on a large scale offers valuable insights into their behaviors, capabilities, and potential risks. To facilitate research in this field, we implement and support various types of agents, tasks, prompts, models, and simulated environments.
The framework is designed to support systems with millions of agents, ensuring efficient coordination, communication, and resource management at scale.
Agents maintain stateful memory, enabling them to perform multi-step interactions with environments and efficiently tackle sophisticated tasks.
Every line of code and comment serves as a prompt for agents. Code should be written clearly and readably, ensuring both humans and agents can interpret it effectively.
We are a community-driven research collective comprising over 100 researchers dedicated to advancing frontier research in Multi-Agent Systems. Researchers worldwide choose CAMEL for their studies based on the following reasons.
✅ | Large-Scale Agent System | Simulate up to 1M agents to study emergent behaviors and scaling laws in complex, multi-agent environments. |
✅ | Dynamic Communication | Enable real-time interactions among agents, fostering seamless collaboration for tackling intricate tasks. |
✅ | Stateful Memory | Equip agents with the ability to retain and leverage historical context, improving decision-making over extended interactions. |
✅ | Support for Multiple Benchmarks | Utilize standardized benchmarks to rigorously evaluate agent performance, ensuring reproducibility and reliable comparisons. |
✅ | Support for Different Agent Types | Work with a variety of agent roles, tasks, models, and environments, supporting interdisciplinary experiments and diverse research applications. |
✅ | Data Generation and Tool Integration | Automate the creation of large-scale, structured datasets while seamlessly integrating with multiple tools, streamlining synthetic data generation and research workflows. |
Installing CAMEL is a breeze thanks to its availability on PyPI. Simply open your terminal and run:
pip install camel-ai
This example demonstrates how to create a ChatAgent using the CAMEL framework and perform a search query using DuckDuckGo.
pip install 'camel-ai[web_tools]'
export OPENAI_API_KEY='your_openai_api_key'
```python
from camel.models import ModelFactory
from camel.types import ModelPlatformType, ModelType
from camel.agents import ChatAgent
from camel.toolkits import SearchToolkit

model = ModelFactory.create(
    model_platform=ModelPlatformType.OPENAI,
    model_type=ModelType.GPT_4O,
    model_config_dict={"temperature": 0.0},
)

search_tool = SearchToolkit().search_duckduckgo

agent = ChatAgent(model=model, tools=[search_tool])

response_1 = agent.step("What is CAMEL-AI?")
print(response_1.msgs[0].content)
# CAMEL-AI is the first LLM (Large Language Model) multi-agent framework
# and an open-source community focused on finding the scaling laws of agents.
# ...

response_2 = agent.step("What is the Github link to CAMEL framework?")
print(response_2.msgs[0].content)
# The GitHub link to the CAMEL framework is
# https://github.com/camel-ai/camel.
```
For more detailed instructions and additional configuration options, check out the installation section.
After running, you can explore our CAMEL Tech Stack and Cookbooks at docs.camel-ai.org to build powerful multi-agent systems.
We provide a demo showcasing a conversation between two ChatGPT agents playing the roles of a Python programmer and a stock trader collaborating on developing a trading bot for the stock market.
Explore different types of agents, their roles, and their applications.
Please reach out to us on CAMEL discord if you encounter any issues setting up CAMEL.
Core components and utilities to build, operate, and enhance CAMEL-AI agents and societies.
Module | Description |
---|---|
Agents | Core agent architectures and behaviors for autonomous operation. |
Agent Societies | Components for building and managing multi-agent systems and collaboration. |
Data Generation | Tools and methods for synthetic data creation and augmentation. |
Models | Model architectures and customization options for agent intelligence. |
Tools | Tools integration for specialized agent tasks. |
Memory | Memory storage and retrieval mechanisms for agent state management. |
Storage | Persistent storage solutions for agent data and states. |
Benchmarks | Performance evaluation and testing frameworks. |
Interpreters | Code and command interpretation capabilities. |
Data Loaders | Data ingestion and preprocessing tools. |
Retrievers | Knowledge retrieval and RAG components. |
Runtime | Execution environment and process management. |
Human-in-the-Loop | Interactive components for human oversight and intervention. |
Explore our research projects:
Research with US
We warmly invite you to use CAMEL for your impactful research.
Rigorous research takes time and resources. We are a community-driven research collective with 100+ researchers exploring frontier research in multi-agent systems. Join our ongoing projects or test new ideas with us; reach out via email for more information.
For more details, please see our Models Documentation.
Data (Hosted on Hugging Face)
Dataset | Chat format | Instruction format | Chat format (translated) |
---|---|---|---|
AI Society | Chat format | Instruction format | Chat format (translated) |
Code | Chat format | Instruction format | x |
Math | Chat format | x | x |
Physics | Chat format | x | x |
Chemistry | Chat format | x | x |
Biology | Chat format | x | x |
Dataset | Instructions | Tasks |
---|---|---|
AI Society | Instructions | Tasks |
Code | Instructions | Tasks |
Misalignment | Instructions | Tasks |
Practical guides and tutorials for implementing specific functionalities in CAMEL-AI agents and societies.
Cookbook | Description |
---|---|
Creating Your First Agent | A step-by-step guide to building your first agent. |
Creating Your First Agent Society | Learn to build a collaborative society of agents. |
Message Cookbook | Best practices for message handling in agents. |
Cookbook | Description |
---|---|
Tools Cookbook | Integrating tools for enhanced functionality. |
Memory Cookbook | Implementing memory systems in agents. |
RAG Cookbook | Recipes for Retrieval-Augmented Generation. |
Graph RAG Cookbook | Leveraging knowledge graphs with RAG. |
Track CAMEL Agents with AgentOps | Tools for tracking and managing agents in operations. |
Cookbook | Description |
---|---|
Data Generation with CAMEL and Finetuning with Unsloth | Learn how to generate data with CAMEL and fine-tune models effectively with Unsloth. |
Data Gen with Real Function Calls and Hermes Format | Explore how to generate data with real function calls and the Hermes format. |
CoT Data Generation and Upload Data to Huggingface | Uncover how to generate CoT data with CAMEL and seamlessly upload it to Huggingface. |
CoT Data Generation and SFT Qwen with Unsloth | Discover how to generate CoT data using CAMEL and SFT Qwen with Unsloth, and seamlessly upload your data and model to Huggingface. |
Cookbook | Description |
---|---|
Role-Playing Scraper for Report & Knowledge Graph Generation | Create role-playing agents for data scraping and reporting. |
Create A Hackathon Judge Committee with Workforce | Building a team of agents for collaborative judging. |
Dynamic Knowledge Graph Role-Playing: Multi-Agent System with dynamic, temporally-aware knowledge graphs | Builds dynamic, temporally-aware knowledge graphs for financial applications using a multi-agent system. It processes financial reports, news articles, and research papers to help traders analyze data, identify relationships, and uncover market insights. The system also utilizes diverse and optional element node deduplication techniques to ensure data integrity and optimize graph structure for financial decision-making. |
Customer Service Discord Bot with Agentic RAG | Learn how to build a robust customer service bot for Discord using Agentic RAG. |
Customer Service Discord Bot with Local Model | Learn how to build a robust customer service bot for Discord using Agentic RAG which supports local deployment. |
Cookbook | Description |
---|---|
Video Analysis | Techniques for agents in video data analysis. |
3 Ways to Ingest Data from Websites with Firecrawl | Explore three methods for extracting and processing data from websites using Firecrawl. |
Create AI Agents that work with your PDFs | Learn how to create AI agents that work with your PDFs using Chunkr and Mistral AI. |
For those who'd like to contribute code, we appreciate your interest in contributing to our open-source initiative. Please take a moment to review our contributing guidelines to get started on a smooth collaboration journey.🚀
We also welcome you to help CAMEL grow by sharing it on social media, at events, or during conferences. Your support makes a big difference!
For more information please contact camel-ai@eigent.ai
GitHub Issues: Report bugs, request features, and track development. Submit an issue
Discord: Get real-time support, chat with the community, and stay updated. Join us
X (Twitter): Follow for updates, AI insights, and key announcements. Follow us
Ambassador Project: Advocate for CAMEL-AI, host events, and contribute content. Learn more
@inproceedings{li2023camel,
title={CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society},
author={Li, Guohao and Hammoud, Hasan Abed Al Kader and Itani, Hani and Khizbullin, Dmitrii and Ghanem, Bernard},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023}
}
Special thanks to Nomic AI for giving us extended access to their data set exploration tool (Atlas).
We would also like to thank Haya Hammoud for designing the initial logo of our project.
We implemented amazing research ideas from other works for you to build, compare and customize your agents. If you use any of these modules, please kindly cite the original works:
- TaskCreationAgent, TaskPrioritizationAgent and BabyAGI from Nakajima et al.: Task-Driven Autonomous Agent. [Example]
- PersonaHub from Tao Ge et al.: Scaling Synthetic Data Creation with 1,000,000,000 Personas. [Example]
- Self-Instruct from Yizhong Wang et al.: SELF-INSTRUCT: Aligning Language Models with Self-Generated Instructions. [Example]
The source code is licensed under Apache 2.0.
SubGPT looks at subdomains you have already discovered for a domain and uses BingGPT to find more. Best part? It's free!
The following subdomains were found by this tool with these 30 subdomains as input.
call-prompts-staging.example.com
dclb02-dca1.prod.example.com
activedirectory-sjc1.example.com
iadm-staging.example.com
elevatenetwork-c.example.com
If you like my work, you can support me with as little as $1, here :)
pip install subgpt
git clone https://github.com/s0md3v/SubGPT && cd SubGPT && python setup.py install
You will also need your Bing cookies saved to a cookies.json file, which you pass to SubGPT with the -c option as shown below.
Note: Any issues regarding BingGPT itself should be reported to EdgeGPT, not here.
It is supposed to be used after you have discovered some subdomains using all other methods. The standard way to run SubGPT is as follows:
subgpt -i input.txt -o output.txt -c /path/to/cookies.json
If you don't specify an output file, the output will be shown in your terminal (stdout) instead.
To generate subdomains and not resolve them, use the --dont-resolve option. It's a great way to see all subdomains generated by SubGPT and/or use your own resolver on them.
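The underlying idea is pattern extrapolation: environment names, datacenter codes and numeric suffixes in subdomains you already know often hint at siblings that exist but have not yet been enumerated. SubGPT hands that reasoning to BingGPT; the toy sketch below is not SubGPT's code and only illustrates the concept with two naive rules:

```python
# Toy illustration of pattern-based subdomain extrapolation. SubGPT delegates this
# reasoning to BingGPT; this sketch is not its implementation, only the general idea.
import re

known = [
    "dclb02-dca1.prod.example.com",
    "activedirectory-sjc1.example.com",
    "iadm-staging.example.com",
]

candidates = set()
for sub in known:
    # Rule 1: increment numeric suffixes (dca1 -> dca2, dclb02 -> dclb03, ...).
    for match in re.finditer(r"([a-z]+)(\d+)", sub):
        prefix, digits = match.group(1), match.group(2)
        bumped = f"{prefix}{int(digits) + 1:0{len(digits)}d}"
        candidates.add(sub.replace(match.group(0), bumped, 1))
    # Rule 2: swap common environment keywords.
    for src, dst in (("staging", "prod"), ("prod", "staging")):
        if src in sub:
            candidates.add(sub.replace(src, dst))

print(sorted(candidates - set(known)))
```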
An employee at Elon Musk’s artificial intelligence company xAI leaked a private key on GitHub that for the past two months could have allowed anyone to query private xAI large language models (LLMs) which appear to have been custom made for working with internal data from Musk’s companies, including SpaceX, Tesla and Twitter/X, KrebsOnSecurity has learned.
Image: Shutterstock, @sdx15.
Philippe Caturegli, “chief hacking officer” at the security consultancy Seralys, was the first to publicize the leak of credentials for an x.ai application programming interface (API) exposed in the GitHub code repository of a technical staff member at xAI.
Caturegli’s post on LinkedIn caught the attention of researchers at GitGuardian, a company that specializes in detecting and remediating exposed secrets in public and proprietary environments. GitGuardian’s systems constantly scan GitHub and other code repositories for exposed API keys, and fire off automated alerts to affected users.
GitGuardian’s Eric Fourrier told KrebsOnSecurity the exposed API key had access to several unreleased models of Grok, the AI chatbot developed by xAI. In total, GitGuardian found the key had access to at least 60 fine-tuned and private LLMs.
“The credentials can be used to access the X.ai API with the identity of the user,” GitGuardian wrote in an email explaining their findings to xAI. “The associated account not only has access to public Grok models (grok-2-1212, etc) but also to what appears to be unreleased (grok-2.5V), development (research-grok-2p5v-1018), and private models (tweet-rejector, grok-spacex-2024-11-04).”
Fourrier found GitGuardian had alerted the xAI employee about the exposed API key nearly two months ago — on March 2. But as of April 30, when GitGuardian directly alerted xAI’s security team to the exposure, the key was still valid and usable. xAI told GitGuardian to report the matter through its bug bounty program at HackerOne, but just a few hours later the repository containing the API key was removed from GitHub.
“It looks like some of these internal LLMs were fine-tuned on SpaceX data, and some were fine-tuned with Tesla data,” Fourrier said. “I definitely don’t think a Grok model that’s fine-tuned on SpaceX data is intended to be exposed publicly.”
xAI did not respond to a request for comment. Nor did the 28-year-old xAI technical staff member whose key was exposed.
Carole Winqwist, chief marketing officer at GitGuardian, said giving potentially hostile users free access to private LLMs is a recipe for disaster.
“If you’re an attacker and you have direct access to the model and the back end interface for things like Grok, it’s definitely something you can use for further attacking,” she said. “An attacker could use it for prompt injection, to tweak the (LLM) model to serve their purposes, or try to implant code into the supply chain.”
The inadvertent exposure of internal LLMs for xAI comes as Musk’s so-called Department of Government Efficiency (DOGE) has been feeding sensitive government records into artificial intelligence tools. In February, The Washington Post reported DOGE officials were feeding data from across the Education Department into AI tools to probe the agency’s programs and spending.
The Post said DOGE plans to replicate this process across many departments and agencies, accessing the back-end software at different parts of the government and then using AI technology to extract and sift through information about spending on employees and programs.
“Feeding sensitive data into AI software puts it into the possession of a system’s operator, increasing the chances it will be leaked or swept up in cyberattacks,” Post reporters wrote.
Wired reported in March that DOGE has deployed a proprietary chatbot called GSAi to 1,500 federal workers at the General Services Administration, part of an effort to automate tasks previously done by humans as DOGE continues its purge of the federal workforce.
A Reuters report last month said Trump administration officials told some U.S. government employees that DOGE is using AI to surveil at least one federal agency’s communications for hostility to President Trump and his agenda. Reuters wrote that the DOGE team has heavily deployed Musk’s Grok AI chatbot as part of their work slashing the federal government, although Reuters said it could not establish exactly how Grok was being used.
Caturegli said while there is no indication that federal government or user data could be accessed through the exposed x.ai API key, these private models are likely trained on proprietary data and may unintentionally expose details related to internal development efforts at xAI, Twitter, or SpaceX.
“The fact that this key was publicly exposed for two months and granted access to internal models is concerning,” Caturegli said. “This kind of long-lived credential exposure highlights weak key management and insufficient internal monitoring, raising questions about safeguards around developer access and broader operational security.”
A 23-year-old Scottish man thought to be a member of the prolific Scattered Spider cybercrime group was extradited last week from Spain to the United States, where he is facing charges of wire fraud, conspiracy and identity theft. U.S. prosecutors allege Tyler Robert Buchanan and co-conspirators hacked into dozens of companies in the United States and abroad, and that he personally controlled more than $26 million stolen from victims.
Scattered Spider is a loosely affiliated criminal hacking group whose members have broken into and stolen data from some of the world’s largest technology companies. Buchanan was arrested in Spain last year on a warrant from the FBI, which wanted him in connection with a series of SMS-based phishing attacks in the summer of 2022 that led to intrusions at Twilio, LastPass, DoorDash, Mailchimp, and many other tech firms.
Tyler Buchanan, being escorted by Spanish police at the airport in Palma de Mallorca in June 2024.
As first reported by KrebsOnSecurity, Buchanan (a.k.a. “tylerb”) fled the United Kingdom in February 2023, after a rival cybercrime gang hired thugs to invade his home, assault his mother, and threaten to burn him with a blowtorch unless he gave up the keys to his cryptocurrency wallet. Buchanan was arrested in June 2024 at the airport in Palma de Mallorca while trying to board a flight to Italy. His extradition to the United States was first reported last week by Bloomberg.
Members of Scattered Spider have been tied to the 2023 ransomware attacks against MGM and Caesars casinos in Las Vegas, but it remains unclear whether Buchanan was implicated in that incident. The Justice Department’s complaint against Buchanan makes no mention of the 2023 ransomware attack.
Rather, the investigation into Buchanan appears to center on the SMS phishing campaigns from 2022, and on SIM-swapping attacks that siphoned funds from individual cryptocurrency investors. In a SIM-swapping attack, crooks transfer the target’s phone number to a device they control and intercept any text messages or phone calls to the victim’s device — including one-time passcodes for authentication and password reset links sent via SMS.
In August 2022, KrebsOnSecurity reviewed data harvested in a months-long cybercrime campaign by Scattered Spider involving countless SMS-based phishing attacks against employees at major corporations. The security firm Group-IB called them by a different name — 0ktapus, because the group typically spoofed the identity provider Okta in their phishing messages to employees at targeted firms.
A Scattered Spider/0Ktapus SMS phishing lure sent to Twilio employees in 2022.
The complaint against Buchanan (PDF) says the FBI tied him to the 2022 SMS phishing attacks after discovering the same username and email address was used to register numerous Okta-themed phishing domains seen in the campaign. The domain registrar NameCheap found that less than a month before the phishing spree, the account that registered those domains logged in from an Internet address in the U.K. FBI investigators said the Scottish police told them the address was leased to Buchanan from January 26, 2022 to November 7, 2022.
Authorities seized at least 20 digital devices when they raided Buchanan’s residence, and on one of those devices they found usernames and passwords for employees of three different companies targeted in the phishing campaign.
“The FBI’s investigation to date has gathered evidence showing that Buchanan and his co-conspirators targeted at least 45 companies in the United States and abroad, including Canada, India, and the United Kingdom,” the FBI complaint reads. “One of Buchanan’s devices contained a screenshot of Telegram messages between an account known to be used by Buchanan and other unidentified co-conspirators discussing dividing up the proceeds of SIM swapping.”
U.S. prosecutors allege that records obtained from Discord showed the same U.K. Internet address was used to operate a Discord account that specified a cryptocurrency wallet when asking another user to send funds. The complaint says the publicly available transaction history for that payment address shows approximately 391 bitcoin was transferred in and out of this address between October 2022 and February 2023; 391 bitcoin is presently worth more than $26 million.
In November 2024, federal prosecutors in Los Angeles unsealed criminal charges against Buchanan and four other alleged Scattered Spider members, including Ahmed Elbadawy, 23, of College Station, Texas; Joel Evans, 25, of Jacksonville, North Carolina; Evans Osiebo, 20, of Dallas; and Noah Urban, 20, of Palm Coast, Florida. KrebsOnSecurity reported last year that another suspected Scattered Spider member — a 17-year-old from the United Kingdom — was arrested as part of a joint investigation with the FBI into the MGM hack.
Mr. Buchanan’s court-appointed attorney did not respond to a request for comment. The accused faces charges of wire fraud conspiracy, conspiracy to obtain information by computer for private financial gain, and aggravated identity theft. Convictions on the latter charge carry a minimum sentence of two years in prison.
Documents from the U.S. District Court for the Central District of California indicate Buchanan is being held without bail pending trial. A preliminary hearing in the case is slated for May 6.
Dealing with failing web scrapers due to anti-bot protections or website changes? Meet Scrapling.
Scrapling is a high-performance, intelligent web scraping library for Python that automatically adapts to website changes while significantly outperforming popular alternatives. For both beginners and experts, Scrapling provides powerful features while maintaining simplicity.
>> from scrapling.defaults import Fetcher, AsyncFetcher, StealthyFetcher, PlayWrightFetcher
# Fetch websites' source under the radar!
>> page = StealthyFetcher.fetch('https://example.com', headless=True, network_idle=True)
>> print(page.status)
200
>> products = page.css('.product', auto_save=True) # Scrape data that survives website design changes!
>> # Later, if the website structure changes, pass `auto_match=True`
>> products = page.css('.product', auto_match=True) # and Scrapling still finds them!
Scrapling ships several fetcher classes: the plain `Fetcher` class for HTTP requests, the stealthy `StealthyFetcher` class, and the `PlayWrightFetcher` class, which can fetch pages through your real browser, Scrapling's stealth mode, Playwright's Chrome browser, or NSTbrowser's browserless! The quick example below uses the `Fetcher` class:
from scrapling.fetchers import Fetcher
fetcher = Fetcher(auto_match=False)
# Do http GET request to a web page and create an Adaptor instance
page = fetcher.get('https://quotes.toscrape.com/', stealthy_headers=True)
# Get all text content from all HTML tags in the page except `script` and `style` tags
page.get_all_text(ignore_tags=('script', 'style'))
# Get all quotes elements, any of these methods will return a list of strings directly (TextHandlers)
quotes = page.css('.quote .text::text') # CSS selector
quotes = page.xpath('//span[@class="text"]/text()') # XPath
quotes = page.css('.quote').css('.text::text') # Chained selectors
quotes = [element.text for element in page.css('.quote .text')] # Slower than bulk query above
# Get the first quote element
quote = page.css_first('.quote') # same as page.css('.quote').first or page.css('.quote')[0]
# Tired of selectors? Use find_all/find
# Get all 'div' HTML tags that one of its 'class' values is 'quote'
quotes = page.find_all('div', {'class': 'quote'})
# Same as
quotes = page.find_all('div', class_='quote')
quotes = page.find_all(['div'], class_='quote')
quotes = page.find_all(class_='quote') # and so on...
# Working with elements
quote.html_content # Get Inner HTML of this element
quote.prettify() # Prettified version of Inner HTML above
quote.attrib # Get that element's attributes
quote.path # DOM path to element (List of all ancestors from <html> tag till the element itself)
To keep it simple, all methods can be chained on top of each other!
Scrapling isn't just powerful - it's also blazing fast. Scrapling implements many best practices, design patterns, and numerous optimizations to save fractions of seconds. All of that while focusing exclusively on parsing HTML documents. Here are benchmarks comparing Scrapling to popular Python libraries in two tests.
# | Library | Time (ms) | vs Scrapling |
---|---|---|---|
1 | Scrapling | 5.44 | 1.0x |
2 | Parsel/Scrapy | 5.53 | 1.017x |
3 | Raw Lxml | 6.76 | 1.243x |
4 | PyQuery | 21.96 | 4.037x |
5 | Selectolax | 67.12 | 12.338x |
6 | BS4 with Lxml | 1307.03 | 240.263x |
7 | MechanicalSoup | 1322.64 | 243.132x |
8 | BS4 with html5lib | 3373.75 | 620.175x |
As you can see, Scrapling is on par with Scrapy and slightly faster than lxml, which both libraries are built on top of. These are the closest results to Scrapling. PyQuery is also built on top of lxml, yet Scrapling is still 4 times faster.
Library | Time (ms) | vs Scrapling |
---|---|---|
Scrapling | 2.51 | 1.0x |
AutoScraper | 11.41 | 4.546x |
Scrapling can find elements with more methods, and it returns full element `Adaptor` objects, not only the text like AutoScraper. So, to make this test fair, both libraries will extract an element with text, find similar elements, and then extract the text content for all of them. As you can see, Scrapling is still 4.5 times faster at the same task.
All benchmarks' results are an average of 100 runs. See our benchmarks.py for methodology and to run your comparisons.
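For the curious, a timing harness in the same spirit might look like the minimal sketch below. It is not the project's benchmarks.py, and the query used is just the quotes-page selector from the earlier examples:
import statistics
import time

from scrapling.parser import Adaptor

def average_parse_ms(html: str, runs: int = 100) -> float:
    # Parse the page and run one CSS text query, timing each run in milliseconds
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        Adaptor(html, auto_match=False).css('.quote .text::text')
        timings.append((time.perf_counter() - start) * 1000)
    return statistics.mean(timings)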
Scrapling is a breeze to get started with. Starting from version 0.2.9, it requires at least Python 3.9 to work.
pip3 install scrapling
Then run this command to install browsers' dependencies needed to use Fetcher classes
scrapling install
If you have any installation issues, please open an issue.
Fetchers are interfaces built on top of other libraries with added features that do requests or fetch pages for you in a single-request fashion and then return an `Adaptor` object. This feature was introduced because the only option we had before was to fetch the page as you wanted it, then pass it manually to the `Adaptor` class to create an `Adaptor` instance and start playing around with the page.
You might be slightly confused by now, so let me clear things up. All fetcher-type classes are imported in the same way:
from scrapling.fetchers import Fetcher, StealthyFetcher, PlayWrightFetcher
All of them can take these initialization arguments: `auto_match`, `huge_tree`, `keep_comments`, `keep_cdata`, `storage`, and `storage_args`, which are the same ones you give to the `Adaptor` class.
If you don't want to pass arguments to the generated `Adaptor` object and want to use the default values, you can use this import instead for cleaner code:
from scrapling.defaults import Fetcher, AsyncFetcher, StealthyFetcher, PlayWrightFetcher
then use it right away without initializing like:
page = StealthyFetcher.fetch('https://example.com')
Also, the `Response` object returned from all fetchers is the same as the `Adaptor` object except it has these added attributes: `status`, `reason`, `cookies`, `headers`, `history`, and `request_headers`. All `cookies`, `headers`, and `request_headers` are always of type dictionary.
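For example, a quick look at those extra attributes (a minimal sketch; the target URL is just the demo site used elsewhere in this README, and the commented values are only what you would typically expect):
>> page = Fetcher().get('https://quotes.toscrape.com/')
>> page.status, page.reason        # e.g. (200, 'OK')
>> page.cookies                    # a plain dictionary
>> page.headers                    # response headers as a dictionary
>> page.request_headers            # the headers that were actually sent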
[!NOTE] The `auto_match` argument is enabled by default, and it's the one you should care about the most, as you will see later.
This class is built on top of httpx with additional configuration options. Here you can do `GET`, `POST`, `PUT`, and `DELETE` requests.
For all methods, you have the `stealthy_headers` argument, which makes `Fetcher` create and use real browser headers, then adds a referer header as if the request came from a Google search for this URL's domain. It's enabled by default. You can also set the number of retries for all methods with the `retries` argument, which makes httpx retry requests that fail for any reason. The default number of retries for all `Fetcher` methods is 3.
Note: all headers generated by the `stealthy_headers` argument can be overwritten by you through the `headers` argument.
You can route all traffic (HTTP and HTTPS) to a proxy for any of these methods, in this format: `http://username:password@localhost:8030`
>> page = Fetcher().get('https://httpbin.org/get', stealthy_headers=True, follow_redirects=True)
>> page = Fetcher().post('https://httpbin.org/post', data={'key': 'value'}, proxy='http://username:password@localhost:8030')
>> page = Fetcher().put('https://httpbin.org/put', data={'key': 'value'})
>> page = Fetcher().delete('https://httpbin.org/delete')
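The `retries` and `headers` arguments described above can be combined in the same way; this is only a sketch, and the custom User-Agent value is an illustrative assumption:
page = Fetcher().get(
    'https://httpbin.org/get',
    retries=5,                                  # override the default of 3 retries
    headers={'User-Agent': 'my-custom-agent'},  # overrides the generated stealthy headers
)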
For Async requests, you will just replace the import like below:
>> from scrapling.fetchers import AsyncFetcher
>> page = await AsyncFetcher().get('https://httpbin.org/get', stealthy_headers=True, follow_redirects=True)
>> page = await AsyncFetcher().post('https://httpbin.org/post', data={'key': 'value'}, proxy='http://username:password@localhost:8030')
>> page = await AsyncFetcher().put('https://httpbin.org/put', data={'key': 'value'})
>> page = await AsyncFetcher().delete('https://httpbin.org/delete')
This class is built on top of Camoufox, bypassing most anti-bot protections by default. Scrapling adds extra layers of flavors and configurations to increase performance and undetectability even further.
>> page = StealthyFetcher().fetch('https://www.browserscan.net/bot-detection') # Running headless by default
>> page.status == 200
True
>> page = await StealthyFetcher().async_fetch('https://www.browserscan.net/bot-detection') # the async version of fetch
>> page.status == 200
True
Note: all requests done by this fetcher are waiting by default for all JS to be fully loaded and executed so you don't have to :)
This list isn't final so expect a lot more additions and flexibility to be added in the next versions!
This class is built on top of Playwright which currently provides 4 main run options but they can be mixed as you want.
>> page = PlayWrightFetcher().fetch('https://www.google.com/search?q=%22Scrapling%22', disable_resources=True) # Vanilla Playwright option
>> page.css_first("#search a::attr(href)")
'https://github.com/D4Vinci/Scrapling'
>> page = await PlayWrightFetcher().async_fetch('https://www.google.com/search?q=%22Scrapling%22', disable_resources=True) # the async version of fetch
>> page.css_first("#search a::attr(href)")
'https://github.com/D4Vinci/Scrapling'
Note: all requests done by this fetcher are waiting by default for all JS to be fully loaded and executed so you don't have to :)
Using this Fetcher class, you can make requests with:
1. Vanilla Playwright without any modifications other than the ones you chose.
2. Stealthy Playwright with the stealth mode I wrote for it. It's still a WIP, but it bypasses many online tests like Sannysoft's. Some of the things this fetcher's stealth mode does include: patching the CDP runtime fingerprint; mimicking some of the real browsers' properties by injecting several JS files and using custom options; using custom flags on launch to hide Playwright even more and make it faster; and generating real browser headers of the same type and same user OS, then appending them to the request's headers.
3. Real browsers, by passing the `real_chrome` argument or the CDP URL of your browser to be controlled by the Fetcher; most of the options can be enabled with it.
4. NSTBrowser's docker browserless option, by passing the CDP URL and enabling the `nstbrowser_mode` option.
Note that using the `real_chrome` argument requires that you have the Chrome browser installed on your device.
Add that to a lot of controlling/hiding options as you will see in the arguments list below.
This list isn't final so expect a lot more additions and flexibility to be added in the next versions!
>>> quote.tag
'div'
>>> quote.parent
<data='<div class="col-md-8"> <div class="quote...' parent='<div class="row"> <div class="col-md-8">...'>
>>> quote.parent.tag
'div'
>>> quote.children
[<data='<span class="text" itemprop="text">"The...' parent='<div class="quote" itemscope itemtype="h...'>,
<data='<span>by <small class="author" itemprop=...' parent='<div class="quote" itemscope itemtype="h...'>,
<data='<div class="tags"> Tags: <meta class="ke...' parent='<div class="quote" itemscope itemtype="h...'>]
>>> quote.siblings
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
...]
>>> quote.next # gets the next element, the same logic applies to `quote.previous`
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>
>>> quote.children.css_first(".author::text")
'Albert Einstein'
>>> quote.has_class('quote')
True
# Generate new selectors for any element
>>> quote.generate_css_selector
'body > div > div:nth-of-type(2) > div > div'
# Test these selectors on your favorite browser or reuse them again in the library's methods!
>>> quote.generate_xpath_selector
'//body/div/div[2]/div/div'
If your case needs more than the element's parent, you can iterate over the whole ancestors' tree of any element, like below:
for ancestor in quote.iterancestors():
    print(ancestor.tag)  # do something with each ancestor, e.g. inspect its tag name
You can search for a specific ancestor of an element that satisfies a condition; all you need to do is pass a function that takes an `Adaptor` object as an argument and returns `True` if the condition is satisfied or `False` otherwise, like below:
>>> quote.find_ancestor(lambda ancestor: ancestor.has_class('row'))
<data='<div class="row"> <div class="col-md-8">...' parent='<div class="container"> <div class="row...'>
You can select elements by their text content in multiple ways, here's a full example on another website:
>>> page = Fetcher().get('https://books.toscrape.com/index.html')
>>> page.find_by_text('Tipping the Velvet') # Find the first element whose text fully matches this text
<data='<a href="catalogue/tipping-the-velvet_99...' parent='<h3><a href="catalogue/tipping-the-velve...'>
>>> page.urljoin(page.find_by_text('Tipping the Velvet').attrib['href']) # We use `page.urljoin` to return the full URL from the relative `href`
'https://books.toscrape.com/catalogue/tipping-the-velvet_999/index.html'
>>> page.find_by_text('Tipping the Velvet', first_match=False) # Get all matches if there are more
[<data='<a href="catalogue/tipping-the-velvet_99...' parent='<h3><a href="catalogue/tipping-the-velve...'>]
>>> page.find_by_regex(r'£[\d\.]+') # Get the first element whose text content matches my price regex
<data='<p class="price_color">£51.77</p>' parent='<div class="product_price"> <p class="pr...'>
>>> page.find_by_regex(r'£[\d\.]+', first_match=False) # Get all elements whose text content matches my price regex
[<data='<p class="price_color">£51.77</p>' parent='<div class="product_price"> <p class="pr...'>,
<data='<p class="price_color">£53.74</p>' parent='<div class="product_price"> <p class="pr...'>,
<data='<p class="price_color">£50.10</p>' parent='<div class="product_price"> <p class="pr...'>,
<data='<p class="price_color">£47.82</p>' parent='<div class="product_price"> <p class="pr...'>,
...]
Find all elements that are similar to the current element in location and attributes
# For this case, ignore the 'title' attribute while matching
>>> page.find_by_text('Tipping the Velvet').find_similar(ignore_attributes=['title'])
[<data='<a href="catalogue/a-light-in-the-attic_...' parent='<h3><a href="catalogue/a-light-in-the-at...'>,
<data='<a href="catalogue/soumission_998/index....' parent='<h3><a href="catalogue/soumission_998/in...'>,
<data='<a href="catalogue/sharp-objects_997/ind...' parent='<h3><a href="catalogue/sharp-objects_997...'>,
...]
# You will notice that the number of elements is 19 not 20 because the current element is not included.
>>> len(page.find_by_text('Tipping the Velvet').find_similar(ignore_attributes=['title']))
19
# Get the `href` attribute from all similar elements
>>> [element.attrib['href'] for element in page.find_by_text('Tipping the Velvet').find_similar(ignore_attributes=['title'])]
['catalogue/a-light-in-the-attic_1000/index.html',
'catalogue/soumission_998/index.html',
'catalogue/sharp-objects_997/index.html',
...]
To increase the complexity a little bit, let's say we want to get all books' data using that element as a starting point for some reason:
>>> for product in page.find_by_text('Tipping the Velvet').parent.parent.find_similar():
...     print({
...         "name": product.css_first('h3 a::text'),
...         "price": product.css_first('.price_color').re_first(r'[\d\.]+'),
...         "stock": product.css('.availability::text')[-1].clean()
...     })
{'name': 'A Light in the ...', 'price': '51.77', 'stock': 'In stock'}
{'name': 'Soumission', 'price': '50.10', 'stock': 'In stock'}
{'name': 'Sharp Objects', 'price': '47.82', 'stock': 'In stock'}
...
The documentation will provide more advanced examples.
Let's say you are scraping a page with a structure like this:
<div class="container">
<section class="products">
<article class="product" id="p1">
<h3>Product 1</h3>
<p class="description">Description 1</p>
</article>
<article class="product" id="p2">
<h3>Product 2</h3>
<p class="description">Description 2</p>
</article>
</section>
</div>
And you want to scrape the first product, the one with the `p1` ID. You will probably write a selector like this:
page.css('#p1')
When website owners implement structural changes like
<div class="new-container">
<div class="product-wrapper">
<section class="products">
<article class="product new-class" data-id="p1">
<div class="product-info">
<h3>Product 1</h3>
<p class="new-description">Description 1</p>
</div>
</article>
<article class="product new-class" data-id="p2">
<div class="product-info">
<h3>Product 2</h3>
<p class="new-description">Description 2</p>
</div>
</article>
</section>
</div>
</div>
The selector will no longer function and your code needs maintenance. That's where Scrapling's auto-matching feature comes into play.
from scrapling.parser import Adaptor
# Before the change
page = Adaptor(page_source, url='example.com')
element = page.css('#p1', auto_save=True)
if not element: # One day website changes?
element = page.css('#p1', auto_match=True) # Scrapling still finds it!
# the rest of the code...
How does the auto-matching work? Check the FAQs section for that and other possible issues while auto-matching.
Let's use a real website as an example and use one of the fetchers to fetch its source. To do this, we need to find a website that will change its design/structure soon, take a copy of its source, and then wait for the website to make the change. Of course, that's nearly impossible to know unless I know the website's owner, but that would make it a staged test haha.
To solve this issue, I will use The Web Archive's Wayback Machine. Here is a copy of StackOverflow's website from 2010, pretty old huh? Let's test if the auto-matching feature can extract the same button from the old 2010 design and the current design using the same selector :)
If I want to extract the Questions button from the old design, I can use a selector like this: `#hmenus > div:nth-child(1) > ul > li:nth-child(1) > a`
This selector is too specific because it was generated by Google Chrome. Now let's test the same selector in both versions:
>> from scrapling.fetchers import Fetcher
>> selector = '#hmenus > div:nth-child(1) > ul > li:nth-child(1) > a'
>> old_url = "https://web.archive.org/web/20100102003420/http://stackoverflow.com/"
>> new_url = "https://stackoverflow.com/"
>>
>> page = Fetcher(automatch_domain='stackoverflow.com').get(old_url, timeout=30)
>> element1 = page.css_first(selector, auto_save=True)
>>
>> # Same selector but used in the updated website
>> page = Fetcher(automatch_domain="stackoverflow.com").get(new_url)
>> element2 = page.css_first(selector, auto_match=True)
>>
>> if element1.text == element2.text:
... print('Scrapling found the same element in the old design and the new design!')
'Scrapling found the same element in the old design and the new design!'
Note that I used a new argument called `automatch_domain`. This is because, to Scrapling, these are two different URLs, not the same website, so it isolates their data. To tell Scrapling they are the same website, we pass the domain we want to use for saving the auto-match data for both, so Scrapling doesn't isolate them.
In a real-world scenario, the code will be the same, except it will use the same URL for both requests, so you won't need the `automatch_domain` argument. This is the closest example I can give to real-world cases, so I hope it didn't confuse you :)
Notes:
1. For the two examples above, I used the `Adaptor` class once and the `Fetcher` class the other time, just to show that you can create the `Adaptor` object yourself if you have the source, or fetch the source using any `Fetcher` class and it will create the `Adaptor` object for you.
2. Passing the `auto_save` argument while the `auto_match` argument is set to `False` when initializing the Adaptor/Fetcher object will only result in the `auto_save` argument value being ignored, with the warning message: Argument `auto_save` will be ignored because `auto_match` wasn't enabled on initialization. Check docs for more info. This behavior is purely for performance reasons, so the database gets created/connected only when you are planning to use the auto-matching features. The same applies to the `auto_match` argument.
3. The `auto_match` parameter works only for `Adaptor` instances, not `Adaptors`, so if you do something like `page.css('body').css('#p1', auto_match=True)` you will get an error, because you can't auto-match a whole list; you have to be specific and do something like `page.css_first('body').css('#p1', auto_match=True)`.
Inspired by BeautifulSoup's `find_all` function, you can find elements using the `find_all`/`find` methods. Both methods can take multiple types of filters and return all elements in the page that all these filters apply to.
The way it works is that, after collecting all passed arguments and keywords, each filter passes its results to the following filter in a waterfall-like filtering system.
It filters all elements in the current page/element in a fixed order, starting with tag names, then attributes, and so on.
Note: The filtering process always starts from the first filter it finds in that filtering order, so if no tag name(s) are passed but attributes are passed, the process starts from that layer, and so on. But the order in which you pass the arguments doesn't matter.
Examples to clear any confusion :)
>> from scrapling.fetchers import Fetcher
>> page = Fetcher().get('https://quotes.toscrape.com/')
# Find all elements with tag name `div`.
>> page.find_all('div')
[<data='<div class="container"> <div class="row...' parent='<body> <div class="container"> <div clas...'>,
<data='<div class="row header-box"> <div class=...' parent='<div class="container"> <div class="row...'>,
...]
# Find all div elements with a class that equals `quote`.
>> page.find_all('div', class_='quote')
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
...]
# Same as above.
>> page.find_all('div', {'class': 'quote'})
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
...]
# Find all elements with a class that equals `quote`.
>> page.find_all({'class': 'quote'})
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
...]
# Find all div elements with a class that equals `quote`, and contains the element `.text` which contains the word 'world' in its content.
>> page.find_all('div', {'class': 'quote'}, lambda e: "world" in e.css_first('.text::text'))
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>]
# Find all elements that have at least one child element.
>> page.find_all(lambda element: len(element.children) > 0)
[<data='<html lang="en"><head><meta charset="UTF...'>,
<data='<head><meta charset="UTF-8"><title>Quote...' parent='<html lang="en"><head><meta charset="UTF...'>,
<data='<body> <div class="container"> <div clas...' parent='<html lang="en"><head><meta charset="UTF...'>,
...]
# Find all elements that contain the word 'world' in their content.
>> page.find_all(lambda element: "world" in element.text)
[<data='<span class="text" itemprop="text">"The...' parent='<div class="quote" itemscope itemtype="h...'>,
<data='<a class="tag" href="/tag/world/page/1/"...' parent='<div class="tags"> Tags: <meta class="ke...'>]
# Find all span elements that match the given regex (requires `import re`)
>> page.find_all('span', re.compile(r'world'))
[<data='<span class="text" itemprop="text">"The...' parent='<div class="quote" itemscope itemtype="h...'>]
# Find all div and span elements with class 'quote' (No span elements like that so only div returned)
>> page.find_all(['div', 'span'], {'class': 'quote'})
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
...]
# Mix things up
>> page.find_all({'itemtype':"http://schema.org/CreativeWork"}, 'div').css('.author::text')
['Albert Einstein',
'J.K. Rowling',
...]
Here's what else you can do with Scrapling:
Get the `lxml.etree` object itself of any element directly:
>>> quote._root
<Element div at 0x107f98870>
Save and retrieve elements manually to auto-match them outside the `css` and `xpath` methods, but you have to set the identifier yourself. To save an element to the database:
>>> element = page.find_by_text('Tipping the Velvet', first_match=True)
>>> page.save(element, 'my_special_element')
To retrieve it later and relocate it in the page:
>>> element_dict = page.retrieve('my_special_element')
>>> page.relocate(element_dict, adaptor_type=True)
[<data='<a href="catalogue/tipping-the-velvet_99...' parent='<h3><a href="catalogue/tipping-the-velve...'>]
>>> page.relocate(element_dict, adaptor_type=True).css('::text')
['Tipping the Velvet']
If you want to keep it as an `lxml.etree` object, leave out the `adaptor_type` argument:
>>> page.relocate(element_dict)
[<Element a at 0x105a2a7b0>]
Filtering results based on a function
# Find all products over $50
expensive_products = page.css('.product_pod').filter(
lambda p: float(p.css('.price_color').re_first(r'[\d\.]+')) > 50
)
# Find all the products with price '54.23'
page.css('.product_pod').search(
lambda p: float(p.css('.price_color').re_first(r'[\d\.]+')) == 54.23
)
Doing operations on element content is the same as in Scrapy:
quote.re(r'regex_pattern')        # Get all strings (TextHandlers) that match the regex pattern
quote.re_first(r'regex_pattern')  # Get the first string (TextHandler) only
quote.json()                      # If the content text is JSON-able, convert it to JSON using `orjson`, which is ~10x faster than the standard json library and provides more options
except that you can do more with them, like:
quote.re(
    r'regex_pattern',
    replace_entities=True,   # Character entity references are replaced by their corresponding character
    clean_match=True,        # Ignore all whitespaces and consecutive spaces while matching
    case_sensitive=False,    # Set the regex to ignore letter case while compiling it
)
All of these methods come from the `TextHandler` that holds the text content, so the same can be done directly if you call the `.text` property or the equivalent selector function.
Doing operations on the text content itself includes:
quote.clean()  # Clean the text content
These selector methods return `TextHandler` objects too, so in cases where you have, for example, a JS object assigned to a JS variable inside JS code and want to extract it with regex and then convert it to a JSON object, other libraries would need more than one line of code, but here you can do it in one line:
page.xpath('//script/text()').re_first(r'var dataLayer = (.+);').json()
Sort all characters in the string as if it were a list and return the new string:
quote.sort(reverse=False)
To be clear, `TextHandler` is a subclass of Python's `str`, so all normal operations/methods that work with Python strings will work with it.
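For example, assuming `page` is the quotes.toscrape.com page fetched earlier, ordinary string operations work on the selected text directly (a small illustrative sketch):
>>> text = page.css_first('.quote .text::text')
>>> text.upper()           # any str method works directly
>>> text.startswith('"')
>>> len(text)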
Any element's attributes are not exactly a dictionary but a read-only subclass of mapping called `AttributesHandler`, so it's faster, and the string values returned are actually `TextHandler` objects, so all of the operations above can be done on them, plus standard dictionary operations that don't modify the data, and more :)
>>> for item in element.attrib.search_values('catalogue', partial=True):
...     print(item)
{'href': 'catalogue/tipping-the-velvet_999/index.html'}
>>> element.attrib.json_string
b'{"href":"catalogue/tipping-the-velvet_999/index.html","title":"Tipping the Velvet"}'
>>> dict(element.attrib)
{'href': 'catalogue/tipping-the-velvet_999/index.html', 'title': 'Tipping the Velvet'}
Scrapling is under active development so expect many more features coming soon :)
There are a lot of deep details skipped here to keep this as short as possible, so to take a deep dive, head to the docs section. I will try to keep it as up to date as possible and add complex examples. There I will explain points like how to write your own storage system, how to write spiders that don't depend on selectors at all, and more...
Note that implementing your storage system can be complex as there are some strict rules such as inheriting from the same abstract class, following the singleton design pattern used in other classes, and more. So make sure to read the docs first.
[!IMPORTANT] A website is needed to provide detailed library documentation.
I'm trying to rush creating the website, researching new ideas, and adding more features/tests/benchmarks but time is tight with too many spinning plates between work, personal life, and working on Scrapling. I have been working on Scrapling for months for free after all.
If you like Scrapling and want it to keep improving, then this is a friendly reminder that you can help by supporting me through the sponsor button.
This section addresses common questions about Scrapling, please read this section before opening an issue.
How does the auto-matching work? In short: you need to run the target selector at least once with `css` or `xpath` with the `auto_save` parameter set to `True`, before structural changes happen. Now, because everything about the element can be changed or removed, nothing from the element itself can be used as a unique identifier for the database. To solve this issue, the storage system relies on two things: the URL of the page and the `identifier` parameter you passed to the method while selecting. If you didn't pass one, then the selector string itself is used as the identifier, but remember that you will have to use it as the identifier value later, when the structure changes and you want to pass the new selector. Together, both are used to retrieve the element's unique properties from the database later. When you later enable the `auto_match` parameter for both the Adaptor instance and the method call, the element's properties are retrieved and Scrapling loops over all elements in the page, comparing each one's unique properties to the unique properties we already have for this element; a score is calculated for each one. Comparing elements is not exact but is more about finding how similar these values are, so everything is taken into consideration, even the values' order, like the order in which the element's class names were written before and the order in which the same class names are written now. The score for each element is stored in the table, and the element(s) with the highest combined similarity scores are returned.
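To make the scoring idea concrete, here is a deliberately simplified, illustrative similarity function. It is not Scrapling's actual implementation, and the property names (`tag`, `classes`, `text`) are placeholders:
def similarity_score(saved: dict, candidate: dict) -> float:
    score = 0.0
    # Matching tag names is a strong signal
    if saved.get('tag') == candidate.get('tag'):
        score += 1.0
    # Shared class names add to the score; an identical order adds a little more
    saved_cls, cand_cls = saved.get('classes', []), candidate.get('classes', [])
    if saved_cls or cand_cls:
        shared = len(set(saved_cls) & set(cand_cls))
        score += shared / max(len(saved_cls), len(cand_cls))
        if saved_cls == cand_cls:
            score += 0.5
    # Identical text content adds to the score as well
    if saved.get('text') and saved.get('text') == candidate.get('text'):
        score += 1.0
    return score
The candidate element(s) with the highest score would then be returned, as described above.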
If you didn't pass a URL while initializing the Adaptor, it's not a big problem, as it depends on your usage: the word `default` will be used in place of the URL field while saving the element's unique properties. So this will only be an issue if you later use the same identifier for a different website that you also didn't pass the URL parameter for while initializing it. The save process will overwrite the previous data, and auto-matching uses the latest saved properties only.
For each element, Scrapling will extract:
- The element's tag name, text, attributes (names and values), siblings (tag names only), and path (tag names only).
- The element's parent tag name, attributes (names and values), and text.
If you passed the `auto_save`/`auto_match` parameter while selecting and it got completely ignored with a warning message, that's because passing the `auto_save`/`auto_match` argument without setting `auto_match` to `True` while initializing the Adaptor object will only result in the `auto_save`/`auto_match` argument value being ignored. This behavior is purely for performance reasons, so the database gets created only when you are planning to use the auto-matching features.
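In other words, the fix is to enable auto-matching at initialization before relying on `auto_save` in a selection; a small sketch using the fetcher shown earlier:
fetcher = Fetcher(auto_match=True)                    # enable auto-matching at initialization
page = fetcher.get('https://quotes.toscrape.com/')
element = page.css_first('.quote', auto_save=True)    # no longer ignored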
If auto-matching doesn't find your element, it could be for one of these reasons:
1. No data was saved/stored for this element before.
2. The selector passed is not the one used while storing the element's data. The solution is simple: pass the old selector again as an identifier to the method called; or retrieve the element with the `retrieve` method using the old selector as the identifier, then save it again with the `save` method and the new selector as the identifier; or start using the `identifier` argument more often if you plan to use every new selector from now on.
3. The website had some extreme structural changes, like a completely new design. If this happens a lot with a website, the solution is to make your code as selector-free as possible using Scrapling's features.
Pretty much yeah, almost all features you get from BeautifulSoup can be found or achieved in Scrapling one way or another. In fact, if you see there's a feature in bs4 that is missing in Scrapling, please make a feature request from the issues tab to let me know.
Of course: you can find elements by text/regex, find similar elements in a more reliable way than AutoScraper, and save/retrieve elements manually to use later, like AutoScraper's model feature. I have pulled all the top articles about AutoScraper from Google and tested Scrapling against the examples in them. In all examples, Scrapling got the same results as AutoScraper in much less time.
Yes, Scrapling instances are thread-safe. Each Adaptor instance maintains its state.
Everybody is invited and welcome to contribute to Scrapling. There is a lot to do!
Please read the contributing file before doing anything.
[!CAUTION] This library is provided for educational and research purposes only. By using this library, you agree to comply with local and international laws regarding data scraping and privacy. The authors and contributors are not responsible for any misuse of this software. This library should not be used to violate the rights of others, for unethical purposes, or to use data in an unauthorized or illegal manner. Do not use it on any website unless you have permission from the website owner or are within their allowed rules, like the `robots.txt` file, for example.
This work is licensed under BSD-3
This project includes code adapted from: - Parsel (BSD License) - Used for translator submodule
Frogy 2.0 is an automated external reconnaissance and Attack Surface Management (ASM) toolkit designed to map out an organization's entire internet presence. It identifies assets, IP addresses, web applications, and other metadata across the public internet and then smartly prioritizes them from highest (most attractive) to lowest (least attractive) from an attacker's perspective.
Comprehensive recon:
Aggregate subdomains and assets using multiple tools (CHAOS, Subfinder, Assetfinder, crt.sh) to map an organization's entire digital footprint.
Live asset verification:
Validate assets with live DNS resolution and port scanning (using DNSX and Naabu) to confirm what is publicly reachable.
In-depth web recon:
Collect detailed HTTP response data (via HTTPX) including metadata, technology stack, status codes, content lengths, and more.
Smart prioritization:
Use a composite scoring system that considers homepage status, login identification, technology stack, DNS data, and much more to generate a risk score for each asset, helping bug bounty hunters and pentesters focus on the most promising targets to start attacks with.
Professional reporting:
Generate a dynamic, colour-coded HTML report with a modern design and dark/light theme toggle.
In this tool, risk scoring is based on the notion of asset attractiveness—the idea that certain attributes or characteristics make an asset more interesting to attackers. If we see more of these attributes, the overall score goes up, indicating a broader "attack surface" that adversaries could leverage. Below is an overview of how each factor contributes to the final risk score.
If an asset's homepage responds with `200 OK`, it often means the page is legitimately reachable and responding with content. A `200 OK` is more interesting to attackers than a `404` or a redirect, so a 200 status modestly increases the risk.
Missing security headers also raise the score. The headers checked include `Strict-Transport-Security` (HSTS), `X-Frame-Options`, `Content-Security-Policy`, `X-XSS-Protection`, `Referrer-Policy`, and `Permissions-Policy`. Missing or disabled headers mean an endpoint is more prone to common web exploits, and each absent header increments the score.
Each factor above contributes one or more points to the final risk score. Once all factors are tallied, we get a numeric risk score; a higher score means the asset is more interesting to an attacker and potentially gives pentesters more room to test.
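As a rough illustration of how such a composite tally could work (this is not Frogy's actual implementation; the field names and point values are assumptions):
REQUIRED_HEADERS = (
    'strict-transport-security', 'x-frame-options', 'content-security-policy',
    'x-xss-protection', 'referrer-policy', 'permissions-policy',
)

def risk_score(asset: dict) -> int:
    # Tally points for attributes that make an asset more attractive to attackers
    score = 0
    if asset.get('status_code') == 200:        # homepage is reachable and serving content
        score += 1
    if asset.get('has_login'):                 # a login panel was identified
        score += 2
    score += len(asset.get('open_ports', []))  # every open port widens the attack surface
    present = {h.lower() for h in asset.get('headers', {})}
    score += sum(1 for h in REQUIRED_HEADERS if h not in present)  # each missing security header
    return score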
Why This Matters
This approach helps you quickly prioritize which assets warrant deeper testing. Subdomains with high counts of open ports, advanced internal usage, missing headers, or login panels are more complex, more privileged, or more likely to be misconfigured—therefore, your security team can focus on those first.
Clone the repository and run the installer script to set up all dependencies and tools:
chmod +x install.sh
./install.sh
chmod +x frogy.sh
./frogy.sh domains.txt
https://www.youtube.com/watch?v=LHlU4CYNj1M
OWASP Maryam is a modular open-source framework based on OSINT and data gathering. It is designed to provide a robust environment to harvest data from open sources and search engines quickly and thoroughly.
$ pip install maryam
Alternatively, you can install the latest version with the following command (Recommended):
pip install git+https://github.com/saeeddhqan/maryam.git
# Using dns_search. --max means use all of the resources. --api shows the results as JSON.
# -t means use multi-threading.
maryam -e dns_search -d ibm.com -t 5 --max --api --form
# Using youtube. -q means query
maryam -e youtube -q "<QUERY>"
maryam -e google -q "<QUERY>"
maryam -e dnsbrute -d domain.tld
# Show framework modules
maryam -e show modules
# Set framework options.
maryam -e set proxy ..
maryam -e set agent ..
maryam -e set timeout ..
# Run web API
maryam -e web api 127.0.0.1 1313
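If you want to drive these commands from Python, a minimal sketch like the following works, assuming the `--api` flag writes its JSON results to stdout as the comments above suggest:
import json
import subprocess

# Run the dns_search example shown above and capture its JSON output
result = subprocess.run(
    ['maryam', '-e', 'dns_search', '-d', 'ibm.com', '-t', '5', '--max', '--api'],
    capture_output=True, text=True, check=True,
)
data = json.loads(result.stdout)
print(data)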
Here is a starting guide: the Development Guide. You can add a new search engine to the util classes or use the current search engines to write a new module. The best help for writing a new module is checking the current modules.
To report bugs, requests, or any other issues please create an issue.
A critical resource that cybersecurity professionals worldwide rely on to identify, mitigate and fix security vulnerabilities in software and hardware is in danger of breaking down. The federally funded, non-profit research and development organization MITRE warned today that its contract to maintain the Common Vulnerabilities and Exposures (CVE) program — which is traditionally funded each year by the Department of Homeland Security — expires on April 16.
A letter from MITRE vice president Yosry Barsoum, warning that the funding for the CVE program will expire on April 16, 2025.
Tens of thousands of security flaws in software are found and reported every year, and these vulnerabilities are eventually assigned their own unique CVE tracking number (e.g. CVE-2024-43573, which is a Microsoft Windows bug that Redmond patched last year).
There are hundreds of organizations — known as CVE Numbering Authorities (CNAs) — that are authorized by MITRE to bestow these CVE numbers on newly reported flaws. Many of these CNAs are country and government-specific, or tied to individual software vendors or vulnerability disclosure platforms (a.k.a. bug bounty programs).
Put simply, MITRE is a critical, widely-used resource for centralizing and standardizing information on software vulnerabilities. That means the pipeline of information it supplies is plugged into an array of cybersecurity tools and services that help organizations identify and patch security holes — ideally before malware or malcontents can wriggle through them.
“What the CVE lists really provide is a standardized way to describe the severity of that defect, and a centralized repository listing which versions of which products are defective and need to be updated,” said Matt Tait, chief operating officer of Corellium, a cybersecurity firm that sells phone-virtualization software for finding security flaws.
In a letter sent today to the CVE board, MITRE Vice President Yosry Barsoum warned that on April 16, 2025, “the current contracting pathway for MITRE to develop, operate and modernize CVE and several other related programs will expire.”
“If a break in service were to occur, we anticipate multiple impacts to CVE, including deterioration of national vulnerability databases and advisories, tool vendors, incident response operations, and all manner of critical infrastructure,” Barsoum wrote.
MITRE told KrebsOnSecurity the CVE website listing vulnerabilities will remain up after the funding expires, but that new CVEs won’t be added after April 16.
A representation of how a vulnerability becomes a CVE, and how that information is consumed. Image: James Berthoty, Latio Tech, via LinkedIn.
DHS officials did not immediately respond to a request for comment. The program is funded through DHS’s Cybersecurity & Infrastructure Security Agency (CISA), which is currently facing deep budget and staffing cuts by the Trump administration. The CVE contract available at USAspending.gov says the project was awarded approximately $40 million last year.
Former CISA Director Jen Easterly said the CVE program is a bit like the Dewey Decimal System, but for cybersecurity.
“It’s the global catalog that helps everyone—security teams, software vendors, researchers, governments—organize and talk about vulnerabilities using the same reference system,” Easterly said in a post on LinkedIn. “Without it, everyone is using a different catalog or no catalog at all, no one knows if they’re talking about the same problem, defenders waste precious time figuring out what’s wrong, and worst of all, threat actors take advantage of the confusion.”
John Hammond, principal security researcher at the managed security firm Huntress, told Reuters he swore out loud when he heard the news that CVE’s funding was in jeopardy, and that losing the CVE program would be like losing “the language and lingo we used to address problems in cybersecurity.”
“I really can’t help but think this is just going to hurt,” said Hammond, who posted a Youtube video to vent about the situation and alert others.
Several people close to the matter told KrebsOnSecurity this is not the first time the CVE program’s budget has been left in funding limbo until the last minute. Barsoum’s letter, which was apparently leaked, sounded a hopeful note, saying the government is making “considerable efforts to continue MITRE’s role in support of the program.”
Tait said that without the CVE program, risk managers inside companies would need to continuously monitor many other places for information about new vulnerabilities that may jeopardize the security of their IT networks. Meaning, it may become more common that software updates get mis-prioritized, with companies having hackable software deployed for longer than they otherwise would, he said.
“Hopefully they will resolve this, but otherwise the list will rapidly fall out of date and stop being useful,” he said.
Update, April 16, 11:00 a.m. ET: The CVE board today announced the creation of non-profit entity called The CVE Foundation that will continue the program’s work under a new, unspecified funding mechanism and organizational structure.
“Since its inception, the CVE Program has operated as a U.S. government-funded initiative, with oversight and management provided under contract,” the press release reads. “While this structure has supported the program’s growth, it has also raised longstanding concerns among members of the CVE Board about the sustainability and neutrality of a globally relied-upon resource being tied to a single government sponsor.”
The organization’s website, thecvefoundation.org, is less than a day old and currently hosts no content other than the press release heralding its creation. The announcement said the foundation would release more information about its structure and transition planning in the coming days.
Update, April 16, 4:26 p.m. ET: MITRE issued a statement today saying it “identified incremental funding to keep the programs operational. We appreciate the overwhelming support for these programs that have been expressed by the global cyber community, industry and government over the last 24 hours. The government continues to make considerable efforts to support MITRE’s role in the program and MITRE remains committed to CVE and CWE as global resources.”
Lobo Guará is a platform aimed at cybersecurity professionals, with various features focused on Cyber Threat Intelligence (CTI). It offers tools that make it easier to identify threats, monitor data leaks, analyze suspicious domains and URLs, and much more.
Allows identifying domains and subdomains that may pose a threat to organizations. SSL certificates issued by trusted authorities are indexed in real-time, and users can search using keywords of 4 or more characters.
Note: The current database contains certificates issued from September 5, 2024.
Allows the insertion of keywords for monitoring. When a certificate is issued and the common name contains the keyword (minimum of 5 characters), it will be displayed to the user.
Generates a link to capture device information from attackers. Useful when the security professional can contact the attacker in some way.
Performs a scan on a domain, displaying whois information and subdomains associated with that domain.
Allows performing a scan on a URL to identify URIs (web paths) related to that URL.
Performs a scan on a URL, generating a screenshot and a mirror of the page. The result can be made public to assist in taking down malicious websites.
Monitors a URL with no active application until it returns an HTTP 200 code. At that moment, it automatically initiates a URL scan, providing evidence for actions against malicious sites. A conceptual sketch of this polling loop follows the feature overview below.
Centralizes intelligence news from various channels, keeping users updated on the latest threats.
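The URL monitoring feature described above boils down to a simple polling loop. The sketch below is purely conceptual, not Lobo Guará's code; the URL and polling interval are placeholders:
import time
import urllib.request

def wait_for_http_200(url: str, interval_seconds: int = 60) -> None:
    # Poll the URL until it answers with HTTP 200, then hand off to the URL scan step
    while True:
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                if response.status == 200:
                    print(f'{url} is live, starting the URL scan...')
                    return
        except Exception:
            pass  # not reachable yet, keep waiting
        time.sleep(interval_seconds)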
The application installation has been validated on the Ubuntu 24.04 Server and Red Hat 9.4 distributions; the implementation guides are linked below:
Lobo Guará Implementation on Ubuntu 24.04
Lobo Guará Implementation on Red Hat 9.4
There is a Dockerfile and a docker-compose version of Lobo Guará too. Just clone the repo and do:
docker compose up
Then, go to your web browser at localhost:7405.
Before proceeding with the installation, ensure the required dependencies are installed, then clone the repository and edit the configuration:
git clone https://github.com/olivsec/loboguara.git
cd loboguara/
nano server/app/config.py
Fill in the required parameters in the `config.py` file:
class Config:
SECRET_KEY = 'YOUR_SECRET_KEY_HERE'
SQLALCHEMY_DATABASE_URI = 'postgresql://guarauser:YOUR_PASSWORD_HERE@localhost/guaradb?sslmode=disable'
SQLALCHEMY_TRACK_MODIFICATIONS = False
MAIL_SERVER = 'smtp.example.com'
MAIL_PORT = 587
MAIL_USE_TLS = True
MAIL_USERNAME = 'no-reply@example.com'
MAIL_PASSWORD = 'YOUR_SMTP_PASSWORD_HERE'
MAIL_DEFAULT_SENDER = 'no-reply@example.com'
ALLOWED_DOMAINS = ['yourdomain1.my.id', 'yourdomain2.com', 'yourdomain3.net']
API_ACCESS_TOKEN = 'YOUR_LOBOGUARA_API_TOKEN_HERE'
API_URL = 'https://loboguara.olivsec.com.br/api'
CHROME_DRIVER_PATH = '/opt/loboguara/bin/chromedriver'
GOOGLE_CHROME_PATH = '/opt/loboguara/bin/google-chrome'
FFUF_PATH = '/opt/loboguara/bin/ffuf'
SUBFINDER_PATH = '/opt/loboguara/bin/subfinder'
LOG_LEVEL = 'ERROR'
LOG_FILE = '/opt/loboguara/logs/loboguara.log'
sudo chmod +x ./install.sh
sudo ./install.sh
sudo -u loboguara /opt/loboguara/start.sh
Access the URL below to register the Lobo Guará Super Admin
http://your_address:7405/admin
Access the Lobo Guará platform online: https://loboguara.olivsec.com.br/
“Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.” -U.S. Constitution, First Amendment.
Image: Shutterstock, zimmytws.
In an address to Congress this month, President Trump claimed he had “brought free speech back to America.” But barely two months into his second term, the president has waged an unprecedented attack on the First Amendment rights of journalists, students, universities, government workers, lawyers and judges.
This story explores a slew of recent actions by the Trump administration that threaten to undermine all five pillars of the First Amendment to the U.S. Constitution, which guarantees freedoms concerning speech, religion, the media, the right to assembly, and the right to petition the government and seek redress for wrongs.
The right to petition allows citizens to communicate with the government, whether to complain, request action, or share viewpoints — without fear of reprisal. But that right is being assaulted by this administration on multiple levels. For starters, many GOP lawmakers are now heeding their leadership’s advice to stay away from local town hall meetings and avoid the wrath of constituents affected by the administration’s many federal budget and workforce cuts.
Another example: President Trump recently fired most of the people involved in processing Freedom of Information Act (FOIA) requests for government agencies. FOIA is an indispensable tool used by journalists and the public to request government records, and to hold leaders accountable.
The biggest story by far this week was the bombshell from The Atlantic editor Jeffrey Goldberg, who recounted how he was inadvertently added to a Signal group chat with National Security Advisor Michael Waltz and 16 other Trump administration officials discussing plans for an upcoming attack on Yemen.
One overlooked aspect of Goldberg’s incredible account is that by planning and coordinating the attack on Signal — which features messages that can auto-delete after a short time — administration officials were evidently seeking a way to avoid creating a lasting (and potentially FOIA-able) record of their deliberations.
“Intentional or not, use of Signal in this context was an act of erasure—because without Jeffrey Goldberg being accidentally added to the list, the general public would never have any record of these communications or any way to know they even occurred,” Tony Bradley wrote this week at Forbes.
Petitioning the government, particularly when it ignores your requests, often requires challenging federal agencies in court. But that becomes far more difficult if the most competent law firms start to shy away from cases that may involve crossing the president and his administration.
On March 22, the president issued a memorandum that directs heads of the Justice and Homeland Security Departments to “seek sanctions against attorneys and law firms who engage in frivolous, unreasonable and vexatious litigation against the United States,” or in matters that come before federal agencies.
The POTUS recently issued several executive orders railing against specific law firms whose attorneys worked on legal cases against him. On Friday, the president announced that the law firm of Skadden, Arps, Slate, Meagher & Flom had agreed to provide $100 million in pro bono work on issues that he supports.
Trump issued another order naming the firm Paul, Weiss, Rifkind, Wharton & Garrison, which ultimately agreed to pledge $40 million in pro bono legal services to the president’s causes.
Other Trump executive orders targeted law firms Jenner & Block and WilmerHale, both of which have attorneys that worked with special counsel Robert Mueller on the investigation into Russian interference in the 2016 election. But this week, two federal judges in separate rulings froze parts of those orders.
“There is no doubt this retaliatory action chills speech and legal advocacy, and that is qualified as a constitutional harm,” wrote Judge Richard Leon, who ruled against the executive order targeting WilmerHale.
President Trump recently took the extraordinary step of calling for the impeachment of federal judges who rule against the administration. Trump called U.S. District Judge James Boasberg a “Radical Left Lunatic” and urged he be removed from office for blocking deportation of Venezuelan alleged gang members under a rarely invoked wartime legal authority.
In a rare public rebuke to a sitting president, U.S. Supreme Court Chief Justice John Roberts issued a statement on March 18 pointing out that “For more than two centuries, it has been established that impeachment is not an appropriate response to disagreement concerning a judicial decision.”
The U.S. Constitution provides that judges can be removed from office only through impeachment by the House of Representatives and conviction by the Senate. The Constitution also states that judges’ salaries cannot be reduced while they are in office.
Undeterred, House Speaker Mike Johnson this week suggested the administration could still use the power of its purse to keep courts in line, and even floated the idea of eliminating entire federal district courts.
“We do have authority over the federal courts as you know,” Johnson said. “We can eliminate an entire district court. We have power of funding over the courts, and all these other things. But desperate times call for desperate measures, and Congress is going to act, so stay tuned for that.”
President Trump has taken a number of actions to discourage lawful demonstrations at universities and colleges across the country, threatening to cut federal funding for any college that supports protests he deems “illegal.”
A Trump executive order in January outlined a broad federal crackdown on what he called “the explosion of antisemitism” on U.S. college campuses. This administration has asserted that foreign students who are lawfully in the United States on visas do not enjoy the same free speech or due process rights as citizens.
Reuters reports that the acting civil rights director at the Department of Education on March 10 sent letters to 60 educational institutions warning they could lose federal funding if they don’t do more to combat antisemitism. On March 20, Trump issued an order calling for the closure of the Education Department.
Meanwhile, U.S. Immigration and Customs Enforcement (ICE) agents have been detaining and trying to deport pro-Palestinian students who are legally in the United States. The administration is targeting students and academics who spoke out against Israel’s attacks on Gaza, or who were active in campus protests against U.S. support for the attacks. Secretary of State Marco Rubio told reporters Thursday that at least 300 foreign students have seen their visas revoked under President Trump, a far higher number than was previously known.
In his first term, Trump threatened to use the national guard or the U.S. military to deal with protesters, and in campaigning for re-election he promised to revisit the idea.
“I think the bigger problem is the enemy from within,” Trump told Fox News in October 2024. “We have some very bad people. We have some sick people, radical left lunatics. And I think they’re the big — and it should be very easily handled by, if necessary, by National Guard, or if really necessary, by the military, because they can’t let that happen.”
This term, Trump acted swiftly to remove the top judicial advocates in the armed forces who would almost certainly push back on any request by the president to use U.S. soldiers in an effort to quell public protests, or to arrest and detain immigrants. In late February, the president and Defense Secretary Pete Hegseth fired the top legal officers for the military services — those responsible for ensuring the Uniform Code of Military Justice is followed by commanders.
Military.com warns that the purge “sets an alarming precedent for a crucial job in the military, as President Donald Trump has mused about using the military in unorthodox and potentially illegal ways.” Hegseth told reporters the removals were necessary because he didn’t want them to pose any “roadblocks to orders that are given by a commander in chief.”
President Trump has sued a number of U.S. news outlets, including 60 Minutes, CNN, The Washington Post, The New York Times and other smaller media organizations for unflattering coverage.
In a $10 billion lawsuit against 60 Minutes and its parent Paramount, Trump claims they selectively edited an interview with former Vice President Kamala Harris prior to the 2024 election. The TV news show last month published transcripts of the interview at the heart of the dispute, but Paramount is reportedly considering a settlement to avoid potentially damaging its chances of winning the administration’s approval for a pending multibillion-dollar merger.
The president sued The Des Moines Register and its parent company, Gannett, for publishing a poll showing Trump trailing Harris in the 2024 presidential election in Iowa (a state that went for Trump). The POTUS also is suing the Pulitzer Prize board over 2018 awards given to The New York Times and The Washington Post for their coverage of purported Russian interference in the 2016 election.
Whether or not any of the president’s lawsuits against news organizations have merit or succeed is almost beside the point. The strategy behind suing the media is to make reporters and newsrooms think twice about criticizing or challenging the president and his administration. The president also knows some media outlets will find it more expedient to settle.
Trump also sued ABC News and George Stephanopoulos for stating that the president had been found liable for “rape” in a civil case [Trump was found liable for sexually abusing and defaming E. Jean Carroll]. ABC parent Disney settled that claim by agreeing to donate $15 million to the Trump Presidential Library.
Following the attack on the U.S. Capitol on Jan. 6, 2021, Facebook blocked President Trump’s account. Trump sued Meta, and after the president’s victory in 2024 Meta settled and agreed to pay Trump $25 million: $22 million would go to his presidential library, and the rest to legal fees. Meta CEO Mark Zuckerberg also announced Facebook and Instagram would get rid of fact-checkers and rely instead on reader-submitted “community notes” to debunk disinformation on the social media platform.
Brendan Carr, the president’s pick to run the Federal Communications Commission (FCC), has pledged to “dismantle the censorship cartel and restore free speech rights for everyday Americans.” But on January 22, 2025, the FCC reopened complaints against ABC, CBS and NBC over their coverage of the 2024 election. The previous FCC chair had dismissed the complaints as attacks on the First Amendment and an attempt to weaponize the agency for political purposes.
According to Reuters, the complaints call for an investigation into how ABC News moderated the pre-election TV debate between Trump and Harris, and into appearances of then-Vice President Harris on 60 Minutes and on NBC’s “Saturday Night Live.”
Since then, the FCC has opened investigations into NPR and PBS, alleging that they are breaking sponsorship rules. The Center for Democracy & Technology (CDT), a think tank based in Washington, D.C., noted that the FCC is also investigating KCBS in San Francisco for reporting on the location of federal immigration authorities.
“Even if these investigations are ultimately closed without action, the mere fact of opening them – and the implicit threat to the news stations’ license to operate – can have the effect of deterring the press from news coverage that the Administration dislikes,” the CDT’s Kate Ruane observed.
Trump has repeatedly threatened to “open up” libel laws, with the goal of making it easier to sue media organizations for unfavorable coverage. But this week, the U.S. Supreme Court declined to hear a challenge brought by Trump donor and Las Vegas casino magnate Steve Wynn to overturn the landmark 1964 decision in New York Times v. Sullivan, which insulates the press from libel suits over good-faith criticism of public figures.
The president also has insisted on picking which reporters and news outlets should be allowed to cover White House events and participate in the press pool that trails the president. He barred the Associated Press from the White House and Air Force One over its refusal to adopt the administration’s renaming of the Gulf of Mexico as the “Gulf of America.”
And the Defense Department has ordered a number of top media outlets to vacate their spots at the Pentagon, including CNN, The Hill, The Washington Post, The New York Times, NBC News, Politico and National Public Radio.
“Incoming media outlets include the New York Post, Breitbart, the Washington Examiner, the Free Press, the Daily Caller, Newsmax, the Huffington Post and One America News Network, most of whom are seen as conservative or favoring Republican President Donald Trump,” Reuters reported.
Shortly after Trump took office again in January 2025, the administration began circulating lists of hundreds of words that government staff and agencies shall not use in their reports and communications.
The Brookings Institution notes that in moving to comply with this anti-speech directive, federal agencies have purged countless taxpayer-funded data sets from a swathe of government websites, including data on crime, sexual orientation, gender, education, climate, and global development.
The New York Times reports that in the past two months, hundreds of terabytes of digital resources for analyzing data have been taken off government websites.
“While in many cases the underlying data still exists, the tools that make it possible for the public and researchers to use that data have been removed,” The Times wrote.
On Jan. 27, Trump issued a memo (PDF) that paused all federally funded programs pending a review of those programs for alignment with the administration’s priorities. Among those priorities was ensuring that no funding goes toward advancing “Marxist equity, transgenderism, and green new deal social engineering policies.”
According to the CDT, this order is a blatant attempt to force government grantees to cease engaging in speech that the current administration dislikes, including speech about the benefits of diversity, climate change, and LGBTQ issues.
“The First Amendment does not permit the government to discriminate against grantees because it does not like some of the viewpoints they espouse,” the CDT’s Ruane wrote. “Indeed, those groups that are challenging the constitutionality of the order argued as much in their complaint, and have won an injunction blocking its implementation.”
On January 20, the same day Trump issued an executive order on free speech, the president also issued an executive order titled “Reevaluating and Realigning United States Foreign Aid,” which froze funding for programs run by the U.S. Agency for International Development (USAID). Among those were programs designed to empower civil society and human rights groups, journalists and others responding to digital repression and Internet shutdowns.
According to the Electronic Frontier Foundation (EFF), this includes many freedom technologies that use cryptography, fight censorship, and protect freedom of speech, privacy and anonymity for millions of people around the world.
“While the State Department has issued some limited waivers, so far those waivers do not seem to cover the open source internet freedom technologies,” the EFF wrote about the USAID disruptions. “As a result, many of these projects have to stop or severely curtail their work, lay off talented workers, and stop or slow further development.”
On March 14, the president signed another executive order that effectively gutted the U.S. Agency for Global Media (USAGM), which oversees or funds media outlets including Radio Free Europe/Radio Liberty and Voice of America (VOA). The USAGM also oversees Radio Free Asia, which supporters say has been one of the most reliable tools used by the government to combat Chinese propaganda.
But this week, U.S. District Court Judge Royce Lamberth, a Reagan appointee, temporarily blocked USAGM’s closure by the administration.
“RFE/RL has, for decades, operated as one of the organizations that Congress has statutorily designated to carry out this policy,” Lamberth wrote in a 10-page opinion. “The leadership of USAGM cannot, with one sentence of reasoning offering virtually no explanation, force RFE/RL to shut down — even if the President has told them to do so.”
The Trump administration rescinded a decades-old policy that instructed officers not to take immigration enforcement actions in or near “sensitive” or “protected” places, such as churches, schools, and hospitals.
That directive was immediately challenged in a case brought by a group of Quakers, Baptists and Sikhs, who argued the policy reversal was keeping people from attending services for fear of being arrested on civil immigration violations. On Feb. 24, a federal judge agreed and blocked ICE agents from entering churches or targeting migrants nearby.
The president’s executive order allegedly addressing antisemitism came with a fact sheet that described college campuses as “infested” with “terrorists” and “jihadists.” Multiple faith groups expressed alarm over the order, saying it attempts to weaponize antisemitism and promote “dehumanizing anti-immigrant policies.”
The president also announced the creation of a “Task Force to Eradicate Anti-Christian Bias,” to be led by Attorney General Pam Bondi. Never mind that Christianity is easily the largest faith in America and that Christians are well-represented in Congress.
The Rev. Paul Brandeis Raushenbush, a Baptist minister and head of the progressive Interfaith Alliance, issued a statement accusing Trump of hypocrisy in claiming to champion religion by creating the task force.
“From allowing immigration raids in churches, to targeting faith-based charities, to suppressing religious diversity, the Trump Administration’s aggressive government overreach is infringing on religious freedom in a way we haven’t seen for generations,” Raushenbush said.
A statement from Americans United for Separation of Church and State said the task force could lead to religious persecution of those with other faiths.
“Rather than protecting religious beliefs, this task force will misuse religious freedom to justify bigotry, discrimination, and the subversion of our civil rights laws,” said Rachel Laser, the group’s president and CEO.
Where is President Trump going with all these blatant attacks on the First Amendment? The president has made no secret of his affection for autocratic leaders and “strongmen” around the world, and he is particularly enamored with Hungary’s far-right Prime Minister Viktor Orbán, who has visited Trump’s Mar-a-Lago resort twice in the past year.
A March 15 essay in The Atlantic by Hungarian investigative journalist András Pethő recounts how Orbán rose to power by consolidating control over the courts, and by building his own media universe while simultaneously placing a stranglehold on the independent press.
“As I watch from afar what’s happening to the free press in the United States during the first weeks of Trump’s second presidency — the verbal bullying, the legal harassment, the buckling by media owners in the face of threats — it all looks very familiar,” Pethő wrote. “The MAGA authorities have learned Orbán’s lessons well.”
Many successful phishing attacks result in a financial loss or malware infection. But falling for some phishing scams, like those currently targeting Russians searching online for organizations that are fighting the Kremlin war machine, can cost you your freedom or your life.
The real website of the Ukrainian paramilitary group “Freedom of Russia” legion. The text has been machine-translated from Russian.
Researchers at the security firm Silent Push mapped a network of several dozen phishing domains that spoof the recruitment websites of Ukrainian paramilitary groups, as well as Ukrainian government intelligence sites.
The website legiohliberty[.]army features a carbon copy of the homepage for the Freedom of Russia Legion (a.k.a. “Free Russia Legion”), a three-year-old Ukraine-based paramilitary unit made up of Russian citizens who oppose Vladimir Putin and his invasion of Ukraine.
The phony version of that website copies the legitimate site — legionliberty[.]army — providing an interactive Google Form where interested applicants can share their contact and personal details. The form asks visitors to provide their name, gender, age, email address and/or Telegram handle, country, citizenship, experience in the armed forces, political views, motivations for joining, and any bad habits.
“Participation in such anti-war actions is considered illegal in the Russian Federation, and participating citizens are regularly charged and arrested,” Silent Push wrote in a report released today. “All observed campaigns had similar traits and shared a common objective: collecting personal information from site-visiting victims. Our team believes it is likely that this campaign is the work of either Russian Intelligence Services or a threat actor with similarly aligned motives.”
Silent Push’s Zach Edwards said the fake Legion Liberty site shared multiple connections with rusvolcorps[.]net. That domain mimics the recruitment page for a Ukrainian far-right paramilitary group called the Russian Volunteer Corps (rusvolcorps[.]com), and uses a similar Google Forms page to collect information from would-be members.
Other domains Silent Push connected to the phishing scheme include: ciagov[.]icu, which mirrors the content on the official website of the U.S. Central Intelligence Agency; and hochuzhitlife[.]com, which spoofs the Ministry of Defense of Ukraine & General Directorate of Intelligence (whose actual domain is hochuzhit[.]com).
According to Edwards, there are no signs that these phishing sites are being advertised via email. Rather, it appears those responsible are promoting them by manipulating the search engine results shown when someone searches for one of these anti-Putin organizations.
In August 2024, security researcher Artem Tamoian posted on Twitter/X about how he received startlingly different results when he searched for “Freedom of Russia legion” in Russia’s largest domestic search engine Yandex versus Google.com. The top result returned by Google was the legion’s actual website, while the first result on Yandex was a phishing page targeting the group.
“I think at least some of them are surely promoted via search,” Tamoian said of the phishing domains. “My first thread on that accuses Yandex, but apart from Yandex those websites are consistently ranked above legitimate in DuckDuckGo and Bing. Initially, I didn’t realize the scale of it. They keep appearing to this day.”
Tamoian, a native Russian who left the country in 2019, is the founder of the cyber investigation platform malfors.com. He recently discovered two other sites impersonating the Ukrainian paramilitary groups — legionliberty[.]world and rusvolcorps[.]ru — and reported both to Cloudflare. When Cloudflare responded by blocking the sites with a phishing warning, the real Internet address of these sites was exposed as belonging to a known “bulletproof hosting” network called Stark Industries Solutions Ltd.
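Once Cloudflare stopped proxying those two domains, an ordinary DNS lookup was enough to show where they were actually hosted. As a rough illustration of that kind of check (the domain below is a placeholder, not one of the sites named in this story), a lookup using only the Python standard library might look like this:

# Illustrative only: resolve a domain that is no longer behind a CDN proxy.
# The resulting IP can then be checked against WHOIS/ASN records to see
# whether it sits inside a known "bulletproof" hosting network.
import socket

domain = "phishing-example.invalid"  # placeholder, not a real domain from the report
try:
    ip = socket.gethostbyname(domain)
    print(f"{domain} resolves to {ip}")
except socket.gaierror as exc:
    print(f"Could not resolve {domain}: {exc}")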
Stark Industries Solutions appeared two weeks before Russia invaded Ukraine in February 2022, materializing out of nowhere with hundreds of thousands of Internet addresses in its stable — many of them originally assigned to Russian government organizations. In May 2024, KrebsOnSecurity published a deep dive on Stark, which has repeatedly been used to host infrastructure for distributed denial-of-service (DDoS) attacks, phishing, malware and disinformation campaigns from Russian intelligence agencies and pro-Kremlin hacker groups.
In March 2023, Russia’s Supreme Court designated the Freedom of Russia legion as a terrorist organization, meaning that Russians caught communicating with the group could face between 10 and 20 years in prison.
Tamoian said those searching online for information about these paramilitary groups have become easy prey for Russian security services.
“I started looking into those phishing websites, because I kept stumbling upon news that someone gets arrested for trying to join [the] Ukrainian Army or for trying to help them,” Tamoian told KrebsOnSecurity. “I have also seen reports [of] FSB contacting people impersonating Ukrainian officers, as well as using fake Telegram bots, so I thought fake websites might be an option as well.”
Search results showing news articles about people in Russia being sentenced to lengthy prison terms for attempting to aid Ukrainian paramilitary groups.
Tamoian said reports surface regularly in Russia about people being arrested for trying to carry out an action requested by a “Ukrainian recruiter,” with the courts unfailingly imposing harsh sentences regardless of the defendant’s age.
“This keeps happening regularly, but usually there are no details about how exactly the person gets caught,” he said. “All cases related to state treason [and] terrorism are classified, so there are barely any details.”
Tamoian said while he has no direct evidence linking any of the reported arrests and convictions to these phishing sites, he is certain the sites are part of a larger campaign by the Russian government.
“Considering that they keep them alive and keep spawning more, I assume it might be an efficient thing,” he said. “They are on top of DuckDuckGo and Yandex, so it unfortunately works.”
Further reading: Silent Push report, Russian Intelligence Targeting its Citizens and Informants.