Krebs on Security

FBI: Spike in Hacked Police Emails, Fake Subpoenas

The Federal Bureau of Investigation (FBI) is urging police departments and governments worldwide to beef up security around their email systems, citing a recent increase in cybercriminal services that use hacked police email accounts to send unauthorized subpoenas and customer data requests to U.S.-based technology companies.

In an alert (PDF) published this week, the FBI said it has seen an uptick in postings on criminal forums regarding the process of emergency data requests (EDRs) and the sale of email credentials stolen from police departments and government agencies.

“Cybercriminals are likely gaining access to compromised US and foreign government email addresses and using them to conduct fraudulent emergency data requests to US based companies, exposing the personal information of customers to further use for criminal purposes,” the FBI warned.

In the United States, when federal, state or local law enforcement agencies wish to obtain information about an account at a technology provider — such as the account’s email address, or what Internet addresses a specific cell phone account has used in the past — they must submit an official court-ordered warrant or subpoena.

Virtually all major technology companies serving large numbers of users online have departments that routinely review and process such requests, which are typically granted (eventually, and at least in part) as long as the proper documents are provided and the request appears to come from an email address connected to an actual police department domain name.

In some cases, a cybercriminal will offer to forge a court-approved subpoena and send that through a hacked police or government email account. But increasingly, thieves are relying on fake EDRs, which allow investigators to attest that people will be bodily harmed or killed unless a request for account data is granted expeditiously.

The trouble is, these EDRs largely bypass any official review and do not require the requester to supply any court-approved documents. Also, it is difficult for a company that receives one of these EDRs to immediately determine whether it is legitimate.

In this scenario, the receiving company finds itself caught between two unsavory outcomes: Failing to immediately comply with an EDR — and potentially having someone’s blood on their hands — or possibly leaking a customer record to the wrong person.

Perhaps unsurprisingly, compliance with such requests tends to be extremely high. For example, in its most recent transparency report (PDF) Verizon said it received more than 127,000 law enforcement demands for customer data in the second half of 2023 — including more than 36,000 EDRs — and that the company provided records in response to approximately 90 percent of requests.

One English-speaking cybercriminal who goes by the nicknames “Pwnstar” and “Pwnipotent” has been selling fake EDR services on both Russian-language and English cybercrime forums. Their prices range from $1,000 to $3,000 per successful request, and they claim to control “gov emails from over 25 countries,” including Argentina, Bangladesh, Brazil, Bolivia, Dominican Republic, Hungary, India, Kenya, Jordan, Lebanon, Laos, Malaysia, Mexico, Morocco, Nigeria, Oman, Pakistan, Panama, Paraguay, Peru, Philippines, Tunisia, Turkey, United Arab Emirates (UAE), and Vietnam.

“I cannot 100% guarantee every order will go through,” Pwnstar explained. “This is social engineering at the highest level and there will be failed attempts at times. Don’t be discouraged. You can use escrow and I give full refund back if EDR doesn’t go through and you don’t receive your information.”

An ad from Pwnstar for fake EDR services.

A review of EDR vendors across many cybercrime forums shows that some fake EDR vendors sell the ability to send phony police requests to specific social media platforms, including forged court-approved documents. Others simply sell access to hacked government or police email accounts, and leave it up to the buyer to forge any needed documents.

“When you get account, it’s yours, your account, your liability,” reads an ad in October on BreachForums. “Unlimited Emergency Data Requests. Once Paid, the Logins are completely Yours. Reset as you please. You would need to Forge Documents to Successfully Emergency Data Request.”

Still other fake EDR service vendors claim to sell hacked or fraudulently created accounts on Kodex, a startup that aims to help tech companies do a better job screening out phony law enforcement data requests. Kodex is trying to tackle the problem of fake EDRs by working directly with the data providers to pool information about police or government officials submitting these requests, with an eye toward making it easier for everyone to spot an unauthorized EDR.

If police or government officials wish to request records regarding Coinbase customers, for example, they must first register an account on Kodexglobal.com. Kodex’s systems then assign that requestor a score or credit rating, wherein officials who have a long history of sending valid legal requests will have a higher rating than someone sending an EDR for the first time.

It is not uncommon to see fake EDR vendors claim the ability to send data requests through Kodex, with some even sharing redacted screenshots of police accounts at Kodex.

Matt Donahue is the former FBI agent who founded Kodex in 2021. Donahue said just because someone can use a legitimate police department or government email to create a Kodex account doesn’t mean that user will be able to send anything. Donahue said even if one customer gets a fake request, Kodex is able to prevent the same thing from happening to another.

Kodex told KrebsOnSecurity that over the past 12 months it has processed a total of 1,597 EDRs, and that 485 of those requests (~30 percent) failed a second-level verification. Kodex reports it has suspended nearly 4,000 law enforcement users in the past year, including:

- 1,521 from the Asia-Pacific region;
- 1,290 from Europe, the Middle East and Asia;
- 460 from police departments and agencies in the United States;
- 385 from entities in Latin America; and
- 285 from Brazil.

Donahue said 60 technology companies are now routing all law enforcement data requests through Kodex, including an increasing number of financial institutions and cryptocurrency platforms. He said one concern shared by recent prospective customers is that crooks are seeking to use phony law enforcement requests to freeze and in some cases seize funds in specific accounts.

“What’s being conflated [with EDRs] is anything that doesn’t involve a formal judge’s signature or legal process,” Donahue said. “That can include control over data, like an account freeze or preservation request.”

In a hypothetical example, a scammer uses a hacked government email account to ask a service provider to place a hold on a specific bank or crypto account, claiming the account is subject to a garnishment order or is party to a globally sanctioned crime, such as terrorist financing or child exploitation.

A few days or weeks later, the same impersonator returns with a request to seize funds in the account, or to divert the funds to a custodial wallet supposedly controlled by government investigators.

“In terms of overall social engineering attacks, the more you have a relationship with someone the more they’re going to trust you,” Donahue said. “If you send them a freeze order, that’s a way to establish trust, because [the first time] they’re not asking for information. They’re just saying, ‘Hey can you do me a favor?’ And that makes the [recipient] feel valued.”

Echoing the FBI’s warning, Donahue said far too many police departments in the United States and other countries have poor account security hygiene, and often do not enforce basic account security precautions — such as requiring phishing-resistant multifactor authentication.

How are cybercriminals typically gaining access to police and government email accounts? Donahue said it’s still mostly email-based phishing, and credentials that are stolen by opportunistic malware infections and sold on the dark web. But as bad as things are internationally, he said, many law enforcement entities in the United States still have much room for improvement in account security.

“Unfortunately, a lot of this is phishing or malware campaigns,” Donahue said. “A lot of global police agencies don’t have stringent cybersecurity hygiene, but even U.S. dot-gov emails get hacked. Over the last nine months, I’ve reached out to CISA (the Cybersecurity and Infrastructure Security Agency) over a dozen times about .gov email addresses that were compromised and that CISA was unaware of.”

A Single Cloud Compromise Can Feed an Army of AI Sex Bots

Organizations that get relieved of credentials to their cloud environments can quickly find themselves part of a disturbing new trend: Cybercriminals using stolen cloud credentials to operate and resell sexualized AI-powered chat services. Researchers say these illicit chat bots, which use custom jailbreaks to bypass content filtering, often veer into darker role-playing scenarios, including child sexual exploitation and rape.

Image: Shutterstock.

Researchers at security firm Permiso Security say attacks against generative artificial intelligence (AI) infrastructure like Bedrock from Amazon Web Services (AWS) have increased markedly over the last six months, particularly when someone in the organization accidentally exposes their cloud credentials or key online, such as in a code repository like GitHub.

Investigating the abuse of AWS accounts for several organizations, Permiso found attackers had seized on stolen AWS credentials to interact with the large language models (LLMs) available on Bedrock. But they also soon discovered none of these AWS users had enabled full logging of LLM activity (by default, logs don’t include model prompts and outputs), and thus they lacked any visibility into what attackers were doing with that access.

So Permiso researchers decided to leak their own test AWS key on GitHub, while turning on logging so that they could see exactly what an attacker might ask for, and what the responses might be.

Within minutes, their bait key was scooped up and used in a service that offers AI-powered sex chats online.

“After reviewing the prompts and responses it became clear that the attacker was hosting an AI roleplaying service that leverages common jailbreak techniques to get the models to accept and respond with content that would normally be blocked,” Permiso researchers wrote in a report released today.

“Almost all of the roleplaying was of a sexual nature, with some of the content straying into darker topics such as child sexual abuse,” they continued. “Over the course of two days we saw over 75,000 successful model invocations, almost all of a sexual nature.”

Ian Ahl, senior vice president of threat research at Permiso, said attackers in possession of a working cloud account traditionally have used that access for run-of-the-mill financial cybercrime, such as cryptocurrency mining or spam. But over the past six months, Ahl said, Bedrock has emerged as one of the top targeted cloud services.

“Bad guy hosts a chat service, and subscribers pay them money,” Ahl said of the business model for commandeering Bedrock access to power sex chat bots. “They don’t want to pay for all the prompting that their subscribers are doing, so instead they hijack someone else’s infrastructure.”

Ahl said much of the AI-powered chat conversations initiated by the users of their honeypot AWS key were harmless roleplaying of sexual behavior.

“But a percentage of it is also geared toward very illegal stuff, like child sexual assault fantasies and rapes being played out,” Ahl said. “And these are typically things the large language models won’t be able to talk about.”

AWS’s Bedrock uses large language models from Anthropic, whose models incorporate a number of technical restrictions aimed at placing certain ethical guardrails on their use. But attackers can evade or “jailbreak” their way out of these restricted settings, usually by asking the AI to imagine itself in an elaborate hypothetical situation under which its normal restrictions might be relaxed or discarded altogether.

“A typical jailbreak will pose a very specific scenario, like you’re a writer who’s doing research for a book, and everyone involved is a consenting adult, even though they often end up chatting about nonconsensual things,” Ahl said.

In June 2024, security experts at Sysdig documented a new attack that leveraged stolen cloud credentials to target ten cloud-hosted LLMs. The attackers Sysdig wrote about gathered cloud credentials through a known security vulnerability, but the researchers also found the attackers sold LLM access to other cybercriminals while sticking the cloud account owner with an astronomical bill.

“Once initial access was obtained, they exfiltrated cloud credentials and gained access to the cloud environment, where they attempted to access local LLM models hosted by cloud providers: in this instance, a local Claude (v2/v3) LLM model from Anthropic was targeted,” Sysdig researchers wrote. “If undiscovered, this type of attack could result in over $46,000 of LLM consumption costs per day for the victim.”

Ahl said it’s not certain who is responsible for operating and selling these sex chat services, but Permiso suspects the activity may be tied to a platform cheekily named “chub[.]ai,” which offers a broad selection of pre-made AI characters with whom users can strike up a conversation. Permiso said almost every character name from the prompts they captured in their honeypot could be found at Chub.

Some of the AI chat bot characters offered by Chub. Some of these characters include the tags “rape” and “incest.”

Chub offers free registration, via its website or a mobile app. But after a few minutes of chatting with their newfound AI friends, users are asked to purchase a subscription. The site’s homepage features a banner at the top that reads: “Banned from OpenAI? Get unmetered access to uncensored alternatives for as little as $5 a month.”

Until late last week Chub offered a wide selection of characters in a category called “NSFL” or Not Safe for Life, a term meant to describe content that is disturbing or nauseating to the point of being emotionally scarring.

Fortune profiled Chub AI in a January 2024 story that described the service as a virtual brothel advertised by illustrated girls in spaghetti strap dresses who promise a chat-based “world without feminism,” where “girls offer sexual services.” From that piece:

Chub AI offers more than 500 such scenarios, and a growing number of other sites are enabling similar AI-powered child pornographic role-play. They are part of a broader uncensored AI economy that, according to Fortune’s interviews with 18 AI developers and founders, was spurred first by OpenAI and then accelerated by Meta’s release of its open-source Llama tool.

Fortune says Chub is run by someone using the handle “Lore,” who said they launched the service to help others evade content restrictions on AI platforms. Chub charges fees starting at $5 a month to use the new chatbots, and the founder told Fortune the site had generated more than $1 million in annualized revenue.

KrebsOnSecurity sought comment about Permiso’s research from AWS, which initially seemed to downplay the seriousness of the researchers’ findings. The company noted that AWS employs automated systems that will alert customers if their credentials or keys are found exposed online.

AWS explained that when a key or credential pair is flagged as exposed, it is then restricted to limit the amount of abuse that attackers can potentially commit with that access. For example, flagged credentials can’t be used to create or modify authorized accounts, or spin up new cloud resources.

Ahl said Permiso did indeed receive multiple alerts from AWS about their exposed key, including one that warned their account may have been used by an unauthorized party. But they said the restrictions AWS placed on the exposed key did nothing to stop the attackers from using it to abuse Bedrock services.

Sometime in the past few days, however, AWS responded by including Bedrock in the list of services that will be quarantined in the event an AWS key or credential pair is found compromised or exposed online. AWS confirmed that Bedrock was a new addition to its quarantine procedures.

Additionally, not long after KrebsOnSecurity began reporting this story, Chub’s website removed its NSFL section. It also appears to have removed cached copies of the site from the Wayback Machine at archive.org. Still, Permiso found that Chub’s user stats page shows the site has more than 3,000 AI conversation bots with the NSFL tag, and that 2,113 accounts were following the NSFL tag.

The user stats page at Chub shows more than 2,113 people have subscribed to its AI conversation bots with the “Not Safe for Life” designation.

Permiso said their entire two-day experiment generated a $3,500 bill from AWS. Most of that cost was tied to the 75,000 LLM invocations caused by the sex chat service that hijacked their key.

Paradoxically, Permiso found that while enabling these logs is the only way to know for sure how crooks might be using a stolen key, the cybercriminals who are reselling stolen or exposed AWS credentials for sex chats have started including programmatic checks in their code to ensure they aren’t using AWS keys that have prompt logging enabled.

“Enabling logging is actually a deterrent to these attackers because they are immediately checking to see if you have logging on,” Ahl said. “At least some of these guys will totally ignore those accounts, because they don’t want anyone to see what they’re doing.”
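The check Ahl describes corresponds to a single account-level Bedrock API call that anyone holding the key can make. The sketch below is a reconstruction, not Permiso's or the attackers' actual code; the helper name and the response-parsing logic are assumptions, but the boto3 call `get_model_invocation_logging_configuration` is the real read side of Bedrock's invocation-logging setting.

```python
def prompt_logging_enabled(response: dict) -> bool:
    """Interpret the response from Bedrock's
    get_model_invocation_logging_configuration API call.
    A response with no 'loggingConfig' key (or with text delivery
    disabled) means prompts and outputs are not being recorded --
    exactly the condition the resellers reportedly look for."""
    cfg = response.get("loggingConfig") or {}
    return bool(cfg.get("textDataDeliveryEnabled"))

if __name__ == "__main__":
    # Live check against the account the current credentials belong to.
    import boto3
    bedrock = boto3.client("bedrock", region_name="us-east-1")
    resp = bedrock.get_model_invocation_logging_configuration()
    print("prompt logging on:", prompt_logging_enabled(resp))
```

Because the call is readable with the stolen key itself, a defender gains nothing by hiding the setting; the deterrent Ahl describes comes from actually having it enabled.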

In a statement shared with KrebsOnSecurity, AWS said its services are operating securely, as designed, and that no customer action is needed. Here is their statement:

“AWS services are operating securely, as designed, and no customer action is needed. The researchers devised a testing scenario that deliberately disregarded security best practices to test what may happen in a very specific scenario. No customers were put at risk. To carry out this research, security researchers ignored fundamental security best practices and publicly shared an access key on the internet to observe what would happen.”

“AWS, nonetheless, quickly and automatically identified the exposure and notified the researchers, who opted not to take action. We then identified suspected compromised activity and took additional action to further restrict the account, which stopped this abuse. We recommend customers follow security best practices, such as protecting their access keys and avoiding the use of long-term keys to the extent possible. We thank Permiso Security for engaging AWS Security.”

AWS said customers can configure model invocation logging to collect Bedrock invocation logs, model input data, and model output data for all invocations in the AWS account used in Amazon Bedrock. Customers can also use CloudTrail to monitor Amazon Bedrock API calls.
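A minimal boto3 sketch of turning that model invocation logging on follows. The bucket, IAM role, and log group names are hypothetical placeholders, not anything from AWS's statement; the `put_model_invocation_logging_configuration` call and the `loggingConfig` shape are the documented Bedrock API.

```python
def build_logging_config(bucket: str, role_arn: str, log_group: str) -> dict:
    """Assemble a Bedrock model invocation logging configuration that
    delivers prompts and outputs to both S3 and CloudWatch Logs."""
    return {
        "cloudWatchConfig": {
            "logGroupName": log_group,
            "roleArn": role_arn,  # role Bedrock assumes to write logs
        },
        "s3Config": {"bucketName": bucket, "keyPrefix": "bedrock-invocations/"},
        "textDataDeliveryEnabled": True,   # capture model prompts and outputs
        "imageDataDeliveryEnabled": True,
        "embeddingDataDeliveryEnabled": True,
    }

if __name__ == "__main__":
    # Hypothetical resource names; substitute your own bucket, role, and log group.
    import boto3
    cfg = build_logging_config(
        bucket="my-bedrock-audit-logs",
        role_arn="arn:aws:iam::123456789012:role/BedrockLoggingRole",
        log_group="/bedrock/invocations",
    )
    bedrock = boto3.client("bedrock", region_name="us-east-1")
    bedrock.put_model_invocation_logging_configuration(loggingConfig=cfg)
```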

The company said AWS customers also can use services such as GuardDuty to detect potential security concerns and Billing Alarms to provide notifications of abnormal billing activity. Finally, AWS Cost Explorer is intended to give customers a way to visualize and manage Bedrock costs and usage over time.
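The billing alarms AWS mentions can be created programmatically as well. A hedged sketch using CloudWatch's `put_metric_alarm` on the `EstimatedCharges` billing metric; the alarm name, threshold, and SNS topic are illustrative assumptions, and billing metrics are only published in us-east-1 once enabled in the account's billing preferences.

```python
def build_billing_alarm(threshold_usd: float, sns_topic_arn: str) -> dict:
    """Parameters for a CloudWatch alarm that fires when the account's
    estimated charges exceed a dollar threshold -- the kind of guardrail
    that would have flagged Permiso's $3,500 two-day bill early."""
    return {
        "AlarmName": "bedrock-spend-guardrail",     # illustrative name
        "Namespace": "AWS/Billing",
        "MetricName": "EstimatedCharges",
        "Dimensions": [{"Name": "Currency", "Value": "USD"}],
        "Statistic": "Maximum",
        "Period": 21600,          # billing metrics update a few times a day
        "EvaluationPeriods": 1,
        "Threshold": threshold_usd,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [sns_topic_arn],  # notify via SNS when breached
    }

if __name__ == "__main__":
    import boto3
    # Billing metrics live only in us-east-1; topic ARN is a placeholder.
    cw = boto3.client("cloudwatch", region_name="us-east-1")
    cw.put_metric_alarm(**build_billing_alarm(
        500.0, "arn:aws:sns:us-east-1:123456789012:billing-alerts"))
```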

Anthropic told KrebsOnSecurity it is always working on novel techniques to make its models more resistant to jailbreaks.

“We remain committed to implementing strict policies and advanced techniques to protect users, as well as publishing our own research so that other AI developers can learn from it,” Anthropic said in an emailed statement. “We appreciate the research community’s efforts in highlighting potential vulnerabilities.”

Anthropic said it uses feedback from child safety experts at Thorn around signals often seen in child grooming to update its classifiers, enhance its usage policies, fine tune its models, and incorporate those signals into testing of future models.

Update: 5:01 p.m. ET: Chub has issued a statement saying they are only hosting the role-playing characters, and that the LLMs they use run on their own infrastructure.

“Our own LLMs run on our own infrastructure,” Chub wrote in an emailed statement. “Any individuals participating in such attacks can use any number of UIs that allow user-supplied keys to connect to third-party APIs. We do not participate in, enable or condone any illegal activity whatsoever.”

ID Theft Service Resold Access to USInfoSearch Data

One of the cybercrime underground’s more active sellers of Social Security numbers, background and credit reports has been pulling data from hacked accounts at the U.S. consumer data broker USinfoSearch, KrebsOnSecurity has learned.

Since at least February 2023, a service advertised on Telegram called USiSLookups has operated an automated bot that allows anyone to look up the SSN or background report on virtually any American. For prices ranging from $8 to $40 and payable via virtual currency, the bot will return detailed consumer background reports automatically in just a few moments.

USiSLookups is the project of a cybercriminal who uses the nicknames JackieChan/USInfoSearch, and the Telegram channel for this service features a small number of sample background reports, including those of President Joe Biden and podcaster Joe Rogan. The data in those reports includes the subject’s date of birth, address, previous addresses, previous phone numbers and employers, known relatives and associates, and driver’s license information.

JackieChan’s service abuses the name and trademarks of Columbus, Ohio-based data broker USinfoSearch, whose website says it provides “identity and background information to assist with risk management, fraud prevention, identity and age verification, skip tracing, and more.”

“We specialize in non-FCRA data from numerous proprietary sources to deliver the information you need, when you need it,” the company’s website explains. “Our services include API-based access for those integrating data into their product or application, as well as bulk and batch processing of records to suit every client.”

As luck would have it, my report was also listed in the Telegram channel for this identity fraud service, presumably as a teaser for would-be customers. On October 19, 2023, KrebsOnSecurity shared a copy of this file with the real USinfoSearch, along with a request for information about the provenance of the data.

USinfoSearch said it would investigate the report, which appears to have been obtained on or before June 30, 2023. On Nov. 9, 2023, Scott Hostettler, general manager of USinfoSearch parent Martin Data LLC shared a written statement about their investigation that suggested the ID theft service was trying to pass off someone else’s consumer data as coming from USinfoSearch:

Regarding the Telegram incident, we understand the importance of protecting sensitive information and upholding the trust of our users is our top priority. Any allegation that we have provided data to criminals is in direct opposition to our fundamental principles and the protective measures we have established and continually monitor to prevent any unauthorized disclosure. Because Martin Data has a reputation for high-quality data, thieves may steal data from other sources and then disguise it as ours. While we implement appropriate safeguards to guarantee that our data is only accessible by those who are legally permitted, unauthorized parties will continue to try to access our data. Thankfully, the requirements needed to pass our credentialing process is tough even for established honest companies.

USinfoSearch’s statement did not address any questions put to the company, such as whether it requires multi-factor authentication for customer accounts, or whether my report had actually come from USinfoSearch’s systems.

After much badgering, on Nov. 21 Hostettler acknowledged that the USinfoSearch identity fraud service on Telegram was in fact pulling data from an account belonging to a vetted USinfoSearch client.

“I do know 100% that my company did not give access to the group who created the bots, but they did gain access to a client,” Hostettler said of the Telegram-based identity fraud service. “I apologize for any inconvenience this has caused.”

Hostettler said USinfoSearch heavily vets any new potential clients, and that all users are required to undergo a background check and provide certain documents. Even so, he said, several fraudsters each month present themselves as credible business owners or C-level executives during the credentialing process, completing the application and providing the necessary documentation to open a new account.

“The level of skill and craftsmanship demonstrated in the creation of these supporting documents is incredible,” Hostettler said. “The numerous licenses provided appear to be exact replicas of the original document. Fortunately, I’ve discovered several methods of verification that do not rely solely on those documents to catch the fraudsters.”

“These people are unrelenting, and they act without regard for the consequences,” Hostettler continued. “After I deny their access, they will contact us again within the week using the same credentials. In the past, I’ve notified both the individual whose identity is being used fraudulently and the local police. Both are hesitant to act because nothing can be done to the offender if they are not apprehended. That is where most attention is needed.”

SIM SWAPPER’S DELIGHT

JackieChan is most active on Telegram channels focused on “SIM swapping,” which involves bribing or tricking mobile phone company employees into redirecting a target’s phone number to a device the attackers control. SIM swapping allows crooks to temporarily intercept the target’s text messages and phone calls, including any links or one-time codes for authentication that are delivered via SMS.

Reached on Telegram, JackieChan said most of his clients hail from the criminal SIM swapping world, and that the bulk of his customers use his service via an application programming interface (API) that allows customers to integrate the lookup service with other web-based services, databases, or applications.

“Sim channels is where I get most of my customers,” JackieChan told KrebsOnSecurity. “I’m averaging around 100 lookups per day on the [Telegram] bot, and around 400 per day on the API.”

JackieChan claims his USinfoSearch bot on Telegram works by abusing stolen credentials for an API offered by the real USinfoSearch, and says those credentials were harvested by malicious software tied to a botnet that he claims to have operated for some time.

This is not the first time USinfoSearch has had trouble with identity thieves masquerading as legitimate customers. In 2013, KrebsOnSecurity broke the news that an identity fraud service in the underground called “SuperGet[.]info” was reselling access to personal and financial data on more than 200 million Americans that was obtained via the big-three credit bureau Experian.

The consumer data resold by Superget was not obtained directly from Experian, but rather via USinfoSearch. At the time, USinfoSearch had a contractual agreement with a California company named Court Ventures, whereby customers of Court Ventures had access to the USinfoSearch data, and vice versa.

When Court Ventures was purchased by Experian in 2012, the proprietor of SuperGet — a Vietnamese hacker named Hieu Minh Ngo who had impersonated an American private investigator — was grandfathered in as a client. The U.S. Secret Service agent who oversaw Ngo’s capture, extradition, prosecution and rehabilitation told KrebsOnSecurity he’s unaware of any other cybercriminal who has caused more material financial harm to more Americans than Ngo.

REAL POLICE, FAKE EDRS

JackieChan also sells access to hacked email accounts belonging to law enforcement personnel in the United States and abroad. Hacked police department emails can come in handy for ID thieves trying to pose as law enforcement officials who wish to purchase consumer data from platforms like USinfoSearch. Hence, Mr. Hostettler’s ongoing battle with fraudsters seeking access to his company’s service.

These police credentials are mainly marketed to criminals seeking fraudulent “Emergency Data Requests,” wherein crooks use compromised government and police department email accounts to rapidly obtain customer account data from mobile providers, ISPs and social media companies.

Normally, these companies will require law enforcement officials to supply a subpoena before turning over customer or user records. But EDRs allow police to bypass that process by attesting that the information sought is related to an urgent matter of life and death, such as an impending suicide or terrorist attack.

In response to an alarming increase in the volume of fraudulent EDRs, many service providers have chosen to require all EDRs be processed through a service called Kodex, which seeks to filter EDRs based on the reputation of the law enforcement entity requesting the information, and other attributes of the requestor.

For example, if you want to send an EDR to Coinbase or Twilio, you’ll first need valid law enforcement credentials and an account on each company’s Kodex portal. Even then, Kodex may throttle or block requests from accounts that set off certain red flags.

Each company’s Kodex portal is separate: Twilio can’t see requests submitted to Coinbase, or vice versa. But each can see whether a law enforcement entity or individual tied to one of its own requests has ever submitted a request to a different Kodex client, and can then drill down into other data about the submitter, such as the Internet address(es) used and the age of the requestor’s email address.

In August, JackieChan was advertising a working Kodex account for sale on cybercrime channels, including redacted screenshots of the Kodex account dashboard as proof of access.

Kodex co-founder Matt Donahue told KrebsOnSecurity his company immediately detected that the law enforcement email address used to create the Kodex account pictured in JackieChan’s ad was likely stolen from a police officer in India. One big tipoff, Donahue said, was that the person creating the account did so using an Internet address in Brazil.

“There’s a lot of friction we can put in the way for illegitimate actors,” Donahue said. “We don’t let people use VPNs. In this case we let them in to honeypot them, and that’s how they got that screenshot. But nothing was allowed to be transmitted out from that account.”

Massive amounts of data about you and your personal history are available from USinfoSearch and dozens of other data brokers that acquire and sell “non-FCRA” data — i.e., consumer data that cannot be used for the purposes of determining one’s eligibility for credit, insurance, or employment.

Anyone who works in or adjacent to law enforcement is eligible to apply for access to these data brokers, which often market themselves to police departments and to “skip tracers,” essentially bounty hunters hired to locate people in real life — often on behalf of debt collectors, process servers or bail bondsmen.

There are tens of thousands of police jurisdictions around the world — including roughly 18,000 in the United States alone. And the harsh reality is that all it takes for hackers to apply for access to data brokers (and abuse the EDR process) is illicit access to a single police email account.

The trouble is, compromised credentials to law enforcement email accounts show up for sale with alarming frequency on the Telegram channels where JackieChan and his many clients reside. Indeed, Donahue said Kodex so far this year has identified attempted fake EDRs coming from compromised email accounts for police departments in India, Italy, Thailand and Turkey.
