Besieged by scammers seeking to phish user accounts over the telephone, Apple and Google frequently caution that they will never reach out unbidden to users this way. However, new details about the internal operations of a prolific voice phishing gang show the group routinely abuses legitimate services at Apple and Google to force a variety of outbound communications to their users, including emails, automated phone calls and system-level messages sent to all signed-in devices.
Image: Shutterstock, iHaMoo.
KrebsOnSecurity recently told the saga of a cryptocurrency investor named Tony who was robbed of more than $4.7 million in an elaborate voice phishing attack. In Tony’s ordeal, the crooks appear to have initially contacted him via Google Assistant, an AI-based service that can engage in two-way conversations. The phishers also abused legitimate Google services to send Tony an email from google.com, and to send a Google account recovery prompt to all of his signed-in devices.
Today’s story pivots off of Tony’s heist and new details shared by a scammer to explain how these voice phishing groups are abusing a legitimate Apple telephone support line to generate “account confirmation” message prompts from Apple to their customers.
Before we get to the Apple scam in detail, we need to revisit Tony’s case. The phishing domain used to steal roughly $4.7 million in cryptocurrencies from Tony was verify-trezor[.]io. This domain was featured in a writeup from February 2024 by the security firm Lookout, which found it was one of dozens being used by a prolific and audacious voice phishing group it dubbed “Crypto Chameleon.”
Crypto Chameleon was brazenly trying to voice phish employees at the U.S. Federal Communications Commission (FCC), as well as those working at the cryptocurrency exchanges Coinbase and Binance. Lookout researchers discovered multiple voice phishing groups were using a new phishing kit that closely mimicked the single sign-on pages for Okta and other authentication providers.
As we’ll see in a moment, that phishing kit is operated and rented out by a cybercriminal known as “Perm” a.k.a. “Annie.” Perm is the current administrator of Star Fraud, one of the more consequential cybercrime communities on Telegram and one that has emerged as a foundry of innovation in voice phishing attacks.
A review of the many messages that Perm posted to Star Fraud and other Telegram channels showed they worked closely with another cybercriminal who went by the handles “Aristotle” and just “Stotle.”
It is not clear what caused the rift, but at some point last year Stotle decided to turn on his erstwhile business partner Perm, sharing extremely detailed videos, tutorials and secrets that shed new light on how these phishing panels operate.
Stotle explained that the division of spoils from each robbery is decided in advance by all participants. Some co-conspirators will be paid a set fee for each call, while others are promised a percentage of any overall amount stolen. The person in charge of managing or renting out the phishing panel to others will generally take a percentage of each theft, which in Perm’s case is 10 percent.
When the phishing group settles on a target of interest, the scammers will create and join a new Discord channel. This allows each logged-on member to share what is currently on their screen, and these screens are tiled in a series of boxes so that everyone can see every other participant's screen at once.
Each participant in the call has a specific role, including:
- The Caller: The person speaking and trying to social engineer the target.
- The Operator: The individual managing the phishing panel, silently moving the victim from page to page.
- The Drainer: The person who logs into compromised accounts to drain the victim’s funds.
- The Owner: The phishing panel owner, who will frequently listen in on and participate in scam calls.
In one video of a live voice phishing attack shared by Stotle, scammers using Perm’s panel targeted a musician in California. Throughout the video, we can see Perm monitoring the conversation and operating the phishing panel in the upper right corner of the screen.
In the first step of the attack, they peppered the target’s Apple device with notifications from Apple by attempting to reset his password. Then a “Michael Keen” called him, spoofing Apple’s phone number and saying they were with Apple’s account recovery team.
The target told Michael that someone was trying to change his password, which Michael calmly explained they would investigate. Michael said he was going to send a prompt to the man’s device, and proceeded to place a call to an automated line that answered as Apple support saying, “I’d like to send a consent notification to your Apple devices. Do I have permission to do that?”
In this segment of the video, we can see the operator of the panel is calling the real Apple customer support phone number 800-275-2273, but they are doing so by spoofing the target’s phone number (the victim’s number is redacted in the video above). That’s because calling this support number from a phone number tied to an Apple account and selecting “1” for “yes” will then send an alert from Apple that displays the following message on all associated devices:
Calling the Apple support number 800-275-2273 from a phone number tied to an Apple account will cause a prompt similar to this one to appear on all connected Apple devices.
KrebsOnSecurity asked two different security firms to test this using the caller ID spoofing service shown in Perm’s video, and sure enough calling that 800 number for Apple by spoofing my phone number as the source caused the Apple Account Confirmation to pop up on all of my signed-in Apple devices.
In essence, the voice phishers are using an automated Apple phone support line to send notifications from Apple and to trick people into thinking they’re really talking with Apple. The phishing panel video leaked by Stotle shows this technique fooled the target, who felt completely at ease that he was talking to Apple after receiving the support prompt on his iPhone.
“Okay, so this really is Apple,” the man said after receiving the alert from Apple. “Yeah, that’s definitely not me trying to reset my password.”
“Not a problem, we can go ahead and take care of this today,” Michael replied. “I’ll go ahead and prompt your device with the steps to close out this ticket. Before I do that, I do highly suggest that you change your password in the settings app of your device.”
The target said they weren’t sure exactly how to do that. Michael replied “no problem,” and then described how to change the account password, which the man said he did on his own device. At this point, the musician was still in control of his iCloud account.
“Password is changed,” the man said. “I don’t know what that was, but I appreciate the call.”
“Yup,” Michael replied, setting up the killer blow. “I’ll go ahead and prompt you with the next step to close out this ticket. Please give me one moment.”
The target then received a text message that referenced information about his account, stating that he was in a support call with Michael. Included in the message was a link to a website that mimicked Apple’s iCloud login page — 17505-apple[.]com. Once the target navigated to the phishing page, the video showed Perm’s screen in the upper right corner opening the phishing page from their end.
“Oh okay, now I log in with my Apple ID?” the man asked.
“Yup, then just follow the steps it requires, and if you need any help, just let me know,” Michael replied.
As the victim typed in their Apple password and one-time passcode at the fake Apple site, Perm’s screen could be seen in the background logging into the victim’s iCloud account.
It’s unclear whether the phishers were able to steal any cryptocurrency from the victim in this case, who did not respond to requests for comment. However, shortly after this video was recorded, someone leaked several music recordings stolen from the victim’s iCloud account.
At the conclusion of the call, Michael offered to configure the victim’s Apple profile so that any further changes to the account would need to happen in person at a physical Apple store. This appears to be one of several scripted ploys used by these voice phishers to gain and maintain the target’s confidence.
A tutorial shared by Stotle titled “Social Engineering Script” includes a number of tips for scam callers that can help establish trust or a rapport with their prey. When the callers are impersonating Coinbase employees, for example, they will offer to sign the user up for the company’s free security email newsletter.
“Also, for your security, we are able to subscribe you to Coinbase Bytes, which will basically give you updates to your email about data breaches and updates to your Coinbase account,” the script reads. “So we should have gone ahead and successfully subscribed you, and you should have gotten an email confirmation. Please let me know if that is the case. Alright, perfect.”
In reality, all they are doing is entering the target’s email address into Coinbase’s public email newsletter signup page, but it’s a remarkably effective technique because it demonstrates to the would-be victim that the caller has the ability to send emails from Coinbase.com.
Asked to comment for this story, Apple said there has been no breach, hack, or technical exploit of iCloud or Apple services, and that the company is continuously adding new protections to address new and emerging threats. For example, it said it has implemented rate limiting for multi-factor authentication requests, which have been abused by voice phishing groups to impersonate Apple.
Apple said its representatives will never ask users to provide their password, device passcode, or two-factor authentication code or to enter it into a web page, even if it looks like an official Apple website. If a user receives a message or call that claims to be from Apple, here is what the user should expect.
According to Stotle, the target lists used by their phishing callers originate mostly from a few crypto-related data breaches, including the 2022 and 2024 breaches involving user account data stolen from cryptocurrency hardware wallet vendor Trezor.
Perm’s group and other crypto phishing gangs rely on a mix of homemade code and third-party data broker services to refine their target lists. Known as “autodoxers,” these tools help phishing gangs quickly automate the acquisition and/or verification of personal data on a target prior to each call attempt.
One “autodoxer” service advertised on Telegram that promotes a range of voice phishing tools and services.
Stotle said their autodoxer used a Telegram bot that leverages hacked accounts at consumer data brokers to gather a wealth of information about their targets, including their full Social Security number, date of birth, current and previous addresses, employer, and the names of family members.
The autodoxers are used to verify that each email address on a target list has an active account at Coinbase or another cryptocurrency exchange, ensuring that the attackers don’t waste time calling people who have no cryptocurrency to steal.
Some of these autodoxer tools also will check the value of the target’s home address at property search services online, and then sort the target lists so that the wealthiest are at the top.
Stotle’s messages on Discord and Telegram show that a phishing group renting Perm’s panel voice-phished tens of thousands of dollars worth of cryptocurrency from the billionaire Mark Cuban.
“I was an idiot,” Cuban told KrebsOnSecurity when asked about the June 2024 attack, which he first disclosed in a short-lived post on Twitter/X. “We were shooting Shark Tank and I was rushing between pitches.”
Image: Shutterstock, ssi77.
Cuban said he first received a notice from Google that someone had tried to log in to his account. Then he got a call from what appeared to be a Google phone number. Cuban said he ignored several of these emails and calls until he decided they probably wouldn’t stop unless he answered.
“So I answered, and wasn’t paying enough attention,” he said. “They asked for the circled number that comes up on the screen. Like a moron, I gave it to them, and they were in.”
Unfortunately for Cuban, somewhere in his inbox were the secret “seed phrases” protecting two of his cryptocurrency accounts, and armed with those credentials the crooks were able to drain his funds. All told, the thieves managed to steal roughly $43,000 worth of cryptocurrencies from Cuban’s wallets — a relatively small heist for this crew.
“They must have done some keyword searches” once inside his Gmail account, Cuban said. “I had sent myself an email I had forgotten about that had my seed words for 2 accounts that weren’t very active any longer. I had moved almost everything but some smaller balances to Coinbase.”
Cybercriminals involved in voice phishing communities on Telegram are universally obsessed with their crypto holdings, mainly because in this community one’s demonstrable wealth is primarily what confers social status. It is not uncommon to see members sizing one another up using a verbal shorthand of “figs,” as in figures of crypto wealth.
For example, a low-level caller with no experience will sometimes be mockingly referred to as a 3fig or 3f, as in a person with less than $1,000 to their name. Salaries for callers are often also referenced this way, e.g. “Weekly salary: 5f.”
This meme shared by Stotle uses humor to depict an all-too-common pathway for voice phishing callers, who are often minors recruited from gaming networks like Minecraft and Roblox. The image that Lookout used in its blog post on Crypto Chameleon can be seen in the hooded figure at the lower right.
Voice phishing groups frequently require new members to provide “proof of funds” — screenshots of their crypto holdings, ostensibly to demonstrate they are not penniless — before they’re allowed to join.
This proof of funds (POF) demand is typical among thieves selling high-dollar items, because it tends to cut down on the time-wasting inquiries from criminals who can’t afford what’s for sale anyway. But it has become so common in cybercrime communities that there are now several services designed to create fake POF images and videos, allowing customers to brag about large crypto holdings without actually possessing said wealth.
Several of the phishing panel videos shared by Stotle feature audio that suggests co-conspirators were practicing responses to certain call scenarios, while other members of the phishing group critiqued them or tried to disrupt their social engineering by being verbally abusive.
These groups will organize and operate for a few weeks, but tend to disintegrate when one member of the conspiracy decides to steal some or all of the loot, referred to in these communities as “snaking” others out of their agreed-upon sums. Almost invariably, the phishing groups will splinter apart over the drama caused by one of these snaking events, and individual members will eventually re-form into new phishing groups.
Allison Nixon is the chief research officer for Unit 221B, a cybersecurity firm in New York that has worked on a number of investigations involving these voice phishing groups. Nixon said the constant snaking within the voice phishing circles points to a psychological self-selection phenomenon that is in desperate need of academic study.
“In short, a person whose moral compass lets them rob old people will also be a bad business partner,” Nixon said. “This is another fundamental flaw in this ecosystem and why most groups end in betrayal. This structural problem is great for journalists and the police too. Lots of snitching.”
Asked about the size of Perm’s phishing enterprise, Stotle said there were dozens of distinct phishing groups paying to use Perm’s panel. He said each group was assigned their own subdomain on Perm’s main “command and control server,” which naturally uses the domain name commandandcontrolserver[.]com.
A review of that domain’s history via DomainTools.com shows there are at least 57 separate subdomains scattered across commandandcontrolserver[.]com and two other related control domains — thebackendserver[.]com and lookoutsucks[.]com. That latter domain was created and deployed shortly after Lookout published its blog post on Crypto Chameleon.
The dozens of phishing domains that phone home to these control servers are all kept offline when they are not actively being used in phishing attacks. A social engineering training guide shared by Stotle explains this practice minimizes the chances that a phishing domain will get “redpaged,” a reference to the default red warning pages served by Google Chrome or Firefox whenever someone tries to visit a site that’s been flagged for phishing or distributing malware.
What’s more, while the phishing sites are live their operators typically place a CAPTCHA challenge in front of the main page to prevent security services from scanning and flagging the sites as malicious.
It may seem odd that so many cybercriminal groups operate so openly on instant collaboration networks like Telegram and Discord. After all, this blog is replete with stories about cybercriminals getting caught thanks to personal details they inadvertently leaked or disclosed themselves.
Nixon said the relative openness of these cybercrime communities makes them inherently risky, but it also allows for the rapid formation and recruitment of new potential co-conspirators. Moreover, today’s English-speaking cybercriminals tend to be more afraid of getting home invaded or mugged by fellow cyber thieves than they are of being arrested by authorities.
“The biggest structural threat to the online criminal ecosystem is not the police or researchers, it is fellow criminals,” Nixon said. “To protect them from themselves, every criminal forum and marketplace has a reputation system, even though they know it’s a major liability when the police come. That is why I am not worried as we see criminals migrate to various ‘encrypted’ platforms that promise to ignore the police. To protect themselves better against the law, they have to ditch their protections against fellow criminals and that’s not going to happen.”
Organizations that get relieved of credentials to their cloud environments can quickly find themselves part of a disturbing new trend: Cybercriminals using stolen cloud credentials to operate and resell sexualized AI-powered chat services. Researchers say these illicit chat bots, which use custom jailbreaks to bypass content filtering, often veer into darker role-playing scenarios, including child sexual exploitation and rape.
Image: Shutterstock.
Researchers at security firm Permiso Security say attacks against generative artificial intelligence (AI) infrastructure like Bedrock from Amazon Web Services (AWS) have increased markedly over the last six months, particularly when someone in the organization accidentally exposes their cloud credentials or key online, such as in a code repository like GitHub.
Investigating the abuse of AWS accounts for several organizations, Permiso found attackers had seized on stolen AWS credentials to interact with the large language models (LLMs) available on Bedrock. But they also soon discovered none of these AWS users had enabled full logging of LLM activity (by default, logs don’t include model prompts and outputs), and thus they lacked any visibility into what attackers were doing with that access.
So Permiso researchers decided to leak their own test AWS key on GitHub, while turning on logging so that they could see exactly what an attacker might ask for, and what the responses might be.
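Turning on that visibility is a one-time, account-level configuration. As a rough illustration only, the Python/boto3 sketch below shows how Bedrock model invocation logging can be pointed at an S3 bucket; the bucket name and key prefix are placeholders, and the parameter names should be double-checked against current AWS documentation.

import boto3

# Rough sketch: enable Amazon Bedrock model invocation logging so prompts and
# responses are delivered to an S3 bucket you control. The bucket name and
# prefix are placeholders; the bucket policy must allow Bedrock to write to it.
bedrock = boto3.client("bedrock")  # Bedrock control-plane client

bedrock.put_model_invocation_logging_configuration(
    loggingConfig={
        "s3Config": {
            "bucketName": "example-bedrock-invocation-logs",  # placeholder
            "keyPrefix": "bedrock/invocations/",
        },
        "textDataDeliveryEnabled": True,   # capture text prompts and outputs
        "imageDataDeliveryEnabled": True,
        "embeddingDataDeliveryEnabled": True,
    }
)
print("Bedrock model invocation logging enabled.")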
Within minutes, their bait key was scooped up and used in a service that offers AI-powered sex chats online.
“After reviewing the prompts and responses it became clear that the attacker was hosting an AI roleplaying service that leverages common jailbreak techniques to get the models to accept and respond with content that would normally be blocked,” Permiso researchers wrote in a report released today.
“Almost all of the roleplaying was of a sexual nature, with some of the content straying into darker topics such as child sexual abuse,” they continued. “Over the course of two days we saw over 75,000 successful model invocations, almost all of a sexual nature.”
Ian Ahl, senior vice president of threat research at Permiso, said attackers in possession of a working cloud account traditionally have used that access for run-of-the-mill financial cybercrime, such as cryptocurrency mining or spam. But over the past six months, Ahl said, Bedrock has emerged as one of the top targeted cloud services.
“Bad guy hosts a chat service, and subscribers pay them money,” Ahl said of the business model for commandeering Bedrock access to power sex chat bots. “They don’t want to pay for all the prompting that their subscribers are doing, so instead they hijack someone else’s infrastructure.”
Ahl said much of the AI-powered chat conversations initiated by the users of their honeypot AWS key were harmless roleplaying of sexual behavior.
“But a percentage of it is also geared toward very illegal stuff, like child sexual assault fantasies and rapes being played out,” Ahl said. “And these are typically things the large language models won’t be able to talk about.”
AWS’s Bedrock uses large language models from Anthropic, which incorporates a number of technical restrictions aimed at placing certain ethical guardrails on the use of their LLMs. But attackers can evade or “jailbreak” their way out of these restricted settings, usually by asking the AI to imagine itself in an elaborate hypothetical situation under which its normal restrictions might be relaxed or discarded altogether.
“A typical jailbreak will pose a very specific scenario, like you’re a writer who’s doing research for a book, and everyone involved is a consenting adult, even though they often end up chatting about nonconsensual things,” Ahl said.
In June 2024, security experts at Sysdig documented a new attack that leveraged stolen cloud credentials to target ten cloud-hosted LLMs. The attackers Sysdig wrote about gathered cloud credentials through a known security vulnerability, but the researchers also found the attackers sold LLM access to other cybercriminals while sticking the cloud account owner with an astronomical bill.
“Once initial access was obtained, they exfiltrated cloud credentials and gained access to the cloud environment, where they attempted to access local LLM models hosted by cloud providers: in this instance, a local Claude (v2/v3) LLM model from Anthropic was targeted,” Sysdig researchers wrote. “If undiscovered, this type of attack could result in over $46,000 of LLM consumption costs per day for the victim.”
Ahl said it’s not certain who is responsible for operating and selling these sex chat services, but Permiso suspects the activity may be tied to a platform cheekily named “chub[.]ai,” which offers a broad selection of pre-made AI characters with whom users can strike up a conversation. Permiso said almost every character name from the prompts they captured in their honeypot could be found at Chub.
Some of the AI chat bot characters offered by Chub. Some of these characters include the tags “rape” and “incest.”
Chub offers free registration, via its website or a mobile app. But after a few minutes of chatting with their newfound AI friends, users are asked to purchase a subscription. The site’s homepage features a banner at the top that reads: “Banned from OpenAI? Get unmetered access to uncensored alternatives for as little as $5 a month.”
Until late last week Chub offered a wide selection of characters in a category called “NSFL” or Not Safe for Life, a term meant to describe content that is disturbing or nauseating to the point of being emotionally scarring.
Fortune profiled Chub AI in a January 2024 story that described the service as a virtual brothel advertised by illustrated girls in spaghetti strap dresses who promise a chat-based “world without feminism,” where “girls offer sexual services.” From that piece:
Chub AI offers more than 500 such scenarios, and a growing number of other sites are enabling similar AI-powered child pornographic role-play. They are part of a broader uncensored AI economy that, according to Fortune’s interviews with 18 AI developers and founders, was spurred first by OpenAI and then accelerated by Meta’s release of its open-source Llama tool.
Fortune says Chub is run by someone using the handle “Lore,” who said they launched the service to help others evade content restrictions on AI platforms. Chub charges fees starting at $5 a month to use the new chatbots, and the founder told Fortune the site had generated more than $1 million in annualized revenue.
KrebsOnSecurity sought comment about Permiso’s research from AWS, which initially seemed to downplay the seriousness of the researchers’ findings. The company noted that AWS employs automated systems that will alert customers if their credentials or keys are found exposed online.
AWS explained that when a key or credential pair is flagged as exposed, it is then restricted to limit the amount of abuse that attackers can potentially commit with that access. For example, flagged credentials can’t be used to create or modify authorized accounts, or spin up new cloud resources.
Ahl said Permiso did indeed receive multiple alerts from AWS about their exposed key, including one that warned their account may have been used by an unauthorized party. But they said the restrictions AWS placed on the exposed key did nothing to stop the attackers from using it to abuse Bedrock services.
Sometime in the past few days, however, AWS responded by including Bedrock in the list of services that will be quarantined in the event an AWS key or credential pair is found compromised or exposed online. AWS confirmed that Bedrock was a new addition to its quarantine procedures.
Additionally, not long after KrebsOnSecurity began reporting this story, Chub’s website removed its NSFL section. It also appears to have removed cached copies of the site from the Wayback Machine at archive.org. Still, Permiso found that Chub’s user stats page shows the site has more than 3,000 AI conversation bots with the NSFL tag, and that 2,113 accounts were following the NSFL tag.
The user stats page at Chub shows more than 2,113 people have subscribed to its AI conversation bots with the “Not Safe for Life” designation.
Permiso said their entire two-day experiment generated a $3,500 bill from AWS. Most of that cost was tied to the 75,000 LLM invocations caused by the sex chat service that hijacked their key.
Paradoxically, Permiso found that while enabling these logs is the only way to know for sure how crooks might be using a stolen key, the cybercriminals who are reselling stolen or exposed AWS credentials for sex chats have started including programmatic checks in their code to ensure they aren’t using AWS keys that have prompt logging enabled.
“Enabling logging is actually a deterrent to these attackers because they are immediately checking to see if you have logging on,” Ahl said. “At least some of these guys will totally ignore those accounts, because they don’t want anyone to see what they’re doing.”
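The check Ahl describes amounts to a single read-only API call, and defenders can run the same call to confirm that logging really is switched on. A minimal sketch in Python/boto3, assuming a recent boto3 version (verify the call name against current docs):

import boto3

# Rough sketch: query whether Bedrock model invocation logging is configured
# for the current credentials and region. Attackers reportedly run this same
# read-only check against stolen keys; defenders can use it to verify logging.
bedrock = boto3.client("bedrock")
resp = bedrock.get_model_invocation_logging_configuration()
cfg = resp.get("loggingConfig") or {}

if cfg.get("s3Config") or cfg.get("cloudWatchConfig"):
    print("Model invocation logging is enabled:", cfg)
else:
    print("No model invocation logging configured in this account/region.")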
In a statement shared with KrebsOnSecurity, AWS said its services are operating securely, as designed, and that no customer action is needed. Here is their statement:
“AWS services are operating securely, as designed, and no customer action is needed. The researchers devised a testing scenario that deliberately disregarded security best practices to test what may happen in a very specific scenario. No customers were put at risk. To carry out this research, security researchers ignored fundamental security best practices and publicly shared an access key on the internet to observe what would happen.”
“AWS, nonetheless, quickly and automatically identified the exposure and notified the researchers, who opted not to take action. We then identified suspected compromised activity and took additional action to further restrict the account, which stopped this abuse. We recommend customers follow security best practices, such as protecting their access keys and avoiding the use of long-term keys to the extent possible. We thank Permiso Security for engaging AWS Security.”
AWS said customers can configure model invocation logging to collect Bedrock invocation logs, model input data, and model output data for all invocations in their AWS account. Customers can also use CloudTrail to monitor Amazon Bedrock API calls.
The company said AWS customers also can use services such as GuardDuty to detect potential security concerns and Billing Alarms to provide notifications of abnormal billing activity. Finally, AWS Cost Explorer is intended to give customers a way to visualize and manage Bedrock costs and usage over time.
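To make the billing-alarm suggestion concrete, here is a hedged Python/boto3 sketch that raises a CloudWatch alarm when the account's estimated charges cross a threshold. The SNS topic ARN and dollar threshold are placeholders, and AWS billing metrics must already be enabled; they are published only in the us-east-1 region.

import boto3

# Rough sketch: alarm on the account's estimated monthly charges. Billing
# metrics live in us-east-1 and must be enabled in the account's billing
# preferences; the threshold and SNS topic ARN below are placeholders.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="estimated-charges-over-500-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                      # evaluate every six hours
    EvaluationPeriods=1,
    Threshold=500.0,                   # placeholder USD threshold
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:billing-alerts"],  # placeholder
)
print("Billing alarm created.")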
Anthropic told KrebsOnSecurity it is always working on novel techniques to make its models more resistant to jailbreaks.
“We remain committed to implementing strict policies and advanced techniques to protect users, as well as publishing our own research so that other AI developers can learn from it,” Anthropic said in an emailed statement. “We appreciate the research community’s efforts in highlighting potential vulnerabilities.”
Anthropic said it uses feedback from child safety experts at Thorn around signals often seen in child grooming to update its classifiers, enhance its usage policies, fine tune its models, and incorporate those signals into testing of future models.
Update: 5:01 p.m. ET: Chub has issued a statement saying they are only hosting the role-playing characters, and that the LLMs they use run on their own infrastructure.
“Our own LLMs run on our own infrastructure,” Chub wrote in an emailed statement. “Any individuals participating in such attacks can use any number of UIs that allow user-supplied keys to connect to third-party APIs. We do not participate in, enable or condone any illegal activity whatsoever.”
The data privacy company Onerep.com bills itself as a Virginia-based service for helping people remove their personal information from almost 200 people-search websites. However, an investigation into the history of onerep.com finds this company is operating out of Belarus and Cyprus, and that its founder has launched dozens of people-search services over the years.
Onerep’s “Protect” service starts at $8.33 per month for individuals and $15 per month for families, and promises to remove your personal information from nearly 200 people-search sites. Onerep also markets its service to companies seeking to offer their employees the ability to have their data continuously removed from people-search sites.
A testimonial on onerep.com.
Customer case studies published on onerep.com state that it struck a deal to offer the service to employees of Permanente Medicine, which represents the doctors within the health insurance giant Kaiser Permanente. Onerep also says it has made inroads among police departments in the United States.
But a review of Onerep’s domain registration records and that of its founder reveal a different side to this company. Onerep.com says its founder and CEO is Dimitri Shelest from Minsk, Belarus, as does Shelest’s profile on LinkedIn. Historic registration records indexed by DomainTools.com say Mr. Shelest was a registrant of onerep.com who used the email address dmitrcox2@gmail.com.
A search in the data breach tracking service Constella Intelligence for the name Dimitri Shelest brings up the email address dimitri.shelest@onerep.com. Constella also finds that Dimitri Shelest from Belarus used the email address d.sh@nuwber.com, and the Belarus phone number +375-292-702786.
Nuwber.com is a people search service whose employees all appear to be from Belarus, and it is one of dozens of people-search companies that Onerep claims to target with its data-removal service. Onerep.com’s website disavows any relationship to Nuwber.com, stating quite clearly, “Please note that OneRep is not associated with Nuwber.com.”
However, there is an abundance of evidence suggesting Mr. Shelest is in fact the founder of Nuwber. Constella found that Minsk telephone number (375-292-702786) has been used multiple times in connection with the email address dmitrcox@gmail.com. Recall that Onerep.com’s domain registration records in 2018 list the email address dmitrcox2@gmail.com.
It appears Mr. Shelest sought to reinvent his online identity in 2015 by adding a “2” to his email address. The Belarus phone number tied to Nuwber.com shows up in the domain records for comversus.com, and DomainTools says this domain is tied to both dmitrcox@gmail.com and dmitrcox2@gmail.com. Other domains that mention both email addresses in their WHOIS records include careon.me, docvsdoc.com, dotcomsvdot.com, namevname.com, okanyway.com and tapanyapp.com.
Onerep.com CEO and founder Dimitri Shelest, as pictured on the “about” page of onerep.com.
A search in DomainTools for the email address dmitrcox@gmail.com shows it is associated with the registration of at least 179 domain names, including dozens of mostly now-defunct people-search companies targeting citizens of Argentina, Brazil, Canada, Denmark, France, Germany, Hong Kong, Israel, Italy, Japan, Latvia and Mexico, among others.
Those include nuwber.fr, a site registered in 2016 which was identical to the homepage of Nuwber.com at the time. DomainTools shows the same email and Belarus phone number are in historic registration records for nuwber.at, nuwber.ch, and nuwber.dk (all domains linked here are to their cached copies at archive.org, where available).
Update, March 21, 11:15 a.m. ET: Mr. Shelest has provided a lengthy response to the findings in this story. In summary, Shelest acknowledged maintaining an ownership stake in Nuwber, but said there was “zero cross-over or information-sharing with OneRep.” Mr. Shelest said any other old domains that may be found and associated with his name are no longer being operated by him.
“I get it,” Shelest wrote. “My affiliation with a people search business may look odd from the outside. In truth, if I hadn’t taken that initial path with a deep dive into how people search sites work, Onerep wouldn’t have the best tech and team in the space. Still, I now appreciate that we did not make this more clear in the past and I’m aiming to do better in the future.” The full statement is available here (PDF).
Original story:
Historic WHOIS records for onerep.com show it was registered for many years to a resident of Sioux Falls, SD for a completely unrelated site. But around Sept. 2015 the domain switched from the registrar GoDaddy.com to eNom, and the registration records were hidden behind privacy protection services. DomainTools indicates around this time onerep.com started using domain name servers from DNS provider constellix.com. Likewise, Nuwber.com first appeared in late 2015, was also registered through eNom, and also started using constellix.com for DNS at nearly the same time.
Listed on LinkedIn as a former product manager at OneRep.com between 2015 and 2018 is Dimitri Bukuyazau, who says their hometown is Warsaw, Poland. While this LinkedIn profile (linkedin.com/in/dzmitrybukuyazau) does not mention Nuwber, a search on this name in Google turns up a 2017 blog post from privacyduck.com, which laid out a number of reasons to support a conclusion that OneRep and Nuwber.com were the same company.
“Any people search profiles containing your Personally Identifiable Information that were on Nuwber.com were also mirrored identically on OneRep.com, down to the relatives’ names and address histories,” Privacyduck.com wrote. The post continued:
“Both sites offered the same immediate opt-out process. Both sites had the same generic contact and support structure. They were – and remain – the same company (even PissedConsumer.com advocates this fact: https://nuwber.pissedconsumer.com/nuwber-and-onerep-20160707878520.html).”
“Things changed in early 2016 when OneRep.com began offering privacy removal services right alongside their own open displays of your personal information. At this point when you found yourself on Nuwber.com OR OneRep.com, you would be provided with the option of opting-out your data on their site for free – but also be highly encouraged to pay them to remove it from a slew of other sites (and part of that payment was removing you from their own site, Nuwber.com, as a benefit of their service).”
Reached via LinkedIn, Mr. Bukuyazau declined to answer questions, such as whether he ever worked at Nuwber.com. However, Constella Intelligence finds two interesting email addresses for employees at nuwber.com: d.bu@nuwber.com, and d.bu+figure-eight.com@nuwber.com, which was registered under the name “Dzmitry.”
PrivacyDuck’s claims about how onerep.com appeared and behaved in the early days are not readily verifiable because the domain onerep.com has been completely excluded from the Wayback Machine at archive.org. The Wayback Machine will honor such requests if they come directly from the owner of the domain in question.
Still, Mr. Shelest’s name, phone number and email also appear in the domain registration records for a truly dizzying number of country-specific people-search services, including pplcrwlr.in, pplcrwlr.fr, pplcrwlr.dk, pplcrwlr.jp, peeepl.br.com, peeepl.in, peeepl.it and peeepl.co.uk.
The same details appear in the WHOIS registration records for the now-defunct people-search sites waatpp.de, waatp1.fr, azersab.com, and ahavoila.com, a people-search service for French citizens.
A search on the email address dmitrcox@gmail.com suggests Mr. Shelest was previously involved in rather aggressive email marketing campaigns. In 2010, an anonymous source leaked to KrebsOnSecurity the financial and organizational records of Spamit, which at the time was easily the largest Russian-language pharmacy spam affiliate program in the world.
Spamit paid spammers a hefty commission every time someone bought male enhancement drugs from any of their spam-advertised websites. Mr. Shelest’s email address stood out because immediately after the Spamit database was leaked, KrebsOnSecurity searched all of the Spamit affiliate email addresses to determine if any of them corresponded to social media accounts at Facebook.com (at the time, Facebook allowed users to search profiles by email address).
That mapping, which was done mainly by generous graduate students at my alma mater George Mason University, revealed that dmitrcox@gmail.com was used by a Spamit affiliate, albeit not a very profitable one. That same Facebook profile for Mr. Shelest is still active, and it says he is married and living in Minsk [Update, Mar. 16: Mr. Shelest’s Facebook account is no longer active].
Scrolling down Mr. Shelest’s Facebook page to posts made more than ten years ago shows him liking the Facebook profile pages for a large number of other people-search sites, including findita.com, findmedo.com, folkscan.com, huntize.com, ifindy.com, jupery.com, look2man.com, lookerun.com, manyp.com, peepull.com, perserch.com, persuer.com, pervent.com, piplenter.com, piplfind.com, piplscan.com, popopke.com, pplsorce.com, qimeo.com, scoutu2.com, search64.com, searchay.com, seekmi.com, selfabc.com, socsee.com, srching.com, toolooks.com, upearch.com, webmeek.com, and many country-code variations of viadin.ca (e.g. viadin.hk, viadin.com and viadin.de).
Domaintools.com finds that all of the domains mentioned in the last paragraph were registered to the email address dmitrcox@gmail.com.
Mr. Shelest has not responded to multiple requests for comment. KrebsOnSecurity also sought comment from onerep.com, which likewise has not responded to inquiries about its founder’s many apparent conflicts of interest. In any event, these practices would seem to contradict the goal Onerep has stated on its site: “We believe that no one should compromise personal online security and get a profit from it.”
Max Anderson is chief growth officer at 360 Privacy, a legitimate privacy company that works to keep its clients’ data off of more than 400 data broker and people-search sites. Anderson said it is concerning to see a direct link between a data removal service and data broker websites.
“I would consider it unethical to run a company that sells people’s information, and then charge those same people to have their information removed,” Anderson said.
Last week, KrebsOnSecurity published an analysis of the people-search data broker giant Radaris, whose consumer profiles are deep enough to rival those of far more guarded data broker resources available to U.S. police departments and other law enforcement personnel.
That story revealed that the co-founders of Radaris are two native Russian brothers who operate multiple Russian-language dating services and affiliate programs. It also appears many of the Radaris founders’ businesses have ties to a California marketing firm that works with a Russian state-run media conglomerate currently sanctioned by the U.S. government.
KrebsOnSecurity will continue investigating the history of various consumer data brokers and people-search providers. If any readers have inside knowledge of this industry or key players within it, please consider reaching out to krebsonsecurity at gmail.com.
Update, March 15, 11:35 a.m. ET: Many readers have pointed out something that was somehow overlooked amid all this research: The Mozilla Foundation, the organization behind the Firefox Web browser, has launched a data removal service called Mozilla Monitor that bundles OneRep. Mozilla’s announcement says Mozilla Monitor is offered as a free or paid subscription service.
“The free data breach notification service is a partnership with Have I Been Pwned (“HIBP”),” the Mozilla Foundation explains. “The automated data deletion service is a partnership with OneRep to remove personal information published on publicly available online directories and other aggregators of information about individuals (“Data Broker Sites”).”
In a statement shared with KrebsOnSecurity.com, Mozilla said they did assess OneRep’s data removal service to confirm it acts according to privacy principles advocated at Mozilla.
“We were aware of the past affiliations with the entities named in the article and were assured they had ended prior to our work together,” the statement reads. “We’re now looking into this further. We will always put the privacy and security of our customers first and will provide updates as needed.”
Easy-to-use PowerShell script to enumerate access permissions in an Azure Active Directory environment.
Background details can be found in the accompanying blog posts.
To run this script you'll need the following PowerShell modules, all of which can be installed directly within PowerShell:
PS:> Install-Module Microsoft.Graph
PS:> Install-Module AADInternals
PS:> Install-Module AzureADPreview
The script uses a browser-based login UI to connect to Azure. If you run the tool for the first time, you might experience the following error:
[*] Connecting to Microsoft Graph...
WARNING: WebBrowser control emulation not set for PowerShell or PowerShell ISE!
Would you like set the emulation to IE 11? Otherwise the login form may not work! (Y/N): Y
Emulation set. Restart PowerShell/ISE!
To solve this, simply allow PowerShell to emulate the browser and then rerun your command.
Import and run; no arguments needed.
Note: On your first run you will likely have to authenticate twice (once against Microsoft Graph and once against Azure AD Graph). I might wrap this into a single login in the future...
PS:> Import-Module .\Azure-AccessPermissions.ps1