
C2-Tracker - Live Feed Of C2 Servers, Tools, And Botnets

By: Zion3R


Free-to-use IOC feed for various tools/malware. It started out covering just C2 tools but has grown to track infostealers and botnets as well. It uses Shodan (shodan.io) searches to collect the IPs. The most recent collection is always stored in data; the IPs are broken down by tool, and there is an all.txt.

The feed should update daily. I am actively working on making the backend more reliable.


Honorable Mentions

Many of the Shodan queries have been sourced from other CTI researchers:

Huge shoutout to them!

Thanks to BertJanCyber for creating the KQL query for ingesting this feed

And finally, thanks to Y_nexro for creating C2Live in order to visualize the data

What do I track?

Running Locally

If you want to host a private version, put your Shodan API key in an environment variable called SHODAN_API_KEY

echo 'export SHODAN_API_KEY=API_KEY' >> ~/.bashrc
bash   # start a fresh shell so the new variable is picked up
python3 -m pip install -r requirements.txt
python3 tracker.py
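
For reference, the collection step amounts to roughly the following sketch (an illustration only, not the tracker's actual code; the query string is a hypothetical example), using the shodan Python package:

import os
import shodan

api = shodan.Shodan(os.environ["SHODAN_API_KEY"])

def collect_ips(query):
    """Collect the set of IPs Shodan currently reports for a query."""
    ips = set()
    for banner in api.search_cursor(query):  # search_cursor handles pagination
        ips.add(banner["ip_str"])
    return ips

# Hypothetical query; the real queries are credited to the researchers above.
print("\n".join(sorted(collect_ips('product:"Cobalt Strike Beacon"'))))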

Contributing

I encourage opening an issue/PR if you know of any additional Shodan searches for identifying adversary infrastructure. I won't set hard guidelines around what can be submitted; just know that fidelity is paramount (a high ratio of true positives to false positives is the focus).



ST Smart Things Sentinel - Advanced Security Tool To Detect Threats Within The Intricate Protocols Utilized By IoT Devices

By: Zion3R


ST Smart Things Sentinel is an advanced security tool engineered specifically to scrutinize and detect threats within the intricate protocols utilized by IoT (Internet of Things) devices. In the ever-expanding landscape of connected devices, ST Smart Things Sentinel emerges as a vigilant guardian, specializing in protocol-level threat detection. This tool empowers users to proactively identify and neutralize potential security risks, ensuring the integrity and security of IoT ecosystems.


~ Hilali Abdel

USAGE

python st_tool.py [-h] [-s] [--add ADD] [--scan SCAN] [--id ID] [--search SEARCH] [--bug BUG] [--firmware FIRMWARE] [--type TYPE] [--detect] [--tty] [--uart UART] [--fz FZ]

[Add new Device]

python3 smartthings.py -a 192.168.1.1

python3 smartthings.py -s --type TPLINK

python3 smartthings.py -s --firmware "TP-Link Archer C7v2"

Search for CVEs and PoCs [firmware and device type]


Scan device for open UPnP ports

python3 smartthings.py -s --scan upnp --id

Get data from MQTT (subscribe)

python3 smartthings.py -s --scan mqtt --id



SwaggerSpy - Automated OSINT On SwaggerHub

By: Zion3R


SwaggerSpy is a tool designed for automated Open Source Intelligence (OSINT) on SwaggerHub. This project aims to streamline the process of gathering intelligence from APIs documented on SwaggerHub, providing valuable insights for security researchers, developers, and IT professionals.


What is Swagger?

Swagger is an open-source framework that allows developers to design, build, document, and consume RESTful web services. It simplifies API development by providing a standard way to describe REST APIs using a JSON or YAML format. Swagger enables developers to create interactive documentation for their APIs, making it easier for both developers and non-developers to understand and use the API.


About SwaggerHub

SwaggerHub is a collaborative platform for designing, building, and managing APIs using the Swagger framework. It offers a centralized repository for API documentation, version control, and collaboration among team members. SwaggerHub simplifies the API development lifecycle by providing a unified platform for API design and testing.


Why OSINT on SwaggerHub?

Performing OSINT on SwaggerHub is crucial because developers, in their pursuit of efficient API documentation and sharing, may inadvertently expose sensitive information. Here are key reasons why OSINT on SwaggerHub is valuable:

  1. Developer Oversights: Developers might unintentionally include secrets, credentials, or sensitive information in API documentation on SwaggerHub. These oversights can lead to security vulnerabilities and unauthorized access if not identified and addressed promptly.

  2. Security Best Practices: OSINT on SwaggerHub helps enforce security best practices. Identifying and rectifying potential security issues early in the development lifecycle is essential to ensure the confidentiality and integrity of APIs.

  3. Preventing Data Leaks: By systematically scanning SwaggerHub for sensitive information, organizations can proactively prevent data leaks. This is especially crucial in today's interconnected digital landscape where APIs play a vital role in data exchange between services.

  4. Risk Mitigation: Understanding that developers might forget to remove or obfuscate sensitive details in API documentation underscores the importance of continuous OSINT on SwaggerHub. This proactive approach mitigates the risk of unintentional exposure of critical information.

  5. Compliance and Privacy: Many industries have stringent compliance requirements regarding the protection of sensitive data. OSINT on SwaggerHub ensures that APIs adhere to these regulations, promoting a culture of compliance and safeguarding user privacy.

  6. Educational Opportunities: Identifying oversights in SwaggerHub documentation provides educational opportunities for developers. It encourages a security-conscious mindset, fostering a culture of awareness and responsible information handling.

By recognizing that developers can inadvertently expose secrets, OSINT on SwaggerHub becomes an integral part of the overall security strategy, safeguarding against potential threats and promoting a secure API ecosystem.


How SwaggerSpy Works

SwaggerSpy obtains information from SwaggerHub and utilizes regular expressions to inspect API documentation for sensitive information, such as secrets and credentials.
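
Conceptually, that scanning stage boils down to something like the following minimal sketch (illustrative only; the patterns and names below are assumptions, not SwaggerSpy's actual regex set):

import re

# Toy patterns; SwaggerSpy ships its own, more complete set.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "bearer_token": re.compile(r"(?i)bearer\s+[a-z0-9._\-]{20,}"),
    "password_field": re.compile(r'(?i)"password"\s*:\s*"[^"]+"'),
}

def scan_document(doc_text):
    """Return every pattern hit found in a Swagger/OpenAPI document."""
    hits = {name: rx.findall(doc_text) for name, rx in PATTERNS.items()}
    return {name: found for name, found in hits.items() if found}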


Getting Started

To use SwaggerSpy, follow these steps:

  1. Installation: Clone the SwaggerSpy repository and install the required dependencies.

git clone https://github.com/UndeadSec/SwaggerSpy.git
cd SwaggerSpy
pip install -r requirements.txt

  2. Usage: Run SwaggerSpy with the target search terms (more accurate with domains).

python swaggerspy.py searchterm

  3. Results: SwaggerSpy will generate a report containing OSINT findings, including information about the API, endpoints, and secrets.

Disclaimer

SwaggerSpy is intended for educational and research purposes only. Users are responsible for ensuring that their use of this tool complies with applicable laws and regulations.


Contribution

Contributions to SwaggerSpy are welcome! Feel free to submit issues, feature requests, or pull requests to help improve this tool.


About the Author

SwaggerSpy is developed and maintained by Alisson Moretto (UndeadSec)

I'm a passionate cyber threat intelligence pro who loves sharing insights and crafting cybersecurity tools.


TODO

Regular Expressions Enhancement
  • [ ] Review and improve existing regular expressions.
  • [ ] Ensure that regular expressions adhere to best practices.
  • [ ] Check for any potential optimizations in the regex patterns.
  • [ ] Test regular expressions with various input scenarios for accuracy.
  • [ ] Document any complex or non-trivial regex patterns for better understanding.
  • [ ] Explore opportunities to modularize or break down complex patterns.
  • [ ] Verify the regular expressions against the latest specifications or requirements.
  • [ ] Update documentation to reflect any changes made to the regular expressions.

License

SwaggerSpy is licensed under the MIT License. See the LICENSE file for details.


Thanks

Special thanks to @Liodeus for providing project inspiration through swaggerHole.



Melee - Tool To Detect Infections In MySQL Instances

By: Zion3R

MELEE: A Tool to Detect Ransomware Infections in MySQL Instances


Attackers are abusing MySQL instances to conduct nefarious operations on the Internet. Cybercriminals target exposed MySQL instances and trigger infections at scale to exfiltrate data, destroy data, and extort money via ransom. For example, one of the most significant threats MySQL deployments face is ransomware. We have authored a tool named "MELEE" to detect potential infections in MySQL instances. The tool allows security researchers, penetration testers, and threat intelligence experts to detect compromised and infected MySQL instances running malicious code. The tool also enables you to conduct efficient research in the field of malware targeting cloud databases. This release of the tool supports the following modules:

  • MySQL instance information gathering and reconnaissance
  • MySQL instance exposure to the Internet
  • MySQL access permissions for assessing remote command execution
  • MySQL user enumeration
  • MySQL ransomware infections
  • Basic assessment checks for detecting ransomware infections
  • Extensive assessment checks for extracting insidious details about potential ransomware infections
  • MySQL ransomware detection and scanning for both unauthenticated and authenticated deployments
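
As a rough illustration of one simple heuristic behind such ransomware checks (a hedged sketch using the pymysql package, not MELEE's actual code): a common MySQL ransom pattern leaves behind a database or table whose name is the ransom note.

import pymysql

# Names commonly used by MySQL ransom notes (heuristic, not exhaustive).
RANSOM_MARKERS = ("readme", "recover", "please_read", "warning")

def suspicious_databases(host, user="root", password=""):
    """Flag database names that look like ransom notes."""
    conn = pymysql.connect(host=host, user=user, password=password)
    try:
        with conn.cursor() as cur:
            cur.execute("SHOW DATABASES")
            names = [row[0] for row in cur.fetchall()]
    finally:
        conn.close()
    return [n for n in names if any(m in n.lower() for m in RANSOM_MARKERS)]

print(suspicious_databases("203.0.113.5"))  # example/documentation IP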

Tool Usage

Researched and Developed By Aditya K Sood and Rohit Bansal

Douglas-042 - PowerShell Script To Help Speed Up Threat Hunting and Incident Response Processes

By: Zion3R


DOUGLAS-042 is a PowerShell script designed to expedite triage and the collection of crucial evidence from both forensic artifacts and volatile data. Its fundamental mission is to aid in pinpointing potential security breaches within Windows ecosystems. It prioritizes and aggregates data efficiently so that no vital piece of information eludes scrutiny when investigating a possible compromise. The collected data is written to a text file named after the host system's own hostname, a convention that eases the transition into subsequent stages of the forensic journey.


Content Queries

  • General information
  • Account and group information
  • Network
  • Process information
  • OS build and hotfixes
  • Persistence
  • Hardware information
  • Encryption information
  • Firewall information
  • Services
  • History
  • SMB queries
  • Remoting queries
  • Registry analysis
  • Log queries
  • Installation of software
  • User activity

Advanced Queries

  • Prefetch file information
  • DLL List
  • WMI filters and consumers
  • Named pipes

Usage

Using administrative privileges, just run the script from a PowerShell console; the results will be saved in the current directory as a txt file.

PS > ./douglas.ps1

Advanced usage

PS > ./douglas.ps1 -a






Crawlector - Threat Hunting Framework Designed For Scanning Websites For Malicious Objects

By: Zion3R


Crawlector (the name Crawlector is a combination of Crawler & Detector) is a threat hunting framework designed for scanning websites for malicious objects.

Note-1: The framework was first presented at the No Hat conference in Bergamo, Italy on October 22nd, 2022 (Slides, YouTube Recording). Also, it was presented for the second time at the AVAR conference, in Singapore, on December 2nd, 2022.

Note-2: The accompanying tool EKFiddle2Yara (a tool that takes EKFiddle rules and converts them into Yara rules), mentioned in the talk, was also released at both conferences.


Features

  • Supports spidering websites to find additional links for scanning (up to 2 levels only)
  • Integrates Yara as a backend engine for rule scanning
  • Supports online and offline scanning
  • Supports crawling for domains/sites digital certificate
  • Supports querying URLhaus for finding malicious URLs on the page
  • Supports hashing the page's content with TLSH (Trend Micro Locality Sensitive Hash), and other standard cryptographic hash functions such as md5, sha1, sha256, and ripemd128, among others
    • TLSH won't return a value if the page size is less than 50 bytes or if the data doesn't contain a sufficient amount of randomness
  • Supports querying the rating and category of every URL
  • Supports expanding on a given site, by attempting to find all available TLDs and/or subdomains for the same domain
    • This feature uses the Omnisint Labs API (this site is down as of March 10, 2023) and RapidAPI APIs
    • TLD expansion implementation is native
    • This feature along with the rating and categorization, provides the capability to find scam/phishing/malicious domains for the original domain
  • Supports domain resolution (IPv4 and IPv6)
  • Saves scanned website pages for later scanning (can be saved zip-compressed)
  • The entirety of the framework's settings is controlled via a single customizable configuration file
  • All scanning sessions are saved into a well-structured CSV file with a plethora of information about the website being scanned, in addition to information about the Yara rules that have triggered
  • All HTTP(S) communications are proxy-aware
  • One executable
  • Written in C++

URLHaus Scanning & API Integration

This is for checking for malicious URLs on every page being scanned. The framework can either query the list of malicious URLs from the URLHaus server (configuration: url_list_web) or from a file on disk (configuration: url_list_file); if the latter is specified, it takes precedence over the former.

It works by searching the content of every page against all URL entries in url_list_web or url_list_file, checking for all occurrences. Additionally, upon a match, and if the configuration option check_url_api is set to true, Crawlector will send a POST request to the API URL set in the url_api configuration option, which returns a JSON object with extra information about a matching URL. Such information includes urlh_status (ex., online, offline, unknown), urlh_threat (ex., malware_download), urlh_tags (ex., elf, Mozi), and urlh_reference (ex., https://urlhaus.abuse.ch/url/1116455/). This information will be included in the log file cl_mlog_<current_date><current_time><(pm|am)>.csv (check below), only if check_url_api is set to true. Otherwise, the log file will include the columns urlh_url (list of matching malicious URLs) and urlh_hit (number of occurrences for every matching malicious URL), conditional on whether check_url is set to true.

URLHaus feature could be disabled in its entirety by setting the configuration option check_url to false.

It is important to note that this feature could slow scanning considering the huge number of malicious URLs (~130 million entries at the time of this writing) that need to be checked, and the time it takes to get extra information from the URLHaus server (if the option check_url_api is set to true).
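
In Python terms, the per-URL lookup that check_url_api enables looks roughly like this (a sketch against the public URLhaus API; Crawlector itself is C++, and the API may nowadays additionally require an Auth-Key header):

import requests

URLHAUS_API = "https://urlhaus-api.abuse.ch/v1/url/"

def urlhaus_lookup(url):
    """Ask URLhaus for details about one matching URL."""
    resp = requests.post(URLHAUS_API, data={"url": url}, timeout=10)
    info = resp.json()
    return {
        "urlh_status": info.get("url_status"),    # ex., online / offline / unknown
        "urlh_threat": info.get("threat"),        # ex., malware_download
        "urlh_tags": info.get("tags"),            # ex., ["elf", "Mozi"]
        "urlh_reference": info.get("urlhaus_reference"),
    }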

Files and Folders Structures

  1. \cl_sites
    • this is where the list of sites to be visited or crawled is stored.
    • supports multiple files and directories.
  2. \crawled
    • where all crawled/spidered URLs are saved to a text file.
  3. \certs
    • where all domains/sites digital certificates are stored (in .der format).
  4. \results
    • where visited websites are saved.
  5. \pg_cache
    • program cache for sites that are not part of the spider functionality.
  6. \cl_cache
    • crawler cache for sites that are part of the spider functionality.
  7. \yara_rules
    • this is where all Yara rules are stored. All rules that exist in this directory will be loaded by the engine, parsed, validated, and evaluated before execution.
  8. cl_config.ini
    • this file contains all the configuration parameters that can be adjusted to influence the behavior of the framework.
  9. cl_mlog_<current_date><current_time><(pm|am)>.csv
    • log file that contains a plethora of information about visited websites
    • date, time, the status of Yara scanning, list of fired Yara rules with the offsets and lengths of each of the matches, id, URL, HTTP status code, connection status, HTTP headers, page size, the path to a saved page on disk, and other columns related to URLHaus results.
    • file name is unique per session.
  10. cl_offl_mlog_<current_date><current_time><(pm|am)>.csv
    • log file that contains information about files scanned offline.
    • list of fired Yara rules with the offsets and lengths of the matches, and path to a saved page on disk.
    • file name is unique per session.
  11. cl_certs_<current_date><current_time><(pm|am)>.csv
    • log file that contains a plethora of information about found digital certificates
  12. \expanded\exp_subdomain_<pm|am>.txt
    • contains discovered subdomains (part of the [site] section)
  13. \expanded\exp_tld_<pm|am>.txt
    • contains discovered domains (part of the [site] section)

Configuration File (cl_config.ini)

It is very important that you familiarize yourself with the configuration file cl_config.ini before running any session. All of the sections and parameters are documented in the configuration file itself.

The Yara offline scanning feature is a standalone option, meaning, if enabled, Crawlector will execute this feature only irrespective of other enabled features. And, the same is true for the crawling for domains/sites digital certificate feature. Either way, it is recommended that you disable all non-used features in the configuration file.

  • Depending on the configuration settings (log_to_file or log_to_cons), if a Yara rule references only a module's attributes (ex., PE, ELF, Hash, etc...), then Crawlector will display only the rule's name upon a match, excluding offset and length data.

Sites Format Pattern

To visit/scan a website, the list of URLs must be stored in text files, in the directory "cl_sites".

Crawlector accepts three types of URLs:

  1. Type 1: one URL per line
    • Crawlector will assign a unique name to every URL, derived from the URL hostname
  2. Type 2: one URL per line, with a unique name [a-zA-Z0-9_-]{1,128} = <url>
  3. Type 3: for the spider functionality, a unique format is used. One URL per line is as follows:

<id>[depth:<0|1>-><\d+>,total:<\d+>,sleep:<\d+>] = <url>

For example,

mfmokbel[depth:1->3,total:10,sleep:0] = https://www.mfmokbel.com

which is equivalent to: mfmokbel[d:1->3,t:10,s:0] = https://www.mfmokbel.com

where, <id> := [a-zA-Z0-9_-]{1,128}

depth, total and sleep, can also be replaced with their shortened versions d, t and s, respectively.

  • depth: the spider supports going two levels deep for finding additional URLs (this is a design decision).
  • A value of 0 indicates a depth of level 1, with the value that comes after the "->" ignored.
  • A depth of level 1 is controlled by the total parameter. So, first, the spider tries to find that many additional URLs off of the specified URL.
  • The value after the "->" represents the maximum number of URLs to spider for each of the URLs found (as per the total parameter value).
  • A value of 1 indicates a depth of level 2, with the value that comes after the "->" representing the maximum number of URLs to find for every URL found per the total parameter. For clarification, and as shown in the example above, first the spider will look for 10 URLs (as specified in the total parameter), and then each of those found URLs will be spidered up to a max of 3 URLs; therefore, in the best-case scenario, we would end up with 40 (10 + (10*3)) URLs.
  • The sleep parameter takes an integer value representing the number of milliseconds to sleep between every HTTP request.

Note 1: A Type 3 URL can be turned into a Type 1 URL by setting the configuration parameter live_crawler to false, in the spider section of the configuration file.

Note 2: Empty lines and lines that start with ";" or "//" are ignored.
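
To make the Type 3 pattern concrete, here is a quick illustrative parser in Python (not part of Crawlector, which is written in C++):

import re

TYPE3 = re.compile(
    r"^(?P<id>[a-zA-Z0-9_-]{1,128})"
    r"\[(?:depth|d):(?P<depth>[01])->(?P<max>\d+),"
    r"(?:total|t):(?P<total>\d+),"
    r"(?:sleep|s):(?P<sleep>\d+)\]\s*=\s*(?P<url>\S+)$"
)

m = TYPE3.match("mfmokbel[depth:1->3,total:10,sleep:0] = https://www.mfmokbel.com")
print(m.groupdict())
# {'id': 'mfmokbel', 'depth': '1', 'max': '3', 'total': '10',
#  'sleep': '0', 'url': 'https://www.mfmokbel.com'}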

The Spider Functionality

The spider functionality is what gives Crawlector the capability to find additional links on the targeted page. The spider supports the following features:

  • The domain has to be of Type 3, for the Spider functionality to work
  • You may specify a list of wildcarded patterns (pipe delimited) to prevent spidering matching urls via the exclude_url config. option. For example, *.zip|*.exe|*.rar|*.7z|*.pdf|*.bat|*.db
  • You may specify a list of wildcarded patterns (pipe delimited) to spider only urls that match the pattern via the include_url config. option. For example, */checkout/*|*/products/*
  • You may exclude HTTPS urls via the config. option exclude_https
  • You may account for outbound/external links as well, for the main page only, via the config. option add_ext_links. This feature honours the exclude_url and include_url config. options.
  • You may account for outbound/external links of the main page only, excluding all other urls, via the config. option ext_links_only. This feature honours the exclude_url and include_url config. options.

Site Ranking Functionality

  • This is for checking the ranking of the website
  • You give it a file with a list of websites, with their ranking, in a csv file format
  • Services that provide lists of websites ranking include, Alexa top-1m (discontinued as of May 2022), Cisco Umbrella, Majestic, Quantcast, Farsight and Tranco, among others
  • CSV file format (2 columns only): first column holds the ranking, and the second column holds the domain name
  • If a cell contains quoted data, it will be automatically dequoted
  • Line breaks aren't allowed in quoted text
  • Leading and trailing spaces are trimmed from cells read
  • Empty and comment lines are skipped
  • The section site_ranking in the configuration file provides some options to alter how the CSV file is to be read
  • The performance of this query is dependent on the number of records in the CSV file
  • Crawlector compares every entry in the CSV file against the domain being investigated, and not the other way around
  • Only the registered/pay-level domain is compared
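
The last point, comparing only the registered/pay-level domain, can be illustrated with the tldextract Python package (an illustration; Crawlector implements its own comparison in C++):

import tldextract

def registered_domain(host):
    # "www.shop.example.co.uk" -> "example.co.uk"
    return tldextract.extract(host).registered_domain

def rank_of(domain, csv_rows):
    """csv_rows: iterable of (rank, site) pairs read from the ranking CSV."""
    target = registered_domain(domain)
    for rank, site in csv_rows:
        if registered_domain(site) == target:
            return rank
    return None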

Finding TLDs and Subdomains - [site] Section

  • The site section provides the capability to expand on a given site, by attempting to find all available top-level domains (TLDs) and/or subdomains for the same domain. If found, new tlds/subdomains will be checked like any other domain
  • This feature uses the Omnisint Labs (https://omnisint.io/) and RapidAPI APIs
  • Omnisint Labs API returns subdomains and tlds, whereas RapidAPI returns only subdomains (the Omnisint Labs API is down as of March 10, 2023, however, the implementation is still available in case the site is back up)
  • For RapidAPI, you need a valid "Domains records" API key that you can request from RapidAPI, and plug it into the key rapid_api_key in the configuration file
  • With find_tlds enabled, in addition to Omnisint Labs API tlds results, the framework attempts to find other active/registered domains by going through every tld entry, either, in the tlds_file or tlds_url
  • If tlds_url is set, it should point to a url that hosts tlds, each one on a new line (lines that start with either of the characters ';', '#' or '//' are ignored)
  • tlds_file, holds the filename that contains the list of tlds (same as for tlds_url; only the tld is present, excluding the '.', for ex., "com", "org")
  • If tlds_file is set, it takes precedence over tlds_url
  • tld_dl_time_out, this is for setting the maximum timeout for the dnslookup function when attempting to check if the domain in question resolves or not
  • tld_use_connect, this option enables the functionality to connect to the domain in question over a list of ports, defined in the option tlds_connect_ports
  • The option tlds_connect_ports accepts a list of ports, comma separated, or a list of ranges, such as 25-40,90-100,80,443,8443 (range start and end are inclusive)
    • tld_con_time_out, this is for setting the maximum timeout for the connect function
  • tld_con_use_ssl, enable/disable the use of ssl when attempting to connect to the domain
  • If save_to_file_subd is set to true, discovered subdomains will be saved to "\expanded\exp_subdomain_<pm|am>.txt"
  • If save_to_file_tld is set to true, discovered domains will be saved to "\expanded\exp_tld_<pm|am>.txt"
  • If exit_here is set to true, then Crawlector bails out after executing this [site] function, irrespective of other enabled options. It means found sites won't be crawled/spidered

Design Considerations

  • A URL page is retrieved by sending a GET request to the server, reading the server response body, and passing it to Yara engine for detection.
  • Some of the GET request attributes are defined in the [default] section in the configuration file, including, the User-Agent and Referer headers, and connection timeout, among other options.
  • Although Crawlector logs a session's data to a CSV file, converting it to an SQL file is recommended for better performance, manipulation and retrieval of the data. This becomes evident when you're crawling thousands of domains.
  • Repeated domains/urls in the cl_sites are allowed.

Limitations

  • Single threaded
  • Static detection (no dynamic evaluation of a given page's content)
  • No headless browser support, yet!

Third-party libraries used

Contributing

Open for pull requests and issues. Comments and suggestions are greatly appreciated.

Author

Mohamad Mokbel (@MFMokbel)



DoSinator - A Powerful Denial Of Service (DoS) Testing Tool

By: Zion3R


DoSinator is a versatile Denial of Service (DoS) testing tool developed in Python. It empowers security professionals and researchers to simulate various types of DoS attacks, allowing them to assess the resilience of networks, systems, and applications against potential cyber threats.


Features

  • Multiple Attack Modes: DoSinator supports SYN Flood, UDP Flood, and ICMP Flood attack modes, allowing you to simulate various types of DoS attacks.
  • Customizable Parameters: Adjust the packet size, attack rate, and duration to fine-tune the intensity and duration of the attack.
  • IP Spoofing: Enable IP spoofing to mask the source IP address and enhance anonymity during the attack.
  • Multithreaded Packet Sending: Utilize multiple threads for simultaneous packet sending, maximizing the attack speed and efficiency.

Requirements

  • Python 3.x
  • scapy
  • argparse

Installation

  1. Clone the repository:

    git clone https://github.com/HalilDeniz/DoSinator.git
  2. Navigate to the project directory:

    cd DoSinator
  3. Install the required dependencies:

    pip install -r requirements.txt

Usage

usage: dos_tool.py [-h] -t TARGET -p PORT [-np NUM_PACKETS] [-ps PACKET_SIZE]
[-ar ATTACK_RATE] [-d DURATION] [-am {syn,udp,icmp,http,dns}]
[-sp SPOOF_IP] [--data DATA]

optional arguments:
-h, --help Show this help message and exit.
-t TARGET, --target TARGET
Target IP address.
-p PORT, --port PORT Target port number.
-np NUM_PACKETS, --num_packets NUM_PACKETS
Number of packets to send (default: 500).
-ps PACKET_SIZE, --packet_size PACKET_SIZE
Packet size in bytes (default: 64).
-ar ATTACK_RATE, --attack_rate ATTACK_RATE
Attack rate in packets per second (default: 10).
-d DURATION, --duration DURATION
Duration of the attack in seconds.
-am {syn,udp,icmp,http,dns}, --attack-mode {syn,udp,icmp,http,dns}
Attack mode (default: syn).
-sp SPOOF_IP, --spoof-ip SPOOF_IP
Spoof IP address.
--data DATA Custom data string to send.
  • target_ip: IP address of the target system.
  • target_port: Port number of the target service.
  • num_packets: Number of packets to send (default: 500).
  • packet_size: Size of each packet in bytes (default: 64).
  • attack_rate: Attack rate in packets/second (default: 10).
  • duration: Duration of the attack in seconds.
  • attack_mode: Attack mode: syn, udp, icmp, http, dns (default: syn).
  • spoof_ip: Spoof IP address (default: None).
  • data: Custom data string to send.

Disclaimer

The usage of the Dosinator tool for attacking targets without prior mutual consent is illegal. It is the end user's responsibility to obey all applicable local, state, and federal laws. The author assumes no liability and is not responsible for any misuse or damage caused by this program.

By using Dosinator, you agree to use this tool for educational and ethical purposes only. The author is not responsible for any actions or consequences resulting from misuse of this tool.

Please ensure that you have the necessary permissions to conduct any form of testing on a target network. Use this tool at your own risk.

Contributing

Contributions are welcome! If you find any issues or have suggestions for improvements, feel free to open an issue or submit a pull request.

Contact

If you have any questions, comments, or suggestions about Dosinator, please feel free to contact me:



Associated-Threat-Analyzer - Detects Malicious IPv4 Addresses And Domain Names Associated With Your Web Application Using Local Malicious Domain And IPv4 Lists

By: Zion3R


Associated-Threat-Analyzer detects malicious IPv4 addresses and domain names associated with your web application using local malicious domain and IPv4 lists.


Installation

From Git

git clone https://github.com/OsmanKandemir/associated-threat-analyzer.git
cd associated-threat-analyzer && pip3 install -r requirements.txt
python3 analyzer.py -d target-web.com

From Dockerfile

You can run this application in a container after building the image from the Dockerfile.

Warning: If you run a Docker container, use your own malicious IP and domain lists, because the maintainer may not keep the default lists on the Docker image up to date.
docker build -t osmankandemir/threatanalyzer .
docker run osmankandemir/threatanalyzer -d target-web.com

From DockerHub

docker pull osmankandemir/threatanalyzer
docker run osmankandemir/threatanalyzer -d target-web.com

Usage

-d DOMAIN , --domain DOMAIN Input Target. --domain target-web1.com
-t DOMAINSFILE, --DomainsFile Malicious Domains List to Compare. -t SampleMaliciousDomains.txt
-i IPSFILE, --IPsFile Malicious IPs List to Compare. -i SampleMaliciousIPs.txt
-o JSON, --json JSON JSON output. --json

DONE

  • First-level depth scan of your domain address.

TODO list

  • Third-level or deeper static-file scanning for the target web application.

Another linked GitHub project worth a look: Indicator-Intelligence v1.1.1 finds related domains and IPv4 addresses to do threat intelligence after collecting static files.

https://github.com/OsmanKandemir/indicator-intelligence

Default Malicious IPs and Domains Sources

https://github.com/stamparm/blackbook

https://github.com/stamparm/ipsum

Development and Contribution

See; CONTRIBUTING.md



HEDnsExtractor - Raw Html Extractor From Hurricane Electric Portal

By: Zion3R

HEDnsExtractor

Raw html extractor from Hurricane Electric portal

Features

  • Automatically identify IP addresses or networks through a command line parameter or stdin
  • Extract networks based on IPAddr.
  • Extract domains from networks.

Installation

go install -v github.com/HuntDownProject/hednsextractor/cmd/hednsextractor@latest

Usage

hednsextractor -h
Running

Getting the IP addresses used for hackerone.com, and enumerating only the networks.

nslookup hackerone.com | awk '/Address: / {print $2}' | hednsextractor -silent -only-networks

[INF] [104.16.99.52] 104.16.0.0/12
[INF] [104.16.99.52] 104.16.96.0/20

Getting the IP addresses used for hackerone.com, and enumerating only the domains (using tail to show the last 10 results).

nslookup hackerone.com | awk '/Address: / {print $2}' | hednsextractor -silent -only-domains | tail -n 10

herllus.com
hezzy.store
hilariostore.com
hiperdrop.com
hippratas.online
hitsstory.com
hobbyshop.site
holyangelstore.com
holzfallerstore.fun
homedescontoo.com

Running with Virustotal

Edit the config file and add the Virustotal API Key

cat $HOME/.config/hednsextractor/config.yaml 
# hednsextractor config file
# generated by https://github.com/projectdiscovery/goflags

# show only domains
#only-domains: false

# show only networks
#only-networks: false

# show virustotal score
#vt: false

# minimum virustotal score to show
#vt-score: 0

# ip address or network to query
#target:

# show silent output
#silent: false

# show verbose output
#verbose: false

# virustotal api key
vt-api-key: Your API Key goes here

Then, run hednsextractor with the -vt parameter.

nslookup hackerone.com | awk '/Address: / {print $2}' | hednsextractor -only-domains -vt             

And the output will be as below

[hednsextractor ASCII-art banner]

[INF] Current hednsextractor version v1.0.0
[INF] [104.16.0.0/12] domain: ohst.ltd VT Score: 0
[INF] [104.16.0.0/12] domain: jxcraft.net VT Score: 0
[INF] [104.16.0.0/12] domain: teatimegm.com VT Score: 2
[INF] [104.16.0.0/12] domain: debugcheat.com VT Score: 0


SOC-Multitool - A Powerful And User-Friendly Browser Extension That Streamlines Investigations For Security Professionals

By: Zion3R


Introducing SOC Multi-tool, a free and open-source browser extension that makes investigations faster and more efficient. Now available on the Chrome Web Store and compatible with all Chromium-based browsers such as Microsoft Edge, Chrome, Brave, and Opera.


Streamline your investigations

SOC Multi-tool eliminates the need for constant copying and pasting during investigations. Simply highlight the text you want to investigate, right-click, and navigate to the type of data highlighted. The extension will then open new tabs with the results of your investigation.

Modern and feature-rich

The SOC Multi-tool is a modernized multi-tool built from the ground up, with a range of features and capabilities. Some of the key features include:

  • IP Reputation Lookup using VirusTotal & AbuseIPDB
  • IP Info Lookup using Tor relay checker & WHOIS
  • Hash Reputation Lookup using VirusTotal
  • Domain Reputation Lookup using VirusTotal & AbuseIPDB
  • Domain Info Lookup using Alienvault
  • Living off the land binaries Lookup using the LOLBas project
  • Decoding of Base64 & HEX using CyberChef
  • File Extension & Filename Lookup using fileinfo.com & File.net
  • MAC Address manufacturer Lookup using maclookup.com
  • Parsing of UserAgent using user-agents.net
  • Microsoft Error code Lookup using Microsoft's DB
  • Event ID Lookup (Windows, Sharepoint, SQL Server, Exchange, and Sysmon) using ultimatewindowssecurity.com
  • Blockchain Address Lookup using blockchain.com
  • CVE Info using cve.mitre.org

Easy to install

You can easily install the extension by downloading the release from the Chrome Web Store!
If you wish to make edits you can download from the releases page, extract the folder and make your changes.
To load your edited extension turn on developer mode in your browser's extensions settings, click "Load unpacked" and select the extracted folder!


SOC Multi-tool is a community-driven project and the developer encourages users to contribute and share better resources.



KoodousFinder - A Simple Tool That Allows Users To Search For And Analyze Android Apps For Potential Security Threats And Vulnerabilities

By: Zion3R


A simple tool that allows users to search for and analyze Android apps for potential security threats and vulnerabilities.


Account and API Key

Create a Koodous account and get your API key at https://koodous.com/settings/developers

Install

$ pip install koodousfinder

Arguments

Param            Description
-h, --help       Show this help message and exit
--package-name   General search for APKs
--app-name       Name of the app to search for

Examples

koodous.py --package-name "app: Brata AND package: com.brata"
koodous.py --package-name "package: com.google.android.videos AND trusted: true"
koodous.py --package-name "com.metasploit"
python3 koodous.py --app-name "WhatsApp MOD"



Modifiers for advanced search

  • Hash (hash:): performs the search using the given hash. The admitted hashes are sha1, sha256 and md5.
  • App name (app:): searches for the specified app name. A compound name can be searched enclosed in quotes, for example: app: "Whatsapp premium".
  • Package name (package:): searches whether the package name contains the indicated string, for example: package: com.whatsapp.
  • Developer or company (developer:): searches whether the company or developer field includes the indicated string, for example: developer: "WhatsApp Inc.".
  • Certificate (certificate:): searches the apps by their certificate. For example: cert: 60BBF1896747E313B240EE2A54679BB0CE4A5023 or certificate: 38A0F7D505FE18FEC64FBF343ECAAAF310DBD799.

More information: https://docs.koodous.com/apks.html.
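
As a rough sketch of what such a search client does under the hood (the endpoint and field names here are assumptions drawn from the Koodous docs linked above, not KoodousFinder's verified internals):

import requests

API_URL = "https://developer.koodous.com/apks/"  # assumed endpoint; see docs.koodous.com
API_KEY = "YOUR_KOODOUS_API_KEY"

def search_apks(query):
    """Search Koodous for APKs matching a query string."""
    resp = requests.get(
        API_URL,
        params={"search": query},
        headers={"Authorization": "Token " + API_KEY},
        timeout=15,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])

for apk in search_apks("app: Brata AND package: com.brata"):
    print(apk.get("app"), apk.get("package_name"), apk.get("sha256"))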
TODO

  • Discord Integration
  • Rulesets view


Wafaray - Enhance Your Malware Detection With WAF + YARA (WAFARAY)

By: Zion3R

WAFARAY is a LAB deployment based on Debian 11.3.0 (stable) x64, made and cooked from two main ingredients, WAF + YARA, to detect malicious files (e.g. webshells, viruses, malware, binaries) typically uploaded through web functions (file uploads).


Purpose

In essence, the main idea is to use WAF + YARA (YARA right-to-left = ARAY) to detect malicious files at the WAF level before the WAF forwards them to the backend, e.g. files uploaded through web functions; see: https://owasp.org/www-community/vulnerabilities/Unrestricted_File_Upload

When a web page allows uploading files, most of the WAFs are not inspecting files before sending them to the backend. Implementing WAF + YARA could provide malware detection before WAF forwards the files to the backend.

Do malware detection through WAF?

Yes, one solution is ModSecurity + ClamAV. Most guides call ClamAV as a process and not as a daemon; in that case, analysing a file can take more than 50 seconds per file. See this resource: https://kifarunix.com/intercept-malicious-file-upload-with-modsecurity-and-clamav/

Do malware detection through WAF + YARA?

:-( Only a few clues exist here (Black Hat Asia 2019); please continue reading and see our quick LAB deployment below.

WAFARAY: how does it work ?

Basically, it is a quick deployment: (1) pre-compiled, ready-to-use YARA rules are wired into ModSecurity (WAF) via a custom rule; (2) this custom rule performs an inspection and detection of files that might contain malicious code; (3) for web functions (file uploads), if the file is suspicious the request is rejected with a 403 Forbidden code by ModSecurity.

βœ”οΈThe YaraCompile.py compiles all the yara rules. (Python3 code)
βœ”οΈThe test.conf is a virtual host that contains the mod security rules. (ModSecurity Code)
βœ”οΈModSecurity rules calls the modsec_yara.py in order to inspect the file that is trying to upload. (Python3 code)
βœ”οΈYara returns two options 1 (200 OK) or 0 (403 Forbidden)

Main Paths:

  • Yara Compiled rules: /YaraRules/Compiled
  • Yara Default rules: /YaraRules/rules
  • Yara Scripts: /YaraRules/YaraScripts
  • Apache vhosts: /etc/apache2/sites-enabled
  • Temporal Files: /temporal

Approach

  • Blueteamers: Rule enforcement, best alerting, malware detection on files uploaded through web functions.
  • Redteamers/pentesters: GreyBox scope, upload and bypass with a malicious file, rule enforcement.
  • Security Officers: Keep alerting, threat hunting.
  • SOC: Best monitoring about malicious files.
  • CERT: Malware Analysis, Determine new IOC.

Building Detection Lab

The Proof of Concept is based on the Debian 11.3.0 (stable) x64 OS, OWASP CRS v3.3.2 and Yara 4.0.5. You will find the automatic installation script here: wafaray_install.sh; an optional manual installation guide can be found here: manual_instructions.txt. A PHP page has also been created as a "mock" to observe the interaction and detection of malicious files using WAF + YARA.

Installation (recommended) with shell scripts

βœ”οΈStep 2: Deploy using VMware or VirtualBox
βœ”οΈStep 3: Once installed, please follow the instructions below:
alex@waf-labs:~$ su root 
root@waf-labs:/home/alex#

# Remember to replace YOUR_USER with your username (e.g. waf)
root@waf-labs:/home/alex# sed -i 's/^\(# User privi.*\)/\1\nalex ALL=(ALL) NOPASSWD:ALL/g' /etc/sudoers
root@waf-labs:/home/alex# exit
alex@waf-labs:~$ sudo sed -i 's/^\(deb cdrom.*\)/#\1/g' /etc/apt/sources.list
alex@waf-labs:~$ sudo sed -i 's/^# \(deb\-src http.*\)/ \1/g' /etc/apt/sources.list
alex@waf-labs:~$ sudo sed -i 's/^# \(deb http.*\)/ \1/g' /etc/apt/sources.list
alex@waf-labs:~$ echo -ne "\n\ndeb http://deb.debian.org/debian/ bullseye main\ndeb-src http://deb.debian.org/debian/ bullseye main\n" | sudo tee -a /etc/apt/sources.list
alex@waf-labs:~$ sudo apt-get update
alex@waf-labs:~$ sudo apt-get install sudo -y
alex@waf-labs:~$ sudo apt-get install git vim dos2unix net-tools -y
alex@waf-labs:~$ git clone https://github.com/alt3kx/wafaray
alex@waf-labs:~$ cd wafaray
alex@waf-labs:~$ dos2unix wafaray_install.sh
alex@waf-labs:~$ chmod +x wafaray_install.sh
alex@waf-labs:~$ sudo ./wafaray_install.sh >> log_install.log

# Test your LAB environment
alex@waf-labs:~$ firefox localhost:8080/upload.php

Yara Rules

Once the Yara rules have been downloaded and compiled, you need to customize which kinds of rules to apply, just as when you deploy ModSecurity. The following log is an example of the Web Application Firewall + Yara detecting a malicious file; in this case, eicar was detected.

Message: Access denied with code 403 (phase 2). File "/temporal/20220812-184146-YvbXKilOKdNkDfySME10ywAAAAA-file-Wx1hQA" rejected by 
the approver script "/YaraRules/YaraScripts/modsec_yara.py": 0 SUSPECTED [YaraSignature: eicar]
[file "/etc/apache2/sites-enabled/test.conf"] [line "56"] [id "500002"]
[msg "Suspected File Upload:eicar.com.txt -> /temporal/20220812-184146-YvbXKilOKdNkDfySME10ywAAAAA-file-Wx1hQA - URI: /upload.php"]

Testing WAFARAY... voilà...

Stop / Start ModSecurity

$ sudo service apache2 stop
$ sudo service apache2 start

Apache Logs

$ cd /var/log
$ sudo tail -f apache2/test_access.log apache2/test_audit.log apache2/test_error.log

Demos

Be careful with your tests. The following demos were run on isolated virtual machines.

Demo 1 - EICAR

A malicious file is uploaded, and the ModSecurity rules plus Yara deny uploading the file to the backend if it matches at least one Yara rule. (Example of malware: https://secure.eicar.org/eicar.com.txt) DO NOT EXECUTE THE FILE.

Demo 2 - WebShell.php

For this demo, we disabled rule 933110 (PHP Inject Attack) to validate the Yara rules. A malicious file is uploaded, and the ModSecurity rules plus Yara deny uploading the file to the backend if it matches at least one Yara rule. (Example of WebShell PHP: https://github.com/drag0s/php-webshell) DO NOT EXECUTE THE FILE.

Demo 3 - Malware Bazaar (RecordBreaker) Published: 2022-08-13

A malicious file is uploaded, and the ModSecurity rules plus Yara deny uploading the file to the backend if it matches at least one Yara rule. (Example from Malware Bazaar (RecordBreaker): https://bazaar.abuse.ch/sample/94ffc1624939c5eaa4ed32d19f82c369333b45afbbd9d053fa82fe8f05d91ac2/) DO NOT EXECUTE THE FILE.

YARA Rules sources

If you want to download more Yara rules, see the following repositories:


Roadmap until next release

  • Malware Hash Database (MLDBM). The database will store the MD5 or SHA1 hashes of files detected as suspicious.
  • Test the new CRS ModSecurity v3.3.3 rules
  • Improve the ModSecurity rules for malware detection with the database.
  • Create blacklists and whitelists based on MD5 or SHA1.
  • Test running in the background if the Yara analysis takes more than 3 seconds.
  • Test new payloads, for example: PowerShell obfuscated (webshells)
  • Remarks for live environments. (WAF AWS, WAF GCP, ...)

Authors

Alex Hernandez aka (@_alt3kx_)
Jesus Huerta aka @mindhack03d

Contributors

Israel Zeron Medina aka @spk085



Indicator-Intelligence - Finds Related Domains And IPv4 Addresses To Do Threat Intelligence After Indicator-Intelligence Collects Static Files

By: Zion3R


Finds related domains and IPv4 addresses to do threat intelligence after Indicator-Intelligence collects static files.


Done

  • Related domains and IPs collection

Installation

From Source Code

You can use virtualenv for package dependencies before installation.

git clone https://github.com/OsmanKandemir/indicator-intelligence.git
cd indicator-intelligence
python setup.py build
python setup.py install

From Pypi

The script is available on PyPI. To install with pip:

pip install indicatorintelligence

From Dockerfile

You can run this application in a container after building the image from the Dockerfile.

docker build -t indicator .
docker run indicator --domains target-web.com --json

From DockerHub

docker pull osmankandemir/indicator
docker run osmankandemir/indicator --domains target-web.com --json

From Poetry

pip install poetry
poetry install

Usage

-d DOMAINS [DOMAINS], --domains DOMAINS [DOMAINS] Input Targets. --domains target-web1.com target-web2.com
-p PROXY, --proxy PROXY Use HTTP proxy. --proxy 0.0.0.0:8080
-a AGENT, --agent AGENT Use agent. --agent 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'
-o JSON, --json JSON JSON output. --json

Function Usage

Development and Contribution

See; CONTRIBUTING.md

License

Copyright (c) 2023 Osman Kandemir
Licensed under the GPL-3.0 License.

Donations

If you like Indicator-Intelligence and would like to show support, you can use Buy A Coffee or the GitHub Sponsors feature for the developer, using the button below.

You can also use the GitHub Sponsors tiers for purchases and other features.

Sponsor me : https://github.com/sponsors/OsmanKandemir




MacOSThreatTrack - Bash Tool Used For Proactive Detection Of Malicious Activity On macOS Systems


The tool is being tested in the beta phase, and it only gathers macOS system information at this time.

The code is poorly organized and requires significant improvements.

Description

Bash tool used for proactive detection of malicious activity on macOS systems.

I was inspired by Venator-Swift and decided to create a bash version of the tool.

OneLiner command

curl https://raw.githubusercontent.com/ab2pentest/MacOSThreatTrack/main/MacOSThreatTrack.sh | bash

Gathered information

[+] System info
[+] Users list
[+] Environment variables
[+] Process list
[+] Active network connections
[+] SIP status
[+] GateKeeper status
[+] Zsh history
[+] Bash history
[+] Shell startup scripts
[+] PF rules
[+] Periodic scripts
[+] CronJobs list
[+] LaunchDaemons data
[+] Kernel extensions
[+] Installed applications
[+] Installation history
[+] Chrome extensions
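
For a flavor of the kinds of checks involved, the SIP and GateKeeper status items map to two one-line system commands; here is an equivalent Python sketch (the tool itself is pure bash):

import subprocess

def run(cmd):
    """Run a command and return its trimmed stdout."""
    return subprocess.run(cmd, capture_output=True, text=True).stdout.strip()

print("[+] SIP status:", run(["csrutil", "status"]))
print("[+] GateKeeper status:", run(["spctl", "--status"]))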

Todo

  1. Saving output as JSON instead of printing out the result.


X-force - IBM Security Utility Library In Python. Search And Query All Sources: Threat_Activities And Groups, Malware_Analysis, Industries


IBM Security X-FORCE Exchange library in Python 3. Search: threat_activities, threat_groups, malware_analysis, collector and industries.


Install

pip3 install XForce

Use

Using your API_KEY, set up basic authentication: base64-encode Key + ':' + Password:

printf "d2f5f0f9-2995-42c6-b1dd-4c92252da129:06c41d5e-0604-4c7c-a599-300c367d2090" | base64
# ZDJmNWYwZjktMjk5NS00MmM2LWIxZGQtNGM5MjI1MmRhMTI5OjA2YzQxZDVlLTA2MDQtNGM3Yy1hNTk5LTMwMGMzNjdkMjA5MAo=

Then, using the API key, call the functions.

Call functions

import XForce

# Args: 1 - Term of search, 2 - API KEY

# Threat activity search, returns a string
XForce.threat_activities(Term, API_KEY)

# Malware analysis search, returns a string
XForce.malware_analysis(Term, API_KEY)

# Threat groups search, returns a string
XForce.threat_groups(Term, API_KEY)

# Industries search, returns a string
XForce.industries(Term, API_KEY)

# All categories search (collector), returns a list with dicts
XForce.collector(Term, API_KEY)

To see more details for a result, run:

from XForce import details

# Args: 1 - GUID, 2 - API KEY
# IMPORTANT: each GUID corresponds to its category
# All detail functions include:
# url → link to the X-Force Exchange panel
details.activity(Id, API_KEY)
details.group(Id, API_KEY)
details.malware(Id, API_KEY)
details.industry(Id, API_KEY)


Cortex-XDR-Config-Extractor - Cortex XDR Config Extractor


This tool is meant to be used during Red Team Assessments and to audit the XDR Settings.

With this tool it's possible to parse the database lock files of the Cortex XDR Agent by Palo Alto Networks and extract agent settings, the hash and salt of the uninstall password, as well as possible exclusions.


Supported Extractions

  • Uninstall Password Hash & Salt
  • Excluded Signer Names
  • DLL Security Exclusions & Settings
  • PE Security Exclusions & Settings
  • Office Files Security Exclusions & Settings
  • Credential Gathering Module Exclusions
  • Webshell Protection Module Exclusions
  • Childprocess Executionchain Exclusions
  • Behavorial Threat Module Exclusions
  • Local Malware Scan Module Exclusions
  • Memory Protection Module Status
  • Global Hash Exclusions
  • Ransomware Protection Module Modus & Settings

Usage

Usage = ./XDRConfExtractor.py [Filename].ldb
Help = ./XDRConfExtractor.py -h

Getting Hold of Database Lock Files

Agent Version <7.8

With agent versions prior to 7.8, any authenticated user can generate a support file on Windows via the Cortex XDR console in the system tray. The database lock files can be found within the zip:

logs_[ID].zip\Persistence\agent_settings.db\

Agent Version ≥7.8

Support files from agents running version 7.8 or higher are encrypted, but if you have elevated privileges on the Windows machine, the files can be copied directly, unencrypted, from the following directory:

Method I

C:\ProgramData\Cyvera\LocalSystem\Persistence\agent_settings.db\

Method II

Generated support files are not deleted regularly, so it might be possible to find old, unencrypted support files in the following folder:

C:\Users\[Username]\AppData\Roaming\PaloAltoNetworks\Traps\support\

Agent Version >8.1

Supposedly, since Agent version 8.1, it should no longer be possible to pull the data from the lock files. This has not been tested yet.

Credits

This tool relies on a technique originally released by mr.d0x in April 2022 https://mrd0x.com/cortex-xdr-analysis-and-bypass/

Legal disclaimer

Usage of Cortex-XDR-Config-Extractor for attacking targets without prior mutual consent is illegal. It's the end user's responsibility to obey all applicable local, state and federal laws. Developers assume no liability and are not responsible for any misuse or damage caused by this program. Only use for educational purposes.



ThreatHound - Tool That Helps You With Your IR, Threat Hunting, And CA


This tool will help you with your IR, threat hunting, and compromise assessment (CA). Just drop your event log file and analyze the results.


New Release Features:

  • Windows support (ThreatHound.exe)
  • A new version is also available in C for Linux-based systems
  • You can now save results to a JSON file or print them on screen via the 'print' arg ('yes' to print the results on screen, 'no' to save them to a JSON file)
  • You can pass a Windows event logs folder, a single evtx file, or multiple evtx files separated by commas via the -p arg
  • You can now pass a Sigma rules path via the -s arg
  • Added multithreading to improve running speed
  • ThreatHound.exe is agent-based; you can push it and run it on multiple servers
  • Example:
$ ThreatHound.exe -s ..\sigma_rules\ -p C:\Windows\System32\winevt\Logs\ -print no
  • NOTE: give cmd full permission to read from "C:\Windows\System32\winevt\Logs"

  • Linux Based:

  • Windows Based

I've built the following:

  • A dedicated backend to support Sigma rules for python
  • A dedicated backend for parsing evtx for python
  • A dedicated backend to match between evtx and the Sigma rules
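
As an illustration of the evtx-parsing side, here is a minimal sketch using the python-evtx package (an assumption for illustration; the author's dedicated backend may be implemented differently):

from Evtx.Evtx import Evtx  # pip install python-evtx

def iter_event_xml(path):
    """Yield every record of an .evtx file as an XML string."""
    with Evtx(path) as log:
        for record in log.records():
            yield record.xml()

for xml in iter_event_xml("Security.evtx"):
    if "4688" in xml:  # toy filter; the real matching is done by the Sigma rules
        print(xml[:200])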

Features of the tool:

  • Automation for Threat hunting, Compromise Assessment, and Incident Response for the Windows Event Logs
  • Downloading and updating the Sigma rules daily from the source
  • More than 50 detection rules included
  • Support for more than 1500 Sigma detection rules
  • Support for dynamically adding new Sigma rules to the detection rules
  • Saving of all the outputs in JSON format
  • Easily add any detection rules you prefer
  • You can easily add new event log source types in mapping.py

To-do:

  • Support for Sigma rules dedicated for DNS query
  • Modifying the speed of algorithm dedicated for the detection and making it faster
  • Adding JSON output that supports Splunk
  • More features

Installation:

$ git clone https://github.com/MazX0p/ThreatHound.git
$ cd ThreatHound
$ pip install -r requirements.txt
$ python3 ThreatHound.py
  • Note: glob doesn't support getting the path of a directory if it has spaces in folder names; please ensure the path of the tool contains no spaces (folder names)

Demo:

https://player.vimeo.com/video/784137549?h=6a0e7ea68a&badge=0&autopause=0&player_id=0&app_id=58479

Screenshots:



Misp-Extractor - Tool That Connects To A MISP Instance And Retrieves Attributes Of Specific Types (Such As IP Addresses, URLs, And Hashes)


This code connects to a given MISP (Malware Information Sharing Platform) server and parses a given number of events, writing the IP addresses, URLs, and MD5 hashes found in the events to three separate files.


Usage

To use this script, you will need to provide the URL of your MISP instance and a valid API key. You can then call the MISPConnector.run() method to retrieve the attributes and save them to files.

To use the code, run the following command:

python3 misp_connector.py --misp-url <MISP_URL> --misp-key <MISP_API_KEY> --limit <EVENT_LIMIT>

Supported attribute types

The MISPConnector class currently supports the following attribute types:

  • ip-src
  • ip-dst
  • md5
  • url
  • domain

If an attribute of one of these types is found in an event, it will be added to the appropriate set (for example, IP addresses will be added to the network_set) and written to the corresponding file (network.txt, hash.txt, or url.txt).
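
The retrieval logic can be sketched with the PyMISP library as follows (an illustration of the approach, using a placeholder MISP URL; the script's actual code may differ in details):

from pymisp import PyMISP

misp = PyMISP("https://misp.example.org", "MISP_API_KEY", ssl=True)

network_set, hash_set, url_set = set(), set(), set()

for event in misp.search(controller="events", limit=2000, pythonify=True):
    for attr in event.attributes:
        if attr.type in ("ip-src", "ip-dst", "domain"):
            network_set.add(attr.value)
        elif attr.type == "md5":
            hash_set.add(attr.value)
        elif attr.type == "url":
            url_set.add(attr.value)

# One entry per line, mirroring network.txt / hash.txt / url.txt
for path, values in [("network.txt", network_set),
                     ("hash.txt", hash_set),
                     ("url.txt", url_set)]:
    with open(path, "w") as f:
        f.write("\n".join(sorted(values)))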

Configuration

The code can be configured by passing arguments to the command-line script. The available arguments are:

  • misp-url: The URL of the MISP server. This argument is required.
  • misp-key: The API key for the MISP server. This argument is required.
  • limit: The maximum number of events to parse. The default is 2000.

Limitations

This script has the following limitations:

  • It only retrieves attributes of specific types (as listed above).
  • It only writes the retrieved attributes to files, without any further processing or analysis.
  • It only retrieves a maximum of 2000 events, as specified by the limit parameter in the misp.search() method.

License

This code is provided under the MIT License. See the LICENSE file for more details.



Pylirt - Python Linux Incident Response Toolkit


This application aims to accelerate incident response by collecting information from Linux operating systems.


Features

Information is collected from the following files and commands:

/etc/passwd

/etc/group

/etc/sudoers

lastlog

/var/log/auth.log

uptime

/proc/meminfo

ps aux

/etc/resolv.conf

/etc/hosts

iptables -L -v -n

find / -type f -size +512k -exec ls -lh {} \;

find / -mtime -1 -ls

ip a

netstat -nap

arp -a

echo $PATH

Installation

git clone https://github.com/anil-yelken/pylirt

cd pylirt

sudo pip3 install paramiko

Usage

The following information should be specified in the cred_list.txt file:

IP|Username|Password

sudo python3 plirt.py
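
A minimal sketch of the collection loop with paramiko, assuming the cred_list.txt format above (illustrative only; the command list is a subset of the items listed earlier and the output file naming is made up):

import paramiko

COMMANDS = ["cat /etc/passwd", "lastlog", "ps aux", "ip a", "netstat -nap"]

with open("cred_list.txt") as f:
    targets = [line.strip().split("|") for line in f if line.strip()]

for ip, username, password in targets:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(ip, username=username, password=password, timeout=10)
    # Run each command and append its output to a per-host report
    with open(f"{ip}_collection.txt", "w") as out:
        for cmd in COMMANDS:
            _, stdout, _ = client.exec_command(cmd)
            out.write(f"### {cmd}\n{stdout.read().decode(errors='replace')}\n")
    client.close()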

Contact

https://twitter.com/anilyelken06

https://medium.com/@anilyelken



Stegowiper - A Powerful And Flexible Tool To Apply Active Attacks For Disrupting Stegomalware


Over the last 10 years, many threat groups have employed stegomalware or other steganography-based techniques to attack organizations from all sectors and in all regions of the world. Some examples are: APT15/Vixen Panda, APT23/Tropic Trooper, APT29/Cozy Bear, APT32/OceanLotus, APT34/OilRig, APT37/ScarCruft, APT38/Lazarus Group, Duqu Group, Turla, Vawtrack, Powload, Lokibot, Ursnif, IceID, etc.


Our research (see APTs/) shows that most groups are employing very simple techniques (at least from an academic perspective) and known tools to circumvent perimeter defenses, although more advanced groups are also using steganography to hide C&C communication and data exfiltration. We argue that this lack of sophistication is not due to the lack of knowledge in steganography (some APTs, like Turla, have already experimented with advanced algorithms), but simply because organizations are not able to defend themselves, even against the simplest steganography techniques.

For this reason, we have created stegoWiper, a tool to blindly disrupt any image-based stegomalware, by attacking the weakest point of all steganography algorithms: their robustness. We have checked that it is capable of disrupting all steganography techniques and tools (Invoke-PSImage, F5, Steghide, openstego, ...) employed nowadays, as well as the most advanced algorithms available in the academic literature, based on matrix encryption, wet-papers, etc. (e.g. Hill, J-Uniward, Hugo). In fact, the more sophisticated a steganography technique is, the more disruption stegoWiper produces.

Moreover, our active attack allows us to disrupt any steganography payload from all the images exchanged by an organization by means of a web proxy ICAP (Internet Content Adaptation Protocol) service (see c-icap/), in real time and without having to identify whether the images contain hidden data first.

Usage & Parameters

stegoWiper v0.1 - Cleans stego information from image files
(png, jpg, gif, bmp, svg)

Usage: ${myself} [-hvc <comment>] <input file> <output file>

Options:
-h Show this message and exit
-v Verbose mode
-c <comment> Add <comment> to output image file

Examples - Breaking steganography

stegowiper.sh -c "stegoWiped" ursnif.png ursnif_clean.png

The examples/ directory includes several base images that have been employed to hide secret information using different steganography algorithms, as well as the result of cleaning them with stegoWiper.

How does it work?

stegoWiper removes all metadata comments from the input file and adds some imperceptible noise to the image (whether or not it actually includes a hidden payload). If the image does contain a steganographic payload, this random noise alters it, so any attempt to extract it will either fail or yield a corrupted payload, and the stegomalware fails to execute.

We have tested several kinds (Uniform, Poisson, Laplacian, Impulsive, Multiplicative) and levels of noise, and the best one in terms of disrupting the payload while minimizing the impact on the input image is the Gaussian one (see tests/ for a summary of our experiments). It is also worth noting that, since the noise is random and distributed all over the image, attackers cannot know how to avoid it. This is important because other authors have proposed deterministic alterations (such as clearing the least significant bit of all pixels), which attackers can easily bypass (e.g. just by using the second least significant bit).
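
For intuition, the core of such an attack fits in a few lines of Python with numpy and Pillow (a conceptual illustration, not the authors' script; the noise level here is an arbitrary small value):

import numpy as np
from PIL import Image

img = np.asarray(Image.open("input.png").convert("RGB"), dtype=np.float64)
noise = np.random.normal(loc=0.0, scale=2.0, size=img.shape)  # imperceptible Gaussian noise
noisy = np.clip(img + noise, 0, 255).astype(np.uint8)
Image.fromarray(noisy).save("output.png")

Any LSB-style payload embedded in the pixel values is corrupted by the perturbation, while the visual change stays below perceptual thresholds.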

Author & license

This project has been developed by Dr. Alfonso MuΓ±oz and Dr. Manuel UrueΓ±a. The code is released under the GNU General Public License v3.



Sandbox_Scryer - Tool For Producing Threat Hunting And Intelligence Data From Public Sandbox Detonation Output


The Sandbox Scryer is an open-source tool for producing threat hunting and intelligence data from public sandbox detonation output. The tool leverages the MITRE ATT&CK Framework to organize and prioritize findings, assisting in the assembly of IOCs, understanding attack movement, and threat hunting. By allowing researchers to send thousands of samples to a sandbox to build a profile that can be used with ATT&CK techniques, the Sandbox Scryer delivers an unprecedented ability to solve use cases at scale. The tool is intended for cybersecurity professionals who are interested in threat hunting and attack analysis leveraging sandbox output data. The Sandbox Scryer currently consumes output from the free and public Hybrid Analysis malware analysis service, helping analysts expedite and scale threat hunting.


Repository contents

[root]
  • version.txt - Current tool version
  • LICENSE - Defines license for source and other contents
  • README.md - This file

[root\bin]
  • \Linux - Pre-built binaries for running the tool on Linux. Currently supports: Ubuntu x64
  • \MacOS - Pre-built binaries for running the tool on MacOS. Currently supports: OSX 10.15 x64
  • \Windows - Pre-built binaries for running the tool on Windows. Currently supports: Win10 x64

[root\presentation_video]
  • Sandbox_Scryer__BlackHat_Presentation_and_demo.mp4 - Video walking through the slide deck and showing a demo of the tool

[root\screenshots_and_videos]
  • Various backing screenshots

[root\scripts]
  • Parse_report_set.* - Windows PowerShell and DOS Command Window batch scripts that invoke the tool to parse each HA Sandbox report summary in the test set
  • Collate_Results.* - Windows PowerShell and DOS Command Window batch scripts that invoke the tool to collate data from the parsed report summaries and generate a MITRE Navigator layer file

[root\slides]
  • BlackHat_Arsenal_2022__Sandbox_Scryer__BH_template.pdf - PDF export of slides used to present the Sandbox Scryer at Black Hat 2022

[root\src]
  • Sandbox_Scryer - Folder with source for the Sandbox Scryer tool (in C#) and a Visual Studio 2019 solution file

[root\test_data]
  • (SHA256 filenames).json - Report summaries from submissions to Hybrid Analysis
  • enterprise-attack__062322.json - MITRE CTI data
  • TopAttackTechniques__High__060922.json - Top MITRE ATT&CK techniques generated with the MITRE calculator. Used to rank techniques for generating the heat map in the MITRE Navigator

[root\test_output]
  • (SHA256)_report__summary_Error_Log.txt - Errors (if any) encountered while parsing the report summary for the SHA256 in the name
  • (SHA256)_report__summary_Hits__Complete_List.png - Graphic showing techniques noted while parsing the report summary for the SHA256 in the name
  • (SHA256)_report__summary_MITRE_Attck_Hits.csv - For the collation step, techniques and tactics with select metadata from parsing the report summary for the SHA256 in the name
  • (SHA256)_report__summary_MITRE_Attck_Hits.txt - More human-readable form of the .csv file. Includes ranking data of noted techniques

[root\test_output\collated_data]
  • collated_080122_MITRE_Attck_Heatmap.json - Layer file for import into the MITRE Navigator

Operation

The Sandbox Scryer is intended to be invoked as a command-line tool, to facilitate scripting.

Operation consists of two steps:

  • Parsing, where a specified report summary is parsed to extract the output noted earlier
  • Collation, where the data from the set of parsing results from the parsing step is collated to produce a Navigator layer file

Invocation examples for the parsing and collation steps are shown after the option listing below.

If the parameter "-h" is specified, the built-in help is displayed, as shown here:

Sandbox_Scryer.exe -h

        Options:
-h Display command-line options
-i Input filepath
-ita Input filepath - MITRE report for top techniques
-o Output folder path
-ft Type of file to submit
-name Name to use with output
-sb_name Identifier of sandbox to use (default: ha)
-api_key API key to use with submission to sandbox
-env_id Environment ID to use with submission to sandbox
-inc_sub Include sub-techniques in graphical output (default is to not include)
-mitre_data Filepath for mitre cti data to parse (to populate att&ck techniques)
-cmd Command
Options:
parse Process report file from prior sandbox submission
                Uses -i, -ita, -o, -name, -inc_sub, -sig_data parameters
col Collates report data from prior sandbox submissions
Uses -i (treated as folder path), -ita, -o, -name, -inc_sub, -mitre_data parameters
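
Putting the options together, hypothetical invocations for the two steps might look like this (paths and names are placeholders based on the repository layout above, not commands taken from the project's documentation):

Sandbox_Scryer.exe -cmd parse -i test_data\<SHA256>_report__summary.json -ita test_data\TopAttackTechniques__High__060922.json -o test_output -name <SHA256>

Sandbox_Scryer.exe -cmd col -i test_output -ita test_data\TopAttackTechniques__High__060922.json -mitre_data test_data\enterprise-attack__062322.json -o test_output\collated_data -name collated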

Once the Navigator layer file is produced, it may be loaded into the Navigator for viewing via https://mitre-attack.github.io/attack-navigator/

Within the Navigator, techniques noted in the sandbox report summaries are highlighted and shown with increased heat based on a combined scoring of the technique ranking and the count of hits on the technique in the report summaries. Hovering over a technique shows select metadata.



Threatest - Threatest Is A Go Framework For End-To-End Testing Threat Detection Rules


Threatest is a Go framework for testing threat detection end-to-end.

Threatest allows you to detonate an attack technique, and verify that the alert you expect was generated in your favorite security platform.

Read the announcement blog post: https://securitylabs.datadoghq.com/articles/threatest-end-to-end-testing-threat-detection/


Concepts

Detonators

A detonator describes how and where an attack technique is executed.

Supported detonators:

  • Local command execution
  • SSH command execution
  • Stratus Red Team
  • AWS detonator

Alert matchers

An alert matcher is a platform-specific integration that can check if an expected alert was triggered.

Supported alert matchers:

  • Datadog security signals

Detonation and alert correlation

Each detonation is assigned a UUID. This UUID is reflected in the detonation and used to ensure that the matched alert corresponds exactly to this detonation.

The way this is done depends on the detonator; for instance, Stratus Red Team and the AWS Detonator inject it in the user-agent; the SSH detonator uses a parent process containing the UUID.

Sample usage

See examples for complete usage example.

Testing Datadog Cloud SIEM signals triggered by Stratus Red Team

threatest := Threatest()

threatest.Scenario("AWS console login").
WhenDetonating(StratusRedTeamTechnique("aws.initial-access.console-login-without-mfa")).
Expect(DatadogSecuritySignal("AWS Console login without MFA").WithSeverity("medium")).
WithTimeout(15 * time.Minute)

assert.NoError(t, threatest.Run())

Testing Datadog Cloud Workload Security signals triggered by running commands over SSH

ssh, _ := NewSSHCommandExecutor("test-box", "", "")

threatest := Threatest()

threatest.Scenario("curl to metadata service").
WhenDetonating(NewCommandDetonator(ssh, "curl http://169.254.169.254 --connect-timeout 1")).
Expect(DatadogSecuritySignal("EC2 Instance Metadata Service Accessed via Network Utility"))

assert.NoError(t, threatest.Run())


Whids - Open Source EDR For Windows


What

EDR with artifact collection driven by detection. The detection engine is built on top of a previous project, Gene, specially designed to match Windows events against user-defined rules.

What do you mean by "artifact collection driven by detection" ?

It means that an alert can directly trigger some artifact collection (file, registry, process memory). This way you are sure you collected the artifacts as soon as you could (near real time).

All this work has been done in my free time in the hope it would help other people; I hope you will enjoy it. Unless I get some funding to further develop this project, I will continue working on it this way. I will do all I can to fix issues in time and provide updates. Feel free to open issues to improve the project and keep it alive.


Why

  • Provide an Open Source EDR to the community
  • Provide transparency on the detection rules so analysts understand why a rule triggered
  • Offer powerful detection primitives through a flexible rule engine
  • Optimize Incident Response processes by drastically reducing the time between detection and artifact collection

How

NB: the EDR agent can be run standalone (without being connected to an EDR manager)

Strengths

  • Open Source
  • Relies on Sysmon for all the heavy lifting (kernel component)
  • Very powerful but also customizable detection engine
  • Built by an Incident Responder for all Incident Responders to make their job easier
  • Low footprint (no process injection)
  • Can co-exist with any antivirus product (advised to run it along with MS Defender)
  • Designed for high throughput. It can easily enrich and analyze 4M events a day per endpoint without performance impact. Good luck to achieve that with a SIEM.
  • Easily integrable with other tools (Splunk, ELK, MISP ...)
  • Integrated with ATT&CK framework

Weaknesses

  • Only works on Windows
  • Detection limited to what is available in Windows event log channels and ETW providers/sessions (already a lot in there)
  • No process instrumentation (it is also a strength as it depends on the point of view)
  • No GUI yet (will develop one if requested by the community)
  • No support for ETW (available in beta)
  • Tell me if you notice others ...

Installation

Requirements

  1. Install Sysmon
  2. Configure Sysmon
    • You can find optimized Sysmon configurations here
    • Logging any ProcessCreate and ProcessTerminate is mandatory
  3. Take note of the path to your Sysmon binary because you will need it later on

NB: event filtering can be done at 100% with Gene rules so do not bother creating a complicated Sysmon configuration.

Pre-Installation Recommendations

In order to get the most of WHIDS you might want to improve your logging policy.

  • Enable Powershell Module Logging
  • Audit Service Creation: gpedit.msc -> Computer Configuration\Windows Settings\Security Settings\Advanced Audit Policy Configuration\System Audit Policies\System\Audit Security System Extension -> Enable
  • Enable File System Audit. Sysmon only provides FileCreate events when new files are created, so if you want/need to log other kinds of access (Read, Write, ...) you need to enable FS Auditing.
    1. gpedit.msc -> Computer Configuration\Windows Settings\Security Settings\Advanced Audit Policy Configuration\System Audit Policies\Object Access\Audit File System -> Enable
    2. Right Click Any Folder -> Properties -> Security -> Advanced -> Auditing -> Add
      1. Select a principal (the name of the user/group you want the audit for). Use the group Everyone if you want to log access from any user.
      2. "Apply this to" selects the scope of this audit policy, starting from the folder you have selected
      3. "Basic permissions" selects the kinds of access you want logs to be generated for
      4. Validate
    3. File System auditing logs will appear in the Security log channel
  • If you want an antivirus to run on your endpoints, keep Microsoft Defender, first because it is a good AV but also because it logs alerts in a dedicated log channel Microsoft-Windows-Windows Defender/Operational monitored by the EDR.

EDR Endpoint agent (Whids.exe)

This section covers the installation of the agent on the endpoint.

  1. Download and extract the latest WHIDS release https://github.com/0xrawsec/whids/releases
  2. Run manage.bat as administrator
  3. Launch installation by selecting the appropriate option
  4. Verify that files have been created at the installation directory
  5. Edit configuration file by selecting the appropriate option in manage.bat or using your preferred text editor
  6. Skip this step if running with a connection to a manager, because rules will be updated automatically. If there is nothing in the rules directory, the tool will be useless, so make sure there are some Gene rules in there. Some rules are packaged with WHIDS and you will be prompted to choose whether to install them. If you want the latest rules, you can get them here (take the compiled ones)
  7. Start the services from the appropriate option in manage.bat, or just reboot (preferred, otherwise some enrichment fields will be incomplete, leading to false alerts)
  8. If you configured a manager do not forget to run it in order to receive alerts and dumps

NB: At installation time the Sysmon service is made dependent on the WHIDS service, so that the EDR is guaranteed to run before Sysmon starts generating events.
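
For reference, this kind of dependency can be created manually with the built-in sc utility (illustrative only; the WHIDS installer handles this for you, and the actual service names may differ on your system):

sc config Sysmon64 depend= WHIDS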

EDR Manager

The EDR manager can be installed on several platforms, pre-built binaries are provided for Windows, Linux and Darwin.

  1. Create TLS certificate if needed for HTTPS connections
  2. Create a configuration file (there is a command line argument to generate a basic config)
  3. Run the binary

Configuration Examples

Please visit doc/configuration.md

Further Documentation

Known Issues

  • Does not work properly when run from a network share mapped as a network drive (this prevents WHIDS from identifying itself and thus generates some noise). Example: if \\vbox\test is mounted as the Z: drive, running Z:\whids.exe won't work, while running \\vbox\test\whids.exe would.

Roadmap until next release

  • find a new name for the project because we all agree it sucks
  • better sysmon integration (config, deployment, update)
  • endpoint configuration from manager
  • tooling management (update, install), like OSQuery
  • code refactoring and optimization
  • implement a performance monitor
  • get rid of any on-disk configuration
  • implement IOC management capabilities
  • ETW support
  • automatic documentation (OpenAPI) and testing of manager's API
  • provide endpoint system information in manager
  • implement actionable rules
  • provide canary file management
  • builtin commands to be executed by endpoints
  • provide Incident Response reports about endpoints
  • overall manager API improvement
  • provide event streams so that a client can receive events in realtime
  • standardize HTTP headers
  • provide a python library to interact with EDR manager (https://github.com/0xrawsec/pywhids)

Changelog

v1.7

  • New Administrative HTTP API with following features:
    • Manage endpoints (list, create, delete)
    • Get basic statistics about the manager
    • Execute commands on endpoints and get results
      • Can drop files prior to execution, to execute binaries/scripts not present on the endpoint. Dropped files are deleted after the command has run.
      • Can retrieve files (post command execution), to retrieve results of the command
    • Collect files from endpoints for forensic purposes
    • Contain / Uncontain endpoints by restricting any network traffic except communication to the manager.
    • Query endpoints logs
    • Query endpoints alerts
    • Pivot on a timestamp and retrieve logs/alerts around that time pivot
    • Access endpoint report
      • Scoring (relative to each environment) allowing one to sort endpoints and spot the ones behaving differently from the others.
      • Alerts / TTPs observed on a given time frame
    • Manage rules (list, create, update, save, delete)
  • Integration with Sysmon v12 and v13
    • Integrate ClipboardData events
      • Put the content of the clipboard inside the event, to allow creating rules on the clipboard content
    • Integrate ProcessTampering events
      • Enrich the event with a diffing score between the .text section on disk and in memory
  • Implemented certificate pinning on client to enhance security of the communication channel between endpoints and management server
  • Log filtering capabilities, allowing one to collect contextual events. Log filtering is achieved by creating Gene filtering rules (c.f. Gene Documentation).
  • Configuration files in TOML format for better readability
  • Better protection of the installation directory

Related Work

Sponsors

GitHub: https://github.com/tines | Website: https://www.tines.com/ | Twitter: @tines_io



Matano - The Open-Source Security Lake Platform For AWS


Matano is an open source security lake platform for AWS. It lets you ingest petabytes of security and log data from various sources, store and query them in an open Apache Iceberg data lake, and create Python detections as code for realtime alerting. Matano is fully serverless, designed specifically for AWS, and focuses on enabling high scale, low cost, and zero-ops. Matano deploys fully into your AWS account.


Features

Collect data from all your sources

Matano lets you collect log data from sources using S3 or SQS based ingestion.

Ingest, transform, normalize log data

Matano normalizes and transforms your data using Vector Remap Language (VRL). Matano works with the Elastic Common Schema (ECS) by default and you can define your own schema.

Store data in S3 object storage

Log data is always stored in S3 object storage, for cost effective, long term, durable storage.

Apache Iceberg Data lake

All data is ingested into an Apache Iceberg based data lake, allowing you to perform ACID transactions, time travel, and more on all your log data. Apache Iceberg is an open table format, so you always own your own data, with no vendor lock-in.

Serverless

Matano is a fully serverless platform, designed for zero-ops and unlimited elastic horizontal scaling.

Detections as code

Write Python detections to implement realtime alerting on your log data.
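
As a sketch, a detection is a Python function that receives a log record and returns whether to alert; the exact interface is described in the Matano docs, and the ECS-style field names below are assumptions for illustration:

def detect(record):
    # Hypothetical example: alert on AWS console logins without MFA
    return (
        record.get("event", {}).get("action") == "ConsoleLogin"
        and record.get("aws", {}).get("cloudtrail", {}).get("additional_eventdata", {}).get("MFAUsed") == "No"
    )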

Installing

View the complete installation instructions.

You can install the matano CLI to deploy Matano into your AWS account, and manage your Matano deployment.

Requirements

  • Docker

Installation

Matano provides a nightly release with the latest prebuilt files to install the Matano CLI on GitHub. You can download and execute these files to install Matano.

For example, to install the Matano CLI for Linux, run:

curl -OL https://github.com/matanolabs/matano/releases/download/nightly/matano-linux-x64.sh
chmod +x matano-linux-x64.sh
sudo ./matano-linux-x64.sh

Getting started

Read the complete docs on getting started.

Deployment

To get started with Matano, run the matano init command. Make sure you have AWS credentials in your environment (or in an AWS CLI profile).

The interactive CLI wizard will walk you through getting started by generating an initial Matano directory for you, initializing your AWS account, and deploying Matano into your AWS account.


Initial deployment takes a few minutes.

Documentation

View our complete documentation.

License



BeatRev - POC For Frustrating/Defeating Malware Analysts


BeatRev Version 2

Disclaimer/Liability

The work that follows is a POC to enable malware to "key" itself to a particular victim in order to frustrate efforts of malware analysts.

I assume no responsibility for malicious use of any ideas or code contained within this project. I provide this research to further educate infosec professionals and provide additional training/food for thought for Malware Analysts, Reverse Engineers, and Blue Teamers at large.


TLDR

The first time the malware runs on a victim, it AES-encrypts the actual payload (an RDLL) using environmental data from that victim. Each subsequent time the malware is run, it gathers that same environmental info, AES-decrypts the payload stored as a byte array within the malware, and runs it. If decryption fails or the payload fails to run, the malware deletes itself. Protection against reverse engineers and malware analysts.



Updated 6 JUNE 2022



I didn't feel finished with this project so I went back and did a fairly substantial re-write. The original research and tradecraft may be found Here.

Major changes are as follows:

  1. I have released all source code
  2. I integrated Stephen Fewer's ReflectiveDLL into the project to replace Stage2
  3. I formatted some of the byte arrays in this project into string format and parse them with UuidFromStringA. This Repo was used as a template. This was done to lower entropy of Stage0 and Stage1
  4. Stage0 has had a fair bit of AV evasion built into it. Thanks to Cerbersec's Project Ares for inspiration
  5. The builder application to produce Stage0 has been included

There are quite a few different things that could be taken from the source code of this project for use elsewhere. Hopefully it will be useful for someone.

Problems with Original Release and Mitigations

There were a few shortcomings with the original release of BeatRev that I decided to try and address.

Stage2 was previously a standalone executable stored in an alternate data stream (ADS) of Stage1. In order to achieve the AES encryption-by-victim and subsequent decryption and execution, each time Stage1 ran it would read the ADS, decrypt it, write it back to the ADS, call CreateProcess, and then re-encrypt Stage2 and write it back to disk in the ADS. This was a lot of I/O operations, and the CreateProcess call of course wasn't great.

I happened to come upon Stephen Fewer's research concerning Reflective DLLs and it seemed like a good fit. Stage2 is now an RDLL; our malware/shellcode runner/whatever we want to protect can be ported to RDLL format and stored as a byte array within Stage1, which is then decrypted at runtime and executed by Stage1. This removes all of the I/O operations and the CreateProcess call from Version 1 and is a welcome change.

Stage1 did not have any real AV evasion measures programmed in; this was intentional, as it is extra work and wasn't really the point of this research. During the re-write I took it as an added challenge and added API hashing to remove functions from the Import Address Table of Stage1. This has helped with detection, and Stage1 has a 4/66 detection rate on VirusTotal. I was comfortable uploading Stage1 given that it is already keyed to the original box it was run on and the file signature constantly changes because of the AES encryption that happens.

I recently started paying attention to entropy as a means to detect malware; to lower the otherwise very high entropy that a giant AES-encrypted binary blob gives an executable, I looked into storing the shellcode as UUID strings. Because the binary is stored in string representation, there is lower overall entropy in the executable. Using this technique, the entropy of Stage0 is now ~6.8 and Stage1 ~4.5 (on a max scale of 8).
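
The byte-to-UUID conversion is easy to demonstrate; here is a Python equivalent of the idea (the C code stores such strings and parses them back at runtime with UuidFromStringA; the payload below is a stand-in):

import uuid

payload = bytes(range(48))  # stand-in for shellcode; must be a multiple of 16 bytes
chunks = [payload[i:i + 16] for i in range(0, len(payload), 16)]
uuid_strings = [str(uuid.UUID(bytes_le=chunk)) for chunk in chunks]
print(uuid_strings[0])  # '03020100-0504-0706-0809-0a0b0c0d0e0f'

Because the binary ends up stored as ASCII hex-and-dashes rather than raw high-entropy bytes, the overall entropy of the containing executable drops.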

Finally, it is a giant chore to integrate and produce a complete Stage0 due to all of the pieces that must be manipulated. To make this easier I made a builder application that ingests a Stage0.c template file, a Stage1 stub, a Stage2 stub, and a raw shellcode file (this was built around Stage2 being a shellcode runner containing CobaltStrike shellcode) and produces a compiled Stage0 payload for use on target.

Technical Details

The Reflective DLL code from Stephen Fewer contains some Visual Studio compiler-specific instructions; I'm sure it is possible to port the technique over to MinGW but I do not have the skills to do so. The main problem here is that the CobaltStrike shellcode (stageless is ~265K) needs to go inside the RDLL and be compiled. To get around this and integrate it nicely with the rest of the process, I wrote my Stage2 RDLL to contain a global variable chunk of memory that is the size of the CS shellcode; this ~265K chunk of memory has a small placeholder in it that can be located in the compiled binary. The code in src/Stage2 has this added already.

Once compiled, this Stage2 stub is transferred to kali, where a binary patch may be performed to stick the real CS shellcode into the place in memory where it belongs. This produces the complete Stage2.

To avoid the I/O and CreateProcess fiasco previously described, the complete Stage2 must also be patched into the compiled Stage1 by Stage0; this is necessary in order to allow Stage2 to be encrypted once on-target, in addition to preventing Stage2 from being stored separately on disk. The same concept previously described for Stage2 is carried out by Stage0 on target in order to assemble the final Stage1 payload. It should be noted that the memmem function is used to locate the placeholder within each stub; this function is not available on Windows, so a custom implementation was used. Thanks to Foxik384 for his code.

In order to perform a binary patch, we must allocate the required memory up front; this has a compounding effect, as Stage1 must now be big enough to also contain Stage2. With the added step of converting Stage2 to a UUID string, Stage2 balloons in size, as does Stage1 in order to hold it. A Stage2 RDLL with a compiled size of ~290K results in a Stage0 payload of ~1.38M and a Stage1 payload of ~700K.

The builder application only supports creating x64 EXEs. However, with a little more work you could in theory make Stage0 a DLL, as well as Stage1, and have the whole lifecycle exist as a DLL hijack instead of a standalone executable.

Instructions

These instructions will get you on your way to using this POC.

  1. Compile Builder using gcc -o builder src/Builder/BeatRevV2Builder.c
  2. Modify sc_length variable in src/Stage2/dll/src/ReflectiveDLL.c to match the length of raw shellcode file used with builder ( I have included fakesc.bin for example)
  3. Compile Stage2 (in visual studio, ReflectiveDLL project uses some VS compiler-specific instructions)
  4. Move compiled stage2stub.dll back to kali, modify src/Stage1/newstage1.c and define stage2size as the size of stage2stub
  5. Compile stage1stub using x86_64-w64-mingw32-gcc newstage1.c -o stage1stub.exe -s -DUNICODE -Os -L /usr/x86_64-w64-mingw32/lib -l:librpcrt4.a
  6. Run builder using syntax: ./builder src/Stage0/newstage0_exe.c x64 stage1stub.exe stage2stub.dll shellcode.bin
  7. Builder will produce dropper.exe. This is a formatted and compiled Stage0 payload for use on target.

BeatRev Original Release

Introduction

About 6 months ago it occurred to me that while I had learned and done a lot with malware concerning AV/EDR evasion, I had spent very little time trying to evade or defeat reverse engineering/malware analysis. This was for a few good reasons:

  1. I don't know anything about malware analysis or reverse engineering
  2. When you are talking about legal, sanctioned Red Team work, there isn't really a need to try and frustrate or defeat a reverse engineer, because the activity should have been deconflicted long before it reaches that stage.

Nonetheless it was an interesting thought experiment and I had a few colleagues who DO know about malware analysis that I could bounce ideas off of. It seemed a challenge of a whole different magnitude compared to AV/EDR evasion and one I decided to take a stab at.

Premise

My initial premise was that the malware, on first run, would somehow "key" itself to that victim machine; any subsequent attempt to run it would evaluate something in the target environment and compare it against a value stored in the malware. If the two factors matched, it executes as expected. If they do not (as in the case where the sample has been transferred to a malware analyst's sandbox), the malware deletes itself (once again heavily leaning on the work of LloydLabs and his delete-self-poc).

This "key" must be something "unique" to the victim computer. Ideally it will be a combination of several pieces of information, and then further obfuscated. As an example, we could gather the hostname of the computer as well as the amount of RAM installed; these two values can then be concatenated (e.g. Client018192MB) and then hashed using a user-defined function to produce a number (e.g. 5343823956).

There are a ton of choices in what information to gather, but thought should be given to which values a Blue Teamer could easily spoof; a MAC address, for example, may seem like an attractive "unique" identifier for a victim, however MAC addresses can easily be set manually in order for a Reverse Engineer to match their sandbox to the original victim. Ideally the values chosen will be ones that are difficult for a reverse engineer to replicate in their environment.

With some self-deletion magic, the malware could read itself into a buffer, locate a placeholder variable and replace it with this number, delete itself, and then write the modified malware back to disk in the same location. Combined with an if/else statement in Main, the next time the malware runs it will detect that it has been run previously and then gather the hostname and amount of RAM again in order to produce the hashed number. This is then evaluated against the number stored in the malware during the first run (5343823956). If it matches (as is the case when the malware is running on the same machine it originally did), it executes as expected; however, if a different value is returned, it will again call the self-delete function in order to remove itself from disk and protect the author from the malware analyst.

This seemed like a fine idea in theory until I spoke with a colleague who has real malware analysis and reverse engineering experience. I was told that a reverse engineer would be able to observe the conditional statement in the malware (if ValueFromFirstRun != GetHostnameAndRAM()), and seeing as the expected value is hard-coded on one side of the conditional statement, simply modify the registers to contain the expected value, thus completely bypassing the entire protection mechanism.

This new knowledge completely derailed the thought experiment and seeing as I didn't really have a use for a capability like this in the first place, this is where the project stopped for ~6 months.

Overview

This project resurfaced a few times over the intervening 6 months but each time was little more than a passing thought, as I had gained no new knowledge of reversing/malware analysis and again had no need for such a capability. A few days ago the idea rose again and while still neither of those factors have really changed, I guess I had a little bit more knowledge under my belt and couldn't let go of the idea this time.

With the aforementioned problem regarding hard-coding values in mind, I ultimately decided to go for a multi-stage design. I will refer to them as Stage0, Stage1, and Stage2.

Stage0: Setup. Ran on initial infection and deleted afterwards

Stage1: Runner. Ran each subsequent time the malware executes

Stage2: Payload. The malware you care about protecting. Spawns a process and injects shellcode in order to return a Beacon.

Lifecycle

Stage0

Stage0 is the fresh executable delivered to target by the attacker. It contains Stage1 and Stage2 as AES encrypted byte arrays; this is done to protect the malware in transit, or should a defender somehow get their hands on a copy of Stage0 (which shouldn't happen). The AES Key and IV are contained within Stage0 so in reality this won't protect Stage1 or Stage2 from a competent Blue Teamer.

Stage0 performs the following actions:

  1. Sandbox evasion.
  2. Delete itself from disk. It is still running in memory.
  3. Decrypts Stage1 using stored AES Key/IV and writes to disk in place of Stage0.
  4. Gathers the processor name and the Microsoft ProductID.
  5. Hashes this value and then pads it to fit a 16 byte AES key length. This value reversed serves as the AES IV.
  6. Decrypts Stage2 using stored AES Key/IV.
  7. Encrypts Stage2 using new victim-specific AES Key/IV.
  8. Writes Stage2 to disk as an alternate data stream of Stage1.

At the conclusion of this sequence of events, Stage0 exits. Because it was deleted from disk in step 2 and is no longer running in memory, Stage0 is effectively gone; without prior knowledge of this technique, the rest of the malware lifecycle will be a whole lot more confusing than it already is.

In step 4 the processor name and Microsoft ProductID are gathered; the ProductID is retrieved from the Registry, and this value can be manually modified, which presents an easy opportunity for a Blue Teamer to match their sandbox to the target environment. Depending on what environmental information is gathered, this can become easier or more difficult.

Stage1

Stage1 was dropped by Stage0 and exists in the same exact location as Stage0 did (to include the name). Stage2 is stored as an ADS of Stage1. When the attacker/persistence subsequently executes the malware, they are executing Stage1.

Stage1 performs the following actions:

  1. Sandbox evasion.
  2. Gathers the processor name and the Microsoft ProductID.
  3. Hashes this value and then pads it to fit a 16 byte AES key length. This value reversed serves as the AES IV.
  4. Reads Stage2 from Stage1's ADS into memory.
  5. Decrypts Stage2 using the victim-specific AES Key/IV.
  6. Checks the first two bytes of the decrypted Stage2 buffer; if not MZ (unsuccessful decryption), delete Stage1/Stage2, exit.
  7. Writes decrypted Stage2 back to disk as ADS of Stage1
  8. Calls CreateProcess on Stage2. If this fails (unsuccessful decryption), delete Stage1/Stage2, exit.
  9. Sleeps 5 seconds to allow Stage2 to execute + exit so it can be overwritten.
  10. Encrypts Stage2 using victim-specific AES Key/IV
  11. Writes encrypted Stage2 back to disk as ADS of Stage1.

Note that Stage2 MUST exit in order for it to be overwritten; the self-deletion trick does not appear to work on files that are already ADS's, as the self-deletion technique relies on renaming the primary data stream of the executable. Stage2 will ideally be an inject or spawn+inject executable.

There are two points at which Stage1 could detect that it is not being run on the same victim and delete itself/Stage2 in order to protect the threat actor. The first is the check for the executable header after decrypting Stage2 using the gathered environmental information; in theory this step could be bypassed by a reverse engineer, but it is a good first check. The second protection point is the result of the CreateProcess call: if it fails because Stage2 was not properly decrypted, the malware is similarly deleted. The result of this call could also be modified to prevent deletion by the reverse engineer, however this doesn't change the fact that Stage2 is encrypted and inaccessible.

Stage2

Stage2 is the meat and potatoes of the malware chain; It is a fully fledged shellcode runner/piece of malware itself. By encrypting and protecting it in the way that we have, the actions of the end state malware are much better obfuscated and protected from reverse engineers and malware analysts. During development I used one of my existing shellcode runners containing CobaltStrike shellcode, but this could be anything the attacker wants to run and protect.

Impact, Mitigation, and Further Work

So what is actually accomplished with a malware lifecycle like this? There are a few interesting quirks to talk about.

Alternate data streams are a feature unique to NTFS file systems; this means that most ways of transferring the malware after initial infection will strip and lose Stage2, because it is an ADS of Stage1. Special care would have to be taken to transfer the sample in a way that preserves Stage2; without it, a lot of reverse engineers and malware analysts are going to be very confused as to what is happening. RAR archives are able to preserve ADSs, and tools like 7z and PeaZip can extract files and their ADSs.

As previously mentioned, by the time malware using this lifecycle hits a Blue Teamer it should be at Stage1; Stage0 has come and gone, and Stage2 is already encrypted with the environmental information gathered by Stage0. Not knowing that Stage0 even existed will add considerable uncertainty to understanding the lifecycle and decrypting Stage2.

In theory (because again I have no reversing experience), Stage1 should be able to be reversed (after the Blue Teamer rolls through a few copies of it, because it keeps deleting itself) and the information that Stage1 gathers from the target system should be identifiable. Provided a well-orchestrated response, Blue Team should be able to identify the victim the malware came from, gather that information from it, and feed it into the program so that it may be transformed appropriately into the AES Key/IV that decrypts Stage2. There are a lot of "ifs" in there, however, related to the relative skill of the reverse engineer as well as the victim machine being available for that information to be recovered.

Application Whitelisting would significantly frustrate this lifecycle. Stage0/Stage1 may be able to be side loaded as a DLL, however I suspect that Stage2 as an ADS would present some issues. I do not have an environment to test malware against AWL nor have I bothered porting this all to DLL format so I cannot say. I am sure there are creative ways around these issues.

I am also fairly confident that there are smarter ways to run Stage2 than dropping to disk and calling CreateProcess; either manually mapping the executable or using a tool like Donut to turn it into shellcode seem like reasonable ideas.

Code and binary

During development I created a Builder application that Stage1 and Stage2 may be fed to in order to produce a functional Stage0; this will not be provided, however I will be providing most of the source code for Stage1 as it is the piece that would be most visible to a Blue Teamer. Stage0 will be excluded as an exercise for the reader, and Stage2 is whatever standalone executable you want to run and protect. This POC may be further researched at the effort and discretion of able readers.

I will be providing a compiled copy of this malware as Dropper64.exe. Dropper64.exe is compiled for x64. Dropper64.exe is Stage0; it contains Stage1 and Stage2. On execution, Stage1 and Stage2 will drop to disk but will NOT automatically execute; you must run Dropper64.exe (now Stage1) again. Stage2 is an x64 version of calc.exe. I am including this for any Blue Teamers who want to take a look, but keep in mind that in an incident response scenario, 99% of the time you will be getting Stage1/Stage2; Stage0 will be gone.

Conclusion

This was an interesting pet project that ate up a long weekend. I'm sure it would be a lot more advanced/more complete if I had experience in a debugger and disassembler, but you do the best with what you have. I am eager to hear from Blue Teamers and other Malware Devs what they think. I am sure I have over-complicatedly re-invented the wheel here given what actual APT's are doing, but I learned a few things along the way. Thank you for reading!


