
C2-Tracker - Live Feed Of C2 Servers, Tools, And Botnets

By: Zion3R


Free-to-use IOC feed for various tools/malware. It started out covering just C2 tools but has morphed into tracking infostealers and botnets as well. It uses Shodan searches to collect the IPs. The most recent collection is always stored in data; the IPs are broken down by tool, and there is an all.txt.

The feed should update daily. I am actively working on making the backend more reliable.
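For consumers of the feed, pulling the latest list is a one-liner. A minimal sketch in Python, assuming the aggregated list is fetched from the repository's data directory (the exact raw URL below is an assumption; adjust it to the actual repository path):

import requests

# Hypothetical raw URL for the aggregated IP list; adjust to the actual repo path.
FEED_URL = "https://raw.githubusercontent.com/montysecurity/C2-Tracker/main/data/all.txt"

resp = requests.get(FEED_URL, timeout=30)
resp.raise_for_status()
iocs = {line.strip() for line in resp.text.splitlines() if line.strip()}
print(f"Loaded {len(iocs)} IOC IPs")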


Honorable Mentions

Many of the Shodan queries have been sourced from other CTI researchers:

Huge shoutout to them!

Thanks to BertJanCyber for creating the KQL query for ingesting this feed

And finally, thanks to Y_nexro for creating C2Live in order to visualize the data

What do I track?

Running Locally

If you want to host a private version, put your Shodan API key in an environment variable called SHODAN_API_KEY

echo 'export SHODAN_API_KEY=API_KEY' >> ~/.bashrc
bash
python3 -m pip install -r requirements.txt
python3 tracker.py
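Under the hood, a tracker like this reduces to running saved Shodan queries and writing the matching IPs out per tool. A minimal sketch of that loop, assuming the official shodan Python package; the queries shown are illustrative placeholders, not the project's curated set:

import os
import shodan

api = shodan.Shodan(os.environ["SHODAN_API_KEY"])

# Illustrative queries only; the real, curated queries live in the repo.
queries = {
    "cobalt-strike": 'product:"Cobalt Strike Beacon"',
    "burp-collaborator": 'http.html:"Burp Collaborator Server"',
}

os.makedirs("data", exist_ok=True)
all_ips = set()
for tool, query in queries.items():
    # Each search result carries the matching host's IP in "ip_str".
    ips = {match["ip_str"] for match in api.search(query)["matches"]}
    all_ips |= ips
    with open(f"data/{tool}.txt", "w") as f:
        f.write("\n".join(sorted(ips)))

with open("data/all.txt", "w") as f:
    f.write("\n".join(sorted(all_ips)))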

Contributing

I encourage opening an issue/PR if you know of any additional Shodan searches for identifying adversary infrastructure. I will not set any hard guidelines around what can be submitted; just know that fidelity is paramount (a high true-positive, low false-positive ratio is the focus).

References



ST Smart Things Sentinel - Advanced Security Tool To Detect Threats Within The Intricate Protocols Utilized By IoT Devices

By: Zion3R


ST Smart Things Sentinel is an advanced security tool engineered specifically to scrutinize and detect threats within the intricate protocols utilized by IoT (Internet of Things) devices. In the ever-expanding landscape of connected devices, ST Smart Things Sentinel emerges as a vigilant guardian, specializing in protocol-level threat detection. This tool empowers users to proactively identify and neutralize potential security risks, ensuring the integrity and security of IoT ecosystems.


~ Hilali Abdel

USAGE

python st_tool.py [-h] [-s] [--add ADD] [--scan SCAN] [--id ID] [--search SEARCH] [--bug BUG] [--firmware FIRMWARE] [--type TYPE] [--detect] [--tty] [--uart UART] [--fz FZ]

[Add new Device]

python3 smartthings.py -a 192.168.1.1

[Search for CVE and PoC by device type or firmware]

python3 smartthings.py -s --type TPLINK

python3 smartthings.py -s --firmware TP-Link Archer C7v2

[Scan device for open UPnP ports]

python3 smartthings.py -s --scan upnp --id

[Get data from MQTT 'subscribe']

python3 smartthings.py -s --scan mqtt --id



SwaggerSpy - Automated OSINT On SwaggerHub

By: Zion3R


SwaggerSpy is a tool designed for automated Open Source Intelligence (OSINT) on SwaggerHub. This project aims to streamline the process of gathering intelligence from APIs documented on SwaggerHub, providing valuable insights for security researchers, developers, and IT professionals.


What is Swagger?

Swagger is an open-source framework that allows developers to design, build, document, and consume RESTful web services. It simplifies API development by providing a standard way to describe REST APIs using a JSON or YAML format. Swagger enables developers to create interactive documentation for their APIs, making it easier for both developers and non-developers to understand and use the API.


About SwaggerHub

SwaggerHub is a collaborative platform for designing, building, and managing APIs using the Swagger framework. It offers a centralized repository for API documentation, version control, and collaboration among team members. SwaggerHub simplifies the API development lifecycle by providing a unified platform for API design and testing.


Why OSINT on SwaggerHub?

Performing OSINT on SwaggerHub is crucial because developers, in their pursuit of efficient API documentation and sharing, may inadvertently expose sensitive information. Here are key reasons why OSINT on SwaggerHub is valuable:

  1. Developer Oversights: Developers might unintentionally include secrets, credentials, or sensitive information in API documentation on SwaggerHub. These oversights can lead to security vulnerabilities and unauthorized access if not identified and addressed promptly.

  2. Security Best Practices: OSINT on SwaggerHub helps enforce security best practices. Identifying and rectifying potential security issues early in the development lifecycle is essential to ensure the confidentiality and integrity of APIs.

  3. Preventing Data Leaks: By systematically scanning SwaggerHub for sensitive information, organizations can proactively prevent data leaks. This is especially crucial in today's interconnected digital landscape where APIs play a vital role in data exchange between services.

  4. Risk Mitigation: Understanding that developers might forget to remove or obfuscate sensitive details in API documentation underscores the importance of continuous OSINT on SwaggerHub. This proactive approach mitigates the risk of unintentional exposure of critical information.

  5. Compliance and Privacy: Many industries have stringent compliance requirements regarding the protection of sensitive data. OSINT on SwaggerHub ensures that APIs adhere to these regulations, promoting a culture of compliance and safeguarding user privacy.

  6. Educational Opportunities: Identifying oversights in SwaggerHub documentation provides educational opportunities for developers. It encourages a security-conscious mindset, fostering a culture of awareness and responsible information handling.

By recognizing that developers can inadvertently expose secrets, OSINT on SwaggerHub becomes an integral part of the overall security strategy, safeguarding against potential threats and promoting a secure API ecosystem.


How SwaggerSpy Works

SwaggerSpy obtains information from SwaggerHub and utilizes regular expressions to inspect API documentation for sensitive information, such as secrets and credentials.
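As a rough illustration of that approach, the sketch below applies a couple of common secret-hunting patterns to a fetched API document; the patterns are illustrative stand-ins, not SwaggerSpy's actual rule set:

import re

# Illustrative patterns; SwaggerSpy ships its own, more extensive set.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "bearer_token": re.compile(r"(?i)bearer\s+[a-z0-9._\-]{20,}"),
    "password_field": re.compile(r'(?i)"password"\s*:\s*"[^"]+"'),
}

def scan_document(text):
    """Return (pattern_name, match) pairs found in an API document."""
    hits = []
    for name, pattern in PATTERNS.items():
        hits.extend((name, m.group(0)) for m in pattern.finditer(text))
    return hits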


Getting Started

To use SwaggerSpy, follow these steps:

  1. Installation: Clone the SwaggerSpy repository and install the required dependencies.
git clone https://github.com/UndeadSec/SwaggerSpy.git
cd SwaggerSpy
pip install -r requirements.txt
  2. Usage: Run SwaggerSpy with the target search terms (more accurate with domains).
python swaggerspy.py searchterm
  3. Results: SwaggerSpy will generate a report containing OSINT findings, including information about the API, endpoints, and secrets.

Disclaimer

SwaggerSpy is intended for educational and research purposes only. Users are responsible for ensuring that their use of this tool complies with applicable laws and regulations.


Contribution

Contributions to SwaggerSpy are welcome! Feel free to submit issues, feature requests, or pull requests to help improve this tool.


About the Author

SwaggerSpy is developed and maintained by Alisson Moretto (UndeadSec)

I'm a passionate cyber threat intelligence pro who loves sharing insights and crafting cybersecurity tools.


TODO

Regular Expressions Enhancement
  • [ ] Review and improve existing regular expressions.
  • [ ] Ensure that regular expressions adhere to best practices.
  • [ ] Check for any potential optimizations in the regex patterns.
  • [ ] Test regular expressions with various input scenarios for accuracy.
  • [ ] Document any complex or non-trivial regex patterns for better understanding.
  • [ ] Explore opportunities to modularize or break down complex patterns.
  • [ ] Verify the regular expressions against the latest specifications or requirements.
  • [ ] Update documentation to reflect any changes made to the regular expressions.

License

SwaggerSpy is licensed under the MIT License. See the LICENSE file for details.


Thanks

Special thanks to @Liodeus for providing project inspiration through swaggerHole.



Melee - Tool To Detect Infections In MySQL Instances

By: Zion3R

MELEE: A Tool to Detect Ransomware Infections in MySQL Instances


Attackers are abusing MySQL instances to conduct nefarious operations on the Internet. Cybercriminals target exposed MySQL instances and trigger infections at scale to exfiltrate data, destroy data, and extort money via ransom. For example, one of the most significant threats MySQL deployments face is ransomware. We have authored a tool named "MELEE" to detect potential infections in MySQL instances. The tool allows security researchers, penetration testers, and threat intelligence experts to detect compromised and infected MySQL instances running malicious code. The tool also enables you to conduct efficient research in the field of malware targeting cloud databases. In this release of the tool, the following modules are supported:

  • MySQL instance information gathering and reconnaissance
  • MySQL instance exposure to the Internet
  • MySQL access permissions for assessing remote command execution
  • MySQL user enumeration
  • MySQL ransomware infections
  • Basic assessment checks for detecting ransomware infections
  • Extensive assessment checks for extracting insidious details about potential ransomware infections
  • MySQL ransomware detection and scanning for both unauthenticated and authenticated deployments

Tool Usage

Researched and developed by Aditya K Sood and Rohit Bansal.

Douglas-042 - PowerShell Script To Help Speed Up Threat Hunting Incident Response Processes

By: Zion3R


DOUGLAS-042 is a PowerShell script designed to expedite triage and the collection of crucial evidence from both forensic artifacts and volatile data. Its fundamental mission is to help pinpoint potential security breaches within Windows ecosystems. It prioritizes and aggregates data efficiently so that no vital piece of information eludes scrutiny when investigating a possible compromise. The collected data is saved to a text file named after the host system's hostname, a convention that eases the transition into subsequent stages of the forensic process.


Content Queries

  • General information
  • Accountand group information
  • Network
  • Process Information
  • OS Build and HOTFIXE
  • Persistence
  • HARDWARE Information
  • Encryption information
  • FIREWALL INFORMATION
  • Services
  • History
  • SMB Queries
  • Remoting queries
  • REGISTRY Analysis
  • LOG queries
  • Instllation of Software
  • User activity

Advanced Queries

  • Prefetch file information
  • DLL List
  • WMI filters and consumers
  • Named pipes

Usage

Run the script from a PowerShell console with administrative privileges; the results will be saved in the current directory as a .txt file.

PS > ./douglas.ps1

Advanced usage

PS > ./douglas.ps1 -a






Crawlector - Threat Hunting Framework Designed For Scanning Websites For Malicious Objects

By: Zion3R


Crawlector (the name Crawlector is a combination of Crawler & Detector) is a threat hunting framework designed for scanning websites for malicious objects.

Note-1: The framework was first presented at the No Hat conference in Bergamo, Italy on October 22nd, 2022 (Slides, YouTube Recording). Also, it was presented for the second time at the AVAR conference, in Singapore, on December 2nd, 2022.

Note-2: The accompanying tool EKFiddle2Yara (is a tool that takes EKFiddle rules and converts them into Yara rules) mentioned in the talk, was also released at both conferences.


Features

  • Supports spidering websites for findings additional links for scanning (up to 2 levels only)
  • Integrates Yara as a backend engine for rule scanning
  • Supports online and offline scanning
  • Supports crawling for domains/sites digital certificate
  • Supports querying URLhaus for finding malicious URLs on the page
  • Supports hashing the page's content with TLSH (Trend Micro Locality Sensitive Hash), and other standard cryptographic hash functions such as md5, sha1, sha256, and ripemd128, among others
    • TLSH won't return a value if the page size is less than 50 bytes or not "enough amount of randomness" is present in the data
  • Supports querying the rating and category of every URL
  • Supports expanding on a given site, by attempting to find all available TLDs and/or subdomains for the same domain
    • This feature uses the Omnisint Labs API (this site is down as of March 10, 2023) and RapidAPI APIs
    • TLD expansion implementation is native
    • This feature along with the rating and categorization, provides the capability to find scam/phishing/malicious domains for the original domain
  • Supports domain resolution (IPv4 and IPv6)
  • Saves scanned websites pages for later scanning (can be saved as a zip compressed)
  • The entirety of the framework’s settings is controlled via a single customizable configuration file
  • All scanning sessions are saved into a well-structured CSV file with a plethora of information about the website being scanned, in addition to information about the Yara rules that have triggered
  • All HTTP(S) communications are proxy-aware
  • One executable
  • Written in C++

URLHaus Scanning & API Integration

This checks every page being scanned for known-malicious URLs. The framework can query the list of malicious URLs from the URLHaus server (configuration: url_list_web) or from a file on disk (configuration: url_list_file); if the latter is specified, it takes precedence over the former.

It works by searching the content of every page against all URL entries in url_list_web or url_list_file, checking for all occurrences. Additionally, upon a match, and if the configuration option check_url_api is set to true, Crawlector will send a POST request to the API URL set in the url_api configuration option, which returns a JSON object with extra information about a matching URL. Such information includes urlh_status (ex., online, offline, unknown), urlh_threat (ex., malware_download), urlh_tags (ex., elf, Mozi), and urlh_reference (ex., https://urlhaus.abuse.ch/url/1116455/). This information will be included in the log file cl_mlog_<current_date><current_time><(pm|am)>.csv (see below), only if check_url_api is set to true. Otherwise, the log file will include the columns urlh_url (list of matching malicious URLs) and urlh_hit (number of occurrences for every matching malicious URL), conditional on whether check_url is set to true.

The URLHaus feature can be disabled in its entirety by setting the configuration option check_url to false.

It is important to note that this feature could slow scanning, considering the huge number of malicious URLs that need to be checked (~130 million entries at the time of this writing), and the time it takes to get extra information from the URLHaus server (if the option check_url_api is set to true).
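The extra-information lookup corresponds to URLHaus's public URL-information endpoint. A minimal sketch of such a query in Python (the endpoint is URLHaus's documented v1 API; treat the exact response field names as an assumption to verify against the current API docs):

import requests

def urlhaus_lookup(url):
    # Query URLHaus for details about a matched URL.
    resp = requests.post(
        "https://urlhaus-api.abuse.ch/v1/url/",
        data={"url": url},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # includes status/threat/tags/reference fields

info = urlhaus_lookup("http://example.com/payload.elf")  # hypothetical URL
print(info.get("url_status"), info.get("threat"), info.get("tags"))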

Files and Folders Structures

  1. \cl_sites
    • this is where the list of sites to be visited or crawled is stored.
    • supports multiple files and directories.
  2. \crawled
    • where all crawled/spidered URLs are saved to a text file.
  3. \certs
    • where all domains/sites digital certificates are stored (in .der format).
  4. \results
    • where visited websites are saved.
  5. \pg_cache
    • program cache for sites that are not part of the spider functionality.
  6. \cl_cache
    • crawler cache for sites that are part of the spider functionality.
  7. \yara_rules
    • this is where all Yara rules are stored. All rules that exist in this directory will be loaded by the engine, parsed, validated, and evaluated before execution.
  8. cl_config.ini
    • this file contains all the configuration parameters that can be adjusted to influence the behavior of the framework.
  9. cl_mlog_<current_date><current_time><(pm|am)>.csv
    • log file that contains a plethora of information about visited websites
    • date, time, the status of Yara scanning, list of fired Yara rules with the offsets and lengths of each of the matches, id, URL, HTTP status code, connection status, HTTP headers, page size, the path to a saved page on disk, and other columns related to URLHaus results.
    • file name is unique per session.
  10. cl_offl_mlog_<current_date><current_time><(pm|am)>.csv
    • log file that contains information about files scanned offline.
    • list of fired Yara rules with the offsets and lengths of the matches, and path to a saved page on disk.
    • file name is unique per session.
  11. cl_certs_<current_date><current_time><(pm|am)>.csv
    • log file that contains a plethora of information about found digital certificates
  12. \expanded\exp_subdomain_<pm|am>.txt
    • contains discovered subdomains (part of the [site] section)
  13. \expanded\exp_tld_<pm|am>.txt
    • contains discovered domains (part of the [site] section)

Configuration File (cl_config.ini)

It is very important that you familiarize yourself with the configuration file cl_config.ini before running any session. All of the sections and parameters are documented in the configuration file itself.

The Yara offline scanning feature is a standalone option; if enabled, Crawlector will execute only this feature, irrespective of other enabled features. The same is true for the crawling for domains/sites digital certificate feature. Either way, it is recommended that you disable all unused features in the configuration file.

  • Depending on the configuration settings (log_to_file or log_to_cons), if a Yara rule references only a module's attributes (ex., PE, ELF, Hash, etc...), then Crawlector will display only the rule's name upon a match, excluding offset and length data.

Sites Format Pattern

To visit/scan a website, the list of URLs must be stored in text files, in the directory β€œcl_sites”.

Crawlector accepts three types of URLs:

  1. Type 1: one URL per line
    • Crawlector will assign a unique name to every URL, derived from the URL hostname
  2. Type 2: one URL per line, with a unique name [a-zA-Z0-9_-]{1,128} = <url>
  3. Type 3: for the spider functionality, a unique format is used. One URL per line is as follows:

<id>[depth:<0|1>-><\d+>,total:<\d+>,sleep:<\d+>] = <url>

For example,

mfmokbel[depth:1->3,total:10,sleep:0] = https://www.mfmokbel.com

which is equivalent to: mfmokbel[d:1->3,t:10,s:0] = https://www.mfmokbel.com

where, <id> := [a-zA-Z0-9_-]{1,128}

depth, total and sleep, can also be replaced with their shortened versions d, t and s, respectively.

  • depth: the spider supports going two levels deep for finding additional URLs (this is a design decision).
  • A value of 0 indicates a depth of level 1, with the value that comes after the β€œ->” ignored.
  • A depth of level-1 is controlled by the total parameter. So, first, the spider tries to find that many additional URLs off of the specified URL.
  • The value after the β€œ->” represents the maximum number of URLs to spider for each of the URLs found (as per the total parameter value).
  • A value of 1, indicates a depth of level 2, with the value that comes after the β€œ->” representing the maximum number of URLs to find, for every URL found per the total parameter. For clarification, and as shown in the example above, first, the spider will look for 10 URLs (as specified in the total parameter), and then, each of those found URLs will be spidered up to a max of 3 URLs; therefore, and in the best-case scenario, we would end up with 40 (10 + (10*3)) URLs.
  • The sleep parameter takes an integer value representing the number of milliseconds to sleep between every HTTP request.

Note 1: Type 3 URL could be turned into type 1 URL by setting the configuration parameter live_crawler to false, in the configuration file, in the spider section.

Note 2: Empty lines and lines that start with β€œ;” or β€œ//” are ignored.
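A line in this format is straightforward to parse with a single regular expression; a sketch in Python, with the shortened d/t/s aliases included (an illustration only, not Crawlector's own parser, which is written in C++):

import re

TYPE3 = re.compile(
    r"^(?P<id>[a-zA-Z0-9_-]{1,128})"
    r"\[(?:depth|d):(?P<depth>[01])->(?P<per_url>\d+),"
    r"(?:total|t):(?P<total>\d+),"
    r"(?:sleep|s):(?P<sleep>\d+)\]\s*=\s*(?P<url>\S+)$"
)

m = TYPE3.match("mfmokbel[depth:1->3,total:10,sleep:0] = https://www.mfmokbel.com")
if m:
    print(m.groupdict())
    # {'id': 'mfmokbel', 'depth': '1', 'per_url': '3', 'total': '10',
    #  'sleep': '0', 'url': 'https://www.mfmokbel.com'}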

The Spider Functionality

The spider functionality is what gives Crawlector the capability to find additional links on the targeted page. The spider supports the following features:

  • The domain has to be of Type 3, for the Spider functionality to work
  • You may specify a list of wildcarded patterns (pipe delimited) to prevent spidering matching urls via the exclude_url config. option. For example, *.zip|*.exe|*.rar|*.zip|*.7z|*.pdf|.*bat|*.db
  • You may specify a list of wildcarded patterns (pipe delimited) to spider only urls that match the pattern via the include_url config. option. For example, */checkout/*|*/products/*
  • You may exclude HTTPS urls via the config. option exclude_https
  • You may account for outbound/external links as well, for the main page only, via the config. option add_ext_links. This feature honours the exclude_url and include_url config. option.
  • You may account for outbound/external links of the main page only, excluding all other urls, via the config. option ext_links_only. This feature honours the exclude_url and include_url config. option.

Site Ranking Functionality

  • This is for checking the ranking of the website
  • You give it a file with a list of websites, with their ranking, in a csv file format
  • Services that provide lists of websites ranking include, Alexa top-1m (discontinued as of May 2022), Cisco Umbrella, Majestic, Quantcast, Farsight and Tranco, among others
  • CSV file format (2 columns only): first column holds the ranking, and the second column holds the domain name
  • If a cell to contain quoted data, it'll be automatically dequoted
  • Line breaks aren't allowed in quoted text
  • Leading and trailing spaces are trimmed from cells read
  • Empty and comment lines are skipped
  • The section site_ranking in the configuration file provides some options to alter how the CSV file is to be read
  • The performance of this query is dependent on the number of records in the CSV file
  • Crawlector compares every entry in the CSV file against the domain being investigated, and not the other way around
  • Only the registered/pay-level domain is compared

Finding TLDs and Subdomains - [site] Section

  • The site section provides the capability to expand on a given site, by attempting to find all available top-level domains (TLDs) and/or subdomains for the same domain. If found, new tlds/subdomains will be checked like any other domain
  • This feature uses the Omnisint Labs (https://omnisint.io/) and RapidAPI APIs
  • Omnisint Labs API returns subdomains and tlds, whereas RapidAPI returns only subdomains (the Omnisint Labs API is down as of March 10, 2023, however, the implementation is still available in case the site is back up)
  • For RapidAPI, you need a valid "Domains records" API key that you can request from RapidAPI, and plug it into the key rapid_api_key in the configuration file
  • With find_tlds enabled, in addition to Omnisint Labs API tlds results, the framework attempts to find other active/registered domains by going through every tld entry, either, in the tlds_file or tlds_url
  • If tlds_url is set, it should point to a url that hosts tlds, each one on a new line (lines that start with either of the characters ';', '#' or '//' are ignored)
  • tlds_file, holds the filename that contains the list of tlds (same as for tlds_url; only the tld is present, excluding the '.', for ex., "com", "org")
  • If tlds_file is set, it takes precedence over tlds_url
  • tld_dl_time_out, this is for setting the maximum timeout for the dnslookup function when attempting to check if the domain in question resolves or not
  • tld_use_connect, this option enables the functionality to connect to the domain in question over a list of ports, defined in the option tlds_connect_ports
  • The option tlds_connect_ports accepts a list of ports, comma separated, or a list of ranges, such as 25-40,90-100,80,443,8443 (range start and end are inclusive)
    • tld_con_time_out, this is for setting the maximum timeout for the connect function
  • tld_con_use_ssl, enable/disable the use of ssl when attempting to connect to the domain
  • If save_to_file_subd is set to true, discovered subdomains will be saved to "\expanded\exp_subdomain_<pm|am>.txt"
  • If save_to_file_tld is set to true, discovered domains will be saved to "\expanded\exp_tld_<pm|am>.txt"
  • If exit_here is set to true, then Crawlector bails out after executing this [site] function, irrespective of other enabled options. It means found sites won't be crawled/spidered

Design Considerations

  • A URL page is retrieved by sending a GET request to the server, reading the server response body, and passing it to Yara engine for detection.
  • Some of the GET request attributes are defined in the [default] section in the configuration file, including, the User-Agent and Referer headers, and connection timeout, among other options.
  • Although Crawlector logs a session's data to a CSV file, converting it to an SQL file is recommended for better performance, manipulation and retrieval of the data. This becomes evident when you’re crawling thousands of domains.
  • Repeated domains/urls in the cl_sites are allowed.

Limitations

  • Single threaded
  • Static detection (no dynamic evaluation of a given page's content)
  • No headless browser support, yet!

Third-party libraries used

Contributing

Open for pull requests and issues. Comments and suggestions are greatly appreciated.

Author

Mohamad Mokbel (@MFMokbel)



DoSinator - A Powerful Denial Of Service (DoS) Testing Tool

By: Zion3R


DoSinator is a versatile Denial of Service (DoS) testing tool developed in Python. It empowers security professionals and researchers to simulate various types of DoS attacks, allowing them to assess the resilience of networks, systems, and applications against potential cyber threats.


Features

  • Multiple Attack Modes: DoSinator supports SYN Flood, UDP Flood, and ICMP Flood attack modes, allowing you to simulate various types of DoS attacks.
  • Customizable Parameters: Adjust the packet size, attack rate, and duration to fine-tune the intensity and duration of the attack.
  • IP Spoofing: Enable IP spoofing to mask the source IP address and enhance anonymity during the attack.
  • Multithreaded Packet Sending: Utilize multiple threads for simultaneous packet sending, maximizing the attack speed and efficiency.

Requirements

  • Python 3.x
  • scapy
  • argparse

Installation

  1. Clone the repository:

    git clone https://github.com/HalilDeniz/DoSinator.git
  2. Navigate to the project directory:

    cd DoSinator
  3. Install the required dependencies:

    pip install -r requirements.txt

Usage

usage: dos_tool.py [-h] -t TARGET -p PORT [-np NUM_PACKETS] [-ps PACKET_SIZE]
[-ar ATTACK_RATE] [-d DURATION] [-am {syn,udp,icmp,http,dns}]
[-sp SPOOF_IP] [--data DATA]

optional arguments:
-h, --help Show this help message and exit.
-t TARGET, --target TARGET
Target IP address.
-p PORT, --port PORT Target port number.
-np NUM_PACKETS, --num_packets NUM_PACKETS
Number of packets to send (default: 500).
-ps PACKET_SIZE, --packet_size PACKET_SIZE
Packet size in bytes (default: 64).
-ar ATTACK_RATE, --attack_rate ATTACK_RATE
Attack rate in packets per second (default: 10).
-d DURATION, --duration DURATION
Duration of the attack in seconds.
-am {syn,udp,icmp,http,dns}, --attack-mode {syn,udp,icmp,http,dns}
Attack mode (default: syn).
-sp SPOOF_IP, --spoof-ip SPOOF_IP
Spoof IP address.
--data DATA Custom data string to send.
  • target_ip: IP address of the target system.
  • target_port: Port number of the target service.
  • num_packets: Number of packets to send (default: 500).
  • packet_size: Size of each packet in bytes (default: 64).
  • attack_rate: Attack rate in packets/second (default: 10).
  • duration: Duration of the attack in seconds.
  • attack_mode: Attack mode: syn, udp, icmp, http (default: syn).
  • spoof_ip: Spoof IP address (default: None).
  • data: Custom data string to send.

Disclaimer

The usage of the Dosinator tool for attacking targets without prior mutual consent is illegal. It is the end user's responsibility to obey all applicable local, state, and federal laws. The author assumes no liability and is not responsible for any misuse or damage caused by this program.

By using Dosinator, you agree to use this tool for educational and ethical purposes only. The author is not responsible for any actions or consequences resulting from misuse of this tool.

Please ensure that you have the necessary permissions to conduct any form of testing on a target network. Use this tool at your own risk.

Contributing

Contributions are welcome! If you find any issues or have suggestions for improvements, feel free to open an issue or submit a pull request.

Contact

If you have any questions, comments, or suggestions about Dosinator, please feel free to contact me:



Associated-Threat-Analyzer - Detects Malicious IPv4 Addresses And Domain Names Associated With Your Web Application Using Local Malicious Domain And IPv4 Lists

By: Zion3R


Associated-Threat-Analyzer detects malicious IPv4 addresses and domain names associated with your web application using local malicious domain and IPv4 lists.


Installation

From Git

git clone https://github.com/OsmanKandemir/associated-threat-analyzer.git
cd associated-threat-analyzer && pip3 install -r requirements.txt
python3 analyzer.py -d target-web.com

From Dockerfile

You can run this application in a container after building the image from the Dockerfile.

Warning: If you run this as a Docker container, it is recommended to supply your own malicious IP and domain lists, because the default lists shipped in the Docker image may not be kept up to date by the maintainer.
docker build -t osmankandemir/threatanalyzer .
docker run osmankandemir/threatanalyzer -d target-web.com

From DockerHub

docker pull osmankandemir/threatanalyzer
docker run osmankandemir/threatanalyzer -d target-web.com

Usage

-d DOMAIN , --domain DOMAIN Input Target. --domain target-web1.com
-t DOMAINSFILE, --DomainsFile Malicious Domains List to Compare. -t SampleMaliciousDomains.txt
-i IPSFILE, --IPsFile Malicious IPs List to Compare. -i SampleMaliciousIPs.txt
-o JSON, --json JSON JSON output. --json
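Conceptually, the comparison the tool performs can be reduced to resolving the target and intersecting the results with the local lists. A simplified sketch (not the project's actual code), assuming plain-text list files with one entry per line:

import socket

def load_list(path):
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def analyze(domain, ip_list_path, domain_list_path):
    bad_ips = load_list(ip_list_path)
    bad_domains = load_list(domain_list_path)
    # Collect every address the domain currently resolves to (IPv4/IPv6).
    resolved = {info[4][0] for info in socket.getaddrinfo(domain, None)}
    return {
        "domain_flagged": domain in bad_domains,
        "malicious_ips": sorted(resolved & bad_ips),
    }

print(analyze("target-web.com", "SampleMaliciousIPs.txt", "SampleMaliciousDomains.txt"))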

DONE

  • First-level depth scan your domain address.

TODO list

  • Third-level or the more depth static files scanning for target web application.
Other linked github project. You can take a look.
Finds related domains and IPv4 addresses to do threat intelligence after Indicator-Intelligence v1.1.1 collects static files

https://github.com/OsmanKandemir/indicator-intelligence

Default Malicious IPs and Domains Sources

https://github.com/stamparm/blackbook

https://github.com/stamparm/ipsum

Development and Contribution

See; CONTRIBUTING.md



HEDnsExtractor - Raw Html Extractor From Hurricane Electric Portal

By: Zion3R

HEDnsExtractor

Raw html extractor from Hurricane Electric portal

Features

  • Automatically identify IPAddr ou Networks through command line parameter or stdin
  • Extract networks based on IPAddr.
  • Extract domains from networks.

Installation

go install -v github.com/HuntDownProject/hednsextractor/cmd/hednsextractor@latest

Usage

Run hednsextractor -h for the full list of options.
Running

Getting the IP Addresses used for hackerone.com, and enumerating only the networks.

nslookup hackerone.com | awk '/Address: / {print $2}' | hednsextractor -silent -only-networks

[INF] [104.16.99.52] 104.16.0.0/12
[INF] [104.16.99.52] 104.16.96.0/20

Getting the IP addresses used for hackerone.com, and enumerating only the domains (using tail to show the last 10 results).

nslookup hackerone.com | awk '/Address: / {print $2}' | hednsextractor -silent -only-domains | tail -n 10

herllus.com
hezzy.store
hilariostore.com
hiperdrop.com
hippratas.online
hitsstory.com
hobbyshop.site
holyangelstore.com
holzfallerstore.fun
homedescontoo.com

Running with Virustotal

Edit the config file and add the Virustotal API Key

cat $HOME/.config/hednsextractor/config.yaml 
# hednsextractor config file
# generated by https://github.com/projectdiscovery/goflags

# show only domains
#only-domains: false

# show only networks
#only-networks: false

# show virustotal score
#vt: false

# minimum virustotal score to show
#vt-score: 0

# ip address or network to query
#target:

# show silent output
#silent: false

# show verbose output
#verbose: false

# virustotal api key
vt-api-key: Your API Key goes here

Then, run hednsextractor with the -vt parameter.

nslookup hackerone.com | awk '/Address: / {print $2}' | hednsextractor -only-domains -vt             

And the output will be as below

[hednsextractor ASCII-art banner]

[INF] Current hednsextractor version v1.0.0
[INF] [104.16.0.0/12] domain: ohst.ltd VT Score: 0
[INF] [104.16.0.0/12] domain: jxcraft.net VT Score: 0
[INF] [104.16.0.0/12] domain: teatimegm.com VT Score: 2
[INF] [104.16.0.0/12] domain: debugcheat.com VT Score: 0


SOC-Multitool - A Powerful And User-Friendly Browser Extension That Streamlines Investigations For Security Professionals

By: Zion3R


Introducing SOC Multi-tool, a free and open-source browser extension that makes investigations faster and more efficient. Now available on the Chrome Web Store and compatible with all Chromium-based browsers such as Microsoft Edge, Chrome, Brave, and Opera.


Streamline your investigations

SOC Multi-tool eliminates the need for constant copying and pasting during investigations. Simply highlight the text you want to investigate, right-click, and navigate to the type of data highlighted. The extension will then open new tabs with the results of your investigation.

Modern and feature-rich

The SOC Multi-tool is a modernized multi-tool built from the ground up, with a range of features and capabilities. Some of the key features include:

  • IP Reputation Lookup using VirusTotal & AbuseIPDB
  • IP Info Lookup using Tor relay checker & WHOIS
  • Hash Reputation Lookup using VirusTotal
  • Domain Reputation Lookup using VirusTotal & AbuseIPDB
  • Domain Info Lookup using Alienvault
  • Living off the land binaries Lookup using the LOLBas project
  • Decoding of Base64 & HEX using CyberChef
  • File Extension & Filename Lookup using fileinfo.com & File.net
  • MAC Address manufacturer Lookup using maclookup.com
  • Parsing of UserAgent using user-agents.net
  • Microsoft Error code Lookup using Microsoft's DB
  • Event ID Lookup (Windows, Sharepoint, SQL Server, Exchange, and Sysmon) using ultimatewindowssecurity.com
  • Blockchain Address Lookup using blockchain.com
  • CVE Info using cve.mitre.org

Easy to install

You can easily install the extension by downloading the release from the Chrome Web Store!
If you wish to make edits you can download from the releases page, extract the folder and make your changes.
To load your edited extension turn on developer mode in your browser's extensions settings, click "Load unpacked" and select the extracted folder!


SOC Multi-tool is a community-driven project and the developer encourages users to contribute and share better resources.



KoodousFinder - A Simple Tool That Allows Users To Search For And Analyze Android Apps For Potential Security Threats And Vulnerabilities

By: Zion3R


A simple tool that allows users to search for and analyze Android apps for potential security threats and vulnerabilities.


Account and API Key

Create a Koodous account and get your API key at https://koodous.com/settings/developers.

Install

$ pip install koodousfinder

Arguments

Param            Description
-h, --help       Show this help message and exit
--package-name   General search for APKs
--app-name       Name of the app to search for

Examples

koodous.py --package-name "app: Brata AND package: com.brata"
koodous.py --package-name "package: com.google.android.videos AND trusted: true"
koodous.py --package-name "com.metasploit"
python3 koodous.py --app-name "WhatsApp MOD"



Modifiers for advanced search

Attribute Modifier Description
Hash hash: Performs the search depending on the automatically inserted hash. The admitted hashes are sha1, sha256 and md5.
App name app: Searches for the specified app name. If it is a compound name, it can be searched enclosed in quotes, for example: app: "Whatsapp premium".
Package name. package: Searches the package name to see if it contains the indicated string, for example: package: com.whatsapp.
Name of the developer or company. developer: Searches whether the company or developer field includes the indicated string, for example: developer: "WhatsApp Inc.".
Certificate certificate: Searches the apps by their certificate. For example: cert: 60BBF1896747E313B240EE2A54679BB0CE4A5023 or certificate: 38A0F7D505FE18FEC64FBF343ECAAAF310DBD799.

More information: https://docs.koodous.com/apks.html.
TODO

  • Discord Integration
  • Rulesets view


Wafaray - Enhance Your Malware Detection With WAF + YARA (WAFARAY)

By: Zion3R

WAFARAY is a LAB deployment based on Debian 11.3.0 (stable) x64, built from two main ingredients, WAF + YARA, to detect malicious files (e.g. webshells, viruses, malware, binaries) typically uploaded through web functions (file uploads).


Purpose

In essence, the main idea is to use WAF + YARA (YARA right-to-left = ARAY) to detect malicious files at the WAF level, before the WAF forwards them to the backend, e.g. files uploaded through web functions; see: https://owasp.org/www-community/vulnerabilities/Unrestricted_File_Upload

When a web page allows uploading files, most of the WAFs are not inspecting files before sending them to the backend. Implementing WAF + YARA could provide malware detection before WAF forwards the files to the backend.

Do malware detection through WAF?

Yes, one solution is ModSecurity + ClamAV. Most guides invoke ClamAV as a process rather than as a daemon, in which case analysing a file can take more than 50 seconds per file. See this resource: https://kifarunix.com/intercept-malicious-file-upload-with-modsecurity-and-clamav/

Do malware detection through WAF + YARA?

There is little public material on this approach (a few clues from Black Hat Asia 2019); please continue reading and see our quick LAB deployment below.

WAFARAY: how does it work?

Basically, it is a quick deployment (1) with pre-compiled and ready-to-use YARA rules invoked via a custom ModSecurity (WAF) rule; (2) this custom rule performs an inspection and detection of files that might contain malicious code, (3) typically via web functions (file uploads); if a file is suspicious, it is rejected with a 403 Forbidden code by ModSecurity.

βœ”οΈThe YaraCompile.py compiles all the yara rules. (Python3 code)
βœ”οΈThe test.conf is a virtual host that contains the mod security rules. (ModSecurity Code)
βœ”οΈModSecurity rules calls the modsec_yara.py in order to inspect the file that is trying to upload. (Python3 code)
βœ”οΈYara returns two options 1 (200 OK) or 0 (403 Forbidden)

Main Paths:

  • Yara Compiled rules: /YaraRules/Compiled
  • Yara Default rules: /YaraRules/rules
  • Yara Scripts: /YaraRules/YaraScripts
  • Apache vhosts: /etc/apache2/sites-enabled
  • Temporal Files: /temporal

Approach

  • Blueteamers: Rule enforcement, best alerting, malware detection on files uploaded through web functions.
  • Redteamers/pentesters: GreyBox scope , upload and bypass with a malicious file, rule enforcement.
  • Security Officers: Keep alerting, threat hunting.
  • SOC: Best monitoring about malicious files.
  • CERT: Malware Analysis, Determine new IOC.

Building Detection Lab

The Proof of Concept is based on the Debian 11.3.0 (stable) x64 OS, OWASP CRS v3.3.2, and Yara 4.0.5. You will find the automatic installation script in wafaray_install.sh, and an optional manual installation guide in manual_instructions.txt. A PHP page has also been created as a "mock" to observe the interaction and detection of malicious files using WAF + YARA.

Installation (recommended) with shell scripts

βœ”οΈStep 2: Deploy using VMware or VirtualBox
βœ”οΈStep 3: Once installed, please follow the instructions below:
alex@waf-labs:~$ su root 
root@waf-labs:/home/alex#

# Remember to replace YOUR_USER with your username (e.g. waf)
root@waf-labs:/home/alex# sed -i 's/^\(# User privi.*\)/\1\nalex ALL=(ALL) NOPASSWD:ALL/g' /etc/sudoers
root@waf-labs:/home/alex# exit
alex@waf-labs:~$ sudo sed -i 's/^\(deb cdrom.*\)/#\1/g' /etc/apt/sources.list
alex@waf-labs:~$ sudo sed -i 's/^# \(deb\-src http.*\)/ \1/g' /etc/apt/sources.list
alex@waf-labs:~$ sudo sed -i 's/^# \(deb http.*\)/ \1/g' /etc/apt/sources.list
alex@waf-labs:~$ echo -ne "\n\ndeb http://deb.debian.org/debian/ bullseye main\ndeb-src http://deb.debian.org/debian/ bullseye main\n" | sudo tee -a /etc/apt/sources.list
alex@waf-labs:~$ sudo apt-get update
alex@waf-labs:~$ sudo apt-get install sudo -y
alex@waf-labs:~$ sudo apt-get install git vim dos2unix net-tools -y
alex@waf-labs:~$ git clone https://github.com/alt3kx/wafaray
alex@waf-labs:~$ cd wafaray
alex@waf-labs:~$ dos2unix wafaray_install.sh
alex@waf-labs:~$ chmod +x wafaray_install.sh
alex@waf-labs:~$ sudo ./wafaray_install.sh >> log_install.log

# Test your LAB environment
alex@waf-labs:~$ firefox localhost:8080/upload.php

Yara Rules

Once the Yara rules have been downloaded and compiled, it is similar to deploying ModSecurity: you need to customize which rules you want to apply. The following log is an example of the Web Application Firewall + Yara detecting a malicious file; in this case, eicar was detected.

Message: Access denied with code 403 (phase 2). File "/temporal/20220812-184146-YvbXKilOKdNkDfySME10ywAAAAA-file-Wx1hQA" rejected by 
the approver script "/YaraRules/YaraScripts/modsec_yara.py": 0 SUSPECTED [YaraSignature: eicar]
[file "/etc/apache2/sites-enabled/test.conf"] [line "56"] [id "500002"]
[msg "Suspected File Upload:eicar.com.txt -> /temporal/20220812-184146-YvbXKilOKdNkDfySME10ywAAAAA-file-Wx1hQA - URI: /upload.php"]

Testing WAFARAY... voilΓ ...

Stop / Start ModSecurity

$ sudo service apache2 stop
$ sudo service apache2 start

Apache Logs

$ cd /var/log
$ sudo tail -f apache2/test_access.log apache2/test_audit.log apache2/test_error.log

Demos

Be careful with your tests. The following demos were run on isolated virtual machines.

Demo 1 - EICAR

A malicious file is uploaded, and the ModSecurity rules plus Yara deny the upload to the backend if the file matches at least one Yara rule. (Example of malware: https://secure.eicar.org/eicar.com.txt) DO NOT EXECUTE THE FILE.

Demo 2 - WebShell.php

For this demo, we disabled rule 933110 - PHP Inject Attack in order to validate the Yara rules. A malicious file is uploaded, and the ModSecurity rules plus Yara deny the upload to the backend if the file matches at least one Yara rule. (Example of PHP webshell: https://github.com/drag0s/php-webshell) DO NOT EXECUTE THE FILE.

Demo 3 - Malware Bazaar (RecordBreaker) Published: 2022-08-13

A malicious file is uploaded, and the ModSecurity rules plus Yara deny the upload to the backend if the file matches at least one Yara rule. (Example from Malware Bazaar (RecordBreaker): https://bazaar.abuse.ch/sample/94ffc1624939c5eaa4ed32d19f82c369333b45afbbd9d053fa82fe8f05d91ac2/) DO NOT EXECUTE THE FILE.

YARA Rules sources

If you want to download more Yara rules, see the following repositories:

References

Roadmap until next release

  • Malware Hash Database (MLDBM). The Database stores the MD5 or SHA1 that files were detected as suspicious.
  • To be tested CRS Modsecurity v.3.3.3 new rules
  • ModSecurity rules improvement to malware detection with Database.
  • To be created blacklist and whitelist related to MD5 or SHA1.
  • To be tested, run in background if the Yara analysis takes more than 3 seconds.
  • To be tested, new payloads, example: Powershell Obfuscasted (WebShells)
  • Remarks for live enviroments. (WAF AWS, WAF GCP, ...)

Authors

Alex Hernandez aka (@_alt3kx_)
Jesus Huerta aka @mindhack03d

Contributors

Israel Zeron Medina aka @spk085



Indicator-Intelligence - Finds Related Domains And IPv4 Addresses To Do Threat Intelligence After Indicator-Intelligence Collects Static Files

By: Zion3R


Finds related domains and IPv4 addresses to do threat intelligence after Indicator-Intelligence collects static files.


Done

  • Related domains, IPs collect

Installation

From Source Code

You can use virtualenv for package dependencies before installation.

git clone https://github.com/OsmanKandemir/indicator-intelligence.git
cd indicator-intelligence
python setup.py build
python setup.py install

From Pypi

The script is available on PyPI. To install with pip:

pip install indicatorintelligence

From Dockerfile

You can run this application in a container after building the image from the Dockerfile.

docker build -t indicator .
docker run indicator --domains target-web.com --json

From DockerHub

docker pull osmankandemir/indicator
docker run osmankandemir/indicator --domains target-web.com --json

From Poetry

pip install poetry
poetry install

Usage

-d DOMAINS [DOMAINS], --domains DOMAINS [DOMAINS] Input Targets. --domains target-web1.com target-web2.com
-p PROXY, --proxy PROXY Use HTTP proxy. --proxy 0.0.0.0:8080
-a AGENT, --agent AGENT Use agent. --agent 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'
-o JSON, --json JSON JSON output. --json

Function Usage

Development and Contribution

See; CONTRIBUTING.md

License

Copyright (c) 2023 Osman Kandemir
Licensed under the GPL-3.0 License.

Donations

If you like Indicator-Intelligence and would like to show support, you can use Buy Me A Coffee or the GitHub Sponsors feature.

Sponsor me : https://github.com/sponsors/OsmanKandemir




MacOSThreatTrack - Bash Tool Used For Proactive Detection Of Malicious Activity On macOS Systems


The tool is in beta testing, and it only gathers macOS system information at this time.

The code is poorly organized and requires significant improvements.

Description

Bash tool used for proactive detection of malicious activity on macOS systems.

I was inspired by Venator-Swift and decided to create a bash version of the tool.

One-liner command

curl https://raw.githubusercontent.com/ab2pentest/MacOSThreatTrack/main/MacOSThreatTrack.sh | bash

Gathered information

[+] System info
[+] Users list
[+] Environment variables
[+] Process list
[+] Active network connections
[+] SIP status
[+] GateKeeper status
[+] Zsh history
[+] Bash history
[+] Shell startup scripts
[+] PF rules
[+] Periodic scripts
[+] CronJobs list
[+] LaunchDaemons data
[+] Kernel extensions
[+] Installed applications
[+] Installation history
[+] Chrome extensions

Todo

  1. Saving output as JSON instead of printing out the result.

