KitPloit - PenTest Tools!

Drs-Malware-Scan - Perform File-Based Malware Scan On Your On-Prem Servers With AWS

By: Zion3R


Perform malware scan analysis of on-prem servers using AWS services

Challenges with on-premises malware detection

It can be difficult for security teams to continuously monitor all on-premises servers due to budget and resource constraints. Signature-based antivirus alone is insufficient, as modern malware uses various obfuscation techniques. Server admins may lack historical visibility into security events across all servers. Without centralized monitoring and alerting, determining which systems are compromised and which backups are safe to restore from during an incident is challenging. It is also onerous for server admins to set up and maintain additional security tools for advanced threat detection. A rapid mean time to detect and remediate infections is critical but difficult to achieve without the right automated solution.

Determining which backup image is safe to restore from during an incident without comprehensive threat intelligence is another hard problem. Even if backups are available, without knowing exactly when a system was compromised, it is risky to blindly restore from them. This increases the chance of reintroducing malware and losing even more valuable data and systems during incident response. There is a need for an automated solution that can pinpoint the timeline of infiltration and recommend safe backups for restoration.


How to use AWS services to address these challenges

The solution leverages AWS Elastic Disaster Recovery (AWS DRS), Amazon GuardDuty and AWS Security Hub to address the challenges of malware detection for on-premises servers.

This combination of services provides a cost-effective way to continuously monitor on-premises servers for malware without impacting performance. It also helps determine safe point-in-time recovery backups for restoration by identifying the timeline of compromise through centralized threat analytics.

  • AWS Elastic Disaster Recovery (AWS DRS) minimizes downtime and data loss with fast, reliable recovery of on-premises and cloud-based applications using affordable storage, minimal compute, and point-in-time recovery.

  • Amazon GuardDuty is a threat detection service that continuously monitors your AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation.

  • AWS Security Hub is a cloud security posture management (CSPM) service that performs security best practice checks, aggregates alerts, and enables automated remediation.

Architecture

Solution description

The Malware Scan solution assumes on-premises servers are already being replicated with AWS DRS, and that Amazon GuardDuty and AWS Security Hub are enabled. The CDK stack in this repository only deploys the boxes labelled DRS Malware Scan in the architecture diagram.

  1. AWS DRS is replicating source servers from the on-premises environment to AWS (or from any cloud provider for that matter). For further details about setting up AWS DRS please follow the Quick Start Guide.
  2. Amazon GuardDuty is already enabled.
  3. AWS Security Hub is already enabled.
  4. The Malware Scan solution is triggered by a Schedule Rule in Amazon EventBridge (with prefix DrsMalwareScanStack-ScheduleScanRule). You can adjust the scan frequency as needed (e.g., once a day, once a week).
  5. The Schedule Rule in Amazon EventBridge triggers the Submit Orders lambda function (with prefix DrsMalwareScanStack-SubmitOrders) which gathers the source servers to scan from the Source Servers DynamoDB table.
  6. Orders are placed on the SQS FIFO queue named Scan Orders (with prefix DrsMalwareScanStack-ScanOrdersfifo). The queue is used to serialize scan requests mapped to the same DRS instance, preventing a race condition.
  7. The Process Order lambda picks a malware scan order from the queue and enriches it, preparing the upcoming malware scan operation. For instance, it inserts the ID of the replicating DRS instance associated with the DRS source server provided in the order. The output of Process Order is a set of malware scan commands containing all the information needed to invoke a GuardDuty malware scan.
  8. Malware scan operations are tracked at the volume level using the DRSVolumeAnnotationsDDBTable, providing reporting capabilities.
  9. Malware scan commands are inserted in the Scan Commands SQS FIFO queue (with prefix DrsMalwareScanStack-ScanCommandsfifo) to increase resiliency.
  10. The Process Commands function submits queued scan commands at a maximum rate of 1 command per second to avoid API throttling, triggering the on-demand malware scan function provided by Amazon GuardDuty (see the sketch after this list).
  11. The execution of the on-demand Amazon GuardDuty Malware job can be monitored from the Amazon GuardDuty service.
  12. The outcome of the malware scan job is routed to Amazon CloudWatch Logs.
  13. The Subscription Filter lambda function receives the outcome of the scan and tracks the result using DynamoDB (step #14).
  14. The DRS Instance Annotations DynamoDB Table tracks the status of the malware scan job at the instance level.
  15. The CDK stack named ScanReportStack deploys the Scan Report lambda function (with prefix ScanReportStack-ScanReport) to populate the Amazon S3 bucket with prefix scanreportstack-scanreportbucket.
  16. AWS Security Hub aggregates and correlates findings from Amazon GuardDuty.
  17. The Security Hub finding event is caught by an EventBridge Rule (with prefix DrsMalwareScanStack-SecurityHubAnnotationsRule).
  18. The Security Hub Annotations lambda function (with prefix DrsMalwareScanStack-SecurityHubAnnotation) adds Notes (Annotations) to the finding with contextualized information about the affected source server. This additional information can be seen in the Notes section of the Security Hub finding.
  19. Follow-up activities will depend on the incident response process adopted. For example, based on the date of infection, AWS DRS can be used to perform a point-in-time recovery using a snapshot taken before the malware infection.
  20. In a Multi-Account scenario, this solution can be deployed directly on the AWS account hosting the AWS DRS solution. The Amazon GuardDuty findings will be automatically sent to the centralized Security Account.
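
For illustration only, the sketch below (not the repository's actual Lambda code) shows how a queue consumer along the lines of steps 9-10 could drain the Scan Commands queue and submit GuardDuty on-demand malware scans at roughly one call per second; the queue URL and the instance_arn field in the message body are assumptions.

    import json
    import time
    import boto3

    sqs = boto3.client("sqs")
    guardduty = boto3.client("guardduty")

    # Hypothetical queue URL; the real queue name carries the DrsMalwareScanStack-ScanCommandsfifo prefix
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/ScanCommands.fifo"

    def handler(event, context):
        while True:
            resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=2)
            messages = resp.get("Messages", [])
            if not messages:
                break
            for msg in messages:
                command = json.loads(msg["Body"])
                # "instance_arn" is an assumed field pointing at the replicating DRS instance
                scan = guardduty.start_malware_scan(ResourceArn=command["instance_arn"])
                print(f"Started scan {scan['ScanId']} for {command['instance_arn']}")
                sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
                time.sleep(1)  # stay at ~1 command per second to avoid API throttling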

Usage

Pre-requisites

  • An AWS Account.
  • Amazon Elastic Disaster Recovery (DRS) configured, with at least 1 server source in sync. If not, please check this documentation. The Replication Configuration must consider EBS encryption using Custom Managed Key (CMK) from AWS Key Management Service (AWS KMS). Amazon GuardDuty Malware Protection does not support default AWS managed key for EBS.
  • IAM Privileges to deploy the components of this solution.
  • Amazon GuardDuty enabled. If not, please check this documentation
  • Amazon Security Hub enabled. If not, please check this documentation

    Warning
    Currently, Amazon GuardDuty Malware Protection does not support EBS volumes encrypted with the default AWS managed key. If you want to use this solution to scan your on-prem (or other-cloud) servers replicated with DRS, you need to set up DRS replication with your own encryption key in KMS. If you are currently using the default key with your replicating servers, you can change the encryption settings to use your own KMS key in the DRS console.
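
If you prefer to script this change rather than use the DRS console, the snippet below is a minimal sketch using the AWS DRS API; the source server ID and KMS key ARN are placeholders.

    import boto3

    drs = boto3.client("drs")

    # Placeholders: use your own DRS source server ID and customer managed key ARN
    SOURCE_SERVER_ID = "s-1234567890abcdef0"
    CMK_ARN = "arn:aws:kms:us-east-1:123456789012:key/REPLACE-ME"

    # Switch replication from the default EBS key to a customer managed key
    drs.update_replication_configuration(
        sourceServerID=SOURCE_SERVER_ID,
        ebsEncryption="CUSTOM",
        ebsEncryptionKeyArn=CMK_ARN,
    )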

Deploy

  1. Create a Cloud9 environment with the Ubuntu image (at least t3.small for better performance) in your AWS account. Open your Cloud9 environment and clone the code in this repository. Note: Amazon Linux 2 ships Node.js v16, which is no longer supported since 2023-09-11.

    git clone https://github.com/aws-samples/drs-malware-scan
    cd drs-malware-scan
    sh check_loggroup.sh

  2. Deploy the CDK stack by running the following command in the Cloud9 terminal and confirm the deployment

    npm install
    cdk bootstrap
    cdk deploy --all

    Note
    The solution is made of 2 stacks:

    β€’ DrsMalwareScanStack: deploys all resources needed for the malware scanning feature. This stack is mandatory. To deploy only this stack, run cdk deploy DrsMalwareScanStack
    β€’ ScanReportStack: deploys the resources needed for reporting (AWS Lambda and Amazon S3). This stack is optional. To deploy only this stack, run cdk deploy ScanReportStack

    If you want to deploy both stacks, run cdk deploy --all

Troubleshooting

All Lambda functions route logs to Amazon CloudWatch. You can verify the execution of each function by inspecting the corresponding CloudWatch log groups; look for the /aws/lambda/DrsMalwareScanStack-* pattern.

The duration of the malware scan operation will depend on the number of servers/volumes to scan (and their size). When Amazon GuardDuty finds malware, it generates a SecurityHub finding: the solution intercepts this event and runs the $StackName-SecurityHubAnnotations lambda to augment the SecurityHub finding with a note containing the name(s) of the DRS source server(s) with malware.

The SQS FIFO queues can be monitored using the Messages available and Messages in flight metrics in the AWS SQS console.

The DRS Volume Annotations DynamoDB table keeps track of the status of each malware scan operation.

Amazon GuardDuty has documented reasons for skipping scan operations. For further information, please check Reasons for skipping resource during malware scan.

To analyze logs from Amazon GuardDuty malware scan operations, check the /aws/guardduty/malware-scan-events Amazon CloudWatch log group. The default log retention period for this log group is 90 days, after which the log events are deleted automatically.
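
To pull those scan events programmatically, the hedged snippet below filters the last 24 hours of the /aws/guardduty/malware-scan-events log group (the event layout may differ in your account).

    import time
    import boto3

    logs = boto3.client("logs")

    end = int(time.time() * 1000)
    start = end - 24 * 60 * 60 * 1000  # last 24 hours, in milliseconds

    paginator = logs.get_paginator("filter_log_events")
    for page in paginator.paginate(
        logGroupName="/aws/guardduty/malware-scan-events",
        startTime=start,
        endTime=end,
    ):
        for event in page["events"]:
            print(event["timestamp"], event["message"])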

Cleanup

  1. Run the following commands in your terminal:

    cdk destroy --all

  2. (Optional) Delete the CloudWatch log groups associated with Lambda Functions.

AWS Cost Estimation Analysis

For the purpose of this analysis, we have assumed a fictitious scenario as an example. The following cost estimates are based on services located in the North Virginia (us-east-1) region.

Estimated scenario:

  • 2 Source Servers to replicate (DR) (Total Storage: 100GB - 4 disks)
  • 3 TB Malware Scanned/Month
  • 30 days of EBS snapshot Retention period
  • Daily Malware scans
Monthly Cost: 171.22 USD
Total Cost for 12 Months: 2,054.74 USD

Service Breakdown:

Service Name | Description | Monthly Cost (USD)
AWS Elastic Disaster Recovery | 2 Source Servers / 1 Replication Server / 4 disks / 100GB / 30 days of EBS Snapshot Retention Period | 71.41
Amazon GuardDuty | 3 TB Malware Scanned/Month | 94.56
Amazon DynamoDB | 100MB, 1 Read/Second, 1 Write/Second | 3.65
AWS Security Hub | 1 Account / 100 Security Checks / 1000 Findings Ingested | 0.10
Amazon EventBridge | 1M custom events | 1.00
Amazon CloudWatch | 1GB ingested/month | 0.50
AWS Lambda | 5 ARM Lambda Functions - 128MB / 10secs | 0.00
Amazon SQS | 2 SQS FIFO queues | 0.00
Total | | 171.22

Note: The figures presented here are estimates based on the assumptions described above, derived from the AWS Pricing Calculator. For further details, please check this pricing calculator as a reference. You can adjust the service configuration in the referenced calculator to make your own estimate. This estimate does not include potential taxes or additional charges that might be applicable. Keep in mind that actual fees can vary based on usage and any additional services not covered in this analysis. For critical environments, it is advisable to include a Business Support Plan (not considered in this estimation).

Security

See CONTRIBUTING for more information.

Authors



C2-Cloud - The C2 Cloud Is A Robust Web-Based C2 Framework, Designed To Simplify The Life Of Penetration Testers

By: Zion3R


The C2 Cloud is a robust web-based C2 framework, designed to simplify the life of penetration testers. It allows easy access to compromised backdoors, just like accessing an EC2 instance in the AWS cloud. It can manage several simultaneous backdoor sessions with a user-friendly interface.

C2 Cloud is open source. Security analysts can confidently perform simulations, gaining valuable experience and contributing to the proactive defense posture of their organizations.

Reverse shells support:

  1. Reverse TCP
  2. Reverse HTTP
  3. Reverse HTTPS (configure it behind an LB)
  4. Telegram C2

Demo

C2 Cloud walkthrough: https://youtu.be/hrHT_RDcGj8
Ransomware simulation using C2 Cloud: https://youtu.be/LKaCDmLAyvM
Telegram C2: https://youtu.be/WLQtF4hbCKk

Key Features

πŸ”’ Anywhere Access: Reach the C2 Cloud from any location.
πŸ”„ Multiple Backdoor Sessions: Manage and support multiple sessions effortlessly.
πŸ–±οΈ One-Click Backdoor Access: Seamlessly navigate to backdoors with a simple click.
πŸ“œ Session History Maintenance: Track and retain complete command and response history for comprehensive analysis.

Tech Stack

πŸ› οΈ Flask: Serving web and API traffic, facilitating reverse HTTP(s) requests.
πŸ”— TCP Socket: Serving reverse TCP requests for enhanced functionality.
🌐 Nginx: Effortlessly routing traffic between web and backend systems.
πŸ“¨ Redis PubSub: Serving as a robust message broker for seamless communication (a brief sketch of the pattern follows this list).
πŸš€ Websockets: Delivering real-time updates to browser clients for enhanced user experience.
πŸ’Ύ Postgres DB: Ensuring persistent storage for seamless continuity.
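
As a rough illustration of the PubSub pattern only (the channel name and message format below are assumptions, not C2 Cloud's actual ones), a publisher/subscriber pair with redis-py could look like this:

    import redis

    r = redis.Redis(host="localhost", port=6379, decode_responses=True)

    # Subscriber side: listen for command output on a hypothetical per-session channel
    pubsub = r.pubsub()
    pubsub.subscribe("session:1:output")

    # Publisher side: push a command result onto the same channel
    r.publish("session:1:output", "uid=0(root) gid=0(root)")

    # Read one message (the first item on a new subscription is a subscribe confirmation)
    for message in pubsub.listen():
        if message["type"] == "message":
            print("received:", message["data"])
            break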

Architecture

Application setup

  • Management port: 9000
  • Reversse HTTP port: 8000
  • Reverse TCP port: 8888

  • Clone the repo

  • Optional: Update chait_id, bot_token in c2-telegram/config.yml
  • Execute docker-compose up -d to start the containers Note: The c2-api service will not start up until the database is initialized. If you receive 500 errors, please try after some time.

Credits

Inspired by Villain, a CLI-based C2 developed by Panagiotis Chartas.

License

Distributed under the MIT License. See LICENSE for more information.

Contact



RansomwareSim - A Simulated Ransomware

By: Zion3R

Overview

RansomwareSim is a simulated ransomware application developed for educational and training purposes. It is designed to demonstrate how ransomware encrypts files on a system and communicates with a command-and-control server. This tool is strictly for educational use and should not be used for malicious purposes.

Features

  • Encrypts specified file types within a target directory.
  • Changes the desktop wallpaper (Windows only).
  • Creates&Delete a README file on the desktop with a simulated ransom note.
  • Simulates communication with a command-and-control server to send system data and receive a decryption key.
  • Decrypts files after receiving the correct key.

Usage

Important: This tool should only be used in controlled environments where all participants have given consent. Do not use this tool on any system without explicit permission. For more information, read SECURE.

Requirements

  • Python 3.x
  • cryptography
  • colorama

Installation

  1. Clone the repository:

    git clone https://github.com/HalilDeniz/RansomwareSim.git
  2. Navigate to the project directory:

    cd RansomwareSim
  3. Install the required dependencies:

    pip install -r requirements.txt

πŸ“– My Book

Running the Control Server

  1. Open controlpanel.py.
  2. Start the server by running controlpanel.py.
  3. The server will listen for connections from RansomwareSim and the Decoder.

Running the Simulator

  1. Navigate to the directory containing RansomwareSim.
  2. Modify the main function in encoder.py to specify the target directory and other parameters.
  3. Run encoder.py to start the encryption process.
  4. Follow the instructions displayed on the console.

Running the Decoder

  1. Run decoder.py after the files have been encrypted.
  2. Follow the prompts to input the decryption key.

Disclaimer

RansomwareSim is developed for educational purposes only. The creators of RansomwareSim are not responsible for any misuse of this tool. This tool should not be used in any unauthorized or illegal manner. Always ensure ethical and legal use of this tool.

Contributing

Contributions, suggestions, and feedback are welcome. Please create an issue or pull request for any contributions.

  1. Fork the repository.
  2. Create a new branch for your feature or bug fix.
  3. Make your changes and commit them.
  4. Push your changes to your forked repository.
  5. Open a pull request in the main repository.

Contact

For any inquiries or further information, you can reach me through the following channels:



LATMA - Lateral Movement Analyzer Tool


Lateral movement analyzer (LATMA) collects authentication logs from the domain and searches for potential lateral movement attacks and suspicious activity. The tool visualizes the findings with diagrams depicting the lateral movement patterns. The tool contains two modules: one that collects the logs and one that analyzes them. You can execute each module separately; the event log collector should be executed on a Windows machine in an Active Directory domain environment with Python 3.8 or above. The analyzer can be executed on either a Linux or a Windows machine.


The Collector

The Event Log Collector module scans domain controllers for successful NTLM authentication logs and endpoints for successful Kerberos authentication logs. It requires LDAP/S access (ports 389 and 636) and RPC access (port 135) to the domain controllers and clients. In addition, it requires domain admin privileges, membership in the Event Log Readers group, or equivalent permissions; this is required to pull event logs from all endpoints and domain controllers.

The collector gathers NTLM logs from event 8004 on the domain controllers and Kerberos logs from event 4648 on the clients. It generates as output a comma-delimited CSV file with all the available authentication traffic. The output contains the fields source host, destination, username, auth type, SPN and timestamps in the format %Y/%m/%d %H:%M (a small parsing sketch appears at the end of this section). The collector requires credentials of a valid user with event viewer privileges across the environment and queries the specific logs for each protocol.

Verify Kerberos and NTLM protocols are audited across the environment using group policy:

  1. Kerberos - Computer configuration -> policies -> Windows Settings -> Security settings -> Local policies -> Audit Policies -> audit account logon events
  2. NTLM - Computer Configuration -> Policies -> Windows Settings -> Security Settings -> Local Policies -> Security Options -> Network Security: Restrict NTLM: audit NTLM authentication in this domain
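
Assuming header names matching the fields listed above (the exact spelling in the collector's CSV may differ), a quick way to load the output for ad-hoc inspection is:

    import csv
    from datetime import datetime

    # Hypothetical header names based on the fields described above
    with open("authentications.csv", newline="") as f:
        for row in csv.DictReader(f):
            ts = datetime.strptime(row["timestamp"], "%Y/%m/%d %H:%M")
            print(ts, row["source host"], "->", row["destination"],
                  row["username"], row["auth type"], row.get("SPN", ""))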

The Analyzer

The Analyzer receives as input a spreadsheet with authentication data formatted as specified by the Collector's output structure. It searches for suspicious activity with the lateral movement analyzer algorithm and also detects additional IoCs of lateral movement. The authentication sources and destinations should be given as NetBIOS names, not IP addresses.

Preliminaries and key concepts of the LATMA algorithm

LATMA gets a batch of authentication requests and sends an alert when it finds suspicious lateral movement attacks. We define the following:

  • Authentication Graph: A directed graph that contains information about authentication traffic in the environment. The nodes of the graphs are computers, and the edges are authentications between the computers. The graph edges have the attributes: protocol type, date of authentication and the account that sent the request. The graph nodes contain information about the computer it represents, detailed below.

  • Lateral movement graph: A sub-graph of the authentication graph that represents the attacker’s movement. The lateral movement graph is not always a path in the sub-graph, in some attacks the attacker goes in many different directions.

  • Alert: A sub-graph the algorithm suspects are part of the lateral movement graph.

LATMA performs several actions during its execution:

  • Information gathering: LATMA monitors normal behavior of the users and machines and characterizes them. The learning is used later to decide which authentication requests deviate from a normal behavior and might be involved in a lateral movement attack. For a learning period of three weeks LATMA does not throw any alerts and only learns the environment. The learning continues after those three weeks.

  • Authentication graph building: After the learning period every relevant authentication is added to the authentication graph. It is critical to filter only for relevant authentication, otherwise the number of edges the graph holds might be too big. We filter on the following protocol types: NTLM and Kerberos with the services β€œrpc”, β€œrpcss” and β€œtermsrv.”

Alert handling:

Adding an authentication to the graph might trigger a process of alerting. In general, a new edge can create a new alert, join an existing alert or merge two alerts.

Information gathering

Every authentication request monitored by LATMA is used for learning and stored in a dedicated data structure. First, we identify sinks and hubs. We define sinks as machines accessed by many (at least 50) different accounts, such as a company portal or exchange server. We define hubs as machines many different accounts (at least 20) authenticate from, such as proxies and VPNs. Authentications to sinks or from hubs are considered benign and are therefore removed from the authentication graph.

In addition to basic classification, LATMA matches accounts to the machines they frequently authenticate from. If an account authenticates from a machine on at least three different days within a three-week period, the account is matched to that machine, and any authentication of this account from that machine is considered benign and removed from the authentication graph.
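
A simplified sketch of the sink/hub filtering described above, using plain dictionaries instead of LATMA's internal data structures:

    from collections import defaultdict

    SINK_THRESHOLD = 50  # distinct accounts authenticating *to* a machine
    HUB_THRESHOLD = 20   # distinct accounts authenticating *from* a machine

    def find_sinks_and_hubs(auth_events):
        """auth_events: iterable of (source, destination, account) tuples."""
        accounts_to = defaultdict(set)
        accounts_from = defaultdict(set)
        for source, destination, account in auth_events:
            accounts_to[destination].add(account)
            accounts_from[source].add(account)
        sinks = {m for m, accts in accounts_to.items() if len(accts) >= SINK_THRESHOLD}
        hubs = {m for m, accts in accounts_from.items() if len(accts) >= HUB_THRESHOLD}
        return sinks, hubs

    def filter_benign(auth_events, sinks, hubs):
        # Authentications to sinks or from hubs are treated as benign and dropped
        return [e for e in auth_events if e[1] not in sinks and e[0] not in hubs]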

The lateral movement IoCs are:

White cane - User accounts authenticating from a single machine to multiple ones in a relatively short time.

Bridge - User account X authenticating from machine A to machine B and, following that, from machine B to machine C. This IoC potentially indicates an attacker advancing from its initial foothold (A) to a destination machine that better serves the attack’s objectives.

Switched Bridge - User account X authenticating from machine A to machine B, followed by user account Y authenticating from machine B to machine C. This IoC potentially indicates an attacker that discovers and compromises an additional account along its path and uses the new account to advance forward (a common example is account X being a standard domain user and account Y being an admin user).

Weight Shift - A White cane (see above) from machine A to machines {B1,…, Bn}, followed by another White cane from machine Bx to machines {C1,…,Cn}. This IoC potentially indicates an attacker that has determined that machine B better serves the attack’s purposes and from now on uses machine B as the source for additional searches.

Blast - User account X authenticating from machine A to multiple machines in a very short timeframe. A common example is an attacker that plants or executes ransomware on a large number of machines simultaneously.
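
As an illustration only (not LATMA's actual implementation), detecting the basic Bridge pattern could be sketched as follows, where events are (account, source, destination, timestamp) tuples sorted by timestamp:

    def find_bridges(events):
        """Bridge: account X authenticates A -> B and later B -> C."""
        bridges = []
        first_hop = {}  # (account, destination) -> (source, timestamp) of the earliest hop
        for account, source, destination, ts in events:
            prior = first_hop.get((account, source))
            if prior is not None:
                prev_source, prev_ts = prior
                if prev_source != destination:  # ignore trivial A -> B -> A ping-pong
                    bridges.append((account, prev_source, source, destination, prev_ts, ts))
            first_hop.setdefault((account, destination), (source, ts))
        return bridges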

Output:

The analyzer outputs several different files

  1. A spreadsheet with all the suspected authentications and their role classification (all_authentications.csv), and a separate spreadsheet with the authentications suspected to be part of lateral movement (propagation.csv)
  2. A GIF file representing the progression, whereby each frame of the GIF specifies exactly what the suspicious action was
  3. An interactive timeline with all the suspicious events; events that are related to each other have the same color

Dependencies:

  1. Python 3.8
  2. libraries as follows in requirements.txt
  3. Run pip install . to run the setup automatically
  4. Audit Kerberos and NTLM across the environment
  5. LDAP queries to the domain controllers
  6. Domain admin credentials or any credentials with MS-EVEN6 remote event viewer permissions.

Usage

The Collector

Required arguments:

  1. credentials [domain.com/]username[:password] credentials format; alternatively provide [domain.com/]username and the password will be prompted securely. For the domain, please insert the FQDN (Fully Qualified Domain Name).

Optional arguments:
  2. -ntlm Retrieve ntlm authentication logs from DC
  3. -kerberos Retrieve kerberos authentication logs from all computers in the domain
  4. -debug Turn DEBUG output ON
  5. -help show this help message and exit
  6. -filter Query a specific OU or container in the domain; results will include all workstations in the sub-OUs as well. Each OU should be in DN (Distinguished Name) format. Supports multiple OUs with a semicolon delimiter. Example: OU=subunit,OU=unit;OU=anotherUnit,DC=domain,DC=com Example: CN=container,OU=unit;OU=anotherUnit,DC=domain,DC=com
  7. -date Starting date to collect event logs from, in month-day-year format; if not specified, all available data is collected
  8. -threads Number of worker threads to use
  9. -ldap Use unsecured LDAP instead of LDAP/S
  10. -ldap_domain Custom domain on ldap login credentials. If empty, will use current user's session domain

The Analyzer

Required arguments:

  1. authentication_file The authentication file should contain a list of NTLM and Kerberos requests

Optional arguments:

  2. -output_file The location where the CSV with all the IoCs will be saved
  3. -progression_output_file The location where the CSV with the IoCs of the lateral movements will be saved
  4. -sink_threshold Number of accounts from which a machine is considered a sink; the default is 50
  5. -hub_threshold Number of accounts from which a machine is considered a hub; the default is 20
  6. -learning_period Learning period in days; the default is 7 days
  7. -show_all_iocs Show IoCs that are not connected to any other IoCs
  8. -show_gant If true, output the events in a Gantt format

Binary Usage: Open a command prompt and navigate to the binary folder. Run the executables with the arguments specified above.

Examples

In the example files you have several samples of real environments (some contain lateral movement attacks and some don't) which you can give as input for the analyzer.

Usage example

  1. python eventlogcollector.py domain.com/username:password -ntlm -kerberos
  2. python analyzer.py logs.csv


Shennina - Automating Host Exploitation With AI


Shennina is an automated host exploitation framework. The mission of the project is to fully automate the scanning, vulnerability scanning/analysis, and exploitation using Artificial Intelligence. Shennina is integrated with Metasploit and Nmap for performing the attacks, as well as being integrated with an in-house Command-and-Control Server for exfiltrating data from compromised machines automatically.

This was developed by Mazin Ahmed and Khalid Farah within the HITB CyberWeek 2019 AI challenge. The project is developed based on the concept of DeepExploit by Isao Takaesu.

Shennina scans a set of input targets for available network services, uses its AI engine to identify recommended exploits for the attacks, and then attempts to test and attack the targets. If the attack succeeds, Shennina proceeds with the post-exploitation phase.

The AI engine is initially trained against live targets to learn reliable exploits against remote services.
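
The Docs directory is the authoritative description of the engine; as a deliberately simplified illustration of the "learn which exploits are reliable per service" idea, one could track per-service success rates and recommend the top-scoring exploits (the service fingerprints and module names below are examples only):

    from collections import defaultdict

    class ExploitRecommender:
        """Toy sketch: score exploits by observed success rate per service fingerprint."""

        def __init__(self):
            # (service_fingerprint, exploit) -> [successes, attempts]
            self.stats = defaultdict(lambda: [0, 0])

        def record(self, service, exploit, succeeded):
            stat = self.stats[(service, exploit)]
            stat[1] += 1
            if succeeded:
                stat[0] += 1

        def recommend(self, service, top_n=3):
            scored = [
                (successes / attempts, exploit)
                for (svc, exploit), (successes, attempts) in self.stats.items()
                if svc == service and attempts > 0
            ]
            return [exploit for _, exploit in sorted(scored, reverse=True)[:top_n]]

    rec = ExploitRecommender()
    rec.record("vsftpd 2.3.4", "exploit/unix/ftp/vsftpd_234_backdoor", True)
    rec.record("vsftpd 2.3.4", "exploit/multi/handler", False)
    print(rec.recommend("vsftpd 2.3.4"))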

Shennina also supports a "Heuristics" mode for identifying recommended exploits.

The documentation can be found in the Docs directory within the project.


Features

  • Automated self-learning approach for finding exploits.
  • High performance using managed concurrency design.
  • Intelligent exploits clustering.
  • Post exploitation capabilities.
  • Deception detection.
  • Ransomware simulation capabilities.
  • Automated data exfiltration.
  • Vulnerability scanning mode.
  • Heuristic mode support for recommending exploits.
  • Windows, Linux, and macOS support for agents.
  • Scriptable attack method within the post-exploitation phase.
  • Exploits suggestions for Kernel exploits.
  • Out-of-Band technique testing for exploitation checks.
  • Automated exfiltration of important data on compromised servers.
  • Reporting capabilities.
  • Coverage for 40+ TTPs within the MITRE ATT&CK Framework.
  • Supports multi-input targets.

Why are we solving this problem with AI?

The problem should be solved by a hash tree without using "AI"; however, the HITB CyberWeek AI Challenge required the project to find ways to solve it through AI.

Note

This project is a security experiment.

Legal Disclaimer

This project is made for educational and ethical testing purposes only. Usage of Shennina for attacking targets without prior mutual consent is illegal. It is the end user's responsibility to obey all applicable local, state and federal laws. Developers assume no liability and are not responsible for any misuse or damage caused by this program.

Authors


