
Bugsy - Command-line Interface Tool That Provides Automatic Security Vulnerability Remediation For Your Code

By: Zion3R


Bugsy is a command-line interface (CLI) tool that provides automatic security vulnerability remediation for your code. It is the community edition version of Mobb, the first vendor-agnostic automated security vulnerability remediation tool. Bugsy is designed to help developers quickly identify and fix security vulnerabilities in their code.


What is Mobb?

Mobb is the first vendor-agnostic automatic security vulnerability remediation tool. It ingests SAST results from Checkmarx, CodeQL (GitHub Advanced Security), OpenText Fortify, and Snyk and produces code fixes for developers to review and commit to their code.

What does Bugsy do?

Bugsy has two modes: Scan (no SAST report needed) and Analyze (the user provides a pre-generated SAST report from one of the supported SAST tools).

Scan

  • Uses Checkmarx or Snyk CLI tools to run a SAST scan on a given open-source GitHub/GitLab repo
  • Analyzes the vulnerability report to identify issues that can be remediated automatically
  • Produces the code fixes and redirects the user to the fix report page on the Mobb platform

Analyze

  • Analyzes a Checkmarx/CodeQL/Fortify/Snyk vulnerability report to identify issues that can be remediated automatically
  • Produces the code fixes and redirects the user to the fix report page on the Mobb platform

Disclaimer

This is a community edition version that only analyzes public GitHub repositories. Analyzing private repositories is allowed for a limited time. Bugsy does not itself detect any vulnerabilities in your code; it relies on findings produced by the SAST tools mentioned above.

Usage

You can simply run Bugsy from the command line, using npx:

npx mobbdev
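
For the two modes described above, invocation might look roughly like the sketch below. The scan/analyze subcommand names, flags, and the WebGoat repository URL are illustrative assumptions, not confirmed Bugsy syntax:

# Scan mode: Bugsy drives Checkmarx/Snyk against a public repo (hypothetical flags)
npx mobbdev scan -r https://github.com/WebGoat/WebGoat

# Analyze mode: supply a pre-generated SAST report for the same repo (hypothetical flags)
npx mobbdev analyze -f snyk_report.json -r https://github.com/WebGoat/WebGoat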


DakshSCRA - Source Code Review Assist

By: Zion3R


The Daksh SCRA (Source Code Review Assist) tool is built to enhance the efficiency of the source code review process, providing a well-structured and organized approach for code reviewers.

Rather than indiscriminately flagging everything as a potential issue, Daksh SCRA promotes thoughtful analysis: potential problems are investigated and confirmed before they are tagged as bugs. This mitigates the problem of false positives, which often consume valuable time and resources, and fosters a more productive and efficient code review process.


Debut

Daksh SCRA was initially introduced, without any major announcement, during a source code review training session I conducted at Black Hat USA 2022 (August 6 - 9). While the tool was quietly published on GitHub after that training, its official public debut took place at Black Hat USA 2023 in Las Vegas.

Features and Functionalities

Distinctive Features (Multiple World Firsts)

  • Identifies Areas of Interest in Source Code: Encourages focused investigation and confirmation rather than indiscriminately labeling everything as a bug.

  • Identifies Areas of Interest in File Paths (World’s First): Recognises patterns in file paths to pinpoint relevant sections for review.

  • Software-Level Reconnaissance to Identify Technologies Utilised: Identifies project technologies, enabling code reviewers to conduct precise scans with appropriate rules.

  • Automated Scientific Effort Estimation for Code Review (World’s First): Provides a measurable approach for estimating the effort required for a code review.

Although this tool is still maturing, it has reached a functional state that is quite usable and delivers on its promised capabilities. Active enhancements are underway, with multiple new features and improvements expected in the upcoming months.

Additionally, the tool offers the following functionalities:

  • Options to use platform-specific rules for finding areas of interest
  • Options to extend or add new rules for any new or existing languages
  • Generates reports in text, HTML, and PDF formats for inspection

Refer to the wiki for the tool setup and usage details - https://github.com/coffeeandsecurity/DakshSCRA/wiki

Feel free to contribute by updating or adding new rules, or towards future development.

If you find any bugs, report them to d3basis.m0hanty@gmail.com.

Tool Setup

Pre-requisites

Python3 and all the libraries listed in requirements.txt

Setting up the environment to run this tool

1. Set up a virtual environment

$ pip install virtualenv

$ virtualenv -p python3 {name-of-virtual-env} // Create a virtualenv
Example: virtualenv -p python3 venv

$ source {name-of-virtual-env}/bin/activate // To activate virtual environment you just created
Example: source venv/bin/activate

After running the activate command you should see the name of your virtual env at the beginning of your terminal like this: (venv) $

2. Ensure all required libraries are installed within the virtual environment

Run the command below after activating the virtual environment created in the previous step.

pip install -r requirements.txt

Once the above step successfully installs all the required libraries, refer to the following tool usage commands to run the tool.

Tool Usage

$ python3 dakshscra.py -h // To view available options and arguments

usage: dakshscra.py [-h] [-r RULE_FILE] [-f FILE_TYPES] [-v] [-t TARGET_DIR] [-l {R,RF}] [-recon] [-estimate]

options:
  -h, --help            show this help message and exit
  -r RULE_FILE          Specify platform specific rule name
  -f FILE_TYPES         Specify file types to scan
  -v                    Specify verbosity level {'-v', '-vv', '-vvv'}
  -t TARGET_DIR         Specify target directory path
  -l {R,RF}, --list {R,RF}
                        List rules [R] OR rules and filetypes [RF]
  -recon                Detects platform, framework and programming language used
  -estimate             Estimate efforts required for code review

Example Usage

$ python3 dakshscra.py // To view tool usage along with examples

Examples:
# '-f' is optional. If not specified, it will default to the corresponding filetypes of the selected rule.
dakshscra.py -r php -t /source_dir_path

# To override default settings, other filetypes can be specified with the '-f' option.
dakshscra.py -r php -f dotnet -t /path_to_source_dir
dakshscra.py -r php -f custom -t /path_to_source_dir

# Perform reconnaissance and rule-based scanning if '-recon' is used with the '-r' option.
dakshscra.py -recon -r php -t /path_to_source_dir

# Perform only reconnaissance if '-recon' is used without the '-r' option.
dakshscra.py -recon -t /path_to_source_dir

# Verbosity: '-v' is default; '-vvv' will display all rule checks within each rule category.
dakshscra.py -r php -vv -t /path_to_source_dir


Supported RULE_FILE: dotnet, java, php, javascript
Supported FILE_TYPES: dotnet, php, java, custom, allfiles
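
Putting it all together, a typical review session might look like the following sketch (assuming a PHP codebase; the flags and report paths are taken from the sections above and below):

$ python3 dakshscra.py -recon -r php -vv -t /path_to_source_dir   # recon plus PHP rules, extra verbosity
$ xdg-open DakshSCRA/reports/html/report.html                     # inspect the HTML report (use 'open' on macOS)
$ less DakshSCRA/reports/text/areas_of_interest.txt               # raw text report of identified patterns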

Reports

The tool generates reports in three formats: HTML, PDF, and TEXT. Although the HTML and PDF reports are still being improved, they are already in a reasonably good state and will continue to be refined with each iteration.

Scanning (Areas of Security Concerns) Report

HTML Report:
  • DakshSCRA/reports/html/report.html
PDF Report:
  • DakshSCRA/reports/html/report.pdf
RAW TEXT Based Reports:
  • Areas of Interest - Identified Patterns: DakshSCRA/reports/text/areas_of_interest.txt
  • Areas of Interest - Project Files: DakshSCRA/reports/text/filepaths_aoi.txt
  • Identified Project Files: DakshSCRA/runtime/filepaths.txt

Reconnaissance (Recon) Report

  • Reconnaissance Summary: /reports/text/recon.txt

Note: Currently, the reconnaissance report is created in a text format. However, in upcoming releases, the plan is to incorporate it into the vulnerability scanning report, which will be available in both HTML and PDF formats.

Code Review Effort Estimation Report

  • Effort estimation report: /reports/html/estimation.html

Note: At present, the effort estimation for source code review is in its early stages. It is considered experimental and will be refined over multiple releases, as the formula and the concept are new and need time to be honed into accurate, or at least reasonable, estimates.

Currently, the report is generated in HTML format. However, in future releases, there are plans to also provide it in PDF format.



RustChain - Hide Memory Artifacts Using ROP And Hardware Breakpoints

By: Zion3R


This tool is a simple PoC of how to hide memory artifacts using a ROP chain in combination with hardware breakpoints. The ROP chain changes the main module's memory page protections to N/A while sleeping (i.e. when the function Sleep is called). For more detailed information about this memory-scanning evasion technique, check out the original project, Gargoyle. x64 only.

The idea is to set a hardware breakpoint on kernel32!Sleep and register a new top-level exception filter to handle the exception. When Sleep is called, the filter function set beforehand is triggered, allowing us to call the ROP chain without classic function hooks. This way, we avoid leaving unusual private memory regions in the process related to well-known DLLs.

The ROP chain simply calls VirtualProtect() to set the current memory page to N/A, then calls SleepEx and finally restores the RX memory protection.


The overview of the process is as follows:

  • We use SetUnhandledExceptionFilter to set a new exception filter function.
  • SetThreadContext is used in order to set a hardware breakpoint on kernel32!Sleep.
  • We call Sleep, triggering the hardware breakpoint and driving the execution flow towards our exception filter function.
  • The ROP chain is called from the exception filter function, allowing us to change the current memory page protection to N/A. Then SleepEx is called. Finally, the ROP chain restores the RX memory protection and normal execution continues.

This process repeats indefinitely.

As can be seen in the image, the main module's memory protection is changed to N/A while sleeping, which evades memory scans looking for pages with execute permission.

Compilation

Since the LITCRYPT plugin is used to obfuscate string literals, the environment variable LITCRYPT_ENCRYPT_KEY must be set before compiling the code:

C:\Users\User\Desktop\RustChain> set LITCRYPT_ENCRYPT_KEY="yoursupersecretkey"

After that, simply compile the code and run the tool:

C:\Users\User\Desktop\RustChain> cargo build
C:\Users\User\Desktop\RustChain\target\debug> rustchain.exe

Limitations

This tool is just a PoC, and extra features would need to be implemented for it to be fully functional. The main purpose of the project was to learn how to implement a ROP chain and integrate it with Rust. Because of that, the tool will only work if you use it as-is; failures are expected if you try to use it in other ways (for example, compiling it to a DLL and trying to reflectively load and execute it).


Nuclearpond - A Utility Leveraging Nuclei To Perform Internet Wide Scans For The Cost Of A Cup Of Coffee


Nuclear Pond is used to leverage Nuclei in the cloud with remarkable speed and flexibility, performing internet-wide scans for far less than the cost of a cup of coffee.

It leverages AWS Lambda as a backend to invoke Nuclei scans in parallel, offers the option of storing JSON findings in S3 to query with AWS Athena, and is easily one of the cheapest ways you can execute scans in the cloud.


Features

  • Output results to your terminal, json, or to S3
  • Specify threads and parallel invocations in any desired number of batches
  • Specify any Nuclei arguments just like you would locally
  • Specify a single host or from a file
  • Run the http server to take scans from the API
  • Run the http server to get status of the scans
  • Query findings through Athena for searching S3
  • Specify custom nuclei and reporting configurations

Usage

Think of Nuclear Pond as simply a way to run Nuclei in the cloud. You can use it just as you would on your local machine, but run your scans in parallel against however many hosts you want. All you need to think about is the nuclei command-line flags you wish to pass.

Setup & Installation

To install Nuclear Pond, you need to configure the backend terraform module; you can do this by running terraform apply or by leveraging terragrunt (a sketch follows below). Then install the CLI:

$ go install github.com/DevSecOpsDocs/nuclearpond@latest
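
Deploying the backend might look roughly like this; the module repository URL is an assumption for illustration, so check the project documentation for the canonical source:

$ git clone https://github.com/DevSecOpsDocs/terraform-nuclear-pond.git   # assumed module repo
$ cd terraform-nuclear-pond
$ terraform init
$ terraform apply        # or drive the same module with terragrunt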

Environment Variables

You can either pass in your backend with flags or through environment variables. Use -f or --function-name to specify your Lambda function and -r or --region to specify the region. The environment variables below are also supported; a short example follows the list.

  • AWS_LAMBDA_FUNCTION_NAME is the name of your lambda function to execute the scans on
  • AWS_REGION is the region your resources are deployed
  • NUCLEARPOND_API_KEY is the API key for authenticating to the API
  • AWS_DYNAMODB_TABLE is the dynamodb table to store API scan states
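
For example, a minimal environment for the CLI might be set up as follows. The values are placeholders (the function name reuses the example from the Run section below), and the last two variables are only needed when running the service API:

$ export AWS_LAMBDA_FUNCTION_NAME="jwalker-nuclei-runner-function"
$ export AWS_REGION="us-east-1"
$ export NUCLEARPOND_API_KEY="<your-api-key>"          # service API only
$ export AWS_DYNAMODB_TABLE="<your-scan-state-table>"  # service API only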

Command line flags

Below are some of the flags you can specify when running nuclearpond. The primary flags you need are -t or -l for your target(s), -a for the nuclei args, and -o to specify your output. Nuclei args must be passed as a base64-encoded string, e.g. -a $(echo -ne "-t dns" | base64).

Commands

Below are the subcommands you can execute within nuclearpond.

  • run: Execute nuclei scans
  • service: Basic API to execute nuclei scans

Run

An example invocation of the run subcommand is nuclearpond run -t devsecopsdocs.com -r us-east-1 -f jwalker-nuclei-runner-function -a $(echo -ne "-t dns" | base64) -o cmd -b 1, in which the target is devsecopsdocs.com, the region is us-east-1, the lambda function name is jwalker-nuclei-runner-function, the nuclei arguments are -t dns, the output goes to cmd, and -b 1 executes one function invocation per batch of one host.

$ nuclearpond run -h
Executes nuclei tasks in parallel by invoking lambda asynchronously

Usage:
nuclearpond run [flags]

Flags:
  -a, --args string            nuclei arguments as base64 encoded string
  -b, --batch-size int         batch size for number of targets per execution (default 1)
  -f, --function-name string   AWS Lambda function name
  -h, --help                   help for run
  -o, --output string          output type to save nuclei results (s3, cmd, or json) (default "cmd")
  -r, --region string          AWS region to run nuclei
  -s, --silent                 silent command line output
  -t, --target string          individual target to specify
  -l, --targets string         list of targets in a file
  -c, --threads int            number of threads to run lambda functions, default is 1 which will be slow (default 1)

Custom Templates

The terraform module by default downloads the templates at deploy time and also adds them to the function as a layer. The template-download variables use the terraform github provider to fetch the release zip, and the folder inside the zip ends up under /opt. Since Nuclei downloads templates on each run this is not strictly necessary, but to improve performance you can point scans at the layer, e.g. -t /opt/nuclei-templates-9.3.4/dns (see the sketch after the list below). To use your own templates you must reference a release; when doing so on your own repository, specify the following variables in the terraform module (github_token is not required if your repository is public):

  • github_repository
  • github_owner
  • release_tag
  • github_token
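
For instance, executing the bundled DNS templates from the layer might look like the sketch below; the nuclei-templates-9.3.4 folder name comes from the example above and will vary with your release tag:

$ nuclearpond run -t devsecopsdocs.com \
    -f jwalker-nuclei-runner-function -r us-east-1 \
    -a $(echo -ne "-t /opt/nuclei-templates-9.3.4/dns" | base64) \
    -o cmd -b 1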

Retrieving Findings

If you have specified s3 as the output, your findings will be stored in S3, and the fastest way to get at them is with Athena. Assuming you set up the terraform module as your backend, all you need to do is query the table directly through Athena. You may need to configure a query result location if you have not done so already.

select *
from nuclei_db.findings_db
limit 10;
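
If you prefer the terminal to the Athena console, the same query can be issued with the AWS CLI; the results bucket is a placeholder you must supply:

$ aws athena start-query-execution \
    --query-string "select * from nuclei_db.findings_db limit 10;" \
    --query-execution-context Database=nuclei_db \
    --result-configuration OutputLocation=s3://<your-athena-results-bucket>/
$ aws athena get-query-results --query-execution-id <execution-id-from-previous-output>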

Advanced Query

To dig into queries a little deeper, here is a quick example. The select statement drills down into the info column; the "matched-at" column must be wrapped in double quotes because of the - character; and the where clause restricts the results to high and critical findings generated by Nuclei.

SELECT
  info.name,
  host,
  type,
  info.severity,
  "matched-at",
  info.description,
  template,
  dt
FROM "nuclei_db"."findings_db"
WHERE host like '%devsecopsdocs.com'
  AND info.severity in ('high','critical')

Infrastructure

The backend infrastructure lives entirely within the terraform module. I would strongly recommend reading its readme, as it contains some important notes. The module provisions:

  • Lambda function
  • S3 bucket
    • Stores nuclei binary
    • Stores configuration files
    • Stores findings
  • Glue Database and Table
    • Allows you to query the findings in S3
    • Partitioned by the hour
    • Partition projection
  • IAM Role for Lambda Function


PortexAnalyzerGUI - Graphical Interface For PortEx, A Portable Executable And Malware Analysis Library




Download

Releases page

Features

  • Header information from: MSDOS Header, Rich Header, COFF File Header, Optional Header, Section Table
  • PE Structures: Import Section, Resource Section, Export Section, Debug Section
  • Scanning for file format anomalies
  • Visualize file structure, local entropies and byteplot, and save it as PNG
  • Calculate Shannon Entropy, Imphash, MD5, SHA256, Rich and RichPV hash
  • Overlay and overlay signature scanning
  • Version information and manifest
  • Icon extraction and saving as PNG
  • Customized signature scanning via Yara. Internal signature scans using PEiD signatures and an internal filetype scanner.

Supported OS and JRE

I test this program on Linux and Windows, but it should work on any OS with JRE version 9 or higher.

Future

I will keep adding more of the features that PortEx already provides.

These include, among others:

  • customized visualization
  • extraction and conversion of icons to .ICO files
  • dumping of sections, overlay, resources
  • export reports to txt, json, csv

Some of these features are already provided by PortexAnalyzer CLI version, which you can find here: PortexAnalyzer CLI

Donations

I develop PortEx and PortexAnalyzer as a hobby in my free time. If you like it, please consider buying me a coffee: https://ko-fi.com/struppigel

Author

Karsten Hahn

Twitter: @Struppigel

Mastodon: struppigel@infosec.exchange

Youtube: MalwareAnalysisForHedgehogs

License

License


