Bugsy is a command-line interface (CLI) tool that provides automatic security vulnerability remediation for your code. It is the community edition of Mobb, the first vendor-agnostic automated security vulnerability remediation tool, and is designed to help developers quickly identify and fix security vulnerabilities in their code.
Mobb ingests SAST results from Checkmarx, CodeQL (GitHub Advanced Security), OpenText Fortify, and Snyk and produces code fixes for developers to review and commit to their code.
Bugsy has two modes: Scan (no SAST report needed) and Analyze (the user provides a pre-generated SAST report from one of the supported SAST tools).
This community edition only analyzes public GitHub repositories; analyzing private repositories is allowed for a limited amount of time. Bugsy does not itself detect any vulnerabilities in your code; it consumes the findings produced by the SAST tools mentioned above.
You can simply run Bugsy from the command line, using npx:
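For example, to scan a public GitHub repository (a minimal sketch; Bugsy ships on npm as the mobbdev package, and exact flags may vary between releases):
$ npx mobbdev@latest scan -r https://github.com/WebGoat/WebGoat
Bugsy then presents the proposed fixes for you to review and commit.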
The Daksh SCRA (Source Code Review Assist) tool is built to enhance the efficiency of the source code review process, providing a well-structured and organized approach for code reviewers.
Rather than indiscriminately flagging everything as a potential issue, Daksh SCRA promotes thoughtful analysis, urging reviewers to investigate and confirm potential problems before tagging them as bugs. This mitigates the false positives that often consume valuable time and resources, fostering a more productive and efficient code review process.
Daksh SCRA was initially introduced during a source code review training session I conducted at Black Hat USA 2022 (August 6 - 9), where it was presented to a specific audience with a low-profile approach and no major announcements.
While the tool was quietly published on GitHub after that training, its official public debut took place at Black Hat USA 2023 in Las Vegas.
Identifies Areas of Interest in Source Code: Encourages focused investigation and confirmation rather than indiscriminately labeling everything as a bug.
Identifies Areas of Interest in File Paths (World's First): Recognises patterns in file paths to pinpoint relevant sections for review.
Software-Level Reconnaissance to Identify Technologies Utilised: Identifies project technologies, enabling code reviewers to conduct precise scans with appropriate rules.
Automated Scientific Effort Estimation for Code Review (World's First): Provides a measurable approach for estimating the effort required for a code review.
Although this tool is still evolving, it has reached a functional state that is quite usable and delivers on its promised capabilities. Active enhancements are underway, with multiple new features and improvements expected in the upcoming months.
Refer to the wiki for the tool setup and usage details, including its additional functionalities: https://github.com/coffeeandsecurity/DakshSCRA/wiki
Feel free to contribute towards updating or adding new rules and future development.
If you find any bugs, report them to d3basis.m0hanty@gmail.com.
Python3 and all the libraries listed in requirements.txt
$ pip install virtualenv
$ virtualenv -p python3 {name-of-virtual-env} // Create a virtualenv
Example: virtualenv -p python3 venv
$ source {name-of-virtual-env}/bin/activate // To activate the virtual environment you just created
Example: source venv/bin/activate
After running the activate command you should see the name of your virtual env at the beginning of your terminal like this: (venv) $
Run the command below after activating the virtual environment as described in the previous steps.
pip install -r requirements.txt
Once the above step successfully installs all the required libraries, refer to the following tool usage commands to run the tool.
$ python3 dakshscra.py -h // To view available options and arguments
usage: dakshscra.py [-h] [-r RULE_FILE] [-f FILE_TYPES] [-v] [-t TARGET_DIR] [-l {R,RF}] [-recon] [-estimate]
options:
-h, --help show this help message and exit
-r RULE_FILE Specify platform specific rule name
-f FILE_TYPES Specify file types to scan
-v Specify verbosity level {'-v', '-vv', '-vvv'}
-t TARGET_DIR Specify target directory path
-l {R,RF}, --list {R,RF}
List rules [R] OR rules and filetypes [RF]
-recon Detects platform, framework and programming language used
-estimate Estimate efforts required for code review
$ python3 dakshscra.py // To view tool usage along with examples
Examples:
# '-f' is optional. If not specified, it will default to the corresponding filetypes of the selected rule.
dakshscra.py -r php -t /source_dir_path
# To override default settings, other filetypes can be specified with the '-f' option.
dakshscra.py -r php -f dotnet -t /path_to_source_dir
dakshscra.py -r php -f custom -t /path_to_source_dir
# Performs reconnaissance and rule-based scanning if '-recon' is used with the '-r' option.
dakshscra.py -recon -r php -t /path_to_source_dir
# Performs only reconnaissance if '-recon' is used without the '-r' option.
dakshscra.py -recon -t /path_to_source_dir
# Verbosity: '-v' is the default; '-vvv' will display all rule checks within each rule category.
dakshscra.py -r php -vv -t /path_to_source_dir
Supported RULE_FILE: dotnet, java, php, javascript
Supported FILE_TYPES: dotnet, php, java, custom, allfiles
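The reconnaissance and estimation options from the help output above can be invoked the same way. The following invocations are illustrative sketches derived from that help output rather than documented examples:
# Estimate the effort required to review a target code base
dakshscra.py -estimate -t /path_to_source_dir
# List the supported rules [R] or rules and filetypes [RF]
dakshscra.py -l RF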
The tool generates reports in three formats: HTML, PDF, and TEXT. The HTML and PDF reports are still being improved but are already in a reasonably good state, and they will continue to be refined with each subsequent iteration.
Note: Currently, the reconnaissance report is created in a text format. However, in upcoming releases, the plan is to incorporate it into the vulnerability scanning report, which will be available in both HTML and PDF formats.
Note: At present, the effort estimation for source code review is in its early stages. It is considered experimental and will be developed and refined over several releases, as the formula and the concept are new and need time to be honed for accurate or reasonable estimation.
Currently, the effort estimation report is generated in HTML format. However, in future releases, there are plans to also provide it in PDF format.
This tool is a simple PoC of how to hide memory artifacts using a ROP chain in combination with hardware breakpoints. The ROP chain will change the main module memory page's protections to N/A while sleeping (i.e. when the function Sleep is called). For more detailed information about this memory scanning evasion technique, check out the original project, Gargoyle. x64 only.
The idea is to set up a hardware breakpoint on kernel32!Sleep and a new top-level exception filter to handle the exception. When Sleep is called, the previously registered exception filter function is triggered, allowing us to call the ROP chain without using classic function hooks. This way, we avoid leaving weird and unusual private memory regions in the process related to well-known DLLs.
The ROP chain simply calls VirtualProtect() to set the current memory page to N/A, then calls SleepEx and finally restores the RX memory protection.
The overview of the process is as follows:
1. A hardware breakpoint is set on kernel32!Sleep and a top-level exception filter is registered.
2. The program calls Sleep, which triggers the breakpoint and raises an exception.
3. The exception filter redirects execution to the ROP chain instead of the original function.
4. The ROP chain sets the main module's memory page to N/A via VirtualProtect, calls SleepEx, and finally restores the RX protection.
5. This process repeats indefinitely.
While the process sleeps, the main module's memory protection is changed to N/A, which evades memory scans looking for pages with execution permissions.
Since we are using the LITCRYPT plugin to obfuscate string literals, the environment variable LITCRYPT_ENCRYPT_KEY must be set before compiling the code:
C:\Users\User\Desktop\RustChain> set LITCRYPT_ENCRYPT_KEY="yoursupersecretkey"
After that, simply compile the code and run the tool:
C:\Users\User\Desktop\RustChain> cargo build
C:\Users\User\Desktop\RustChain\target\debug> rustchain.exe
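If you prefer an optimized binary, a release build works the same way (this is standard cargo behavior, not specific to this project):
C:\Users\User\Desktop\RustChain> cargo build --release
C:\Users\User\Desktop\RustChain\target\release> rustchain.exe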
This tool is just a PoC, and some extra features should be implemented for it to be fully functional. The main purpose of the project was to learn how to implement a ROP chain and integrate it within Rust. Because of that, this tool will only work if you use it as is; failures are expected if you try to use it in other ways (for example, compiling it to a DLL and trying to reflectively load and execute it).
Nuclear Pond is used to leverage Nuclei in the cloud with remarkable speed and flexibility, and to perform internet-wide scans for far less than the cost of a cup of coffee.
It leverages AWS Lambda as a backend to invoke Nuclei scans in parallel, offers the choice of storing JSON findings in S3 to query with AWS Athena, and is easily one of the cheapest ways to execute scans in the cloud.
Think of Nuclear Pond as simply a way to run Nuclei in the cloud. You can use it just as you would on your local machine, but run scans in parallel against however many hosts you want to specify. All you need to think about is the Nuclei command-line flags you wish to pass.
To install Nuclear Pond, you need to configure the backend terraform module. You can do this by running terraform apply or by leveraging terragrunt.
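A minimal sketch of that step, assuming you are in the directory containing the backend terraform module and your AWS credentials are configured:
$ terraform init
$ terraform apply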
$ go install github.com/DevSecOpsDocs/nuclearpond@latest
You can either pass in your backend with flags or through environment variables. Use -f or --function-name to specify your Lambda function and -r or --region to specify the region. Below are the environment variables you can use instead:
AWS_LAMBDA_FUNCTION_NAME is the name of your Lambda function to execute the scans on
AWS_REGION is the region your resources are deployed in
NUCLEARPOND_API_KEY is the API key for authenticating to the API
AWS_DYNAMODB_TABLE is the DynamoDB table to store API scan states
Below are some of the flags you can specify when running nuclearpond. The primary flags you need are -t or -l for your target(s), -a for the Nuclei args, and -o to specify your output. When specifying Nuclei args, you must pass them as a base64-encoded string, for example -a $(echo -ne "-t dns" | base64).
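To pass more than one Nuclei flag at once, encode the whole argument string. The example below is illustrative (it assumes AWS_LAMBDA_FUNCTION_NAME and AWS_REGION are already exported, and which Nuclei flags make sense depends on your templates and Nuclei version):
$ echo -ne "-t dns -severity high" | base64
LXQgZG5zIC1zZXZlcml0eSBoaWdo
$ nuclearpond run -t example.com -a LXQgZG5zIC1zZXZlcml0eSBoaWdo -o cmd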
Below are the subcommands you can execute within nuclearpond.
To run a scan, use the run subcommand:
$ nuclearpond run -t devsecopsdocs.com -r us-east-1 -f jwalker-nuclei-runner-function -a $(echo -ne "-t dns" | base64) -o cmd -b 1
Here the target is devsecopsdocs.com, the region is us-east-1, the Lambda function name is jwalker-nuclei-runner-function, the Nuclei arguments are -t dns, the output is cmd, and -b 1 executes one function invocation per batch of one host.
$ nuclearpond run -h
Executes nuclei tasks in parallel by invoking lambda asynchronously
Usage:
nuclearpond run [flags]
Flags:
-a, --args string nuclei arguments as base64 encoded string
-b, --batch-size int batch size for number of targets per execution (default 1)
-f, --function-name string AWS Lambda function name
-h, --help help for run
-o, --output string output type to save nuclei results (s3, cmd, or json) (default "cmd")
-r, --region string AWS region to run nuclei
-s, --silent silent command line output
-t, --target string individual target to specify
-l, --targets string list of targets in a file
-c, --threads int number of threads to run lambda functions, default is 1 which will be slow (default 1)
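Putting these flags together, a larger scan against a file of targets might look like the following (targets.txt is an illustrative file name; the batch size and thread count are arbitrary):
$ nuclearpond run -l targets.txt -a $(echo -ne "-t dns" | base64) -o s3 -b 10 -c 20 -r us-east-1 -f jwalker-nuclei-runner-function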
The terraform module by default downloads the templates on execution and also adds the templates as a Lambda layer. The template-download variables use the terraform GitHub provider to download the release zip, and the folder within the zip is placed under /opt. Since Nuclei downloads templates at run time, this is not strictly required, but to improve performance you can specify -t /opt/nuclei-templates-9.3.4/dns to execute templates from the downloaded zip. To specify your own templates you must reference a release; when doing so on your own repository, you must set these variables in the terraform module (github_token is not required if your repository is public).
If you have specified s3 as the output, your findings will be located in S3. The fastest way to get at them is with Athena. Assuming you set up the terraform module as your backend, all you need to do is query the findings directly through Athena. You may have to configure the query results location if you have not done so already.
select
*
from
nuclei_db.findings_db
limit 10;
To dig into the queries a little deeper, here is a quick example. In the select statement we drill down into the info column; the "matched-at" column must be in double quotes due to the - character; and the query searches only for high and critical findings generated by Nuclei.
SELECT
info.name,
host,
type,
info.severity,
"matched-at",
info.description,
template,
dt
FROM
"nuclei_db"."findings_db"
where
host like '%devsecopsdocs.com'
and info.severity in ('high','critical')
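In the same spirit, a quick aggregate gives an overview of findings by severity (a sketch assuming the same findings_db schema as the queries above):
SELECT
  info.severity,
  count(*) AS total
FROM
  "nuclei_db"."findings_db"
GROUP BY
  info.severity
ORDER BY
  total DESC;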
The backend infrastructure is deployed entirely through the terraform module. I would strongly recommend reading the readme associated with it, as it contains some important notes.
Graphical interface for PortEx, a Portable Executable and Malware Analysis Library
I test this program on Linux and Windows, but it should work on any OS with JRE version 9 or higher.
I will be including more and more of the features that PortEx already provides. Some of these features are already available in the PortexAnalyzer CLI version, which you can find here: PortexAnalyzer CLI
I develop PortEx and PortexAnalyzer as a hobby in my free time. If you like it, please consider buying me a coffee: https://ko-fi.com/struppigel
Karsten Hahn
Twitter: @Struppigel
Mastodon: struppigel@infosec.exchange
YouTube: MalwareAnalysisForHedgehogs