
Tai-e - An Easy-To-Learn/Use Static Analysis Framework For Java



What is Tai-e?

Tai-e (Chinese: ε€ͺ阿; pronunciation: [ˈtaΙͺΙ™:]) is a new static analysis framework for Java (please see our technical report for details), which features arguably the "best" designs from both the novel ones we proposed and those of classic frameworks such as Soot, WALA, Doop, and SpotBugs. Tai-e is easy-to-learn, easy-to-use, efficient, and highly extensible, allowing you to easily develop new analyses on top of it.

Currently, Tai-e provides the following major analysis components (and more analyses are on the way):

  • Powerful pointer analysis framework
    • On-the-fly call graph construction
    • Various classic and advanced techniques of heap abstraction and context sensitivity for pointer analysis
    • Extensible analysis plugin system (allows to conveniently develop and add new analyses that interact with pointer analysis)
  • Various fundamental/client/utility analyses
    • Fundamental analyses, e.g., reflection analysis and exception analysis
    • Modern language feature analyses, e.g., lambda and method reference analysis, and invokedynamic analysis
    • Clients, e.g., configurable taint analysis (allowing to configure sources, sinks and taint transfers)
    • Utility tools like analysis timer, constraint checker (for debugging), and various graph dumpers
  • Control/Data-flow analysis framework
    • Control-flow graph construction
    • Classic data-flow analyses, e.g., live variable analysis, constant propagation
    • Your data-flow analyses
  • SpotBugs-like bug detection system
    • Bug detectors, e.g., null pointer detector, incorrect clone() detector
    • Your bug detectors

Tai-e is developed in Java, and it can run on major operating systems including Windows, Linux, and macOS.


How to Obtain Runnable Jar of Tai-e?

The simplest way is to download it from GitHub Releases.

Alternatively, you can build the latest Tai-e yourself from the source code. This is easily done via Gradle (make sure Java 17 or higher is available on your system). Just run the command gradlew fatJar, and the runnable jar, which includes Tai-e and all its dependencies, will be generated in tai-e/build/.

Documentation

We are hosting the documentation of Tai-e on the GitHub wiki, where you can find more information about Tai-e such as Setup in IntelliJ IDEA, Command-Line Options, and Development of New Analysis.

Tai-e Assignments

In addition, we have developed an educational version of Tai-e where eight programming assignments are carefully designed for systematically training learners to implement various static analysis techniques to analyze real Java programs. The educational version shares a large amount of code with Tai-e, thus doing the assignments would be a good way to get familiar with Tai-e.



Bkcrack - Crack Legacy Zip Encryption With Biham And Kocher's Known Plaintext Attack


Crack legacy zip encryption with Biham and Kocher's known plaintext attack.

Overview

A ZIP archive may contain many entries whose content can be compressed and/or encrypted. In particular, entries can be encrypted with a password-based symmetric encryption algorithm referred to as traditional PKWARE encryption, legacy encryption, or ZipCrypto. This algorithm generates a pseudo-random stream of bytes (the keystream) which is XORed with the entry's content (the plaintext) to produce encrypted data (the ciphertext). The generator's state, made of three 32-bit integers, is initialized using the password and then continuously updated with plaintext as encryption goes on.

This encryption algorithm is vulnerable to known-plaintext attacks, as shown by Eli Biham and Paul C. Kocher in the research paper A known plaintext attack on the PKZIP stream cipher. Given ciphertext and 12 or more bytes of the corresponding plaintext, the internal state of the keystream generator can be recovered. This internal state is enough to decipher the ciphertext entirely, as well as other entries which were encrypted with the same password. It can also be used to brute-force the password with a complexity of n^(l-6), where n is the size of the character set and l is the length of the password (for example, a 9-character password over a 62-character set costs about 62^3 β‰ˆ 238,000 trials rather than 62^9).
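For intuition, here is a minimal Python sketch of the keystream generator described above (based on the public PKWARE APPNOTE description, not bkcrack's actual code); the three 32-bit keys are exactly the internal state the attack recovers:

import zlib

def crc32_update(crc, byte):
    # One-byte step of the raw (un-inverted) CRC-32 used by ZipCrypto.
    # zlib.crc32 applies pre/post inversion, so we undo it around the call.
    return (zlib.crc32(bytes([byte]), crc ^ 0xFFFFFFFF) ^ 0xFFFFFFFF) & 0xFFFFFFFF

class ZipCryptoState:
    def __init__(self, password: bytes):
        # Well-known initial values of the three 32-bit keys
        self.k0, self.k1, self.k2 = 0x12345678, 0x23456789, 0x34567890
        for b in password:
            self.update(b)

    def update(self, plain_byte: int):
        # State transition driven by one plaintext byte
        self.k0 = crc32_update(self.k0, plain_byte)
        self.k1 = ((self.k1 + (self.k0 & 0xFF)) * 134775813 + 1) & 0xFFFFFFFF
        self.k2 = crc32_update(self.k2, (self.k1 >> 24) & 0xFF)

    def keystream_byte(self) -> int:
        t = (self.k2 | 2) & 0xFFFF
        return ((t * (t ^ 1)) >> 8) & 0xFF

# Encrypt: XOR each plaintext byte with a keystream byte, then feed the
# plaintext byte back into the state (real archives prepend a 12-byte header).
state = ZipCryptoState(b"hunter2")
ciphertext = bytearray()
for p in b"known plaintext":
    ciphertext.append(p ^ state.keystream_byte())
    state.update(p)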

bkcrack is a command-line tool which implements this known plaintext attack. The main features are:

  • Recover internal state from ciphertext and plaintext.
  • Change a ZIP archive's password using the internal state.
  • Recover the original password from the internal state.

Install

Precompiled packages

You can get the latest official release on GitHub.

Precompiled packages for Ubuntu, macOS, and Windows are available for download. Extract the downloaded archive wherever you like.

On Windows, Microsoft runtime libraries are needed for bkcrack to run. If they are not already installed on your system, download and install the latest Microsoft Visual C++ Redistributable package.

Compile from source

Alternatively, you can compile the project with CMake.

First, download the source files or clone the git repository. Then, running the following commands in the source tree will create an installation in the install folder.

cmake -S . -B build -DCMAKE_INSTALL_PREFIX=install
cmake --build build --config Release
cmake --build build --config Release --target install

Third-party packages

bkcrack is also available in several third-party package repositories (listed on the project page). Those packages are provided by external maintainers.

Usage

List entries

You can see a list of entry names and metadata in an archive named archive.zip like this:

bkcrack -L archive.zip

Entries using ZipCrypto encryption are vulnerable to a known-plaintext attack.

Recover internal keys

The attack requires at least 12 bytes of known plaintext. At least 8 of them must be contiguous. The larger the contiguous known plaintext, the faster the attack.

Load data from zip archives

Having a zip archive encrypted.zip with the entry cipher being the ciphertext and plain.zip with the entry plain as the known plaintext, bkcrack can be run like this:

bkcrack -C encrypted.zip -c cipher -P plain.zip -p plain

Load data from files

Having a file cipherfile with the ciphertext (starting with the 12 bytes corresponding to the encryption header) and plainfile with the known plaintext, bkcrack can be run like this:

bkcrack -c cipherfile -p plainfile

Offset

If the plaintext corresponds to a part other than the beginning of the ciphertext, you can specify an offset. It can be negative if the plaintext includes a part of the encryption header.

bkcrack -c cipherfile -p plainfile -o offset

Sparse plaintext

If you know only a little contiguous plaintext (between 8 and 11 bytes) but also know some bytes at other offsets, you can provide this information to meet the requirement of 12 known bytes in total. To do so, use the -x flag followed by an offset and bytes in hexadecimal.

bkcrack -c cipherfile -p plainfile -x 25 4b4f -x 30 21

Number of threads

If bkcrack was built with parallel mode enabled, the number of threads used can be set through the environment variable OMP_NUM_THREADS.

Decipher

If the attack is successful, the deciphered data associated with the ciphertext used for the attack can be saved:

bkcrack -c cipherfile -p plainfile -d decipheredfile

If the keys are known from a previous attack, it is possible to use bkcrack to decipher data:

bkcrack -c cipherfile -k 12345678 23456789 34567890 -d decipheredfile

Decompress

The deciphered data might be compressed depending on whether compression was used or not when the zip file was created. If deflate compression was used, a Python 3 script provided in the tools folder may be used to decompress data.

python3 tools/inflate.py < decipheredfile > decompressedfile
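If you don't have the repository's tools folder at hand, the same raw-deflate decompression is a one-liner in Python (a sketch equivalent in spirit to tools/inflate.py; ZIP entries store raw deflate streams, hence wbits=-15):

import sys, zlib

# Raw deflate (wbits=-15): ZIP entries carry deflate data without a zlib header
sys.stdout.buffer.write(zlib.decompress(sys.stdin.buffer.read(), -15))

# usage: python3 inflate_sketch.py < decipheredfile > decompressedfile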

Unlock encrypted archive

It is also possible to generate a new encrypted archive with the password of your choice:

bkcrack -C encrypted.zip -k 12345678 23456789 34567890 -U unlocked.zip password

The archive generated this way can be extracted using any zip file utility with the new password. It assumes that every entry was originally encrypted with the same password.

Recover password

Given the internal keys, bkcrack can try to find the original password. You can look for a password up to a given length using a given character set:

bkcrack -k 1ded830c 24454157 7213b8c5 -r 10 ?p

You can be more specific by specifying a minimum password length:

bkcrack -k 18f285c6 881f2169 b35d661d -r 11..13 ?p
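To gauge the cost of such a search before launching it, you can estimate the candidate count; the sketch below assumes ?p denotes the 95 printable ASCII characters (check bkcrack's documentation for the exact charset):

n = 95                                      # assumed size of the ?p character set
total = sum(n ** l for l in range(11, 14))  # lengths 11 through 13, as in -r 11..13
print(f"about {total:.2e} candidate passwords")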

Learn

A tutorial is provided in the example folder.

For more information, have a look at the documentation and read the source.

Contribute

Do not hesitate to suggest improvements or submit pull requests on GitHub.

License

This project is provided under the terms of the zlib/png license.



Villain - Windows And Linux Backdoor Generator And Multi-Session Handler That Allows Users To Connect With Sibling Servers And Share Their Backdoor Sessions


Villain is a Windows & Linux backdoor generator and multi-session handler that allows users to connect with sibling servers (other machines running Villain) and share their backdoor sessions, handy for working as a team.

The main idea behind the payloads generated by this tool is inherited from HoaxShell. One could say that Villain is an evolved, steroid-induced version of it.

This is an early release currently being tested.
If you are having detection issues, watch this video on how to bypass signature-based detection

Video Presentation

[2022-11-30] Recent & awesome, made by John Hammond -> youtube.com/watch?v=pTUggbSCqA0
[2022-11-14] Original release demo, made by me -> youtube.com/watch?v=NqZEmBsLCvQ

Disclaimer: Running the payloads generated by this tool against hosts that you do not have explicit permission to test is illegal. You are responsible for any trouble you may cause by using this tool.


Installation & Usage

git clone https://github.com/t3l3machus/Villain
cd ./Villain
pip3 install -r requirements.txt

You should run as root:

Villain.py [-h] [-p PORT] [-x HOAX_PORT] [-c CERTFILE] [-k KEYFILE] [-u] [-q]

For more information about using Villain check out the Usage Guide.

Important Notes

  1. Villain has a built-in auto-obfuscate payload function to assist users in bypassing AV solutions (for Windows payloads). As a result, payloads are undetected (for the time being).
  2. Each generated payload is going to work only once. An already used payload cannot be reused to establish a session.
  3. The communication between sibling servers is AES encrypted, using the recipient sibling server's ID as the encryption key and the first 16 bytes of the local server's ID as the IV. During the initial connection handshake of two sibling servers, each server's ID is exchanged in clear text, meaning that the handshake could be captured and used to decrypt traffic between sibling servers. I know it's "weak" that way. It's not supposed to be super secure, as this tool was designed to be used during penetration testing / red team assessments, for which this encryption scheme should be enough (a rough sketch of the scheme follows this list).
  4. Villain instances connected with each other (sibling servers) must be able to directly reach each other as well. I intend to add a network route mapping utility so that sibling servers can use one another as a proxy to achieve cross network communication between them.
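To make note 3 concrete, here is a minimal Python sketch of that encryption scheme. The AES mode, padding, and ID values are assumptions for illustration; Villain's actual implementation may differ:

# Illustration only -- not Villain's code. Assumes AES-CBC/PKCS#7 (pycryptodome)
# and hypothetical 32-byte sibling-server IDs.
from Crypto.Cipher import AES          # pip install pycryptodome
from Crypto.Util.Padding import pad, unpad

recipient_id = b"c0ffee0123456789c0ffee0123456789"  # remote sibling's ID -> AES-256 key
local_id     = b"deadbeef0123456789abcdef01234567"  # first 16 bytes -> IV

ct = AES.new(recipient_id, AES.MODE_CBC, iv=local_id[:16]).encrypt(
    pad(b"shared session data", AES.block_size))

# Both IDs cross the wire in clear text during the handshake, so a passive
# observer who captured them can decrypt the traffic the same way:
pt = unpad(AES.new(recipient_id, AES.MODE_CBC, iv=local_id[:16]).decrypt(ct),
           AES.block_size)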

Approach

A few notes about the http(s) beacon-like reverse shell approach:

Limitations

  • A backdoor shell is going to hang if you execute a command that initiates an interactive session. For more information read this.

Advantages

  • When it comes to Windows, the generated payloads can run even in PowerShell constraint Language Mode.
  • The generated payloads can run even by users with limited privileges.

Contributions

Pull requests are generally welcome. Please, keep in mind: I am constantly working on new offsec tools as well as maintaining several existing ones. I rarely accept pull requests because I either have a plan for the course of a project or I evaluate that it would be hard to test and/or maintain the foreign code. It doesn't have to do with how good or bad an idea is; it's just too much work, and also, I am kind of developing all these tools to learn myself.

There are parts of this project that were removed before publishing because I considered them to be buggy or hard to maintain (at this early stage). If you have an idea for an addition that comes with a significant chunk of code, I suggest you first contact me to discuss if there's something similar already in the making, before making a PR.



Shennina - Automating Host Exploitation With AI


Shennina is an automated host exploitation framework. The mission of the project is to fully automate the scanning, vulnerability scanning/analysis, and exploitation using Artificial Intelligence. Shennina is integrated with Metasploit and Nmap for performing the attacks, as well as being integrated with an in-house Command-and-Control Server for exfiltrating data from compromised machines automatically.

This was developed by Mazin Ahmed and Khalid Farah within the HITB CyberWeek 2019 AI challenge. The project is developed based on the concept of DeepExploit by Isao Takaesu.

Shennina scans a set of input targets for available network services, uses its AI engine to identify recommended exploits for the attacks, and then attempts to test and attack the targets. If the attack succeeds, Shennina proceeds with the post-exploitation phase.

The AI engine is initially trained against live targets to learn reliable exploits against remote services.

Shennina also supports a "Heuristics" mode for identifying recommended exploits.

The documentation can be found in the Docs directory within the project.


Features

  • Automated self-learning approach for finding exploits.
  • High performance using managed concurrency design.
  • Intelligent exploits clustering.
  • Post exploitation capabilities.
  • Deception detection.
  • Ransomware simulation capabilities.
  • Automated data exfiltration.
  • Vulnerability scanning mode.
  • Heuristic mode support for recommending exploits.
  • Windows, Linux, and macOS support for agents.
  • Scriptable attack method within the post-exploitation phase.
  • Exploits suggestions for Kernel exploits.
  • Out-of-Band technique testing for exploitation checks.
  • Automated exfiltration of important data on compromised servers.
  • Reporting capabilities.
  • Coverage for 40+ TTPs within the MITRE ATT&CK Framework.
  • Supports multi-input targets.

Why are we solving this problem with AI?

The problem could be solved with a hash tree without using "AI"; however, the HITB Cyber Week AI Challenge required the project to find ways to solve it through AI.

Note

This project is a security experiment.

Legal Disclaimer

This project is made for educational and ethical testing purposes only. Usage of Shennina for attacking targets without prior mutual consent is illegal. It is the end user's responsibility to obey all applicable local, state and federal laws. Developers assume no liability and are not responsible for any misuse or damage caused by this program.

Authors



Legitify - Detect And Remediate Misconfigurations And Security Risks Across All Your GitHub Assets


Strengthen the security posture of your GitHub organization!
Detect and remediate misconfigurations, security and compliance issues across all your GitHub assets with ease


Installation

  1. Download the latest legitify release from https://github.com/Legit-Labs/legitify/releases. Each archive contains:
  β€’ The legitify binary for the desired platform
  β€’ Built-in policies provided by Legit Security
  2. Build from source with the following steps:
git clone git@github.com:Legit-Labs/legitify.git
go run main.go analyze ...

Provenance

To enhance the software supply chain security of legitify's users, as of v0.1.6, every legitify release contains a SLSA Level 3 Provenance document.
The provenance document refers to all artifacts in the release, as well as the generated docker image.
You can use SLSA framework's official verifier to verify the provenance.
Example of usage for the darwin_arm64 architecture for the v0.1.6 release:

VERSION=0.1.6
ARCH=darwin_arm64
./slsa-verifier verify-artifact --source-branch main --builder-id 'https://github.com/slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@refs/tags/v1.2.2' --source-uri "git+https://github.com/Legit-Labs/legitify" --provenance-path multiple.intoto.jsonl ./legitify_${VERSION}_${ARCH}.tar.gz

Requirements

  1. To get the most out of legitify, you need to be an owner of at least one GitHub organization. Otherwise, you can still use the tool if you're an admin of at least one repository inside an organization, in which case you'll be able to see only repository-related policies results.
  2. legitify requires a GitHub personal access token (PAT) to analyze your resources successfully, which can be either provided as an argument (-t) or as an environment variable ($GITHUB_ENV). The PAT needs the following scopes for full analysis:
admin:org, read:enterprise, admin:org_hook, read:org, repo, read:repo_hook

See Creating a Personal Access Token for more information.
Fine-grained personal access tokens are currently not supported because they do not support GitHub's GraphQL API (https://github.blog/2022-10-18-introducing-fine-grained-personal-access-tokens-for-github/).

Usage

LEGITIFY_TOKEN=<your_token> legitify analyze

By default, legitify will check the policies against all your resources (organizations, repositories, members, actions).

You can control which resources will be analyzed with command-line flags namespace and org:

  • --namespace (-n): will analyze policies that relate to the specified resources
  • --org: will limit the analysis to the specified organizations
LEGITIFY_TOKEN=<your_token> legitify analyze --org org1,org2 --namespace organization,member

The above command will test organization and member policies against org1 and org2.

GitHub Enterprise Support

You can run legitify against a GitHub Enterprise instance if you set the endpoint URL in the environment variable SERVER_URL:

export SERVER_URL="https://github.example.com/"
LEGITIFY_TOKEN=<your_token> legitify analyze --org org1,org2 --namespace organization,member

GitLab Cloud/Server Support

To run legitify against GitLab Cloud, set the scm flag to gitlab (--scm gitlab); to run against GitLab Server, you also need to provide SERVER_URL:

export SERVER_URL="https://gitlab.example.com/"
LEGITIFY_TOKEN=<your_token> legitify analyze --namespace organization --scm gitlab

Namespaces

Namespaces in legitify are resources that are collected and run against the policies. Currently, the following namespaces are supported:

  1. organization - organization level policies (e.g., "Two-Factor Authentication Is Not Enforced for the Organization")
  2. actions - organization GitHub Actions policies (e.g., "GitHub Actions Runs Are Not Limited To Verified Actions")
  3. member - organization members policies (e.g., "Stale Admin Found")
  4. repository - repository level policies (e.g., "Code Review By At Least Two Reviewers Is Not Enforced")
  5. runner_group - runner group policies (e.g., "runner can be used by public repositories")

By default, legitify will analyze all namespaces. You can limit the analysis to selected ones with the --namespace flag, followed by a comma-separated list of the selected namespaces.

Output Options

By default, legitify will output the results in a human-readable format. This includes the list of policy violations listed by severity, as well as a summary table that is sorted by namespace.

Output Formats

Using the --output-format (-f) flag, legitify supports outputting the results in the following formats:

  1. human-readable - Human-readable text (default).
  2. json - Standard JSON.

Output Schemes

Using the --output-scheme flag, legitify supports outputting the results in different grouping schemes. Note: --output-format=json must be specified to output non-default schemes.

  1. flattened - No grouping; A flat listing of the policies, each with its violations (default).
  2. group-by-namespace - Group the policies by their namespace.
  3. group-by-resource - Group the policies by their resource e.g. specific organization/repository.
  4. group-by-severity - Group the policies by their severity.

Output Destinations

  • --output-file - full path of the output file (default: no output file, prints to stdout).
  • --error-file - full path of the error logs (default: ./error.log).

Coloring

When outputting in a human-readable format, legitify supports the conventional --color[=when] flag, which has the following options:

  • auto - colored output if stdout is a terminal, uncolored otherwise (default).
  • always - colored output regardless of the output destination.
  • none - uncolored output regardless of the output destination.

Misc

  • Use the --failed-only flag to filter-out passed/skipped checks from the result.

Scorecard Support

scorecard is an OSSF open-source project:

Scorecards is an automated tool that assesses a number of important heuristics ("checks") associated with software security and assigns each check a score of 0-10. You can use these scores to understand specific areas to improve in order to strengthen the security posture of your project. You can also assess the risks that dependencies introduce, and make informed decisions about accepting these risks, evaluating alternative solutions, or working with the maintainers to make improvements.

legitify supports running scorecard for all of the organization's repositories, enforcing score policies and showing the results using the --scorecard flag:

  • no - do not run scorecard (default).
  • yes - run scorecard and employ a policy that alerts on each repo score below 7.0.
  • verbose - run scorecard, employ a policy that alerts on each repo score below 7.0, and embed its output to legitify's output.

legitify runs the following scorecard checks:

Check                    Public Repository   Private Repository
Security-Policy          V
CII-Best-Practices       V
Fuzzing                  V
License                  V
Signed-Releases          V
Branch-Protection        V                   V
Code-Review              V                   V
Contributors             V                   V
Dangerous-Workflow       V                   V
Dependency-Update-Tool   V                   V
Maintained               V                   V
Pinned-Dependencies      V                   V
SAST                     V                   V
Token-Permissions        V                   V
Vulnerabilities          V                   V
Webhooks                 V                   V

Policies

legitify comes with a set of policies in the policies/github directory. These policies are documented here.

In addition, you can use the --policies-path (-p) flag to specify a custom directory for OPA policies.

Contribution

Thank you for considering contributing to Legitify! We encourage and appreciate any kind of contribution. Here are some resources to help you get started:



R4Ven - Track Ip And GPS Location

Track a user's smartphone/PC IP and GPS location.

The tool hosts a fake website which uses an iframe to display a legitimate website and, if the target allows it, fetches the GPS location (latitude and longitude) of the target along with the IP address and device information.

This tool is a Proof of Concept and is for Educational Purposes Only.

Using this tool, you can find out what information a malicious website can gather about you and your devices and why you shouldn't click on random links or grant permissions like Location to them.


On link click

+ it will automatically fetch the IP address and device information
! if location permission is allowed, it will fetch the exact location of the target.

Limitation

- It will not work on laptops or phones that have broken GPS, 
# browsers that block javascript,
# or if the user is mocking the GPS location.

IP location vs GPS location

- Geographic location based on IP address is NOT accurate,
# It does not provide the location of the target.
# Instead, it provides the approximate location of the ISP (Internet Service Provider).
+ GPS fetches an almost exact location because it uses longitude and latitude coordinates.
@@ Once location permission is granted @@
# accurate location information is received, to within 20 to 30 meters of the user's location
# (it's almost the exact location)

Installation

git clone https://github.com/spyboy-productions/r4ven.git
cd r4ven
pip3 install -r requirements.txt
python3 r4ven.py

Enter your Discord webhook URL (set up a channel in your Discord server with a webhook integration):

https://support.discord.com/hc/en-us/articles/228383668-Intro-to-Webhooks

If you don't have a Discord account and server, make one; it's free.

https://discord.com/


Track info data will be sent to your discord webhook channel.
  • why discord webhook? Conveniently, you will receive a notification when someone clicks on the link.

To change the website template

  • open file index.html on line 12 and replace the src in the iframe. (Note: not every website support iframe)


To port forward, install ngrok or use SSH

  • For ngrok port forward type: ngrok http 8000
  • For ssh port forwarding type: ssh -R 80:localhost:8000 ssh.localhost.run

Snapshots



DomainDouche - OSINT Tool to Abuse SecurityTrails Domain Suggestion API To Find Potentially Related Domains By Keyword And Brute Force


Abusing SecurityTrails domain suggestion API to find potentially related domains by keyword and brute force.

Use it while it still works


(Also, hmu on Mastodon: @n0kovo@infosec.exchange)


Usage:

usage: domaindouche.py [-h] [-n N] -c COOKIE -a USER_AGENT [-w NUM] [-o OUTFILE] keyword

Abuses SecurityTrails API to find related domains by keyword.
Go to https://securitytrails.com/dns-trails, solve any CAPTCHA you might encounter,
copy the raw value of your Cookie and User-Agent headers and use them with the -c and -a arguments.

positional arguments:
  keyword               keyword to append brute force string to

options:
  -h, --help            show this help message and exit
  -n N, --num N         number of characters to brute force (default: 2)
  -c COOKIE, --cookie COOKIE
                        raw cookie string
  -a USER_AGENT, --useragent USER_AGENT
                        user-agent string (must match the browser where the cookies are from)
  -w NUM, --workers NUM
                        number of workers (default: 5)
  -o OUTFILE, --output OUTFILE
                        output file path
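To picture the brute-force component: with -n 2, the tool effectively asks the suggestion API for the keyword plus every two-character suffix. A sketch of that candidate generation (illustrative only; the charset below is an assumption, not necessarily the one domaindouche.py uses):

import itertools, string

def candidates(keyword, n=2):
    # keyword plus every n-character suffix over an assumed charset
    charset = string.ascii_lowercase + string.digits
    for suffix in itertools.product(charset, repeat=n):
        yield keyword + "".join(suffix)

print(sum(1 for _ in candidates("acme", 2)))  # 36**2 = 1296 suggestion queries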


D4TA-HUNTER - GUI Osint Framework With Kali Linux


D4TA-HUNTER is a tool created in order to automate the collection of information about the employees of a company that is going to be audited for ethical hacking.

In addition, the "search company" section lets you insert a company's domain and find employee emails, subdomains, and server IPs.


GET API KEY

Register on https://rapidapi.com/rohan-patra/api/breachdirectory

Install

git clone https://github.com/micro-joan/D4TA-HUNTER
cd D4TA-HUNTER/
chmod +x run.sh
./run.sh

The application launcher checks that all required components are installed. It checks them one by one, and if any component is missing, it shows the command you must run to install it:



Use

First you must have a free or paid API key from BreachDirectory.org; if you don't have one and run a search, D4TA-HUNTER provides a guide on how to get one.

Once you have the API key, you will be able to search for emails, with the advantage of getting a list of all the password hashes ready for you to copy and paste into one of the online resources provided by D4TA-HUNTER to crack passwords 100% free.



You can also insert a company's domain, and D4TA-HUNTER will search for employee emails and subdomains that may be of interest, together with IPs of machines found:



APIs and tools

Service              Functions                       Status
BreachDirectory.org  Email, phone or nick leaks      βœ… (free plan)
TheHarvester         Domains and emails of company   βœ… Free
Kalitorify           Tor search                      βœ… Free


Video Demo: https://darkhacking.es/d4ta-hunter-framework-osint-para-kali-linux
My website: https://microjoan.com
My blog: https://darkhacking.es/
Buy me a coffee: https://www.buymeacoffee.com/microjoan

DISCLAIMER

This toolkit contains materials that can be potentially damaging or dangerous for social media. Refer to the laws in your province/country before accessing, using, or in any other way utilizing it in a wrong way.

This Tool is made for educational purposes only. Do not attempt to violate the law with anything contained here. If this is your intention, then Get the hell out of here!




Appshark - Static Taint Analysis Platform To Scan Vulnerabilities In An Android App


Appshark is a static taint analysis platform to scan vulnerabilities in an Android app.

Prerequisites

Appshark requires a specific JDK version: JDK 11. In our testing it does not work on other versions, such as JDK 8 or JDK 16, due to dependency compatibility issues.


Building/Compiling AppShark

We assume that you are working in the root directory of the project repo. You can build the whole project with the gradle tool.

$ ./gradlew build  -x test 

After executing the above command, you will see an artifact file AppShark-0.1.1-all.jar in the directory build/libs.

Running AppShark

Like the previous step, we assume that you are still in the root folder of the project. You can run the tool with

$ java -jar build/libs/AppShark-0.1.1-all.jar  config/config.json5

The config.json5 has the following configuration contents.

{
  "apkPath": "/Users/apks/app1.apk",
  "out": "out",
  "rules": "unZipSlip.json",
  "maxPointerAnalyzeTime": 600
}

Each JSON field is explained below.

  • apkPath: the path of the apk file to analyze
  • out: the path of the output directory
  • rules: the path(s) of the rule file(s), can be more than 1 rules
  • maxPointerAnalyzeTime: the timeout duration in seconds set for the analysis started from an entry point
  • debugRule: specify the rule name that enables logging for debugging

If you provide a configuration JSON file which sets the output path as out in the project root directory, you will find the result file out/results.json after running the analysis.

Interpreting the Results

Below is an example of the results.json.

{
"AppInfo": {
"AppName": "test",
"PackageName": "net.bytedance.security.app",
"min_sdk": 17,
"target_sdk": 28,
"versionCode": 1000,
"versionName": "1.0.0"
},
"SecurityInfo": {
"FileRisk": {
"unZipSlip": {
"category": "FileRisk",
"detail": "",
"model": "2",
"name": "unZipSlip",
"possibility": "4",
"vulners": [
{
"details": {
"position": "<net.bytedance.security.app.pathfinder.testdata.ZipSlip: void UnZipFolderFix1(java.lang.String,java.lang.String)>",
"Sink": "<net.bytedance.security.app.pathfinder.testdata.ZipSlip: void UnZipFolderFix1(java.lang.String,java.lang.String)>->$r31",
"entryMethod": "<net.bytedance.security.app.pathfinder.testdata.ZipSlip: void f()>",
"Source": "<net.bytedance.security.app.pathfinder.testdata.ZipSlip: void UnZipFolderFix1(java.lang.String,java.lang.String)>->$r3",
"url": "/Volumes/dev/zijie/appshark-opensource/out/vuln/1-unZipSlip.html",
"target": [
"<net.bytedance.security.app.pathfinder.testdata.ZipSlip: void UnZipFolderFix1(java.lang.String,java.lang.String)>->$r3",
"pf{obj{<net.bytedance.security.app.pathfinder.testdata.ZipSlip: void UnZipFolderFix1(java.lang.String,java.lang.String)>:35=>java.lang.StringBuilder}(unknown)->@data}",
"<net.bytedance.security.app.pathfinder.testdata.ZipSlip: void UnZipFolderFix1(java.lang.String,java.lang.String)>->$r11",
"<net.bytedance.security.app.pathfinder.testdata.ZipSlip: void UnZipFolderFix1(java.lang.String,java.lang.String)>->$r31"
]
},
"hash": "ec57a2a3190677ffe78a0c8aaf58ba5aee4d2247",
"possibility": "4"
},
{
"details": {
"position": "<net.bytedance.security.app.pathfinder.testdata.ZipSlip: void UnZipFolder(java.lang.String,java.lang.String)>",
"Sink": "<net.bytedance.security.app.pathfinder.testdata.ZipSlip: void UnZipFolder(java.lang.String,java.lang.String)>->$r34",
"entryMethod": "<net.bytedance.security.app.pathfinder.testdata.ZipSlip: void f()>",
"Source": "<net.bytedance.security.app.pathfinder.testdata.ZipSlip: void UnZipFolder(java.lang.String,java.lang.String)>->$r3",
"url": "/Volumes/dev/zijie/appshark-opensource/out/vuln/2-unZipSlip.html",
"target": [
"<net.bytedance.security.app.pathfinder.testdata.ZipSlip: void UnZipFolder(java.lang.String,java.lang.String)>->$r3",
"pf{obj{<net.bytedance.security.app.pathfinder.testdata.ZipSlip: void UnZipFolder(java.lang.String,java.lang.String)>:33=>java.lang.StringBuilder}(unknown)->@data}",
"<net.bytedance.security.app.pathfinder.testdata.ZipSlip: void UnZipFolder(java.lang.String,java.lang.String)>->$r14",
"<net.bytedance.security.app.pathfinder.testdata.ZipSlip: void UnZipFolder(java.lang.String,java.lang.String)>->$r34"
]
},
"hash": "26c6d6ee704c59949cfef78350a1d9aef04c29ad",
"possibility": "4"
}
],
"wiki": "",
"deobfApk": "/Volumes/dev/zijie/appshark-opensource/app.apk"
}
}
},
"DeepLinkInfo": {
},
"HTTP_API": [
],
"JsBridgeInfo": [
],
"BasicInfo": {
"ComponentsInfo": {
},
"JSNativeInterface": [
]
},
"UsePermissions": [
],
"DefinePermissions": {
},
"Profile": "/Volumes/dev/zijie/appshark-opensource/out/vuln/3-profiler.json"
}
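A few lines of Python are enough to pull the flagged source/sink pairs out of a report with this layout (a sketch against the structure shown above, not an official Appshark utility):

import json

with open("out/results.json") as f:
    report = json.load(f)

# walk SecurityInfo -> category (e.g. FileRisk) -> rule (e.g. unZipSlip) -> findings
for category in report.get("SecurityInfo", {}).values():
    for rule in category.values():
        for vuln in rule.get("vulners", []):
            d = vuln["details"]
            print(f'{rule["name"]} (possibility {vuln["possibility"]}): '
                  f'{d["Source"]} -> {d["Sink"]}')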


Bomber - Scans Software Bill Of Materials (SBOMs) For Security Vulnerabilities


bomber is an application that scans SBOMs for security vulnerabilities.

Overview

So you've asked a vendor for a Software Bill of Materials (SBOM) for one of their closed source products, and they provided one to you in a JSON file... now what?

The first thing you're going to want to do is see if any of the components listed inside the SBOM have security vulnerabilities, and what kind of licenses these components have. This will help you identify what kind of risk you will be taking on by using the product. Finding security vulnerabilities and license information for components identified in an SBOM is exactly what bomber is meant to do. bomber can read any JSON or XML based CycloneDX format, or a JSON SPDX or Syft formatted SBOM, and tell you pretty quickly if there are any vulnerabilities.


What SBOM formats are supported?

There are quite a few SBOM formats available today. bomber supports CycloneDX (JSON or XML), SPDX (JSON), and Syft (JSON).

Providers

bomber supports multiple sources for vulnerability information. We call these providers. Currently, bomber uses OSV as the default provider, but you can also use the Sonatype OSS Index.

Please note that each provider supports different ecosystems, so if you're not seeing any vulnerabilities in one, try another. It is also important to understand that each provider may report different vulnerabilities. If in doubt, look at a few of them.

If bomber does not find any vulnerabilities, it doesn't mean that there aren't any. All it means is that the provider being used didn't detect any, or it doesn't support the ecosystem. Some providers have vulnerabilities that come back with no Severity information. In this case, the Severity will be listed as "UNDEFINED".

What is an ecosystem?

An ecosystem is simply the package manager, or type of package. Examples include rpm, npm, gems, etc. Each provider supports different ecosystems.

OSV

OSV is the default provider for bomber. It is an open, precise, and distributed approach to producing and consuming vulnerability information for open source.

You don't need to register for any service, get a password, or a token. Just use bomber without a provider flag and away you go like this:

bomber scan test.cyclonedx.json

Supported ecosystems

At this time, OSV supports the following ecosystems:

  • Android
  • crates.io
  • Debian
  • Go
  • Maven
  • NPM
  • NuGet
  • Packagist
  • PyPI
  • RubyGems

and others...

OSV Notes

The OSV provider is pretty slow right now when processing large SBOMs. At the time of this writing, their batch endpoint is not functioning, so bomber needs to call their API one package at a time.

Additionally, there are cases where OSV does not return a Severity, or a CVE/CWE. In these rare cases, bomber will output "UNSPECIFIED", and "UNDEFINED" respectively.
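Under the hood this amounts to one HTTP POST per package against OSV's public /v1/query endpoint, which you can reproduce in a few lines of Python (pkg:npm/minimist@1.2.0 is just an example purl):

import json, urllib.request

# Ask OSV about one package URL (purl); bomber queries packages one at a time like this
req = urllib.request.Request(
    "https://api.osv.dev/v1/query",
    data=json.dumps({"package": {"purl": "pkg:npm/minimist@1.2.0"}}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    vulns = json.load(resp).get("vulns", [])

print(f"{len(vulns)} advisories returned")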

Sonatype OSS Index

In order to use bomber with the Sonatype OSS Index you need to get an account. Head over to the site, and create a free account, and make note of your username (this will be the email that you registered with).

Once you log in, you'll want to navigate to your settings and make note of your API token. Please don't share your token with anyone.

Supported ecosystems

At this time, the Sonatype OSS Index supports the following ecosystems:

  • Maven
  • NPM
  • Go
  • PyPi
  • Nuget
  • RubyGems
  • Cargo
  • CocoaPods
  • Composer
  • Conan
  • Conda
  • CRAN
  • RPM
  • Swift

Installation

Mac

You can use Homebrew to install bomber using the following:

brew tap devops-kung-fu/homebrew-tap
brew install devops-kung-fu/homebrew-tap/bomber

If you do not have Homebrew, you can still download the latest release (ex: bomber_0.1.0_darwin_all.tar.gz), extract the files from the archive, and use the bomber binary.

If you wish, you can move the bomber binary to your /usr/local/bin directory or anywhere on your path.

Linux

To install bomber, download the latest release for your platform and install locally. For example, install bomber on Ubuntu:

dpkg -i bomber_0.1.0_linux_arm64.deb

Using bomber

You can scan either an entire folder of SBOMs or an individual SBOM with bomber. bomber doesn't care if you have multiple formats in a single folder. It'll sort everything out for you.

Note that the default output for bomber is to STDOUT. Options to output in HTML or JSON are described later in this document.

Single SBOM scan

# Using OSV (the default provider) which does not require any credentials
bomber scan spdx.sbom.json

# Using a provider that requires credentials (ossindex)
bomber scan --provider=xxx --username=xxx --token=xxx spdx-sbom.json

If the provider finds vulnerabilities you'll see an output similar to the following:

If the provider doesn't return any vulnerabilities you'll see something like the following:

Entire folder scan

This is good for when you receive multiple SBOMs from a vendor for the same product. Or, maybe you want to find out what vulnerabilities you have in your entire organization. A folder scan will find all components, de-duplicate them, and then scan them for vulnerabilities.

# scan a folder of SBOMs (the following command will scan a folder in your current folder named "sboms")
bomber scan --username=xxx --token=xxx ./sboms

You'll see a similar result to what a Single SBOM scan will provide.

Output to HTML

If you would like a readable report generated with detailed vulnerability information, you can use the --output flag to save a report to an HTML file.

Example command:

bomber scan bad-bom.json --output=html

This will save a file in your current folder in the format "YYYY-MM-DD-HH-MM-SS-bomber-results.html". If you open this file in a web browser, you'll see output like the following:

Output to JSON

bomber can output vulnerability data in JSON format using the --output flag. The default output is to STDOUT. There is a ton more information in the JSON output than what gets displayed in the terminal. You'll be able to see a package description and what its purpose is, the vulnerability name, a summary of the vulnerability, and more.

Example command:

bomber scan bad-bom.json --output=json

Advanced stuff

If you wish, you can set two environment variables to store your credentials, and not have to type them on the command line. Check out the Environment Variables information later in this README.

Environment Variables

If you don't want to enter credentials all the time, you can add the following to your .bashrc or .bash_profile

export BOMBER_PROVIDER_USERNAME={{your OSS Index user name}}
export BOMBER_PROVIDER_TOKEN={{your OSS Index API Token}}

Messing around

If you want to kick the tires on bomber you'll find a selection of test SBOMs in the test folder.

Notes

  • It's pretty rare to see SBOMs with license information. Most of the time, the generators like Syft need a flag like --license. If you need license info, make sure you ask for it with the SBOM.
  • Hate to say it, but SPDX is wonky. If you don't get any results on an SPDX file, try using a CycloneDX file. In general you should always try to get CycloneDX SBOMs from your vendors.
  • OSV. It's great, but the API is also wonky. They have a batch endpoint that would make it a ton quicker to get information back, but it doesn't work. bomber needs to send one PURL at a time to get vulnerabilities back, so in a big SBOM it will take some time. We'll keep an eye on that.
  • OSV has another issue where the ecosystem doesn't always return vulnerabilities when you pass it to their API. We had to remove passing this to the API to get anything to return. They also don't echo back the ecosystem so we can't check to ensure that if we pass one ecosystem to it, that we are getting a vulnerability for the same one back.

Contributing

If you would like to contribute to the development of bomber please refer to the CONTRIBUTING.md file in this repository. Please read the CODE_OF_CONDUCT.md file before contributing.

Software Bill of Materials

bomber uses Syft to generate a Software Bill of Materials every time a developer commits code to this repository (as long as Hookz is being used and has been initialized in the working directory). More information for CycloneDX is available here.

The current CycloneDX SBOM for bomber is available here.

Credits

A big thank-you to our friends at Smashicons for the bomber logo.

Big kudos to our OSS homies at Sonatype for providing a wicked tool like the Sonatype OSS Index.



Aura - Python Source Code Auditing And Static Analysis On A Large Scale

Source code auditing and static code analysis

Aura is a static analysis framework developed as a response to the ever-increasing threat of malicious packages and vulnerable code published on PyPI.

Project goals:

  • provide an automated monitoring system over uploaded packages to PyPI, alert on anomalies that can either indicate an ongoing attack or vulnerabilities in the code
  • enable an organization to conduct automated security audits of the source code and implement secure coding practices with a focus on auditing 3rd party code such as python package dependencies
  • allow researches to scan code repositories on a large scale, create datasets and perform analysis to further advance research in the area of vulnerable and malicious code dependencies

Feature list:

  • Suitable for analyzing malware with a guarantee of a zero-code execution
  • Advanced deobfuscation mechanisms by rewriting the AST tree - constant propagations, code unrolling, and other dirty tricks
  • Recursive scanning automatically unpacks archives such as zips, wheels, etc.. and scans the content
  • Support scanning also non-python files - plugins can work in a β€œraw-file” mode such as the built-in Yara integration
  • Scan for hardcoded secrets, passwords, and other sensitive information
  • Custom diff engine - you can compare changes between different data sources such as typosquatting PyPI packages to what changes were made
  • Works for both Python 2.x and Python 3.x source code
  • High performance, designed to scan the whole PyPI repository
  • Output in numerous formats such as pretty plain text, JSON, SQLite, SARIF, etc…
  • Tested on over 4TB of compressed python source code
  • Aura is able to report on code behavior such as network communication, file access, or system command execution
  • Compute the β€œAura score” telling you how trustworthy the source code/input data is
  • and much much more…

Didn't find what you are looking for? Aura's architecture is based on a robust plugin system, where you can customize almost anything, ranging from data analyzers and transport protocols to custom output formats.


Installation

# Via pip:
pip install aura-security[full]
# or build from source/git
poetry install --no-dev -E full

Or just use a prebuilt Docker image, sourcecodeai/aura:dev.

Running Aura

docker run -ti --rm sourcecodeai/aura:dev scan pypi://requests -v

Aura uses so-called URIs to identify the protocol and location to scan; if no protocol is given, the scan argument is treated as a path to a file or directory on the local system.

Diff packages:

docker run -ti --rm sourcecodeai/aura:dev diff pypi://requests pypi://requests2

Find most popular typosquatted packages (you need to call aura update to download the dataset first):

aura find-typosquatting --max-distance 2 --limit 10

Why Aura?

While there are other tools whose functionality overlaps with Aura, such as Bandit, dlint, and semgrep, the focus of these alternatives is different, which impacts their functionality and how they are used. These alternatives are mainly intended to be used like linters, integrated into IDEs, and run frequently during development, which makes it important to minimize false positives and to report with clear, actionable explanations in ideal cases.

Aura, on the other hand, reports on the behavior of the code, anomalies, and vulnerabilities with as much information as possible, at the cost of false positives. There are a lot of things reported by Aura that are not necessarily actionable by a user, but they tell you a lot about the behavior of the code, such as doing network communication, accessing sensitive files, or using mechanisms associated with obfuscation, indicating possibly malicious code. By collecting this kind of data and aggregating it together, Aura can be compared in functionality to other security systems such as antivirus, IDS, or firewalls that are essentially doing the same analysis but on a different kind of data (network communication, running processes, etc.).

Here is a quick overview of differences between Aura and other similar linters and SAST tools:

  • input data:
    • Other SAST tools - usually restricted to only python (target) source code and python version under which the tool is installed.
    • Aura can analyze both binary (or non-python code) and python source code as well. Able to analyze a mixture of python code compatible with different python versions (py2k & py3k) using the same Aura installation.
  • reporting:
    • Other SAST tools - Aims at integrating well with other systems such as IDEs, CI systems with actionable results while trying to minimize false positives to prevent overwhelming users with too many non-significant alerts.
    • Aura - reports as much information as possible that is not immediately actionable such as behavioral and anomaly analysis. The output format is designed for easy machine processing and aggregation rather than human readable.
  • configuration:
    • Other SAST tools - The tools are fine-tuned to the target project by customizing the signatures to target specific technologies used by the target project. The overriding configuration is often possible by inserting comments inside the source code such as # nosec that will suppress the alert at that position
    • Aura - it is expected that there is little to no knowledge in advance about the technologies used by code that is being scanned such as auditing a new python package for approval to be used as a dependency in a project. In most cases, it is not even possible to modify the scanned source code such as using comments to indicate to linter or aura to skip detection at that location because it is scanning a copy of that code that is hosted at some remote location.

Authors & Contributors

Donate

LICENSE

Aura framework is licensed under the GPL-3.0. Datasets produced from global scans using Aura are released under the CC BY-NC 4.0 license. Use the following citation when using Aura or data produced by Aura in research:

@misc{Carnogursky2019thesis,
AUTHOR = "CARNOGURSKY, Martin",
TITLE = "Attacks on package managers [online]",
YEAR = "2019 [cit. 2020-11-02]",
TYPE = "Bachelor Thesis",
SCHOOL = "Masaryk University, Faculty of Informatics, Brno",
SUPERVISOR = "Vit Bukac",
URL = "Available at WWW <https://is.muni.cz/th/y41ft/>",
}


dnsReaper - Subdomain Takeover Tool For Attackers, Bug Bounty Hunters And The Blue Team!


DNS Reaper is yet another sub-domain takeover tool, but with an emphasis on accuracy, speed and the number of signatures in our arsenal!

We can scan around 50 subdomains per second, testing each one with over 50 takeover signatures. This means most organisations can scan their entire DNS estate in less than 10 seconds.


You can use DNS Reaper as an attacker or bug hunter!

You can run it by providing a list of domains in a file, or a single domain on the command line. DNS Reaper will then scan the domains with all of its signatures, producing a CSV file.

You can use DNS Reaper as a defender!

You can run it by letting it fetch your DNS records for you! Yes that's right, you can run it with credentials and test all your domain config quickly and easily. DNS Reaper will connect to the DNS provider and fetch all your records, and then test them.

We currently support AWS Route53, Cloudflare, and Azure. Documentation on adding your own provider can be found here

You can use DNS Reaper as a DevSecOps Pro!

Punk Security are a DevSecOps company, and DNS Reaper has its roots in modern security best practice.

You can run DNS Reaper in a pipeline, feeding it a list of domains that you intend to provision, and it will exit Non-Zero if it detects a takeover is possible. You can prevent takeovers before they are even possible!
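In a scripted CI step, that non-zero exit code is all you need to propagate. A minimal Python wrapper could look like this (planned-domains.txt is a hypothetical input file; --pipeline is the flag shown in the usage below):

import os, subprocess, sys

# Fail the CI job if DNS Reaper detects a possible takeover
# (--pipeline makes it exit non-zero on detection).
result = subprocess.run([
    "docker", "run", "-v", f"{os.getcwd()}:/etc/dnsreaper",
    "punksecurity/dnsreaper", "file",
    "--filename", "/etc/dnsreaper/planned-domains.txt",
    "--pipeline",
])
sys.exit(result.returncode)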

Usage

To run DNS Reaper, you can use the docker image or run it with python 3.10.

Findings are returned in the output and more detail is provided in a local "results.csv" file. We also support json output as an option.

Run it with docker

docker run punksecurity/dnsreaper --help

Run it with python

pip install -r requirements.txt
python main.py --help

Common commands

  • Scan AWS account:

    docker run punksecurity/dnsreaper aws --aws-access-key-id <key> --aws-access-key-secret <secret>

    For more information, see the documentation for the aws provider

  • Scan all domains from file:

    docker run -v $(pwd):/etc/dnsreaper punksecurity/dnsreaper file --filename /etc/dnsreaper/<filename>

  • Scan single domain

    docker run punksecurity/dnsreaper single --domain <domain>

  • Scan single domain and output to stdout:

    You should either redirect the stderr output or save stdout output with >

    docker run punksecurity/dnsreaper single --domain <domain> --out stdout --out-format=json > output

Full usage

[Punk Security ASCII-art banner] PRESENTS
DNS Reaper ☠️

Scan all your DNS records for subdomain takeovers!

usage:
.\main.py provider [options]

output:
findings output to screen and (by default) results.csv

help:
.\main.py --help

providers:
> aws - Scan multiple domains by fetching them from AWS Route53
> azure - Scan multiple domains by fetching them from Azure DNS services
> bind - Read domains from a dns BIND zone file, or path to multiple
> cloudflare - Scan multiple domains by fetching them from Cloudflare
> file - Read domains from a file, one per line
> single - Scan a single domain by providing a domain on the commandline
> zonetransfer - Scan multiple domains by fetching records via DNS zone transfer

positional arguments:
{aws,azure,bind,cloudflare,file,single,zonetransfer}

options:
-h, --help Show this help message and exit
--out OUT Output file (default: results) - use 'stdout' to stream out
--out-format {csv,json}
--resolver RESOLVER
Provide a custom DNS resolver (or multiple separated by commas)
--parallelism PARALLELISM
Number of domains to test in parallel - too high and you may see odd DNS results (default: 30)
--disable-probable Do not check for probable conditions
--enable-unlikely Check for more conditions, but with a high false positive rate
--signature SIGNATURE
Only scan with this signature (multiple accepted)
--exclude-signature EXCLUDE_SIGNATURE
Do not scan with this signature (multiple accepted)
--pipeline Exit Non-Zero on detection (used to fail a pipeline)
-v, --verbose -v for verbose, -vv for extra verbose
--nocolour Turns off coloured text

aws:
Scan multiple domains by fetching them from AWS Route53

--aws-access-key-id AWS_ACCESS_KEY_ID
Optional
--aws-access-key-secret AWS_ACCESS_KEY_SECRET
Optional

azure:
Scan multiple domains by fetching them from Azure DNS services

--az-subscription-id AZ_SUBSCRIPTION_ID
Required
--az-tenant-id AZ_TENANT_ID
Required
--az-client-id AZ_CLIENT_ID
Required
--az-client-secret AZ_CLIENT_SECRET
Required

bind:
Read domains from a dns BIND zone file, or path to multiple

--bind-zone-file BIND_ZONE_FILE
Required

cloudflare:
Scan multiple domains by fetching them from Cloudflare

--cloudflare-token CLOUDFLARE_TOKEN
Required

file:
Read domains from a file, one per line

--filename FILENAME Required

single:
Scan a single domain by providing a domain on the commandline

--domain DOMAIN Required

zonetransfer:
Scan multiple domains by fetching records via DNS zone transfer

--zonetransfer-nameserver ZONETRANSFER_NAMESERVER
Required
--zonetransfer-domain ZONETRANSFER_DOMAIN
Required


Packj - Large-Scale Security Analysis Platform To Detect Malicious/Risky Open-Source Packages


Packj (pronounced package) is a command line (CLI) tool to vet open-source software packages for "risky" attributes that make them vulnerable to supply chain attacks. This is the tool behind our large-scale security analysis platform Packj.dev that continuously vets packages and provides free reports.


How to use

Packj accepts two input args:

  • name of the registry or package manager, pypi, npm, or rubygems.
  • name of the package to be vetted

Packj supports vetting of PyPI, NPM, and RubyGems packages. It performs static code analysis and checks for several metadata attributes such as release timestamps, author email, downloads, dependencies. Packages with expired email domains, large release time gap, sensitive APIs, etc. are flagged as risky for security reasons.

Packj also analyzes public repo code as well as metadata (e.g., stars, forks). By comparing the repo description and package title, you can be sure if the package indeed has been created from the repo to mitigate any starjacking attacks.
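As an illustration of the kind of metadata heuristics described above, the sketch below re-implements a single check (release age) against PyPI's public JSON API; the 730-day threshold is ours, not Packj's:

import json, urllib.request
from datetime import datetime, timezone

# One packj-style metadata heuristic, reimplemented for illustration:
# flag a PyPI package whose latest release is old.
pkg = "requests"
with urllib.request.urlopen(f"https://pypi.org/pypi/{pkg}/json") as resp:
    meta = json.load(resp)

version = meta["info"]["version"]
uploaded = datetime.fromisoformat(
    meta["releases"][version][0]["upload_time_iso_8601"].replace("Z", "+00:00")
)
age = (datetime.now(timezone.utc) - uploaded).days
print(f"{pkg} {version}: last release {age} days ago ->",
      "ALERT" if age > 730 else "OK")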

Containerized

The best way to use Packj is to run it inside Docker (or Podman) container. You can pull our latest image from DockerHub to get started.

docker pull ossillate/packj:latest

$ docker run --mount type=bind,source=/tmp,target=/tmp ossillate/packj:latest npm browserify
[+] Fetching 'browserify' from npm...OK [ver 17.0.0]
[+] Checking version...ALERT [598 days old]
[+] Checking release history...OK [484 version(s)]
[+] Checking release time gap...OK [68 days since last release]
[+] Checking author...OK [mail@substack.net]
[+] Checking email/domain validity...ALERT [expired author email domain]
[+] Checking readme...OK [26838 bytes]
[+] Checking homepage...OK [https://github.com/browserify/browserify#readme]
[+] Checking downloads...OK [2.2M weekly]
[+] Checking repo_url URL...OK [https://github.com/browserify/browserify]
[+] Checking repo data...OK [stars: 14077, forks: 1236]
[+] Checking repo activity...OK [commits: 2290, contributors: 207, tags: 413]
[+] Checking for CVEs...OK [none found]
[+] Checking dependencies...ALERT [48 found]
[+] Downloading package 'browserify' (ver 17.0.0) from npm...OK [163.83 KB]
[+] Analyzing code...ALERT [needs 3 perms: process,file,codegen]
[+] Checking files/funcs...OK [429 files (383 .js), 744 funcs, LoC: 9.7K]
=============================================
[+] 5 risk(s) found, package is undesirable!
=> Complete report: /tmp/npm-browserify-17.0.0.json
{
"undesirable": [
"old package: 598 days old",
"invalid or no author email: expired author email domain",
"generates new code at runtime",
"reads files and dirs",
"forks or exits OS processes",
]
}

Specific package versions to be vetted can be specified using ==. Please refer to the example below:

$ docker run --mount type=bind,source=/tmp,target=/tmp ossillate/packj:latest pypi requests==2.18.4
[+] Fetching 'requests' from pypi...OK [ver 2.18.4]
[+] Checking version...ALERT [1750 days old]
[+] Checking release history...OK [142 version(s)]
[+] Checking release time gap...OK [14 days since last release]
[+] Checking author...OK [me@kennethreitz.org]
[+] Checking email/domain validity...OK [me@kennethreitz.org]
[+] Checking readme...OK [49006 bytes]
[+] Checking homepage...OK [http://python-requests.org]
[+] Checking downloads...OK [50M weekly]
[+] Checking repo_url URL...OK [https://github.com/psf/requests]
[+] Checking repo data...OK [stars: 47547, forks: 8758]
[+] Checking repo activity...OK [commits: 6112, contributors: 725, tags: 144]
[+] Checking for CVEs...ALERT [2 found]
[+] Checking dependencies...OK [9 direct]
[+] Downloading package 'requests' (ver 2.18.4) from pypi...OK [123.27 KB]
[+] Analyzing code...ALERT [needs 4 perms: codegen,process,file,network]
[+] Checking files/funcs...OK [47 files (33 .py), 578 funcs, LoC: 13.9K]
=============================================
[+] 6 risk(s) found, package is undesirable, vulnerable!
{
  "undesirable": [
    "old package: 1744 days old",
    "invalid or no homepage: insecure webpage",
    "generates new code at runtime",
    "fetches data over the network",
    "reads files and dirs"
  ],
  "vulnerable": [
    "contains CVE-2018-18074,CVE-2018-18074"
  ]
}
=> Complete report: /tmp/pypi-requests-2.18.4.json
=> View pre-vetted package report at https://packj.dev/package/PyPi/requests/2.18.4

Non-containerized

Alternatively, you can install Python/Ruby dependencies locally and test it.

NOTE

  • Packj has only been tested on Linux.
  • Requires Python3 and Ruby. API analysis will fail if used with Python2.
  • You will have to install Python and Ruby dependencies before using the tool:
    • pip install -r requirements.txt
    • gem install google-protobuf:3.21.2 rubocop:1.31.1

$ python3 main.py npm eslint
[+] Fetching 'eslint' from npm...OK [ver 8.16.0]
[+] Checking version...OK [10 days old]
[+] Checking release history...OK [305 version(s)]
[+] Checking release time gap...OK [15 days since last release]
[+] Checking author...OK [nicholas+npm@nczconsulting.com]
[+] Checking email/domain validity...OK [nicholas+npm@nczconsulting.com]
[+] Checking readme...OK [18234 bytes]
[+] Checking homepage...OK [https://eslint.org]
[+] Checking downloads...OK [23.8M weekly]
[+] Checking repo_url URL...OK [https://github.com/eslint/eslint]
[+] Checking repo data...OK [stars: 20669, forks: 3689]
[+] Checking repo activity...OK [commits: 8447, contributors: 1013, tags: 302]
[+] Checking for CVEs...OK [none found]
[+] Checking dependencies...ALERT [35 found]
[+] Downloading package 'eslint' (ver 8.16.0) from npm...OK [490.14 KB]
[+] Analyzing code...ALERT [needs 2 perms: codegen,file]
[+] Checking files/funcs...OK [395 files (390 .js), 1022 funcs, LoC: 76.3K]
=============================================
[+] 2 risk(s) found, package is undesirable!
{
  "undesirable": [
    "generates new code at runtime",
    "reads files and dirs: ['package/lib/cli-engine/load-rules.js:37', 'package/lib/cli-engine/file-enumerator.js:142']"
  ]
}
=> Complete report: /tmp/npm-eslint-8.16.0.json

How it works

  • It first downloads the metadata from the registry using their APIs and analyzes it for "risky" attributes.
  • To perform API analysis, the package is downloaded from the registry using their APIs into a temp dir. Then, Packj performs static code analysis to detect API usage. API analysis is based on MalOSS, a research project from our group at Georgia Tech.
  • Vulnerabilities (CVEs) are checked by pulling info from the OSV database (https://osv.dev); a minimal query sketch follows this list.
  • PyPI and NPM package download counts are fetched from pypistats and npmjs, respectively.
  • All risks detected are aggregated and reported.
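As an illustration of the CVE step, the sketch below queries the public OSV v1 API directly. The endpoint and request shape follow OSV's documented API; the helper name and output formatting are assumptions, and this is not Packj's actual code.

import requests

def known_vulns(name: str, version: str, ecosystem: str = "PyPI"):
    # OSV query endpoint; the response carries a "vulns" list when matches exist.
    payload = {"version": version, "package": {"name": name, "ecosystem": ecosystem}}
    resp = requests.post("https://api.osv.dev/v1/query", json=payload, timeout=10)
    resp.raise_for_status()
    return [v["id"] for v in resp.json().get("vulns", [])]

ids = known_vulns("requests", "2.18.4")
print(f"ALERT [{len(ids)} found]" if ids else "OK [none found]")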

Risky attributes

The design of Packj is guided by our study of 651 malware samples of documented open-source software supply chain attacks. Specifically, we have empirically identified a number of risky code and metadata attributes that make a package vulnerable to supply chain attacks.

For instance, we flag inactive or unmaintained packages that no longer receive security fixes. Inspired by Android app runtime permissions, Packj uses a permission-based security model to offer control and code transparency to developers. Packages that invoke sensitive operating system functionality, such as file accesses and remote network communication, are flagged as risky because this functionality could leak sensitive data.

Some of the attributes we vet for include:

Attribute | Type | Description | Reason
Release date | Metadata | Version release date, to flag old or abandoned packages | Old or unmaintained packages do not receive security fixes
OS or lang APIs | Code | Use of sensitive APIs, such as exec and eval | Malware uses APIs from the operating system or language runtime to perform sensitive operations (e.g., read SSH keys)
Contributors' email | Metadata | Email addresses of the contributors | Incorrect or invalid email addresses suggest lack of 2FA
Source repo | Metadata | Presence and validity of a public source repo | Absence of a public repo means no easy way to audit or review the source code publicly

The full list of the attributes we track can be viewed in threats.csv.

These attributes have been identified as risky by several other researchers [1, 2, 3] as well.

How to customize

Packj has been developed with the goal of assisting developers in identifying and reviewing potential supply chain risks in packages.

However, since the degree of perceived security risk from an untrusted package depends on the specific security requirements, Packj can be customized according to your threat model. For instance, a package with no 2FA may be perceived to pose greater security risks to some developers, compared to others who may be more willing to use such packages for the functionality offered. Given the volatile nature of the problem, providing customized and granular risk measurement is one of our goals.

Packj can be customized to minimize noise and reduce alert fatigue by simply commenting out unwanted attributes in threats.csv, as sketched below.
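Assuming the common convention that commented-out rows in threats.csv start with # (an assumption for this sketch, not a documented Packj contract), a loader that honors such comments could look like:

import csv

def enabled_attributes(path="threats.csv"):
    # Skip blank lines and lines commented out with '#', then parse the rest as CSV.
    with open(path, newline="") as fh:
        active = [ln for ln in fh if ln.strip() and not ln.lstrip().startswith("#")]
    return list(csv.DictReader(active))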

Malware found

We found over 40 malicious packages on PyPI using this tool. A number of them have since been taken down. Refer to the example below:

$ python3 main.py pypi krisqian
[+] Fetching 'krisqian' from pypi...OK [ver 0.0.7]
[+] Checking version...OK [256 days old]
[+] Checking release history...OK [7 version(s)]
[+] Checking release time gap...OK [1 days since last release]
[+] Checking author...OK [KrisWuQian@baidu.com]
[+] Checking email/domain validity...OK [KrisWuQian@baidu.com]
[+] Checking readme...ALERT [no readme]
[+] Checking homepage...OK [https://www.bilibili.com/bangumi/media/md140632]
[+] Checking downloads...OK [13 weekly]
[+] Checking repo_url URL...OK [None]
[+] Checking for CVEs...OK [none found]
[+] Checking dependencies...OK [none found]
[+] Downloading package 'KrisQian' (ver 0.0.7) from pypi...OK [1.94 KB]
[+] Analyzing code...ALERT [needs 3 perms: process,network,file]
[+] Checking files/funcs...OK [9 files (2 .py), 6 funcs, LoC: 184]
=============================================
[+] 6 risk(s) found, package is undesirable!
{
  "undesirable": [
    "no readme",
    "only 45 weekly downloads",
    "no source repo found",
    "generates new code at runtime",
    "fetches data over the network: ['KrisQian-0.0.7/setup.py:40', 'KrisQian-0.0.7/setup.py:50']",
    "reads files and dirs: ['KrisQian-0.0.7/setup.py:59', 'KrisQian-0.0.7/setup.py:70']"
  ]
}
=> Complete report: pypi-KrisQian-0.0.7.json
=> View pre-vetted package report at https://packj.dev/package/PyPi/KrisQian/0.0.7

Packj flagged KrisQian (v0.0.7) as suspicious due to the absence of a source repo and its use of sensitive APIs (network, code generation) at package installation time (in setup.py). We took a deeper look and found the package to be malicious. Please find our detailed analysis at https://packj.dev/malware/krisqian.

More examples of malware we found are listed at https://packj.dev/malware. Please reach out to us at oss@ossillate.com for the full list.

Resources

To learn more about the Packj tool or open-source software supply chain attacks, refer to our:

  • The vetting tool behind our large-scale security analysis platform to detect malicious/risky open-source packages

Upcoming talks

Feature roadmap

  • Add support for other language ecosystems. Rust support is a work in progress and will be available in the last week of July '22.
  • Add functionality to detect several other "risky" code and metadata attributes.
  • Packj currently performs only static code analysis; we are working on adding support for dynamic analysis (WIP, ETA: end of summer).

Team

Packj has been developed by cybersecurity researchers at Ossillate Inc. and external collaborators to help developers mitigate risks of supply chain attacks when sourcing untrusted third-party open-source software dependencies. We thank our developers and collaborators.

We welcome code contributions. Join our discord community for discussion and feature requests.

FAQ

  • What Package Managers (Registries) are supported?

Packj can currently vet NPM, PyPI, and RubyGems packages for "risky" attributes. We are adding support for Rust.

  • Does it work on obfuscated calls? For example, a base64-encoded string that gets decoded and then passed to a shell?

This is a very common malicious behavior. Packj detects code obfuscation as well as spawning of shell commands (exec system call). For example, Packj can flag the use of getattr() and eval() APIs as they indicate "runtime code generation"; a developer can then go and take a deeper look. See main.py for details. A toy version of such a check is sketched below.
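For intuition only, here is a toy detector built on Python's ast module. Packj's real analyzer (based on MalOSS) is far more thorough and covers multiple languages; the category labels and helper name here are assumptions for the example.

import ast

# Toy mapping from API names to permission-style labels (labels are assumed).
SENSITIVE = {"eval": "codegen", "exec": "codegen",
             "getattr": "codegen", "b64decode": "decode"}

def flag_calls(source, filename="<src>"):
    alerts = []
    for node in ast.walk(ast.parse(source, filename)):
        if isinstance(node, ast.Call):
            fn = node.func
            # Handle both bare calls (eval(...)) and attribute calls (base64.b64decode(...)).
            name = fn.id if isinstance(fn, ast.Name) else (
                fn.attr if isinstance(fn, ast.Attribute) else None)
            if name in SENSITIVE:
                alerts.append(f"{filename}:{node.lineno} {name} ({SENSITIVE[name]})")
    return alerts

print(flag_calls("import base64\neval(base64.b64decode('cHJpbnQoMSk='))"))
# -> ['<src>:2 eval (codegen)', '<src>:2 b64decode (decode)']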

  • Does this work at the system call level, where it would detect e.g. any attempt to open ~/.aws/credentials, or does it rely on heuristic analysis of the code itself, which will always be able to be "coded around" by the malware authors?

Packj currently uses static code analysis to derive permissions (e.g., file/network accesses). Therefore, it can detect open() calls if used by the malware directly (i.e., not obfuscated in a base64-encoded string). But Packj can also point out such base64 decode calls. Fortunately, malware has to use these APIs (read, open, decode, eval, etc.) for its functionality -- there is no getting around them. Having said that, sophisticated malware can hide itself better, so dynamic analysis must be performed for completeness. We are incorporating strace-based dynamic analysis (containerized) to collect system calls. See the roadmap for details.



LiveTargetsFinder - Generates Lists Of Live Hosts And URLs For Targeting, Automating The Usage Of MassDNS, Masscan And Nmap To Filter Out Unreachable Hosts And Gather Service Information


Generates lists of live hosts and URLs for targeting, automating the usage of MassDNS, Masscan, and nmap to filter out unreachable hosts

Given an input file of domain names, this script will automate the usage of MassDNS to filter out unresolvable hosts, and then pass the results on to Masscan to confirm that the hosts are reachable and on which ports. The script will then generate a list of full URLs to be used for further targeting (passing into tools like gobuster or dirsearch, or making HTTP requests), a list of reachable domain names, and a list of reachable IP addresses. As an optional last step, you can run an nmap version scan on this reduced host list, verifying that the earlier reachable hosts are up, and gathering service information from their open ports.


Overview

This script is especially useful for large domain sets, such as subdomain enumerations gathered from an apex domain with thousands of subdomains. With these large lists, an nmap scan would simply take too long. The goal here is to first use the less accurate, but much faster, MassDNS to quickly reduce the size of your input list by removing unresolvable domains. Then, Masscan will be able to take the output from MassDNS, and further confirm that the hosts are reachable, and on which ports. The script will then parse these results and generate lists of the live hosts discovered.

Now, the list of hosts should be reduced enough to be suitable for further scanning/testing. If you want to go a step further, you can tell the script to run an nmap scan on the list of reachable hosts, which should take a more reasonable amount of time with the shorter list. After running nmap, any false positives given by Masscan will be filtered out. Raw nmap output will be stored in the regular nmap XML format, and additional information from the version detection will be added to a SQLite database. A sketch of the three-stage chain follows this paragraph.
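To make the chaining concrete, here is a rough sketch of the three stages as subprocess calls, using the argument lists documented in the Usage section below. The intermediate file names (massdnsOutput, ipFile, hostsAlive) and the -iL flag passed to nmap are illustrative assumptions; this is not the script's actual code.

import subprocess

def run_pipeline(domain_file: str):
    # 1. MassDNS: drop unresolvable domains (ndjson output via -o J).
    subprocess.run(["./massdns/bin/massdns", "-c", "25", "-o", "J",
                    "-r", "./massdns/lists/resolvers.txt", "-s", "100",
                    "-w", "massdnsOutput", "-t", "A", domain_file], check=True)
    # (Parsing massdnsOutput into ipFile is omitted here; see the Output section.)
    # 2. Masscan: confirm the surviving IPs answer on ports 80/443 (needs root).
    subprocess.run(["./masscan/bin/masscan", "-iL", "ipFile",
                    "-oD", "masscanOutput", "--open-only",
                    "--max-rate", "5000", "-p80,443", "--max-retries", "10"],
                   check=True)
    # 3. Optional nmap version-detection scan over the reduced host list.
    subprocess.run(["nmap", "--script",
                    "http-server-header.nse,http-devframework.nse,http-headers",
                    "-sV", "-T4", "-p80,443", "-iL", "hostsAlive",
                    "-oX", "nmap.xml"], check=True)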

Installation

If using the nmap scan option, this tool assumes that you already have nmap installed.

Note: Running the install script is only needed if you do not already have MassDNS and Masscan installed, or if you would like to reinstall them inside this repo. If you do not run the script, you can provide the paths to the respective executables as arguments. The script additionally expects that the resolvers list included with MassDNS be located at {massDNS_directory}/lists/resolvers.txt.

git clone https://github.com/allyomalley/LiveTargetsFinder.git
cd LiveTargetsFinder
sudo pip3 install -r requirements.txt

(OPTIONAL)

chmod +x install_deps.sh
./install_deps.sh

If you do not already have MassDNS and Masscan installed, and would prefer to install them yourself, see the documentation for instructions:

MassDNS

Masscan

I have only tested this script on macOS and Linux - the Python script itself should work on a Windows machine, though I believe the installation for MassDNS and Masscan will differ.

Usage

python3 liveTargetsFinder.py [domainList] [options]
Flag | Description | Default | Required
--target-list | Input file containing a list of domains, e.g., google.com | (none) | Yes
--massdns-path | Path to the MassDNS executable, if non-default | ./massdns/bin/massdns | No
--masscan-path | Path to the Masscan executable, if non-default | ./masscan/bin/masscan | No
--nmap | Run an nmap version detection scan on the gathered live hosts | Disabled | No
--db-path | If using the --nmap option, the path to the database to append to (created if it does not exist) | output/liveTargetsFinder.sqlite3 | No
  • Note that the Masscan and MassDNS settings are hardcoded inside liveTargetsFinder.py. Feel free to edit them (lines 87 + 97).
  • Since this tool was designed with very large lists in mind, I tweaked many of the settings to try to balance speed, accuracy, and network constraints - these can all be adjusted to suit your needs and bandwidth.
  • The default Masscan settings only scan ports 80 and 443.
    • -s (--hashmap-size) in particular was chosen for performance reasons - you will likely be able to increase this.
    • Full MassDNS arguments:
      • -c 25 -o J -r ./massdns/lists/resolvers.txt -s 100 -w massdnsOutput -t A targetHosts
      • Documentation
  • Another setting of note is the --max-rate argument for Masscan - you will likely want to adjust this.
    • Full Masscan arguments:
      • -iL ipFile -oD masscanOutput --open-only --max-rate 5000 -p80,443 --max-retries 10
      • Documentation
  • The default nmap settings only scan ports 80 and 443, with timing -T4 and a few NSE scripts.
    • Full nmap arguments:
      • --script http-server-header.nse,http-devframework.nse,http-headers -sV -T4 -p80,443 -oX {output.xml}

Example

If you ran the install script:

python3 liveTargetsFinder.py --target-list victim_domains.txt

If you did NOT run the install script:

python3 liveTargetsFinder.py --target-list victim_domains.txt --massdns-path ../massdns/bin/massdns --masscan-path ../masscan/bin/masscan 

Perform an nmap scan and write to/append to the default DB path (liveTargetsFinder.sqlite3)

python3 liveTargetsFinder.py --target-list victim_domains.txt --nmap

Perform an nmap scan and write to/append to the specified database

python3 liveTargetsFinder.py --target-list victim_domains.txt --nmap --db-path serviceinfo_victim.sqlite3

Output

Input: victimDomains.txt

File | Description | Examples
output/victimDomains_targetUrls.txt | List of reachable, live URLs | https://github.com, http://github.com
output/victimDomains_domains_alive.txt | List of live domain names | github.com, google.com
output/victimDomains_ips_alive.txt | List of live IP addresses | 10.1.0.200, 52.3.1.166
Supplied or default DB path | SQLite database storing live hosts and information about their running services |
output/victimDomains_massdns.txt | The raw output from MassDNS, in ndjson format |
output/victimDomains_masscan.txt | The raw output from Masscan, in ndjson format |
output/victimDomains_nmap.txt | The raw output from nmap, in XML format |
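Because the raw MassDNS output is ndjson (one JSON object per line), it is straightforward to post-process. The sketch below pulls resolved name-to-IP pairs out of it; the field names follow MassDNS's JSON output format but may vary by version, so treat the keys as an assumption.

import json

def live_records(path):
    # Yield (domain, ip) pairs for successfully resolved A records.
    with open(path) as fh:
        for line in fh:
            rec = json.loads(line)
            if rec.get("status") != "NOERROR":
                continue
            for ans in rec.get("data", {}).get("answers", []):
                if ans.get("type") == "A":
                    yield rec["name"].rstrip("."), ans["data"]

for name, ip in live_records("output/victimDomains_massdns.txt"):
    print(name, ip)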

