Porch Pirate started as a tool to quickly uncover Postman secrets, and has slowly begun to evolve into a multi-purpose reconnaissance / OSINT framework for Postman. While existing tools are great proofs of concept, they only attempt to identify very specific keywords as "secrets", in very limited locations, and with no consideration of recon beyond secrets. We realized we required capabilities that were "secret-agnostic", with enough flexibility to capture false positives that still provide offensive value.
Porch Pirate enumerates and presents sensitive results (global secrets, unique headers, endpoints, query parameters, authorization, etc.) from publicly accessible Postman entities such as workspaces, collections, requests, and users or teams.
python3 -m pip install porch-pirate
The Porch Pirate client can be used to conduct fairly complete reviews of public Postman entities quickly and simply. There are intended workflows and particular keywords that typically maximize results. These methodologies are described on our blog: Plundering Postman with Porch Pirate.
Porch Pirate supports the following arguments, which can be run against collections, workspaces, or users.
--globals
--collections
--requests
--urls
--dump
--raw
--curl
porch-pirate -s "coca-cola.com"
By default, Porch Pirate will display globals from all active and inactive environments if they are defined in the workspace. Provide the `-w` argument with the workspace ID (found by performing a simple search, or an automatic search dump) to extract the workspace's globals, along with other information.
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8
When an interesting result has been found with a simple search, we can provide the workspace ID to the `-w` argument with the `--dump` command to begin extracting information from the workspace and its collections.
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --dump
Porch Pirate can be supplied a simple search term, followed by the `--globals` argument. Porch Pirate will dump all relevant workspaces tied to the results discovered in the simple search, but only if there are globals defined. This is particularly useful for quickly identifying potentially interesting workspaces to dig into further.
porch-pirate -s "shopify" --globals
Porch Pirate can be supplied a simple search term, followed by the `--dump` argument. Porch Pirate will dump all relevant workspaces and collections tied to the results discovered in the simple search. This is particularly useful for quickly sifting through potentially interesting results.
porch-pirate -s "coca-cola.com" --dump
A particularly useful way to use Porch Pirate is to extract all URLs from a workspace and export them to another tool for fuzzing.
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --urls
Porch Pirate will recursively extract all URLs from workspaces and their collections related to a simple search term.
porch-pirate -s "coca-cola.com" --urls
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --collections
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --requests
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --raw
porch-pirate -w WORKSPACE_ID
porch-pirate -c COLLECTION_ID
porch-pirate -r REQUEST_ID
porch-pirate -u USERNAME/TEAMNAME
Porch Pirate can build curl requests when provided with a request ID for easier testing.
porch-pirate -r 11055256-b1529390-18d2-4dce-812f-ee4d33bffd38 --curl
porch-pirate -s coca-cola.com --proxy 127.0.0.1:8080
from porchpirate import porchpirate

p = porchpirate()
print(p.search('coca-cola.com'))
from porchpirate import porchpirate

p = porchpirate()
print(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
import json

from porchpirate import porchpirate

p = porchpirate()
collections = json.loads(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
for collection in collections['data']:
    requests = collection['requests']
    for r in requests:
        request_data = p.request(r['id'])
        print(request_data)
from porchpirate import porchpirate

p = porchpirate()
print(p.workspace_globals('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
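These calls compose naturally. Below is a minimal sketch of a recursive globals-from-search workflow (in the spirit of `recursive_globals_from_search.py` listed below); the `data` and `id` field names in the search response are assumptions modeled on the collections example above:

import json

from porchpirate import porchpirate

p = porchpirate()
results = json.loads(p.search('coca-cola.com'))
for workspace in results['data']:  # 'data' is assumed to hold the search hits
    # 'id' is assumed to be the workspace identifier field
    print(p.workspace_globals(workspace['id']))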
Other library usage examples can be located in the `examples` directory, which contains the following examples:
dump_workspace.py
format_search_results.py
format_workspace_collections.py
format_workspace_globals.py
get_collection.py
get_collections.py
get_profile.py
get_request.py
get_statistics.py
get_team.py
get_user.py
get_workspace.py
recursive_globals_from_search.py
request_to_curl.py
search.py
search_by_page.py
workspace_collections.py
Service that scans your Infrastructure as Code for common vulnerabilities.
Aspect | Information |
---|---|
Tool name | IaC Scan Runner |
Docker image | xscanner/runner |
PyPI package | iac-scan-runner |
Documentation | docs |
Contact us | xopera@xlab.si |
The IaC Scan Runner is a REST API service used to scan IaC (Infrastructure as Code) packages and perform various code checks in order to find possible vulnerabilities and improvements. Explore the docs for more info.
This section explains how to run the REST API.
You can run the REST API using a public xscanner/runner Docker image as follows:
# run IaC Scan Runner REST API in a Docker container and
# navigate to localhost:8080/swagger or localhost:8080/redoc
$ docker run --name iac-scan-runner -p 8080:80 xscanner/runner
Or you can build the image locally and run it as follows:
# build Docker container (it will take some time)
$ docker build -t iac-scan-runner .
# run IaC Scan Runner REST API in a Docker container and
# navigate to localhost:8080/swagger or localhost:8080/redoc
$ docker run --name iac-scan-runner -p 8080:80 iac-scan-runner
To run using the IaC Scan Runner CLI:
# install the CLI
$ python3 -m venv .venv && . .venv/bin/activate
(.venv) $ pip install iac-scan-runner
# print OpenAPI specification
(.venv) $ iac-scan-runner openapi
# install prerequisites
(.venv) $ iac-scan-runner install
# run IaC Scan Runner REST API
(.venv) $ iac-scan-runner run
To run locally from source:
# Export env variables
export MONGODB_CONNECTION_STRING=mongodb://localhost:27017
export SCAN_PERSISTENCE=enabled
export USER_MANAGEMENT=enabled
# Setup MongoDB
$ docker run --name mongodb -p 27017:27017 mongo
# install prerequisites
$ python3 -m venv .venv && . .venv/bin/activate
(.venv) $ pip install -r requirements.txt
(.venv) $ ./install-checks.sh
# run IaC Scan Runner REST API (add the --reload flag to apply code changes on the fly)
(.venv) $ uvicorn src.iac_scan_runner.api:app
This section shows one possible deployment, with short examples of how to use the API calls.
First, we clone the IaC Scan Runner repository and run the API.
$ git clone https://github.com/xlab-si/iac-scan-runner.git
$ docker compose up
After this is done, you can use the different API endpoints by calling localhost:8000. You can also navigate to localhost:8000/swagger or localhost:8000/redoc and test all the API endpoints there. In this example, we will use curl for calling the API endpoints.
Create a new project:
curl -X 'POST' \
  'http://0.0.0.0:8000/project?creator_id=test' \
  -H 'accept: application/json' \
  -d ''
A project id will be returned. For this example, the project id is 1e7b2a91-2896-40fd-8d53-83db56088026.
Checks can be enabled or disabled per project; for example, to disable the ansible-lint check:
curl -X 'PUT' \
  'http://0.0.0.0:8000/projects/1e7b2a91-2896-40fd-8d53-83db56088026/checks/ansible-lint/disable' \
  -H 'accept: application/json'
Finally, submit an IaC package (a zip archive) for scanning:
curl -X 'POST' \
  'http://0.0.0.0:8000/projects/1e7b2a91-2896-40fd-8d53-83db56088026/scan?scan_response_type=json' \
  -H 'accept: application/json' \
  -H 'Content-Type: multipart/form-data' \
  -F 'iac=@YOUR.zip;type=application/zip'
That is it.
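For scripting, the same walkthrough can be driven from Python with the requests library. This is only a sketch: the endpoint paths and parameters are taken from the curl calls above, and it assumes the create-project response body contains just the new project id.

import requests

BASE = 'http://0.0.0.0:8000'

# Create a new project (assumption: the response body is the new project id)
project_id = requests.post(f'{BASE}/project', params={'creator_id': 'test'}).json()

# Disable a check for this project
requests.put(f'{BASE}/projects/{project_id}/checks/ansible-lint/disable')

# Submit an IaC package for scanning and print the JSON results
with open('YOUR.zip', 'rb') as f:
    response = requests.post(
        f'{BASE}/projects/{project_id}/scan',
        params={'scan_response_type': 'json'},
        files={'iac': ('YOUR.zip', f, 'application/zip')},
    )
print(response.json())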
At a certain point, it might be required to include new check tools within the scan workflow, with the aim of providing wider coverage of IaC standards and project types. This subsection identifies and describes the sequence of steps required for that purpose. For now the steps have to be performed manually, as described below, but the plan is to automate this procedure in the future via the API, and to provide a user-friendly interface that aids the user in importing new tools into the catalogue that makes up the scan workflow.
Step 1: Adding a tool-specific class to the checks directory. First, add a new tool-specific Python class to the checks directory inside IaC Scan Runner's source code: iac-scan-runner/src/iac_scan_runner/checks/new_tool.py
The new tool's class inherits from the existing Check class, which provides a generalization over the scan workflow tools, and must implement the methods that Check declares.
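As an illustration, here is a minimal sketch of such a class. The import path, constructor parameters, and method name are assumptions for illustration only; the exact Check interface should be taken from the existing classes in the checks directory:

# new_tool.py - illustrative sketch of a tool-specific check class
from iac_scan_runner.check import Check  # assumed import path for the Check base class


class NewToolCheck(Check):
    def __init__(self):
        # name and description are assumed constructor parameters
        super().__init__('new_tool', 'Runs the (hypothetical) new_tool scanner')

    def run(self, directory: str) -> str:
        # Assumed required method: invoke the underlying scanner on the
        # unpacked IaC package and return its raw output, which the
        # ResultsSummary step (Step 4) later inspects for pass/fail strings.
        ...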
Step 2: Adding a check tool class instance within the ScanRunner constructor. Once the new class derived from Check is added to the IaC Scan Runner's source code, the source of its main class, ScanRunner, has to be modified as well: first import the tool-specific class, then create a new check tool-specific class instance and add it to the dictionary of IaC checks inside def init_checks(self).

A. Import the check tool class:

from iac_scan_runner.checks.new_tool import NewToolCheck

B. Create a new instance of the check tool object inside init_checks:

"""Initiate predefined check objects"""
new_tool = NewToolCheck()

C. Add it to the self.iac_checks dictionary inside init_checks:
self.iac_checks = {
    new_tool.name: new_tool,
    ...
}
Step 3: Adding the check tool to the compatibility matrix inside the Compatibility class. Next, inside the file src/iac_scan_runner/compatibility.py, the dictionary representing the compatibility matrix should be extended as well. There are two possible cases: (a) a new file type is added as a key, together with a list of relevant tools as its value; (b) the new tool is added to the compatibility list of an existing file type.
compatibility_matrix = {
    "new_type": ["new_tool_1", "new_tool_2"],
    ...
    "old_typeK": ["tool_1", ... "tool_N", "new_tool_3"]
}
Step 4: Providing support for result summarization. Finally, the last modification required to extend the scan workflow is to the ResultsSummary class (src/iac_scan_runner/results_summary.py). Specifically, code has to be appended to its summarize_outcome method that looks for tool-specific strings identifying whether the check passed or failed. Inside the loop that traverses the compatible checks, the following if-else structure should be included for each new tool:
if check == "new_tool":
    if outcome.find("Check pass string") > -1:
        self.outcomes[check]["status"] = "Passed"
        return "Passed"
    else:
        self.outcomes[check]["status"] = "Problems"
        return "Problems"
You can contact the xOpera team by sending an email to xopera@xlab.si.
This project has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No. 101000162 (PIACERE).
SecuSphere is a comprehensive DevSecOps platform designed to streamline and enhance your organization's security posture throughout the software development life cycle. Our platform serves as a centralized hub for vulnerability management, security assessments, CI/CD pipeline integration, and fostering DevSecOps practices and culture.
At the heart of SecuSphere is a powerful vulnerability management system. Our platform collects, processes, and prioritizes vulnerabilities, integrating with a wide array of vulnerability scanners and security testing tools. Risk-based prioritization and automated assignment of vulnerabilities streamline the remediation process, ensuring that your teams tackle the most critical issues first. Additionally, our platform offers robust dashboards and reporting capabilities, allowing you to track and monitor vulnerability status in real-time.
SecuSphere integrates seamlessly with your existing CI/CD pipelines, providing real-time security feedback throughout your development process. Our platform enables automated triggering of security scans and assessments at various stages of your pipeline. Furthermore, SecuSphere enforces security gates to prevent vulnerable code from progressing to production, ensuring that security is built into your applications from the ground up. This continuous feedback loop empowers developers to identify and fix vulnerabilities early in the development cycle.
SecuSphere offers a robust framework for consuming and analyzing security assessment reports from various CI/CD pipeline stages. Our platform automates the aggregation, normalization, and correlation of security findings, providing a holistic view of your application's security landscape. Intelligent deduplication and false-positive elimination reduce noise in the vulnerability data, ensuring that your teams focus on real threats. Furthermore, SecuSphere integrates with ticketing systems to facilitate the creation and management of remediation tasks.
SecuSphere goes beyond tools and technology to help you drive and accelerate the adoption of DevSecOps principles and practices within your organization. Our platform provides security training and awareness for developers, security, and operations teams, helping to embed security within your development and operations processes. SecuSphere aids in establishing secure coding guidelines and best practices and fosters collaboration and communication between security, development, and operations teams. With SecuSphere, you'll create a culture of shared responsibility for security, enabling you to build more secure, reliable software.
Embrace the power of integrated DevSecOps with SecuSphere: secure your software development, from code to cloud.
SecuSphere offers built-in dashboards and reporting capabilities that allow you to easily track and monitor the status of vulnerabilities. With our risk-based prioritization and automated assignment features, vulnerabilities are efficiently managed and sent to the relevant teams for remediation.
SecuSphere provides a comprehensive REST API and Web Console. This allows for greater flexibility and control over your security operations, ensuring you can automate and integrate SecuSphere into your existing systems and workflows as seamlessly as possible.
For more information please refer to our Official Rest API Documentation
SecuSphere integrates with popular ticketing systems, enabling the creation and management of remediation tasks directly within the platform. This helps streamline your security operations and ensure faster resolution of identified vulnerabilities.
SecuSphere is not just a tool, it's a comprehensive solution that drives and accelerates the adoption of DevSecOps principles and practices. We provide security training and awareness for developers, security, and operations teams, and aid in establishing secure coding guidelines and best practices.
Get started with SecuSphere using our comprehensive user guide.
You can install SecuSphere by cloning the repository, setting up locally, or using Docker.
$ git clone https://github.com/SecurityUniversalOrg/SecuSphere.git
Navigate to the source directory and run the Python file:
$ cd src/
$ python run.py
Build and run the Dockerfile in the cicd directory:
$ # From repository root
$ docker build -t secusphere:latest .
$ docker run secusphere:latest
Use Docker Compose in the `ci_cd/iac/` directory:
$ cd ci_cd/iac/
$ docker-compose -f secusphere.yml up
Pull the latest version of SecuSphere from Docker Hub and run it:
$ docker pull securityuniversal/secusphere:latest
$ docker run -p 8081:80 -d securityuniversal/secusphere:latest
We value your feedback and are committed to providing the best possible experience with SecuSphere. If you encounter any issues or have suggestions for improvement, please create an issue in this repository or contact our support team.
We welcome contributions to SecuSphere. If you're interested in improving SecuSphere or adding new features, please read our contributing guide.
Language | Framework | URL | Method | Param | Header | WS |
---|---|---|---|---|---|---|
Go | Echo | ✅ | ✅ | X | X | X |
Python | Django | ✅ | X | X | X | X |
Python | Flask | ✅ | X | X | X | X |
Ruby | Rails | ✅ | ✅ | ✅ | X | X |
Ruby | Sinatra | ✅ | ✅ | ✅ | X | X |
Php | | ✅ | ✅ | ✅ | X | X |
Java | Spring | ✅ | ✅ | X | X | X |
Java | Jsp | X | X | X | X | X |
Crystal | Kemal | ✅ | ✅ | ✅ | X | ✅ |
JS | Express | ✅ | ✅ | X | X | X |
JS | Next | X | X | X | X | X |

Specification | Format | URL | Method | Param | Header | WS |
---|---|---|---|---|---|---|
Swagger | JSON | ✅ | ✅ | ✅ | X | X |
Swagger | YAML | ✅ | ✅ | ✅ | X | X |
brew tap hahwul/noir
brew install noir
# Install Crystal-lang
# https://crystal-lang.org/install/
# Clone this repo
git clone https://github.com/hahwul/noir
cd noir
# Install Dependencies
shards install
# Build
shards build --release --no-debug
# Copy binary
cp ./bin/noir /usr/bin/
docker pull ghcr.io/hahwul/noir:main
Usage: noir <flags>
Basic:
-b PATH, --base-path ./app (Required) Set base path
-u URL, --url http://.. Set base url for endpoints
-s SCOPE, --scope url,param Set scope for detection
Output:
-f FORMAT, --format json Set output format [plain/json/markdown-table/curl/httpie]
-o PATH, --output out.txt Write result to file
--set-pvalue VALUE Specifies the value of the identified parameter
--no-color Disable color output
--no-log Displaying only the results
Deliver:
--send-req Send the results to the web request
--send-proxy http://proxy.. Send the results to the web request via http proxy
Technologies:
-t TECHS, --techs rails,php Set technologies to use
--exclude-techs rails,php Specify the technologies to be excluded
--list-techs Show all technologies
Others:
-d, --debug Show debug messages
-v, --version Show version
-h, --help Show help
Example
noir -b . -u https://testapp.internal.domains
JSON Result
noir -b . -u https://testapp.internal.domains -f json
[
  ...
  {
    "headers": [],
    "method": "POST",
    "params": [
      {
        "name": "article_slug",
        "param_type": "json",
        "value": ""
      },
      {
        "name": "body",
        "param_type": "json",
        "value": ""
      },
      {
        "name": "id",
        "param_type": "json",
        "value": ""
      }
    ],
    "protocol": "http",
    "url": "https://testapp.internal.domains/comments"
  }
]
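Because the JSON output is machine-readable, it is easy to post-process. A small sketch, assuming the output above was saved to noir.json (for example with -f json -o noir.json):

import json

# Load noir's JSON output and print each endpoint with its parameter names.
with open('noir.json') as f:
    endpoints = json.load(f)

for endpoint in endpoints:
    params = ', '.join(p['name'] for p in endpoint['params'])
    print(f"{endpoint['method']} {endpoint['url']} params: {params}")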
Discover, prioritize, and remediate your risks in the cloud.
git clone --recurse-submodules git@github.com:Zeus-Labs/ZeusCloud.git
cd ZeusCloud && make quick-deploy
Check out our Get Started guide for more details.
A cloud-hosted version is available on special request - email founders@zeuscloud.io to get access!
Play around with our sandbox environment to see how ZeusCloud identifies, prioritizes, and remediates risks in the cloud!
Cloud usage continues to grow. Companies are shifting more of their workloads from on-prem to the cloud and both adding and expanding new and existing workloads in the cloud. Cloud providers keep increasing their offerings and their complexity. Companies are having trouble keeping track of their security risks as their cloud environment scales and grows more complex. Several high profile attacks have occurred in recent times. Capital One had an S3 bucket breached, Amazon had an unprotected Prime Video server breached, Microsoft had an Azure DevOps server breached, Puma was the victim of ransomware, etc.
We had to take action.
We love contributions of all sizes. What would be most helpful first:
Run containers in development mode:
cd frontend && yarn && cd -
docker-compose down && docker-compose -f docker-compose.dev.yaml --env-file .env.dev up --build
Reset neo4j and/or postgres data with the following:
rm -rf .compose/neo4j
rm -rf .compose/postgres
To develop on the frontend, make the code changes and save.
To develop on backend, run
docker-compose -f docker-compose.dev.yaml --env-file .env.dev up --no-deps --build backend
To access the UI, go to: http://localhost:80.
Please do not run ZeusCloud exposed to the public internet. Use the latest versions of ZeusCloud to get all security related patches. Report any security vulnerabilities to founders@zeuscloud.io.
This repo is freely available under the Apache 2.0 license.
We're working on a cloud-hosted solution which handles deployment and infra management. Contact us at founders@zeuscloud.io for more information!
Special thanks to the amazing Cartography project, which ZeusCloud uses for its asset inventory. Credit to PostHog and Airbyte for inspiration around public-facing materials - like this README!
Bearer provides built-in rules against a common set of security risks and vulnerabilities, known as OWASP Top 10. Here are some practical examples of what those rules look for:
And many more.
Bearer is Open Source (see license) and fully customizable, from creating your own rules to component detection (database, API) and data classification.
Bearer also powers our commercial offering, Bearer Cloud, allowing security teams to scale and monitor their application security program using the same engine.
Discover your most critical security risks and vulnerabilities in only a few minutes. In this guide, you will install Bearer, run a scan on a local project, and view the results. Let's get started!
The quickest way to install Bearer is with the install script. It will auto-select the best build for your architecture, and defaults installation to ./bin and to the latest release version:
curl -sfL https://raw.githubusercontent.com/Bearer/bearer/main/contrib/install.sh | sh
Using Bearer's official Homebrew tap:
brew install bearer/tap/bearer
$ sudo apt-get install apt-transport-https
$ echo "deb [trusted=yes] https://apt.fury.io/bearer/ /" | sudo tee -a /etc/apt/sources.list.d/fury.list
$ sudo apt-get update
$ sudo apt-get install bearer
Add repository setting:
$ sudo vim /etc/yum.repos.d/fury.repo
[fury]
name=Gemfury Private Repo
baseurl=https://yum.fury.io/bearer/
enabled=1
gpgcheck=0
Then install with yum:
$ sudo yum -y update
$ sudo yum -y install bearer
Bearer is also available as a Docker image on Docker Hub and ghcr.io.
With docker installed, you can run the following command with the appropriate paths in place of the examples.
docker run --rm -v /path/to/repo:/tmp/scan bearer/bearer:latest-amd64 scan /tmp/scan
Additionally, you can use docker compose. Add the following to your `docker-compose.yml` file and replace the volumes with the appropriate paths for your project:
version: "3"
services:
  bearer:
    platform: linux/amd64
    image: bearer/bearer:latest-amd64
    volumes:
      - /path/to/repo:/tmp/scan
Then, run the `docker compose run` command to run Bearer with any specified flags:
docker compose run bearer scan /tmp/scan --debug
Download the archive file for your operating system/architecture from here.
Unpack the archive, and put the binary somewhere in your $PATH (on UNIX-y systems, /usr/local/bin or the like). Make sure it has permission to execute.
The easiest way to try out Bearer is with our example project, Bear Publishing. It simulates a realistic Ruby application with common security flaws. Clone or download it to a convenient location to get started.
git clone https://github.com/Bearer/bear-publishing.git
Now, run the scan command with `bearer scan` on the project directory:
bearer scan bear-publishing
A progress bar will display the status of the scan.
Once the scan is complete, Bearer will output a security report with details of any rule failures, as well as where in the codebase the infractions happened and why.
By default the `scan` command uses the SAST scanner; other scanner types are available.
The security report is an easily digestible view of the security issues detected by Bearer. A report is made up of:
The Bear Publishing example application will trigger rule failures and output a full report. Here's a section of the output:
...
CRITICAL: Only communicate using SFTP connections.
https://docs.bearer.com/reference/rules/ruby_lang_insecure_ftp
File: bear-publishing/app/services/marketing_export.rb:34
34 Net::FTP.open(
35 'marketing.example.com',
36 'marketing',
37 'password123'
...
41 end
=====================================
56 checks, 10 failures, 6 warnings
CRITICAL: 7
HIGH: 0
MEDIUM: 0
LOW: 3
WARNING: 6
The security report is just one report type available in Bearer.
Additional options for using and configuring the `scan` command can be found in the scan documentation.
For additional guides and usage tips, view the docs.
When you run Bearer on your codebase, it discovers and classifies data by identifying patterns in the source code. Specifically, it looks for data types and matches against them. Most importantly, it never views the actual values (it just can't), but only the code itself.
Bearer assesses 120+ data types from sensitive data categories such as Personal Data (PD), Sensitive PD, Personally identifiable information (PII), and Personal Health Information (PHI). You can view the full list in the supported data types documentation.
In a nutshell, our static code analysis is performed on two levels: analyzing class names, methods, functions, variables, properties, and attributes, then tying those together into detected data structures (with variable reconciliation and the like); and analyzing data structure definition files such as OpenAPI, SQL, GraphQL, and Protobuf.
Bearer then passes this over to the classification engine we built to support this very particular discovery process.
If you want to learn more, here is the longer explanation.
We recommend running Bearer in your CI to check new PRs automatically for security issues, so your development team has a direct feedback loop to fix issues immediately.
You can also integrate Bearer into your CD, though we recommend making it fail only on high-criticality issues, as the impact on your organization might be significant.
In addition, running Bearer on a scheduled job is a great way to keep track of your security posture and make sure new security issues are found even in projects with low activity.
Bearer currently supports JavaScript and Ruby and their associated most used frameworks and libraries. More languages will follow.
SAST tools are known to bury security teams and developers under hundreds of issues with little context and no sense of priority, often requiring security analysts to triage issues. Not Bearer.
The most vulnerable asset today is sensitive data, so we start there and prioritize application security risks and vulnerabilities by assessing sensitive data flows in your code to highlight what is urgent, and what is not.
We believe that by linking security issues with a clear business impact and risk of a data breach, or data leak, we can build better and more robust software, at no extra cost.
In addition, by being Open Source, extendable by design, and built with a great developer UX in mind, we bet you will see the difference for yourself.
It depends on the size of your applications. It can take as little as 20 seconds, and up to a few minutes for an extremely large code base. We've added an internal caching layer that only looks at delta changes, to allow quick subsequent scans.
Running Bearer should not take more time than running your test suite.
If you're familiar with other SAST tools, false positives are always a possibility.
By using the most modern static code analysis techniques and providing a native filtering and prioritizing solution for the most important issues, we believe this problem won't be a concern when using Bearer.
Thanks for using Bearer. Still have questions?
Interested in contributing? We're here for it! For details on how to contribute, setting up your development environment, and our processes, review the contribution guide.
Everyone interacting with this project is expected to follow the guidelines of our code of conduct.
To report a vulnerability or suspected vulnerability, see our security policy. For any questions, concerns or other security matters, feel free to open an issue or join the Discord Community.
Nosey Parker is a command-line tool that finds secrets and sensitive information in textual data. It is useful both for offensive and defensive security testing.
Key features:
This open-source version of Nosey Parker is a reimplementation of the internal version that is regularly used in offensive security engagements at Praetorian. The internal version has additional capabilities for false positive suppression and an alternative machine learning-based detection engine. Read more in blog posts here and here.
1. (On x86_64) Install the Hyperscan library and headers for your system
On macOS using Homebrew:
brew install hyperscan pkg-config
On Ubuntu 22.04:
apt install libhyperscan-dev pkg-config
1. (On non-x86_64) Build Vectorscan from source
You will need several dependencies, including `cmake`, `boost`, `ragel`, and `pkg-config`.
Download and extract the source for the 5.4.8 release of Vectorscan:
wget https://github.com/VectorCamp/vectorscan/archive/refs/tags/vectorscan/5.4.8.tar.gz && tar xfz 5.4.8.tar.gz
Build with cmake:
cd vectorscan-vectorscan-5.4.8 && cmake -B build -DCMAKE_BUILD_TYPE=Release . && cmake --build build
Set the `HYPERSCAN_ROOT` environment variable so that Nosey Parker builds against your from-source build of Vectorscan:
export HYPERSCAN_ROOT="$PWD/build"
Note: The Nosey Parker `Dockerfile` builds Vectorscan from source and links against that.
2. Install the Rust toolchain
Recommended approach: install from https://rustup.rs
3. Build using Cargo
cargo build --release
This will produce a binary at `target/release/noseyparker`.
A prebuilt Docker image is available for the latest release for x86_64:
docker pull ghcr.io/praetorian-inc/noseyparker:latest
A prebuilt Docker image is available for the most recent commit for x86_64:
docker pull ghcr.io/praetorian-inc/noseyparker:edge
For other architectures (e.g., ARM) you will need to build the Docker image yourself:
docker build -t noseyparker .
Run the Docker image with a mounted volume:
docker run -v "$PWD":/opt/ noseyparker
Note: The Docker image runs noticeably slower than a native binary, particularly on macOS.
Most Nosey Parker commands use a datastore. This is a special directory that Nosey Parker uses to record its findings and maintain its internal state. A datastore will be implicitly created by the `scan` command if needed. You can also create a datastore explicitly using the `datastore init -d PATH` command.
Nosey Parker has built-in support for scanning files, recursively scanning directories, and scanning the entire history of Git repositories.
For example, if you have a Git clone of CPython locally at `cpython.git`, you can scan its entire history with the `scan` command. Nosey Parker will create a new datastore at `np.cpython` and save its findings there.
$ noseyparker scan --datastore np.cpython cpython.git
Found 28.30 GiB from 18 plain files and 427,712 blobs from 1 Git repos [00:00:04]
Scanning content  ████████████████████ 100%  28.30 GiB/28.30 GiB [00:00:53]
Scanned 28.30 GiB from 427,730 blobs in 54 seconds (538.46 MiB/s); 4,904/4,904 new matches

 Rule                       Distinct Groups   Total Matches
─────────────────────────────────────────────────────────────
 PEM-Encoded Private Key              1,076           1,192
 Generic Secret                         331             478
 netrc Credentials                       42           3,201
 Generic API Key                          2              31
 md5crypt Hash                            1               2

Run the `report` command next to show finding details.
Nosey Parker can also scan Git repos that have not already been cloned to the local filesystem. The `--git-url URL`, `--github-user NAME`, and `--github-org NAME` options to `scan` allow you to specify repositories of interest.
For example, to scan the Nosey Parker repo itself:
$ noseyparker scan --datastore np.noseyparker --git-url https://github.com/praetorian-inc/noseyparker
For example, to scan accessible repositories belonging to `octocat`:
$ noseyparker scan --datastore np.noseyparker --github-user octocat
These input specifiers will use an optional GitHub token if available in the `NP_GITHUB_TOKEN` environment variable. Providing an access token gives a higher API rate limit and may make additional repositories accessible to you.
See `noseyparker help scan` for more details.
Nosey Parker prints out a summary of its findings when it finishes scanning. You can also run this step separately:
$ noseyparker summarize --datastore np.cpython
 Rule                       Distinct Groups   Total Matches
─────────────────────────────────────────────────────────────
 PEM-Encoded Private Key              1,076           1,192
 Generic Secret                         331             478
 netrc Credentials                       42           3,201
 Generic API Key                          2              31
 md5crypt Hash                            1               2
Additional output formats are supported, including JSON and JSON lines, via the `--format=FORMAT` option.
To see details of Nosey Parker's findings, use the `report` command. This prints out a text-based report designed for human consumption. Additional output formats are supported, including JSON and JSON lines, via the `--format=FORMAT` option.

To list URLs for repositories belonging to GitHub users or organizations, use the `github repos list` command. This command uses the GitHub REST API to enumerate repositories belonging to one or more users or organizations. For example:
$ noseyparker github repos list --user octocat
https://github.com/octocat/Hello-World.git
https://github.com/octocat/Spoon-Knife.git
https://github.com/octocat/boysenberry-repo-1.git
https://github.com/octocat/git-consortium.git
https://github.com/octocat/hello-worId.git
https://github.com/octocat/linguist.git
https://github.com/octocat/octocat.github.io.git
https://github.com/octocat/test-repo1.git
An optional GitHub Personal Access Token can be provided via the `NP_GITHUB_TOKEN` environment variable. Providing an access token gives a higher API rate limit and may make additional repositories accessible to you.
Additional output formats are supported, including JSON and JSON lines, via the `--format=FORMAT` option.
See `noseyparker help github` for more details.
Running the `noseyparker` binary without arguments prints top-level help and exits. You can get abbreviated help for a particular command by running `noseyparker COMMAND -h`.
Tip: More detailed help is available with the `help` command or the long-form `--help` option.
Contributions are welcome, particularly new regex rules. Developing new regex rules is detailed in a separate document.
If you are considering making significant code changes, please open an issue first to start discussion.
Nosey Parker is licensed under the Apache License, Version 2.0.
Any contribution intentionally submitted for inclusion in Nosey Parker by you, as defined in the Apache 2.0 license, shall be licensed as above, without any additional terms or conditions.