RAVEN (Risk Analysis and Vulnerability Enumeration for CI/CD) is a powerful security tool designed to perform massive scans of GitHub Actions CI workflows and digest the discovered data into a Neo4j database. It is developed and maintained by the Cycode research team.
With Raven, we were able to identify and report security vulnerabilities in some of the most popular repositories hosted on GitHub, including:
All vulnerabilities discovered using Raven are listed in the tool's Hall of Fame.
The tool provides the following capabilities to scan and analyze potential CI/CD vulnerabilities:
Possible usages for Raven:
This tool provides a reliable and scalable solution for CI/CD security analysis, enabling users to query bad configurations and gain valuable insights into their codebase's security posture.
In the past year, Cycode Labs conducted extensive research on fundamental security issues of CI/CD systems. We examined the depths of many systems, thousands of projects, and several configurations. The conclusion is clear: the model in which security is delegated to developers has failed. This has been proven several times in our previous content:
Each of the vulnerabilities above has unique characteristics, making it nearly impossible for developers to stay up to date with the latest security trends. Unfortunately, they all share one commonality: each exploitation can impact millions of victims.
It was for these reasons that Raven was created: a framework for CI/CD security analysis of workflows, with GitHub Actions as the first use case. Our focus was on complex scenarios where each issue isn't a threat on its own, but combined, they pose a severe threat.
To get started with Raven, follow these installation instructions:
Step 1: Install the Raven package
pip3 install raven-cycode
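If the installation succeeded, the raven command should be available on your PATH; a quick way to confirm is to print the built-in help:

```bash
# Confirm the CLI is installed and on PATH.
raven --help
```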
Step 2: Set up a local Redis server and Neo4j database
docker run -d --name raven-neo4j -p7474:7474 -p7687:7687 --env NEO4J_AUTH=neo4j/123456789 --volume raven-neo4j:/data neo4j:5.12
docker run -d --name raven-redis -p6379:6379 --volume raven-redis:/data redis:7.2.1
Another way to set up the environment is by running our provided docker compose file:
git clone https://github.com/CycodeLabs/raven.git
cd raven
make setup
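Whichever route you choose, it is worth confirming that both containers are up before moving on; a quick check with standard Docker commands:

```bash
# List the two containers started above; both should show an "Up" status.
docker ps --filter "name=raven-neo4j" --filter "name=raven-redis"
```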
Step 3: Run Raven Downloader
Org mode:
raven download org --token $GITHUB_TOKEN --org-name RavenDemo
Crawl mode:
raven download crawl --token $GITHUB_TOKEN --min-stars 1000
Step 4: Run Raven Indexer
raven index
Step 5: Inspect the results through the reporter
raven report --format raw
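The report can also be emitted as JSON (see the --format flag documented below), which is convenient for post-processing with standard tooling; for example:

```bash
# JSON output can be piped into jq (or any JSON tool) for filtering; the jq usage here is generic, not Raven-specific.
raven report --format json | jq '.'
```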
At this point, it is possible to inspect the data in the Neo4j database by connecting to http://localhost:7474/browser/.
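If you prefer the command line over the browser UI, you can also query the database with cypher-shell inside the Neo4j container. The query below is only a sketch: the Workflow node label is an assumption about Raven's graph schema, so adjust it to the labels you actually see in the browser.

```bash
# Hedged example: run an ad-hoc Cypher query against the indexed graph.
# The node label "Workflow" is an assumption, not a documented part of Raven's schema.
docker exec -it raven-neo4j cypher-shell -u neo4j -p 123456789 \
  "MATCH (w:Workflow) RETURN w.name LIMIT 10;"
```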
Raven uses two primary Docker containers: Redis and Neo4j. make setup runs a docker compose command to prepare that environment.
The tool contains three main functionalities: download, index, and report.
usage: raven download org [-h] --token TOKEN [--debug] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] --org-name ORG_NAME
options:
-h, --help show this help message and exit
--token TOKEN GITHUB_TOKEN to download data from Github API (Needed for effective rate-limiting)
--debug Whether to print debug statements, default: False
--redis-host REDIS_HOST
Redis host, default: localhost
--redis-port REDIS_PORT
Redis port, default: 6379
--clean-redis, -cr Whether to clean cache in the redis, default: False
--org-name ORG_NAME Organization name to download the workflows
usage: raven download crawl [-h] --token TOKEN [--debug] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--max-stars MAX_STARS] [--min-stars MIN_STARS]
options:
-h, --help show this help message and exit
--token TOKEN GITHUB_TOKEN to download data from Github API (Needed for effective rate-limiting)
--debug Whether to print debug statements, default: False
--redis-host REDIS_HOST
Redis host, default: localhost
--redis-port REDIS_PORT
Redis port, default: 6379
--clean-redis, -cr Whether to clean cache in the redis, default: False
--max-stars MAX_STARS
Maximum number of stars for a repository
--min-stars MIN_STARS
Minimum number of stars for a repository, default: 1000
usage: raven index [-h] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--neo4j-uri NEO4J_URI] [--neo4j-user NEO4J_USER] [--neo4j-pass NEO4J_PASS]
[--clean-neo4j] [--debug]
options:
-h, --help show this help message and exit
--redis-host REDIS_HOST
Redis host, default: localhost
--redis-port REDIS_PORT
Redis port, default: 6379
--clean-redis, -cr Whether to clean cache in the redis, default: False
--neo4j-uri NEO4J_URI
Neo4j URI endpoint, default: neo4j://localhost:7687
--neo4j-user NEO4J_USER
Neo4j username, default: neo4j
--neo4j-pass NEO4J_PASS
Neo4j password, default: 123456789
--clean-neo4j, -cn Whether to clean cache, and index from scratch, default: False
--debug Whether to print debug statements, default: False
usage: raven report [-h] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--neo4j-uri NEO4J_URI]
[--neo4j-user NEO4J_USER] [--neo4j-pass NEO4J_PASS] [--clean-neo4j]
[--tag {injection,unauthenticated,fixed,priv-esc,supply-chain}]
[--severity {info,low,medium,high,critical}] [--queries-path QUERIES_PATH] [--format {raw,json}]
{slack} ...
positional arguments:
{slack}
slack Send report to slack channel
options:
-h, --help show this help message and exit
--redis-host REDIS_HOST
Redis host, default: localhost
--redis-port REDIS_PORT
Redis port, default: 6379
--clean-redis, -cr Whether to clean cache in the redis, default: False
--neo4j-uri NEO4J_URI
Neo4j URI endpoint, default: neo4j://localhost:7687
--neo4j-user NEO4J_USER
Neo4j username, default: neo4j
--neo4j-pass NEO4J_PASS
Neo4j password, default: 123456789
--clean-neo4j, -cn Whether to clean cache, and index from scratch, default: False
--tag {injection,unauthenticated,fixed,priv-esc,supply-chain}, -t {injection,unauthenticated,fixed,priv-esc,supply-chain}
Filter queries with specific tag
--severity {info,low,medium,high,critical}, -s {info,low,medium,high,critical}
Filter queries by severity level (default: info)
--queries-path QUERIES_PATH, -dp QUERIES_PATH
Queries folder (default: library)
--format {raw,json}, -f {raw,json}
Report format (default: raw)
Retrieve all workflows and actions associated with the organization.
raven download org --token $GITHUB_TOKEN --org-name microsoft --org-name google --debug
Scrape all publicly accessible GitHub repositories.
raven download crawl --token $GITHUB_TOKEN --min-stars 100 --max-stars 1000 --debug
After finishing the download process or if interrupted using Ctrl+C, proceed to index all workflows and actions into the Neo4j database.
raven index --debug
Now, we can generate a report using our query library.
raven report --severity high --tag injection --tag unauthenticated
For effective rate limiting, you should supply a GitHub token; authenticated users receive significantly higher API rate limits than anonymous requests.
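Before kicking off a large crawl, you can check how much of your token's quota remains using GitHub's standard rate-limit endpoint:

```bash
# Query GitHub's rate-limit endpoint for the supplied token (standard GitHub REST API, not Raven-specific).
curl -s -H "Authorization: Bearer $GITHUB_TOKEN" https://api.github.com/rate_limit
```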
Current limitations and future research directions include:
- Referencing an external action through a folder that contains only a Dockerfile (without an action.yml). Currently, this behavior isn't supported.
- Referencing an external action through a docker://... URL. Currently, this behavior isn't supported.
- An action input parameter such as data may be used in a run command, for example - run: echo ${{ inputs.data }}, which creates a path for code execution.
- Environment variables set through GITHUB_ENV. This may utilize the previous taint analysis as well.
- Whether actions/github-script has an interesting threat landscape. If it does, it can be modeled in the graph.

If you liked Raven, you would probably love our Cycode platform, which offers even more enhanced capabilities for visibility, prioritization, and remediation of vulnerabilities across the software delivery lifecycle.
If you are interested in a robust, research-driven Pipeline Security, Application Security, or ASPM solution, don't hesitate to get in touch with us or request a demo using the form https://cycode.com/book-a-demo/.
SecuSphere is a comprehensive DevSecOps platform designed to streamline and enhance your organization's security posture throughout the software development life cycle. Our platform serves as a centralized hub for vulnerability management, security assessments, CI/CD pipeline integration, and fostering DevSecOps practices and culture.
At the heart of SecuSphere is a powerful vulnerability management system. Our platform collects, processes, and prioritizes vulnerabilities, integrating with a wide array of vulnerability scanners and security testing tools. Risk-based prioritization and automated assignment of vulnerabilities streamline the remediation process, ensuring that your teams tackle the most critical issues first. Additionally, our platform offers robust dashboards and reporting capabilities, allowing you to track and monitor vulnerability status in real-time.
SecuSphere integrates seamlessly with your existing CI/CD pipelines, providing real-time security feedback throughout your development process. Our platform enables automated triggering of security scans and assessments at various stages of your pipeline. Furthermore, SecuSphere enforces security gates to prevent vulnerable code from progressing to production, ensuring that security is built into your applications from the ground up. This continuous feedback loop empowers developers to identify and fix vulnerabilities early in the development cycle.
SecuSphere offers a robust framework for consuming and analyzing security assessment reports from various CI/CD pipeline stages. Our platform automates the aggregation, normalization, and correlation of security findings, providing a holistic view of your application's security landscape. Intelligent deduplication and false-positive elimination reduce noise in the vulnerability data, ensuring that your teams focus on real threats. Furthermore, SecuSphere integrates with ticketing systems to facilitate the creation and management of remediation tasks.
SecuSphere goes beyond tools and technology to help you drive and accelerate the adoption of DevSecOps principles and practices within your organization. Our platform provides security training and awareness for developers, security, and operations teams, helping to embed security within your development and operations processes. SecuSphere aids in establishing secure coding guidelines and best practices and fosters collaboration and communication between security, development, and operations teams. With SecuSphere, you'll create a culture of shared responsibility for security, enabling you to build more secure, reliable software.
Embrace the power of integrated DevSecOps with SecuSphere: secure your software development, from code to cloud.
SecuSphere offers built-in dashboards and reporting capabilities that allow you to easily track and monitor the status of vulnerabilities. With our risk-based prioritization and automated assignment features, vulnerabilities are efficiently managed and sent to the relevant teams for remediation.
SecuSphere provides a comprehensive REST API and Web Console. This allows for greater flexibility and control over your security operations, ensuring you can automate and integrate SecuSphere into your existing systems and workflows as seamlessly as possible.
For more information, please refer to our Official REST API Documentation.
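As a rough illustration of driving the platform from scripts rather than the Web Console, the call below is purely hypothetical: the endpoint path, port, and authentication header are placeholders, so consult the official REST API documentation for the real routes and parameters.

```bash
# Hypothetical sketch only: the endpoint path and auth header are illustrative placeholders,
# not taken from the SecuSphere API documentation.
curl -s -H "Authorization: Bearer $SECUSPHERE_API_TOKEN" \
  "http://localhost:8081/api/vulnerabilities"
```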
SecuSphere integrates with popular ticketing systems, enabling the creation and management of remediation tasks directly within the platform. This helps streamline your security operations and ensure faster resolution of identified vulnerabilities.
SecuSphere is not just a tool; it's a comprehensive solution that drives and accelerates the adoption of DevSecOps principles and practices. We provide security training and awareness for developers, security, and operations teams, and aid in establishing secure coding guidelines and best practices.
Get started with SecuSphere using our comprehensive user guide.
You can install SecuSphere by cloning the repository, setting up locally, or using Docker.
$ git clone https://github.com/SecurityUniversalOrg/SecuSphere.git
Navigate to the source directory and run the Python file:
$ cd src/
$ python run.py
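For a local run, it is usually cleaner to work inside a virtual environment and install the project's dependencies first. The sketch below assumes the repository ships a requirements.txt; adapt it to however the project actually declares its dependencies.

```bash
# Optional local setup sketch; the requirements.txt file name is an assumption about the repository layout.
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
cd src/
python run.py
```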
Build and run the Dockerfile in the cicd directory:
$ # From repository root
$ docker build -t secusphere:latest .
$ docker run secusphere:latest
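Note that the plain docker run above does not publish any ports; to reach the web console from the host you will likely want a port mapping, assuming the application listens on port 80 inside the container as in the Docker Hub example further below:

```bash
# Publish the container port to the host; the internal port 80 is an assumption mirrored from the Docker Hub run command below.
docker run -p 8081:80 -d secusphere:latest
```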
Use Docker Compose in the ci_cd/iac/ directory:
$ cd ci_cd/iac/
$ docker-compose -f secusphere.yml up
Pull the latest version of SecuSphere from Docker Hub and run it:
$ docker pull securityuniversal/secusphere:latest
$ docker run -p 8081:80 -d securityuniversal/secusphere:latest
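Once the container is up, a quick request against the mapped port confirms the console is reachable (the root path is assumed here):

```bash
# Basic reachability check against the published port; the "/" path is an assumption.
curl -I http://localhost:8081/
```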
We value your feedback and are committed to providing the best possible experience with SecuSphere. If you encounter any issues or have suggestions for improvement, please create an issue in this repository or contact our support team.
We welcome contributions to SecuSphere. If you're interested in improving SecuSphere or adding new features, please read our contributing guide.
burpgpt leverages the power of AI to detect security vulnerabilities that traditional scanners might miss. It sends web traffic to an OpenAI model specified by the user, enabling sophisticated analysis within the passive scanner. This extension offers customisable prompts that enable tailored web traffic analysis to meet the specific needs of each user. Check out the Example Use Cases section for inspiration.
The extension generates an automated security report that summarises potential security issues based on the user's prompt and real-time data from Burp-issued requests. By leveraging AI and natural language processing, the extension streamlines the security assessment process and provides security professionals with a higher-level overview of the scanned application or endpoint. This enables them to more easily identify potential security issues and prioritise their analysis, while also covering a larger potential attack surface.
[!WARNING] Data traffic is sent to OpenAI for analysis. If you have concerns about this or are using the extension for security-critical applications, it is important to carefully consider this and review OpenAI's Privacy Policy for further information.
[!WARNING] While the report is automated, it still requires triaging and post-processing by security professionals, as it may contain false positives.
[!WARNING] The effectiveness of this extension is heavily reliant on the quality and precision of the prompts created by the user for the selected GPT model. This targeted approach will help ensure the GPT model generates accurate and valuable results for your security analysis.
Features:
- Adds a passive scan check, allowing users to submit HTTP data to an OpenAI-controlled GPT model for analysis through a placeholder system.
- Leverages OpenAI's GPT models to conduct comprehensive traffic analysis, enabling detection of various issues beyond just security vulnerabilities in scanned applications.
- Enables granular control over the number of GPT tokens used in the analysis by allowing for precise adjustments of the maximum prompt length.
- Offers multiple OpenAI models to choose from, allowing users to select the one that best suits their needs.
- Lets users customise prompts and unleash limitless possibilities for interacting with OpenAI models. Browse through the Example Use Cases for inspiration.
- Integrates with Burp Suite, providing all native features for pre- and post-processing, including displaying analysis results directly within the Burp UI for efficient analysis.
- Provides troubleshooting functionality via the native Burp Event Log, enabling users to quickly resolve communication issues with the OpenAI API.

Requirements:
- Operating System: Compatible with Linux, macOS, and Windows operating systems.
- Java Development Kit (JDK): Version 11 or later.
- Burp Suite Professional or Community Edition: Version 2023.3.2 or later.
[!IMPORTANT] Please note that using any version lower than 2023.3.2 may result in a java.lang.NoSuchMethodError. It is crucial to use the specified version or a more recent one to avoid this issue.
- Gradle: Version 6.9 or later (recommended). The build.gradle file is provided in the project repository.
- JAVA_HOME: Set this environment variable to point to the JDK installation directory.

Please ensure that all system requirements, including a compatible version of Burp Suite, are met before building and running the project. Note that the project's external dependencies will be automatically managed and installed by Gradle during the build process. Adhering to the requirements will help avoid potential issues and reduce the need for opening new issues in the project repository.
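Before building, a quick check that the local toolchain matches these requirements can save a failed build:

```bash
# Generic version checks; expect JDK 11+ and Gradle 6.9+ per the requirements above.
java -version
gradle --version
```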
Ensure you have Gradle installed and configured.
Download the burpgpt repository:
git clone https://github.com/aress31/burpgpt
cd .\burpgpt\
Build the standalone jar:
./gradlew shadowJar
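If the build succeeds, the standalone jar should land in the build output directory referenced in the next step; listing it is a quick sanity check (same path as in the install instructions below, written with forward slashes for a Unix shell):

```bash
# The burpgpt-all jar should appear here after a successful shadowJar build.
ls lib/build/libs/
```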
To install burpgpt in Burp Suite, first go to the Extensions tab and click on the Add button. Then, select the burpgpt-all jar file located in the .\lib\build\libs folder to load the extension.
To start using burpgpt, users need to complete the following steps in the Settings panel, which can be accessed from the Burp Suite menu bar:
1. Enter a valid OpenAI API key.
2. Select a model.
3. Define a max prompt size. This field controls the maximum prompt length sent to OpenAI to avoid exceeding the maxTokens of GPT models (typically around 2048 for GPT-3).

Once configured as outlined above, the Burp passive scanner sends each request to the chosen OpenAI model via the OpenAI API for analysis, producing Informational-level severity findings based on the results.
burpgpt enables users to tailor the prompt for traffic analysis using a placeholder system. To include relevant information, we recommend using these placeholders, which the extension handles directly, allowing dynamic insertion of specific values into the prompt:
| Placeholder | Description |
| --- | --- |
| {REQUEST} | The scanned request. |
| {URL} | The URL of the scanned request. |
| {METHOD} | The HTTP request method used in the scanned request. |
| {REQUEST_HEADERS} | The headers of the scanned request. |
| {REQUEST_BODY} | The body of the scanned request. |
| {RESPONSE} | The scanned response. |
| {RESPONSE_HEADERS} | The headers of the scanned response. |
| {RESPONSE_BODY} | The body of the scanned response. |
| {IS_TRUNCATED_PROMPT} | A boolean value that is programmatically set to true or false to indicate whether the prompt was truncated to the Maximum Prompt Size defined in the Settings. |
These placeholders can be used in the custom prompt to dynamically generate a request/response analysis prompt that is specific to the scanned request.
[!NOTE] Burp Suite provides the capability to support arbitrary placeholders through the use of Session handling rules or extensions such as Custom Parameter Handler, allowing for even greater customisation of the prompts.
The following list of example use cases showcases the bespoke and highly customisable nature of burpgpt, which enables users to tailor their web traffic analysis to meet their specific needs.
Identifying potential vulnerabilities in web applications that use a crypto library affected by a specific CVE:
Analyse the request and response data for potential security vulnerabilities related to the {CRYPTO_LIBRARY_NAME} crypto library affected by CVE-{CVE_NUMBER}:
Web Application URL: {URL}
Crypto Library Name: {CRYPTO_LIBRARY_NAME}
CVE Number: CVE-{CVE_NUMBER}
Request Headers: {REQUEST_HEADERS}
Response Headers: {RESPONSE_HEADERS}
Request Body: {REQUEST_BODY}
Response Body: {RESPONSE_BODY}
Identify any potential vulnerabilities related to the {CRYPTO_LIBRARY_NAME} crypto library affected by CVE-{CVE_NUMBER} in the request and response data and report them.
Scanning for vulnerabilities in web applications that use biometric authentication by analysing request and response data related to the authentication process:
Analyse the request and response data for potential security vulnerabilities related to the biometric authentication process:
Web Application URL: {URL}
Biometric Authentication Request Headers: {REQUEST_HEADERS}
Biometric Authentication Response Headers: {RESPONSE_HEADERS}
Biometric Authentication Request Body: {REQUEST_BODY}
Biometric Authentication Response Body: {RESPONSE_BODY}
Identify any potential vulnerabilities related to the biometric authentication process in the request and response data and report them.
Analysing the request and response data exchanged between serverless functions for potential security vulnerabilities:
Analyse the request and response data exchanged between serverless functions for potential security vulnerabilities:
Serverless Function A URL: {URL}
Serverless Function B URL: {URL}
Serverless Function A Request Headers: {REQUEST_HEADERS}
Serverless Function B Response Headers: {RESPONSE_HEADERS}
Serverless Function A Request Body: {REQUEST_BODY}
Serverless Function B Response Body: {RESPONSE_BODY}
Identify any potential vulnerabilities in the data exchanged between the two serverless functions and report them.
Analysing the request and response data for potential security vulnerabilities specific to a Single-Page Application (SPA) framework:
Analyse the request and response data for potential security vulnerabilities specific to the {SPA_FRAMEWORK_NAME} SPA framework:
Web Application URL: {URL}
SPA Framework Name: {SPA_FRAMEWORK_NAME}
Request Headers: {REQUEST_HEADERS}
Response Headers: {RESPONSE_HEADERS}
Request Body: {REQUEST_BODY}
Response Body: {RESPONSE_BODY}
Identify any potential vulnerabilities related to the {SPA_FRAMEWORK_NAME} SPA framework in the request and response data and report them.
Planned improvements include:
- A new field in the Settings panel that allows users to set the maxTokens limit for requests, thereby limiting the request size.
- Support for a local AI model, allowing users to run and interact with the model on their local machines, potentially improving response times and data privacy.
- Adjusting the maxTokens value for each model to transmit the maximum allowable data and obtain the most extensive GPT response possible.
- Retaining settings across Burp Suite restarts.
- Parsing GPT responses into the Vulnerability model for improved reporting.

The extension is currently under development and we welcome feedback, comments, and contributions to make it even better.
If this extension has saved you time and hassle during a security assessment, consider showing some love by sponsoring a cup of coffee for the developer. It's the fuel that powers development, after all. Just hit that shiny Sponsor button at the top of the page or click here to contribute and keep the caffeine flowing.

Did you find a bug? Well, don't just let it crawl around! Let's squash it together like a couple of bug whisperers! Please report any issues on the GitHub issues tracker. Together, we'll make this extension as reliable as a cockroach surviving a nuclear apocalypse!
Looking to make a splash with your mad coding skills?
Awesome! Contributions are welcome and greatly appreciated. Please submit all PRs on the GitHub pull requests tracker. Together we can make this extension even more amazing!
See LICENSE.