Reconnaissance is the first phase of penetration testing: gathering information about the target before any real attacks are planned. Ashok is an incredibly fast recon tool for penetration testers, built specifically for the reconnaissance phase. Ashok-v1.1 adds an advanced Google dorker and a Wayback crawling machine.
- Wayback Crawler Machine
- Google Dorking without limits
- Github Information Grabbing
- Subdomain Identifier
- Cms/Technology Detector With Custom Headers
~> git clone https://github.com/ankitdobhal/Ashok
~> cd Ashok
~> python3.7 -m pip install -r requirements.txt
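With the dependencies installed, you can display the help menu (assuming the Ashok.py entry point at the repository root):
~> python3 Ashok.py --help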
A detailed usage guide is available in the Usage section of the Wiki, but a short index of options is given below:
Ashok can be launched using a lightweight Python3.8-Alpine Docker image.
$ docker pull powerexploit/ashok-v1.2
$ docker container run -it powerexploit/ashok-v1.2 --help
HardeningMeter is an open-source Python tool carefully designed to comprehensively assess the security hardening of binaries and systems. Its robust capabilities include thorough checks of various binary exploitation protection mechanisms, including Stack Canary, RELRO, randomizations (ASLR, PIC, PIE), non-executable stack, Fortify, ASAN, and the NX bit. The tool is suitable for all types of binaries and provides accurate information about the hardening status of each binary, identifying those that deserve attention and those with robust security measures. HardeningMeter supports all Linux distributions and machine-readable output: the results can be printed to the screen in a table format or exported to a CSV file. (For more information see the Documentation.md file.)
Scan the '/usr/bin' directory, the '/usr/sbin/newusers' file, and the system, and export the results to a CSV file.
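A sketch of such a command, assuming the directory and CSV options are -d and -c (only the -f and -s flags are confirmed by the examples in this document):
python3 HardeningMeter.py -d /usr/bin -f /usr/sbin/newusers -s -c
A confirmed, simpler invocation scans a single file plus the system: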
python3 HardeningMeter.py -f /bin/cp -s
Before installing HardeningMeter, make sure your machine has the following:
1. The readelf and file commands
2. Python 3
3. pip
4. tabulate
pip install tabulate
The very latest developments can be obtained via git.
Clone or download the project files (no compilation or installation is required):
git clone https://github.com/OfriOuzan/HardeningMeter
Specify the files you want to scan; the argument can take more than one file, separated by spaces.
Specify the directory you want to scan; the argument takes one directory and scans all ELF files in it recursively.
Specify whether you want to add external checks (False by default).
Prints, in order, only those files that are missing security hardening mechanisms and need extra attention.
Specify if you want to scan the system's hardening methods.
Specify if you want to save the results to a CSV file (results are printed as a table to stdout by default).
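For instance, since the file argument accepts several files separated by spaces, multiple binaries can be checked in one run:
python3 HardeningMeter.py -f /bin/cp /bin/ls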
HardeningMeter's results are printed as a table and consist of 3 different states:
- (X) - the binary hardening mechanism is disabled.
- (V) - the binary hardening mechanism is enabled.
- (-) - the binary hardening mechanism is not relevant in this particular case.
When the default language on Linux is not English, make sure to add "LC_ALL=C" before calling the script.
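For example:
LC_ALL=C python3 HardeningMeter.py -f /bin/cp -s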
Porch Pirate started as a tool to quickly uncover Postman secrets, and has slowly begun to evolve into a multi-purpose reconnaissance / OSINT framework for Postman. While existing tools are great proofs of concept, they only attempt to identify very specific keywords as "secrets", in very limited locations, and with no consideration of recon beyond secrets. We realized we required capabilities that were "secret-agnostic" and flexible enough to capture false positives that still provided offensive value.
Porch Pirate enumerates and presents sensitive results (global secrets, unique headers, endpoints, query parameters, authorization, etc), from publicly accessible Postman entities, such as:
python3 -m pip install porch-pirate
The Porch Pirate client can be used to conduct nearly complete reviews of public Postman entities quickly and simply. There are intended workflows and particular keywords that can typically maximize results. These methodologies can be found on our blog: Plundering Postman with Porch Pirate.
Porch Pirate supports the following arguments to be performed on collections, workspaces, or users.
--globals
--collections
--requests
--urls
--dump
--raw
--curl
porch-pirate -s "coca-cola.com"
By default, Porch Pirate will display globals from all active and inactive environments if they are defined in the workspace. Provide a -w argument with the workspace ID (found by performing a simple search, or an automatic search dump) to extract the workspace's globals, along with other information.
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8
When an interesting result has been found with a simple search, we can provide the workspace ID to the -w argument with the --dump command to begin extracting information from the workspace and its collections.
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --dump
Porch Pirate can be supplied a simple search term along with the --globals argument. Porch Pirate will dump all relevant workspaces tied to the results discovered in the simple search, but only if there are globals defined. This is particularly useful for quickly identifying potentially interesting workspaces to dig into further.
porch-pirate -s "shopify" --globals
Porch Pirate can be supplied a simple search term along with the --dump argument. Porch Pirate will dump all relevant workspaces and collections tied to the results discovered in the simple search. This is particularly useful for quickly sifting through potentially interesting results.
porch-pirate -s "coca-cola.com" --dump
A particularly useful way to use Porch Pirate is to extract all URLs from a workspace and export them to another tool for fuzzing.
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --urls
Porch Pirate will recursively extract all URLs from workspaces and their collections related to a simple search term.
porch-pirate -s "coca-cola.com" --urls
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --collections
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --requests
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --raw
porch-pirate -w WORKSPACE_ID
porch-pirate -c COLLECTION_ID
porch-pirate -r REQUEST_ID
porch-pirate -u USERNAME/TEAMNAME
Porch Pirate can build curl requests when provided with a request ID for easier testing.
porch-pirate -r 11055256-b1529390-18d2-4dce-812f-ee4d33bffd38 --curl
porch-pirate -s coca-cola.com --proxy 127.0.0.1:8080
from porchpirate import porchpirate

p = porchpirate()
print(p.search('coca-cola.com'))
p = porchpirate()
print(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
import json
from porchpirate import porchpirate

p = porchpirate()
collections = json.loads(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
for collection in collections['data']:
requests = collection['requests']
for r in requests:
request_data = p.request(r['id'])
print(request_data)
p = porchpirate()
print(p.workspace_globals('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
Other library usage examples can be found in the examples directory, which contains the following examples:
dump_workspace.py
format_search_results.py
format_workspace_collections.py
format_workspace_globals.py
get_collection.py
get_collections.py
get_profile.py
get_request.py
get_statistics.py
get_team.py
get_user.py
get_workspace.py
recursive_globals_from_search.py
request_to_curl.py
search.py
search_by_page.py
workspace_collections.py
Permiso: https://permiso.io
Read our release blog: https://permiso.io/blog/cloudgrappler-a-powerful-open-source-threat-detection-tool-for-cloud-environments
CloudGrappler is a purpose-built tool designed for effortless querying of high-fidelity and single-event detections related to well-known threat actors in popular cloud environments such as AWS and Azure.
To optimize your utilization of CloudGrappler, we recommend using shorter time ranges when querying for results. This approach enhances efficiency and accelerates the retrieval of information, ensuring a more seamless experience with the tool.
pip3 install -r requirements.txt
To clone the cloudgrep repository locally, run the clone.sh file. Alternatively, you can manually clone the repository into the same directory where CloudGrappler was cloned.
chmod +x clone.sh
./clone.sh
This tool offers a CLI (Command Line Interface). As such, here we review its use:
Define the scanning scope inside data_sources.json file based on your cloud infrastructure configuration. The following example showcases a structured data_sources.json file for both AWS and Azure environments:
Modifying the source inside the queries.json file to a wildcard character (*) will scan the corresponding query across both AWS and Azure environments (a sketch of such an entry follows the example below).
{
"AWS": [
{
"bucket": "cloudtrail-logs-00000000-ffffff",
"prefix": [
"testTrails/AWSLogs/00000000/CloudTrail/eu-east-1/2024/03/03",
"testTrails/AWSLogs/00000000/CloudTrail/us-west-1/2024/03/04"
]
},
{
"bucket": "aws-kosova-us-east-1-00000000"
}
],
"AZURE": [
{
"accountname": "logs",
"container": [
"cloudgrappler"
]
}
]
}
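A minimal sketch of what a wildcard query entry in queries.json might look like; the exact schema is an assumption, and the query string is taken from the sample output below:
{
    "query": "GetFileDownloadUrls.*secrets_",
    "source": "*"
}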
Run command
python3 main.py
python3 main.py -p
[+] Running GetFileDownloadUrls.*secrets_ for AWS
[+] Threat Actor: LUCR3
[+] Severity: MEDIUM
[+] Description: Review use of CloudShell. Permiso seldom witnesses use of CloudShell outside of known attackers. This, however, may be a part of your normal business use case.
python3 main.py -p -jo
reports
└── json
    ├── AWS
    │   └── 2024-03-04 01:01 AM
    │       └── cloudtrail-logs-00000000-ffffff--
    │           └── testTrails/AWSLogs/00000000/CloudTrail/eu-east-1/2024/03/03
    │               └── GetFileDownloadUrls.*secrets_.json
    └── AZURE
        └── 2024-03-04 01:01 AM
            └── logs
                └── cloudgrappler
                    └── okta_key.json
python3 main.py -p -sd 2024-02-15 -ed 2024-02-16
python3 main.py -q "GetFileDownloadUrls.*secret", "UpdateAccessKey" -s '*'
python3 main.py -f new_file.json
Your system will need access to the S3 bucket. For example, if you are running on your laptop, you will need to configure the AWS CLI. If you are running on an EC2, an Instance Profile is likely the best choice.
If you run on an EC2 instance in the same region as the S3 bucket with a VPC endpoint for S3 you can avoid egress charges. You can authenticate in a number of ways.
The simplest way to authenticate with Azure is to first run:
az login
This will open a browser window and prompt you to login to Azure.
This tool takes a scanning tool's output file, and converts it to a tabular format (CSV, XLSX, or text table). This tool can process output from the following tools:
This tool can offer a human-readable, tabular format which you can tie to any observations you have drafted in your report. Why? Because then your reviewers can tell that you, the pentester, investigated all found open ports, and looked at all scanning reports.
Using Pip:
pip install --user sr2t
You can use sr2t in two ways:
sr2t --help
python -m src.sr2t --help
$ sr2t --help
usage: sr2t [-h] [--nessus NESSUS [NESSUS ...]] [--nmap NMAP [NMAP ...]]
[--nikto NIKTO [NIKTO ...]] [--dirble DIRBLE [DIRBLE ...]]
[--testssl TESTSSL [TESTSSL ...]]
[--fortify FORTIFY [FORTIFY ...]] [--nmap-state NMAP_STATE]
[--nmap-services] [--no-nessus-autoclassify]
[--nessus-autoclassify-file NESSUS_AUTOCLASSIFY_FILE]
[--nessus-tls-file NESSUS_TLS_FILE]
[--nessus-x509-file NESSUS_X509_FILE]
[--nessus-http-file NESSUS_HTTP_FILE]
[--nessus-smb-file NESSUS_SMB_FILE]
[--nessus-rdp-file NESSUS_RDP_FILE]
[--nessus-ssh-file NESSUS_SSH_FILE]
[--nessus-min-severity NESSUS_MIN_SEVERITY]
[--nessus-plugin-name-width NESSUS_PLUGIN_NAME_WIDTH]
[--nessus-sort-by NESSUS_SORT_BY]
[--nikto-description-width NIKTO_DESCRIPTION_WIDTH]
[--fortify-details] [--annotation-width ANNOTATION_WIDTH]
[-oC OUTPUT_CSV] [-oT OUTPUT_TXT] [-oX OUTPUT_XLSX]
[-oA OUTPUT_ALL]
Converting scanning reports to a tabular format
optional arguments:
-h, --help show this help message and exit
--nmap-state NMAP_STATE
Specify the desired state to filter (e.g.
open|filtered).
--nmap-services Specify to output a supplemental list of detected
services.
--no-nessus-autoclassify
Specify to not autoclassify Nessus results.
--nessus-autoclassify-file NESSUS_AUTOCLASSIFY_FILE
Specify to override a custom Nessus autoclassify YAML
file.
--nessus-tls-file NESSUS_TLS_FILE
Specify to override a custom Nessus TLS findings YAML
file.
--nessus-x509-file NESSUS_X509_FILE
Specify to override a custom Nessus X.509 findings
YAML file.
--nessus-http-file NESSUS_HTTP_FILE
Specify to override a custom Nessus HTTP findings YAML
file.
--nessus-smb-file NESSUS_SMB_FILE
Specify to override a custom Nessus SMB findings YAML
file.
--nessus-rdp-file NESSUS_RDP_FILE
Specify to override a custom Nessus RDP findings YAML
file.
--nessus-ssh-file NESSUS_SSH_FILE
Specify to override a custom Nessus SSH findings YAML
file.
--nessus-min-severity NESSUS_MIN_SEVERITY
Specify the minimum severity to output (e.g. 1).
--nessus-plugin-name-width NESSUS_PLUGIN_NAME_WIDTH
Specify the width of the pluginid column (e.g. 30).
--nessus-sort-by NESSUS_SORT_BY
Specify to sort output by ip-address, port, plugin-id,
plugin-name or severity.
--nikto-description-width NIKTO_DESCRIPTION_WIDTH
Specify the width of the description column (e.g. 30).
--fortify-details Specify to include the Fortify abstracts, explanations
and recommendations for each vulnerability.
--annotation-width ANNOTATION_WIDTH
Specify the width of the annotation column (e.g. 30).
-oC OUTPUT_CSV, --output-csv OUTPUT_CSV
Specify the output CSV basename (e.g. output).
-oT OUTPUT_TXT, --output-txt OUTPUT_TXT
Specify the output TXT file (e.g. output.txt).
-oX OUTPUT_XLSX, --output-xlsx OUTPUT_XLSX
Specify the output XLSX file (e.g. output.xlsx). Only
for Nessus at the moment
-oA OUTPUT_ALL, --output-all OUTPUT_ALL
Specify the output basename to output to all formats
(e.g. output).
specify at least one:
--nessus NESSUS [NESSUS ...]
Specify (multiple) Nessus XML files.
--nmap NMAP [NMAP ...]
Specify (multiple) Nmap XML files.
--nikto NIKTO [NIKTO ...]
Specify (multiple) Nikto XML files.
--dirble DIRBLE [DIRBLE ...]
Specify (multiple) Dirble XML files.
--testssl TESTSSL [TESTSSL ...]
Specify (multiple) Testssl JSON files.
--fortify FORTIFY [FORTIFY ...]
Specify (multiple) HP Fortify FPR files.
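Since at least one input type is required but several can be given, different reports can presumably be combined in a single run:
$ sr2t --nmap example/nmap.xml --nikto example/nikto.xml -oA output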
A few examples
To produce an XLSX format:
$ sr2t --nessus example/nessus.nessus --no-nessus-autoclassify -oX example.xlsx
To produce a text tabular format to stdout:
$ sr2t --nessus example/nessus.nessus
+---------------+-------+-----------+-----------------------------------------------------------------------------+----------+-------------+
| host | port | plugin id | plugin name | severity | annotations |
+---------------+-------+-----------+-----------------------------------------------------------------------------+----------+-------------+
| 192.168.142.4 | 3389 | 42873 | SSL Medium Strength Cipher Suites Supported (SWEET32) | 2 | X |
| 192.168.142.4 | 443 | 42873 | SSL Medium Strength Cipher Suites Supported (SWEET32) | 2 | X |
| 192.168.142.4 | 3389 | 18405 | Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness | 2 | X |
| 192.168.142.4 | 3389 | 30218 | Terminal Services Encryption Level is not FIPS-140 Compliant | 1 | X |
| 192.168.142.4 | 3389 | 57690 | Terminal Services Encryption Level is Medium or Low | 2 | X |
| 192.168.142.4 | 3389 | 58453 | Terminal Services Doesn't Use Network Level Authentication (NLA) Only | 2 | X |
| 192.168.142.4 | 3389 | 45411 | SSL Certificate with Wrong Hostname | 2 | X |
| 192.168.142.4 | 443 | 45411 | SSL Certificate with Wrong Hostname | 2 | X |
| 192.168.142.4 | 3389 | 35291 | SSL Certificate Signed Using Weak Hashing Algorithm | 2 | X |
| 192.168.142.4 | 3389 | 57582 | SSL Self-Signed Certificate | 2 | X |
| 192.168.142.4 | 3389 | 51192 | SSL Certificate Cannot Be Trusted | 2 | X |
| 192.168.142.2 | 3389 | 42873 | SSL Medium Strength Cipher Suites Supported (SWEET32) | 2 | X |
| 192.168.142.2 | 443 | 42873 | SSL Medium Strength Cipher Suites Supported (SWEET32) | 2 | X |
| 192.168.142.2 | 3389 | 18405 | Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness | 2 | X |
| 192.168.142.2 | 3389 | 30218 | Terminal Services Encryption Level is not FIPS-140 Compliant | 1 | X |
| 192.168.142.2 | 3389 | 57690 | Terminal Services Encryption Level is Medium or Low | 2 | X |
| 192.168.142.2 | 3389 | 58453 | Terminal Services Doesn't Use Network Level Authentication (NLA) Only | 2 | X |
| 192.168.142.2 | 3389 | 45411 | SSL Certificate with Wrong Hostname | 2 | X |
| 192.168.142.2 | 443 | 45411 | SSL Certificate with Wrong Hostname | 2 | X |
| 192.168.142.2 | 3389 | 35291 | SSL Certificate Signed Using Weak Hashing Algorithm | 2 | X |
| 192.168.142.2 | 3389 | 57582 | SSL Self-Signed Certificate | 2 | X |
| 192.168.142.2 | 3389 | 51192 | SSL Certificate Cannot Be Trusted | 2 | X |
| 192.168.142.2 | 445 | 57608 | SMB Signing not required | 2 | X |
+---------------+-------+-----------+-----------------------------------------------------------------------------+----------+-------------+
Or to output a CSV file:
$ sr2t --nessus example/nessus.nessus -oC example
$ cat example_nessus.csv
host,port,plugin id,plugin name,severity,annotations
192.168.142.4,3389,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
192.168.142.4,443,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
192.168.142.4,3389,18405,Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness,2,X
192.168.142.4,3389,30218,Terminal Services Encryption Level is not FIPS-140 Compliant,1,X
192.168.142.4,3389,57690,Terminal Services Encryption Level is Medium or Low,2,X
192.168.142.4,3389,58453,Terminal Services Doesn't Use Network Level Authentication (NLA) Only,2,X
192.168.142.4,3389,45411,SSL Certificate with Wrong Hostname,2,X
192.168.142.4,443,45411,SSL Certificate with Wrong Hostname,2,X
192.168.142.4,3389,35291,SSL Certificate Signed Using Weak Hashing Algorithm,2,X
192.168.142.4,3389,57582,SSL Self-Signed Certificate,2,X
192.168.142.4,3389,51192,SSL Certificate Cannot Be Trusted,2,X
192.168.142.2,3389,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
192.168.142.2,443,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
192.168.142.2,3389,18405,Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness,2,X
192.168.142.2,3389,30218,Terminal Services Encryption Level is not FIPS-140 Compliant,1,X
192.168.142.2,3389,57690,Terminal Services Encryption Level is Medium or Low,2,X
192.168.142.2,3389,58453,Terminal Services Doesn't Use Network Level Authentication (NLA) Only,2,X
192.168.142.2,3389,45411,SSL Certificate with Wrong Hostname,2,X
192.168.142.2,443,45411,SSL Certificate with Wrong Hostname,2,X
192.168.142.2,3389,35291,SSL Certificate Signed Using Weak Hashing Algorithm,2,X
192.168.142.2,3389,57582,SSL Self-Signed Certificate,2,X
192.168.142.2,3389,51192,SSL Certificate Cannot Be Trusted,2,X
192.168.142.2,445,57608,SMB Signing not required,2,X
To produce an XLSX format:
$ sr2t --nmap example/nmap.xml -oX example.xlsx
To produce a text tabular format to stdout:
$ sr2t --nmap example/nmap.xml --nmap-services
Nmap TCP:
+-----------------+----+----+----+-----+-----+-----+-----+------+------+------+
| | 53 | 80 | 88 | 135 | 139 | 389 | 445 | 3389 | 5800 | 5900 |
+-----------------+----+----+----+-----+-----+-----+-----+------+------+------+
| 192.168.23.78 | X | | X | X | X | X | X | X | | |
| 192.168.27.243 | | | | X | X | | X | X | X | X |
| 192.168.99.164 | | | | X | X | | X | X | X | X |
| 192.168.228.211 | | X | | | | | | | | |
| 192.168.171.74 | | | | X | X | | X | X | X | X |
+-----------------+----+----+----+-----+-----+-----+-----+------+------+------+
Nmap Services:
+-----------------+------+-------+---------------+-------+
| ip address | port | proto | service | state |
+-----------------+------+-------+---------------+-------+
| 192.168.23.78 | 53 | tcp | domain | open |
| 192.168.23.78 | 88 | tcp | kerberos-sec | open |
| 192.168.23.78 | 135 | tcp | msrpc | open |
| 192.168.23.78 | 139 | tcp | netbios-ssn | open |
| 192.168.23.78 | 389 | tcp | ldap | open |
| 192.168.23.78 | 445 | tcp | microsoft-ds | open |
| 192.168.23.78 | 3389 | tcp | ms-wbt-server | open |
| 192.168.27.243 | 135 | tcp | msrpc | open |
| 192.168.27.243 | 139 | tcp | netbios-ssn | open |
| 192.168.27.243 | 445 | tcp | microsoft-ds | open |
| 192.168.27.243 | 3389 | tcp | ms-wbt-server | open |
| 192.168.27.243 | 5800 | tcp | vnc-http | open |
| 192.168.27.243 | 5900 | tcp | vnc | open |
| 192.168.99.164 | 135 | tcp | msrpc | open |
| 192.168.99.164 | 139 | tcp | netbios-ssn | open |
| 192.168.99.164 | 445 | tcp | microsoft-ds | open |
| 192.168.99.164 | 3389 | tcp | ms-wbt-server | open |
| 192.168.99.164 | 5800 | tcp | vnc-http | open |
| 192.168.99.164 | 5900 | tcp | vnc | open |
| 192.168.228.211 | 80 | tcp | http | open |
| 192.168.171.74 | 135 | tcp | msrpc | open |
| 192.168.171.74 | 139 | tcp | netbios-ssn | open |
| 192.168.171.74 | 445 | tcp | microsoft-ds | open |
| 192.168.171.74 | 3389 | tcp | ms-wbt-server | open |
| 192.168.171.74 | 5800 | tcp | vnc-http | open |
| 192.168.171.74 | 5900 | tcp | vnc | open |
+-----------------+------+-------+---------------+-------+
Or to output a CSV file:
$ sr2t --nmap example/nmap.xml -oC example
$ cat example_nmap_tcp.csv
ip address,53,80,88,135,139,389,445,3389,5800,5900
192.168.23.78,X,,X,X,X,X,X,X,,
192.168.27.243,,,,X,X,,X,X,X,X
192.168.99.164,,,,X,X,,X,X,X,X
192.168.228.211,,X,,,,,,,,
192.168.171.74,,,,X,X,,X,X,X,X
To produce an XLSX format:
$ sr2t --nikto example/nikto.xml -oX example/nikto.xlsx
To produce a text tabular format to stdout:
$ sr2t --nikto example/nikto.xml
+----------------+-----------------+-------------+----------------------------------------------------------------------------------+-------------+
| target ip | target hostname | target port | description | annotations |
+----------------+-----------------+-------------+----------------------------------------------------------------------------------+-------------+
| 192.168.178.10 | 192.168.178.10 | 80 | The anti-clickjacking X-Frame-Options header is not present. | X |
| 192.168.178.10 | 192.168.178.10 | 80 | The X-XSS-Protection header is not defined. This header can hint to the user | X |
| | | | agent to protect against some forms of XSS | |
| 192.168.178.10 | 192.168.178.10 | 80 | The X-Content-Type-Options header is not set. This could allow the user agent to | X |
| | | | render the content of the site in a different fashion to the MIME type | |
+----------------+-----------------+-------------+----------------------------------------------------------------------------------+-------------+
Or to output a CSV file:
$ sr2t --nikto example/nikto.xml -oC example
$ cat example_nikto.csv
target ip,target hostname,target port,description,annotations
192.168.178.10,192.168.178.10,80,The anti-clickjacking X-Frame-Options header is not present.,X
192.168.178.10,192.168.178.10,80,"The X-XSS-Protection header is not defined. This header can hint to the user
agent to protect against some forms of XSS",X
192.168.178.10,192.168.178.10,80,"The X-Content-Type-Options header is not set. This could allow the user agent to
render the content of the site in a different fashion to the MIME type",X
To produce an XLSX format:
$ sr2t --dirble example/dirble.xml -oX example.xlsx
To produce a text tabular format to stdout:
$ sr2t --dirble example/dirble.xml
+-----------------------------------+------+-------------+--------------+-------------+---------------------+--------------+-------------+
| url | code | content len | is directory | is listable | found from listable | redirect url | annotations |
+-----------------------------------+------+-------------+--------------+-------------+---------------------+--------------+-------------+
| http://example.org/flv | 0 | 0 | false | false | false | | X |
| http://example.org/hire | 0 | 0 | false | false | false | | X |
| http://example.org/phpSQLiteAdmin | 0 | 0 | false | false | false | | X |
| http://example.org/print_order | 0 | 0 | false | false | false | | X |
| http://example.org/putty | 0 | 0 | false | false | false | | X |
| http://example.org/receipts | 0 | 0 | false | false | false | | X |
+-----------------------------------+------+-------------+--------------+-------------+---------------------+--------------+-------------+
Or to output a CSV file:
$ sr2t --dirble example/dirble.xml -oC example
$ cat example_dirble.csv
url,code,content len,is directory,is listable,found from listable,redirect url,annotations
http://example.org/flv,0,0,false,false,false,,X
http://example.org/hire,0,0,false,false,false,,X
http://example.org/phpSQLiteAdmin,0,0,false,false,false,,X
http://example.org/print_order,0,0,false,false,false,,X
http://example.org/putty,0,0,false,false,false,,X
http://example.org/receipts,0,0,false,false,false,,X
To produce an XLSX format:
$ sr2t --testssl example/testssl.json -oX example.xlsx
To produce a text tabular format to stdout:
$ sr2t --testssl example/testssl.json
+-----------------------------------+------+--------+---------+--------+------------+-----+---------+---------+----------+
| ip address | port | BREACH | No HSTS | No PFS | No TLSv1.3 | RC4 | TLSv1.0 | TLSv1.1 | Wildcard |
+-----------------------------------+------+--------+---------+--------+------------+-----+---------+---------+----------+
| rc4-md5.badssl.com/104.154.89.105 | 443 | X | X | X | X | X | X | X | X |
+-----------------------------------+------+--------+---------+--------+------------+-----+---------+---------+----------+
Or to output a CSV file:
$ sr2t --testssl example/testssl.json -oC example
$ cat example_testssl.csv
ip address,port,BREACH,No HSTS,No PFS,No TLSv1.3,RC4,TLSv1.0,TLSv1.1,Wildcard
rc4-md5.badssl.com/104.154.89.105,443,X,X,X,X,X,X,X,X
To produce an XLSX format:
$ sr2t --fortify example/fortify.fpr -oX example.xlsx
To produce a text tabular format to stdout:
$ sr2t --fortify example/fortify.fpr
+--------------------------+-----------------------+-------------------------------+----------+------------+-------------+
| | type | subtype | severity | confidence | annotations |
+--------------------------+-----------------------+-------------------------------+----------+------------+-------------+
| example1/web.xml:135:135 | J2EE Misconfiguration | Insecure Transport | 3.0 | 5.0 | X |
| example2/web.xml:150:150 | J2EE Misconfiguration | Insecure Transport | 3.0 | 5.0 | X |
| example3/web.xml:109:109 | J2EE Misconfiguration | Incomplete Error Handling | 3.0 | 5.0 | X |
| example4/web.xml:108:108 | J2EE Misconfiguration | Incomplete Error Handling | 3.0 | 5.0 | X |
| example5/web.xml:166:166 | J2EE Misconfiguration | Insecure Transport | 3.0 | 5.0 | X |
| example6/web.xml:2:2 | J2EE Misconfiguration | Excessive Session Timeout | 3.0 | 5.0 | X |
| example7/web.xml:162:162 | J2EE Misconfiguration | Missing Authentication Method | 3.0 | 5.0 | X |
+--------------------------+-----------------------+-------------------------------+----------+------------+-------------+
Or to output a CSV file:
$ sr2t --fortify example/fortify.fpr -oC example
$ cat example_fortify.csv
,type,subtype,severity,confidence,annotations
example1/web.xml:135:135,J2EE Misconfiguration,Insecure Transport,3.0,5.0,X
example2/web.xml:150:150,J2EE Misconfiguration,Insecure Transport,3.0,5.0,X
example3/web.xml:109:109,J2EE Misconfiguration,Incomplete Error Handling,3.0,5.0,X
example4/web.xml:108:108,J2EE Misconfiguration,Incomplete Error Handling,3.0,5.0,X
example5/web.xml:166:166,J2EE Misconfiguration,Insecure Transport,3.0,5.0,X
example6/web.xml:2:2,J2EE Misconfiguration,Excessive Session Timeout,3.0,5.0,X
example7/web.xml:162:162,J2EE Misconfiguration,Missing Authentication Method,3.0,5.0,X
Exploitation and scanning tool specifically designed for Jenkins versions <= 2.441 and <= LTS 2.426.2. It leverages CVE-2024-23897 to assess and exploit vulnerabilities in Jenkins instances.
Ensure you have the necessary permissions to scan and exploit the target systems. Use this tool responsibly and ethically.
python CVE-2024-23897.py -t <target> -p <port> -f <file>
or
python CVE-2024-23897.py -i <input_file> -f <file>
Parameters:
- -t or --target: Specify the target IP(s). Supports a single IP, an IP range, a comma-separated list, or a CIDR block.
- -i or --input-file: Path to an input file containing hosts in the format http://1.2.3.4:8080/ (one per line).
- -o or --output-file: Export results to a file (optional).
- -p or --port: Specify the port number. Default is 8080 (optional).
- -f or --file: Specify the file to read on the target system.
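For example, to read /etc/passwd from a single target using the documented flags (the IP is a placeholder):
python CVE-2024-23897.py -t 192.168.1.10 -p 8080 -f /etc/passwd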
Contributions are welcome. Please feel free to fork, modify, and make pull requests or report issues.
Alexander Hagenah - URL - Twitter
This tool is meant for educational and professional purposes only. Unauthorized scanning and exploiting of systems is illegal and unethical. Always ensure you have explicit permission to test and exploit any systems you target.
RepoReaper is a precision tool designed to automate the identification of exposed .git
repositories across a list of domains and subdomains. By processing a user-provided text file with domain names, RepoReaper systematically checks each for publicly accessible .git
files. This enables rapid assessment and protection against information leaks, making RepoReaper an essential resource for security teams and web developers.
Clone the repository and install the required dependencies:
git clone https://github.com/YourUsername/RepoReaper.git
cd RepoReaper
pip install -r requirements.txt
chmod +x RepoReaper.py
RepoReaper is executed from the command line and will prompt for the path to a file containing a list of domains or subdomains to be scanned.
To start RepoReaper, simply run:
./RepoReaper.py
or
python3 RepoReaper.py
Upon execution, RepoReaper will ask for the path to the file containing the domains or subdomains:
Enter the path of the file containing domains
Provide the path to your text file when prompted. The file should contain one domain or subdomain per line, like so:
example.com
subdomain.example.com
anotherdomain.com
RepoReaper will then proceed to scan the provided domains or subdomains for exposed .git repositories and report its findings.
This tool is intended for educational purposes and security research only. The user assumes all responsibility for any damages or misuse resulting from its use.
SwaggerSpy is a tool designed for automated Open Source Intelligence (OSINT) on SwaggerHub. This project aims to streamline the process of gathering intelligence from APIs documented on SwaggerHub, providing valuable insights for security researchers, developers, and IT professionals.
Swagger is an open-source framework that allows developers to design, build, document, and consume RESTful web services. It simplifies API development by providing a standard way to describe REST APIs using a JSON or YAML format. Swagger enables developers to create interactive documentation for their APIs, making it easier for both developers and non-developers to understand and use the API.
SwaggerHub is a collaborative platform for designing, building, and managing APIs using the Swagger framework. It offers a centralized repository for API documentation, version control, and collaboration among team members. SwaggerHub simplifies the API development lifecycle by providing a unified platform for API design and testing.
Performing OSINT on SwaggerHub is crucial because developers, in their pursuit of efficient API documentation and sharing, may inadvertently expose sensitive information. Here are key reasons why OSINT on SwaggerHub is valuable:
Developer Oversights: Developers might unintentionally include secrets, credentials, or sensitive information in API documentation on SwaggerHub. These oversights can lead to security vulnerabilities and unauthorized access if not identified and addressed promptly.
Security Best Practices: OSINT on SwaggerHub helps enforce security best practices. Identifying and rectifying potential security issues early in the development lifecycle is essential to ensure the confidentiality and integrity of APIs.
Preventing Data Leaks: By systematically scanning SwaggerHub for sensitive information, organizations can proactively prevent data leaks. This is especially crucial in today's interconnected digital landscape where APIs play a vital role in data exchange between services.
Risk Mitigation: Understanding that developers might forget to remove or obfuscate sensitive details in API documentation underscores the importance of continuous OSINT on SwaggerHub. This proactive approach mitigates the risk of unintentional exposure of critical information.
Compliance and Privacy: Many industries have stringent compliance requirements regarding the protection of sensitive data. OSINT on SwaggerHub ensures that APIs adhere to these regulations, promoting a culture of compliance and safeguarding user privacy.
Educational Opportunities: Identifying oversights in SwaggerHub documentation provides educational opportunities for developers. It encourages a security-conscious mindset, fostering a culture of awareness and responsible information handling.
By recognizing that developers can inadvertently expose secrets, OSINT on SwaggerHub becomes an integral part of the overall security strategy, safeguarding against potential threats and promoting a secure API ecosystem.
SwaggerSpy obtains information from SwaggerHub and utilizes regular expressions to inspect API documentation for sensitive information, such as secrets and credentials.
To use SwaggerSpy, follow these steps:
git clone https://github.com/UndeadSec/SwaggerSpy.git
cd SwaggerSpy
pip install -r requirements.txt
python swaggerspy.py searchterm
SwaggerSpy is intended for educational and research purposes only. Users are responsible for ensuring that their use of this tool complies with applicable laws and regulations.
Contributions to SwaggerSpy are welcome! Feel free to submit issues, feature requests, or pull requests to help improve this tool.
SwaggerSpy is developed and maintained by Alisson Moretto (UndeadSec)
I'm a passionate cyber threat intelligence pro who loves sharing insights and crafting cybersecurity tools.
SwaggerSpy is licensed under the MIT License. See the LICENSE file for details.
Special thanks to @Liodeus for providing project inspiration through swaggerHole.
AzSubEnum is a specialized subdomain enumeration tool tailored for Azure services. This tool is designed to meticulously search and identify subdomains associated with various Azure services. Through a combination of techniques and queries, AzSubEnum delves into the Azure domain structure, systematically probing and collecting subdomains related to a diverse range of Azure services.
AzSubEnum operates by leveraging DNS resolution techniques and systematic permutation methods to unveil subdomains associated with Azure services such as Azure App Services, Storage Accounts, Azure Databases (including MSSQL, Cosmos DB, and Redis), Key Vaults, CDN, Email, SharePoint, Azure Container Registry, and more. Its functionality extends to comprehensively scanning different Azure service domains to identify associated subdomains.
With this tool, users can conduct thorough subdomain enumeration within Azure environments, aiding security professionals, researchers, and administrators in gaining insights into the expansive landscape of Azure services and their corresponding subdomains.
During my learning journey on Azure AD exploitation, I discovered that the Azure subdomain tool, Invoke-EnumerateAzureSubDomains from NetSPI, was unable to run on my Debian PowerShell. Consequently, I created a crude implementation of that tool in Python.
➜ AzSubEnum git:(main) ✗ python3 azsubenum.py --help
usage: azsubenum.py [-h] -b BASE [-v] [-t THREADS] [-p PERMUTATIONS]
Azure Subdomain Enumeration
options:
-h, --help show this help message and exit
-b BASE, --base BASE Base name to use
-v, --verbose Show verbose output
-t THREADS, --threads THREADS
Number of threads for concurrent execution
-p PERMUTATIONS, --permutations PERMUTATIONS
File containing permutations
Basic enumeration:
python3 azsubenum.py -b retailcorp --threads 10
Using permutation wordlists:
python3 azsubenum.py -b retailcorp --threads 10 --permutations permutations.txt
With verbose output:
python3 azsubenum.py -b retailcorp --threads 10 --permutations permutations.txt --verbose
SqliSniper is a robust Python tool designed to detect time-based blind SQL injections in HTTP request headers. It enhances the security assessment process by rapidly scanning and identifying potential vulnerabilities using multi-threading, ensuring speed and efficiency. Unlike other scanners, SqliSniper is designed to eliminate false positives and to send alerts upon detection through its built-in Discord notification functionality.
git clone https://github.com/danialhalo/SqliSniper.git
cd SqliSniper
chmod +x sqlisniper.py
pip3 install -r requirements.txt
This will display help for the tool. Here are all the options it supports.
ubuntu:~/sqlisniper$ ./sqlisniper.py -h
[ASCII art banner: SQLISNIPER]
-: By Muhammad Danial :-
usage: sqlisniper.py [-h] [-u URL] [-r URLS_FILE] [-p] [--proxy PROXY] [--payload PAYLOAD] [--single-payload SINGLE_PAYLOAD] [--discord DISCORD] [--headers HEADERS]
[--threads THREADS]
Detect SQL injection by sending malicious queries
options:
-h, --help show this help message and exit
-u URL, --url URL Single URL for the target
-r URLS_FILE, --urls_file URLS_FILE
File containing a list of URLs
-p, --pipeline Read from pipeline
--proxy PROXY Proxy for intercepting requests (e.g., http://127.0.0.1:8080)
--payload PAYLOAD File containing malicious payloads (default is payloads.txt)
--single-payload SINGLE_PAYLOAD
Single payload for testing
--discord DISCORD Discord Webhook URL
--headers HEADERS File containing headers (default is headers.txt)
--threads THREADS Number of threads
The URL can be provided with the -u flag for a single-site scan:
./sqlisniper.py -u http://example.com
The -r flag allows SqliSniper to read a file containing multiple URLs for simultaneous scanning:
./sqlisniper.py -r url.txt
SqliSniper can also work with pipeline input using the -p flag:
cat url.txt | ./sqlisniper.py -p
The pipeline feature facilitates seamless integration with other tools. For instance, you can utilize tools like subfinder and httpx, and then pipe their output to SqliSniper for mass scanning.
subfinder -silent -d google.com | sort -u | httpx -silent | ./sqlisniper.py -p
By default, SqliSniper uses the payloads.txt file. However, the --payload flag can be used to provide a custom payloads file:
./sqlisniper.py -u http://example.com --payload mssql_payloads.txt
While using a custom payloads file, ensure that you substitute the sleep time with %__TIME_OUT__%. SqliSniper dynamically adjusts the sleep time iteratively to mitigate potential false positives. The payloads file should look like this:
ubuntu:~/sqlisniper$ cat payloads.txt
0\"XOR(if(now()=sysdate(),sleep(%__TIME_OUT__%),0))XOR\"Z
"0"XOR(if(now()=sysdate()%2Csleep(%__TIME_OUT__%)%2C0))XOR"Z"
0'XOR(if(now()=sysdate(),sleep(%__TIME_OUT__%),0))XOR'Z
If you want to test with only a single payload, the --single-payload flag can be used. Make sure to replace the sleep time with %__TIME_OUT__%:
./sqlisniper.py -r url.txt --single-payload "0'XOR(if(now()=sysdate(),sleep(%__TIME_OUT__%),0))XOR'Z"
Headers are saved in the headers.txt file. To scan a custom header, add the custom HTTP request header to the headers.txt file:
ubuntu:~/sqlisniper$ cat headers.txt
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64)
X-Forwarded-For: 127.0.0.1
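A custom headers file can presumably also be passed explicitly with the --headers flag shown in the help output (the file name is a placeholder):
./sqlisniper.py -u http://example.com --headers custom_headers.txt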
SqliSniper also offers Discord alert notifications, enhancing its functionality by providing real-time alerts through Discord webhooks. This feature proves invaluable during large-scale scans, allowing prompt notifications upon detection.
./sqlisniper.py -r url.txt --discord <web_hookurl>
Threads can be defined with the --threads flag:
./sqlisniper.py -r url.txt --threads 10
Note: It is crucial to consider that employing a higher number of threads might lead to potential false positives or overlooked valid issues. Due to the nature of time-based SQL injection, it is recommended to use fewer threads for more accurate detection.
SqliSniper is made in Python with lots of <3 by @Muhammad Danial.
To know more about our Attack Surface Management platform, check out NVADR.
CATSploit is an automated penetration testing tool using the Cyber Attack Techniques Scoring (CATS) method that can be used without a pentester. Traditionally, pentesters implicitly select the attack techniques suitable for the target systems. CATSploit uses system configuration information such as OS, open ports, and software versions collected by a scanner, and calculates score values for capturability (eVc) and detectability (eVd) of each attack technique for the target system. By selecting the highest score values, it is possible to choose the most appropriate attack technique for the target system without the hack knack (a professional pentester's skill).
CATSploit automatically performs penetration tests in the following sequence:
1. Information gathering and prior information input: First, gather information about the target systems. CATSploit supports nmap and OpenVAS for gathering information about target systems, and also accepts prior information about target systems if you have it.
2. Calculating score values of attack techniques: Using the information obtained in the previous phase and the attack techniques database, the evaluation values for capture (eVc) and detectability (eVd) of each attack technique are calculated for each target computer.
3. Selection of attack techniques by score and construction of an attack scenario: Attack techniques are selected and attack scenarios are created according to pre-defined policies. For example, under a policy that prioritizes being hard to detect, the attack techniques with the lowest eVd (detectability score) will be selected.
4. Execution of the attack scenario: CATSploit executes the attack techniques according to the attack scenario constructed in the previous phase. CATSploit uses Metasploit as a framework and the Metasploit API to execute actual attacks.
CATSploit has the following prerequisites:
For Metasploit, Nmap and OpenVAS, it is assumed to be installed with the Kali Distribution.
To install the latest version of CATSploit, please use the following commands:
$ git clone https://github.com/catsploit/catsploit.git
$ cd catsploit
$ git clone https://github.com/catsploit/cats-helper.git
$ sudo ./setup.sh
CATSploit is a server-client configuration, and the server reads the configuration JSON file at startup. In config.json, the following fields should be modified for your environment.
(*) Adjust the number according to the specs of your machine.
To start the server, execute the following command:
$ python cats_server.py -c [CONFIG_FILE]
Next, prepare another console, start the client program, and initiate a connection to the server.
$ python catsploit.py -s [SOCKET_PATH]
After successfully connecting to the server and initializing it, the session will start.
_________ ___________ __ _ __
/ ____/ |/_ __/ ___/____ / /___ (_) /_
/ / / /| | / / \__ \/ __ \/ / __ \/ / __/
/ /___/ ___ |/ / ___/ / /_/ / / /_/ / / /_
\____/_/ |_/_/ /____/ .___/_/\____/_/\__/
/_/
[*] Connecting to cats-server
[*] Done.
[*] Initializing server
[*] Done.
catsploit>
The client can execute a variety of commands. Each command can be executed with the -h option to display the format of its arguments.
usage: [-h] {host,scenario,scan,plan,attack,post,reset,help,exit} ...
positional arguments:
{host,scenario,scan,plan,attack,post,reset,help,exit}
options:
-h, --help show this help message and exit
I've posted the commands and options below as well for reference.
host list:
show information about the hosts
usage: host list [-h]
options:
-h, --help show this help message and exit
host detail:
show more information about one host
usage: host detail [-h] host_id
positional arguments:
host_id ID of the host for which you want to show information
options:
-h, --help show this help message and exit
scenario list:
show information about the scenarios
usage: scenario list [-h]
options:
-h, --help show this help message and exit
scenario detail:
show more information about one scenario
usage: scenario detail [-h] scenario_id
positional arguments:
scenario_id ID of the scenario for which you want to show information
options:
-h, --help show this help message and exit
scan:
run network-scan and security-scan
usage: scan [-h] [--port PORT] target_host [target_host ...]
positional arguments:
target_host IP address to be scanned
options:
-h, --help show this help message and exit
--port PORT ports to be scanned
plan:
planning attack scenarios
usage: plan [-h] src_host_id dst_host_id
positional arguments:
src_host_id originating host
dst_host_id target host
options:
-h, --help show this help message and exit
attack:
execute attack scenario
usage: attack [-h] scenario_id
positional arguments:
scenario_id ID of the scenario you want to execute
options:
-h, --help show this help message and exit
post find-secret:
find files containing confidential information on the pwned host
usage: post find-secret [-h] host_id
positional arguments:
host_id ID of the host for which you want to find confidential information
options:
-h, --help show this help message and exit
reset:
reset data on the server
usage: reset [-h] {system} ...
positional arguments:
{system} reset system
options:
-h, --help show this help message and exit
exit:
exit CATSploit
usage: exit [-h]
options:
-h, --help show this help message and exit
In this example, we use CATSploit to scan network, plan the attack scenario, and execute the attack.
catsploit> scan 192.168.0.0/24
Network Scanning ... 100%
[*] Total 2 hosts were discovered.
Vulnerability Scanning ... 100%
[*] Total 14 vulnerabilities were discovered.
catsploit> host list
┏━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━┓
┃ hostID   ┃ IP           ┃ Hostname ┃ Platform                ┃ Pwned ┃
┡━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━┩
│ attacker │ 0.0.0.0      │ kali     │ kali 2022.4             │ True  │
│ h_exbiy6 │ 192.168.0.10 │          │ Linux 3.10 - 4.11       │ False │
│ h_nhqyfq │ 192.168.0.20 │          │ Microsoft Windows 7 SP1 │ False │
└──────────┴──────────────┴──────────┴─────────────────────────┴───────┘
catsploit> host detail h_exbiy6
┏━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━┓
┃ hostID   ┃ IP           ┃ Hostname ┃ Platform     ┃ Pwned ┃
┡━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━┩
│ h_exbiy6 │ 192.168.0.10 │ ubuntu   │ ubuntu 14.04 │ False │
└──────────┴──────────────┴──────────┴──────────────┴───────┘
[IP address]
┏━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━┳━━━━━━━━━━━━┓
┃ ipv4         ┃ ipv4mask ┃ ipv6 ┃ ipv6prefix ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━╇━━━━━━━━━━━━┩
│ 192.168.0.10 │          │      │            │
└──────────────┴──────────┴──────┴────────────┘
[Open ports]
┏━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ ip           ┃ proto ┃ port ┃ service     ┃ product      ┃ version                    ┃
┡━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ 192.168.0.10 │ tcp   │ 21   │ ftp         │ ProFTPD      │ 1.3.5                      │
│ 192.168.0.10 │ tcp   │ 22   │ ssh         │ OpenSSH      │ 6.6.1p1 Ubuntu 2ubuntu2.10 │
│ 192.168.0.10 │ tcp   │ 80   │ http        │ Apache httpd │ 2.4.7                      │
│ 192.168.0.10 │ tcp   │ 445  │ netbios-ssn │ Samba smbd   │ 3.X - 4.X                  │
│ 192.168.0.10 │ tcp   │ 631  │ ipp         │ CUPS         │ 1.7                        │
└──────────────┴───────┴──────┴─────────────┴──────────────┴────────────────────────────┘
[Vulnerabilities]
┏━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┓
┃ ip           ┃ proto ┃ port ┃ vuln_name                                                            ┃ cve            ┃
┡━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━┩
│ 192.168.0.10 │ tcp   │ 0    │ TCP Timestamps Information Disclosure                               │ N/A            │
│ 192.168.0.10 │ tcp   │ 21   │ FTP Unencrypted Cleartext Login                                     │ N/A            │
│ 192.168.0.10 │ tcp   │ 22   │ Weak MAC Algorithm(s) Supported (SSH)                               │ N/A            │
│ 192.168.0.10 │ tcp   │ 22   │ Weak Encryption Algorithm(s) Supported (SSH)                        │ N/A            │
│ 192.168.0.10 │ tcp   │ 22   │ Weak Host Key Algorithm(s) (SSH)                                    │ N/A            │
│ 192.168.0.10 │ tcp   │ 22   │ Weak Key Exchange (KEX) Algorithm(s) Supported (SSH)                │ N/A            │
│ 192.168.0.10 │ tcp   │ 80   │ Test HTTP dangerous methods                                         │ N/A            │
│ 192.168.0.10 │ tcp   │ 80   │ Drupal Core SQLi Vulnerability (SA-CORE-2014-005) - Active Check    │ CVE-2014-3704  │
│ 192.168.0.10 │ tcp   │ 80   │ Drupal Coder RCE Vulnerability (SA-CONTRIB-2016-039) - Active Check │ N/A            │
│ 192.168.0.10 │ tcp   │ 80   │ Sensitive File Disclosure (HTTP)                                    │ N/A            │
│ 192.168.0.10 │ tcp   │ 80   │ Unprotected Web App / Device Installers (HTTP)                      │ N/A            │
│ 192.168.0.10 │ tcp   │ 80   │ Cleartext Transmission of Sensitive Information via HTTP            │ N/A            │
│ 192.168.0.10 │ tcp   │ 80   │ jQuery < 1.9.0 XSS Vulnerability                                    │ CVE-2012-6708  │
│ 192.168.0.10 │ tcp   │ 80   │ jQuery < 1.6.3 XSS Vulnerability                                    │ CVE-2011-4969  │
│ 192.168.0.10 │ tcp   │ 80   │ Drupal 7.0 Information Disclosure Vulnerability - Active Check      │ CVE-2011-3730  │
│ 192.168.0.10 │ tcp   │ 631  │ SSL/TLS: Report Vulnerable Cipher Suites for HTTPS                  │ CVE-2016-2183  │
│ 192.168.0.10 │ tcp   │ 631  │ SSL/TLS: Report Vulnerable Cipher Suites for HTTPS                  │ CVE-2016-6329  │
│ 192.168.0.10 │ tcp   │ 631  │ SSL/TLS: Report Vulnerable Cipher Suites for HTTPS                  │ CVE-2020-12872 │
│ 192.168.0.10 │ tcp   │ 631  │ SSL/TLS: Deprecated TLSv1.0 and TLSv1.1 Protocol Detection          │ CVE-2011-3389  │
│ 192.168.0.10 │ tcp   │ 631  │ SSL/TLS: Deprecated TLSv1.0 and TLSv1.1 Protocol Detection          │ CVE-2015-0204  │
└──────────────┴───────┴──────┴──────────────────────────────────────────────────────────────────────┴────────────────┘
[Users]
┏━━━━━━━━━━━┳━━━━━━━┓
┃ user name ┃ group ┃
┡━━━━━━━━━━━╇━━━━━━━┩
└───────────┴───────┘
catsploit> plan attacker h_exbiy6
Planning attack scenario...100%
[*] Done. 15 scenarios was planned.
[*] To check each scenario, try 'scenario list' and/or 'scenario detail'.
catsploit> scenario list
┏━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ scenario id ┃ src host ip ┃ target host ip ┃ eVc   ┃ eVd   ┃ steps ┃ first attack step             ┃
┡━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ 3d3ivc      │ 0.0.0.0     │ 192.168.0.10   │ 1.0   │ 32.0  │ 1     │ exploit/multi/http/jenkins_s… │
│ 5gnsvh      │ 0.0.0.0     │ 192.168.0.10   │ 1.0   │ 53.76 │ 2     │ exploit/multi/http/jenkins_s… │
│ 6nlxyc      │ 0.0.0.0     │ 192.168.0.10   │ 0.0   │ 48.32 │ 2     │ exploit/multi/http/jenkins_s… │
│ 8jos4z      │ 0.0.0.0     │ 192.168.0.10   │ 0.7   │ 72.8  │ 2     │ exploit/multi/http/jenkins_s… │
│ 8kmmts      │ 0.0.0.0     │ 192.168.0.10   │ 0.0   │ 32.0  │ 1     │ exploit/multi/elasticsearch/… │
│ agjmma      │ 0.0.0.0     │ 192.168.0.10   │ 0.0   │ 24.0  │ 1     │ exploit/windows/http/managee… │
│ joglhf      │ 0.0.0.0     │ 192.168.0.10   │ 70.0  │ 60.0  │ 1     │ auxiliary/scanner/ssh/ssh_lo… │
│ rmgrof      │ 0.0.0.0     │ 192.168.0.10   │ 100.0 │ 32.0  │ 1     │ exploit/multi/http/drupal_dr… │
│ xuowzk      │ 0.0.0.0     │ 192.168.0.10   │ 0.0   │ 24.0  │ 1     │ exploit/multi/http/struts_dm… │
│ yttv51      │ 0.0.0.0     │ 192.168.0.10   │ 0.01  │ 53.76 │ 2     │ exploit/multi/http/jenkins_s… │
│ znv76x      │ 0.0.0.0     │ 192.168.0.10   │ 0.01  │ 53.76 │ 2     │ exploit/multi/http/jenkins_s… │
└─────────────┴─────────────┴────────────────┴───────┴───────┴───────┴───────────────────────────────┘
catsploit> scenario detail rmgrof
┏━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━┓
┃ src host ip ┃ target host ip ┃ eVc   ┃ eVd  ┃
┡━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━┩
│ 0.0.0.0     │ 192.168.0.10   │ 100.0 │ 32.0 │
└─────────────┴────────────────┴───────┴──────┘
[Steps]
┏━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━┓
┃ # ┃ step                                   ┃ params                ┃
┡━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━┩
│ 1 │ exploit/multi/http/drupal_drupageddon  │ RHOSTS: 192.168.0.10  │
│   │                                        │ LHOST: 192.168.10.100 │
└───┴────────────────────────────────────────┴───────────────────────┘
catsploit> attack rmgrof
> ~
> Metasploit Console Log
> ~
[+] Attack scenario succeeded!
catsploit> exit
Bye.
All information and code are provided solely for educational purposes and/or for testing your own systems.
For any inquiry, please contact the email address as follows:
catsploit@nk.MitsubishiElectric.co.jp
C2 solution that communicates directly over Bluetooth-Low-Energy with your Bash Bunny Mark II.
Send your Bash Bunny all the instructions it needs just over the air.
pip install pygatt "pygatt[GATTTOOL]"
Make sure BlueZ is installed and gatttool is usable:
sudo apt install bluez
git clone https://github.com/90N45-d3v/BlueBunny
cd BlueBunny/C2
sudo python c2-server.py
Install the provided payload (BlueBunny/payload.txt) on your Bash Bunny. Then browse to localhost:1472 and connect your Bash Bunny (your Bash Bunny will light up green when it's ready to pair).
You can use BlueBunny's BLE backend and communicate with your Bash Bunny manually.
# Import the backend (BlueBunny/C2/BunnyLE.py)
import BunnyLE
# Define the data to send
data = "QUACK STRING I love my Bash Bunny"
# Define the type of the data to send ("cmd" or "payload") (payload data will be temporary written to a file, to execute multiple commands like in a payload script file)
d_type = "cmd"
# Initialize BunnyLE
BunnyLE.init()
# Connect to your Bash Bunny
bb = BunnyLE.connect()
# Send the data and let it execute
BunnyLE.send(bb, data, d_type)
The Bluetooth stack used is well known, but also very buggy. If starting the connection with your Bash Bunny does not work, it is probably a temporary problem due to BlueZ. Some kinds of errors can be caused by temporary bugs; these usually disappear at the latest after rebooting the C2's operating system, so don't be surprised, and stay calm if they show up.
As I said, BlueZ, the base for the Bluetooth part used in BlueBunny, is somewhat bug-prone. If you encounter any non-temporary bugs when connecting to the Bash Bunny, or any other bugs/difficulties in the whole BlueBunny project, you are always welcome to contact me. Be it a problem, an idea/solution, or just nice feedback.
Porch Pirate started as a tool to quickly uncover Postman secrets, and has slowly begun to evolve into a multi-purpose reconaissance / OSINT framework for Postman. While existing tools are great proof of concepts, they only attempt to identify very specific keywords as "secrets", and in very limited locations, with no consideration to recon beyond secrets. We realized we required capabilities that were "secret-agnostic", and had enough flexibility to capture false-positives that still provided offensive value.
Porch Pirate enumerates and presents sensitive results (global secrets, unique headers, endpoints, query parameters, authorization, etc), from publicly accessible Postman entities, such as:
python3 -m pip install porch-pirate
The Porch Pirate client can be used to nearly fully conduct reviews on public Postman entities in a quick and simple fashion. There are intended workflows and particular keywords to be used that can typically maximize results. These methodologies can be located on our blog: Plundering Postman with Porch Pirate.
Porch Pirate supports the following arguments to be performed on collections, workspaces, or users.
--globals
--collections
--requests
--urls
--dump
--raw
--curl
porch-pirate -s "coca-cola.com"
By default, Porch Pirate will display globals from all active and inactive environments if they are defined in the workspace. Provide a -w
argument with the workspace ID (found by performing a simple search, or automatic search dump) to extract the workspace's globals, along with other information.
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8
When an interesting result has been found with a simple search, we can provide the workspace ID to the -w
argument with the --dump
command to begin extracting information from the workspace and its collections.
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --dump
Porch Pirate can be supplied a simple search term, following the --globals
argument. Porch Pirate will dump all relevant workspaces tied to the results discovered in the simple search, but only if there are globals defined. This is particularly useful for quickly identifying potentially interesting workspaces to dig into further.
porch-pirate -s "shopify" --globals
Porch Pirate can be supplied a simple search term, following the --dump
argument. Porch Pirate will dump all relevant workspaces and collections tied to the results discovered in the simple search. This is particularly useful for quickly sifting through potentially interesting results.
porch-pirate -s "coca-cola.com" --dump
A particularly useful way to use Porch Pirate is to extract all URLs from a workspace and export them to another tool for fuzzing.
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --urls
Porch Pirate will recursively extract all URLs from workspaces and their collections related to a simple search term.
porch-pirate -s "coca-cola.com" --urls
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --collections
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --requests
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --raw
porch-pirate -w WORKSPACE_ID
porch-pirate -c COLLECTION_ID
porch-pirate -r REQUEST_ID
porch-pirate -u USERNAME/TEAMNAME
Porch Pirate can build curl requests when provided with a request ID for easier testing.
porch-pirate -r 11055256-b1529390-18d2-4dce-812f-ee4d33bffd38 --curl
porch-pirate -s coca-cola.com --proxy 127.0.0.1:8080
from porchpirate import porchpirate

p = porchpirate()
print(p.search('coca-cola.com'))
p = porchpirate()
print(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
import json

from porchpirate import porchpirate

p = porchpirate()
collections = json.loads(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
for collection in collections['data']:
    requests = collection['requests']
    for r in requests:
        request_data = p.request(r['id'])
        print(request_data)
p = porchpirate()
print(p.workspace_globals('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
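These calls can be chained. For instance, here is a small sketch in the spirit of the recursive_globals_from_search.py example listed below; the 'data' key and 'id' field are assumptions carried over from the collections example above, so adjust them to the actual response shape:
import json

from porchpirate import porchpirate

# search for a term, then pull globals for every workspace in the results
p = porchpirate()
results = json.loads(p.search('coca-cola.com'))
for workspace in results.get('data', []):
    # the 'id' field is assumed to mirror the collections example above
    print(p.workspace_globals(workspace['id']))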
Other library usage examples can be located in the examples
directory, which contains the following examples:
dump_workspace.py
format_search_results.py
format_workspace_collections.py
format_workspace_globals.py
get_collection.py
get_collections.py
get_profile.py
get_request.py
get_statistics.py
get_team.py
get_user.py
get_workspace.py
recursive_globals_from_search.py
request_to_curl.py
search.py
search_by_page.py
workspace_collections.py
Service that scans your Infrastructure as Code for common vulnerabilities.
Aspect | Information |
---|---|
Tool name | IaC Scan Runner |
Docker image | xscanner/runner |
PyPI package | iac-scan-runner |
Documentation | docs |
Contact us | xopera@xlab.si |
The IaC Scan Runner is a REST API service used to scan IaC (Infrastructure as Code) packages and perform various code checks in order to find possible vulnerabilities and improvements. Explore the docs for more info.
This section explains how to run the REST API.
You can run the REST API using a public xscanner/runner Docker image as follows:
# run IaC Scan Runner REST API in a Docker container and
# navigate to localhost:8080/swagger or localhost:8080/redoc
$ docker run --name iac-scan-runner -p 8080:80 xscanner/runner
Or you can build the image locally and run it as follows:
# build Docker container (it will take some time)
$ docker build -t iac-scan-runner .
# run IaC Scan Runner REST API in a Docker container and
# navigate to localhost:8080/swagger or localhost:8080/redoc
$ docker run --name iac-scan-runner -p 8080:80 iac-scan-runner
To run using the IaC Scan Runner CLI:
# install the CLI
$ python3 -m venv .venv && . .venv/bin/activate
(.venv) $ pip install iac-scan-runner
# print OpenAPI specification
(.venv) $ iac-scan-runner openapi
# install prerequisites
(.venv) $ iac-scan-runner install
# run IaC Scan Runner REST API
(.venv) $ iac-scan-runner run
To run locally from source:
# Export env variables
export MONGODB_CONNECTION_STRING=mongodb://localhost:27017
export SCAN_PERSISTENCE=enabled
export USER_MANAGEMENT=enabled
# Setup MongoDB
$ docker run --name mongodb -p 27017:27017 mongo
# install prerequisites
$ python3 -m venv .venv && . .venv/bin/activate
(.venv) $ pip install -r requirements.txt
(.venv) $ ./install-checks.sh
# run IaC Scan Runner REST API (add --reload flag to apply code changes on the way)
(.venv) $ uvicorn src.iac_scan_runner.api:app
This part shows one possible deployment and short examples of how to use the API calls.
First, we will clone the IaC Scan Runner repository and run the API.
$ git clone https://github.com/xlab-si/iac-scan-runner.git
$ docker compose up
After this is done you can use different API endpoints by calling localhost:8000. You can also navigate to localhost:8000/swagger or localhost:8000/redoc and test all the API endpoints there. In this example, we will use curl for calling API endpoints.
curl -X 'POST' \
'http://0.0.0.0:8000/project?creator_id=test' \
-H 'accept: application/json' \
-d ''
A project ID will be returned. For this example, the project ID is 1e7b2a91-2896-40fd-8d53-83db56088026.
curl -X 'PUT' \
'http://0.0.0.0:8000/projects/1e7b2a91-2896-40fd-8d53-83db56088026/checks/ansible-lint/disable' \
-H 'accept: application/json'
curl -X 'POST' \
'http://0.0.0.0:8000/projects/1e7b2a91-2896-40fd-8d53-83db56088026/scan?scan_response_type=json' \
-H 'accept: application/json' \
-H 'Content-Type: multipart/form-data' \
-F 'iac=@YOUR.zip;type=application/zip'
That is it.
At a certain point, it might be necessary to include new check tools in the scan workflow in order to provide wider coverage of IaC standards and project types. This subsection therefore identifies and describes the sequence of steps required for that purpose. For now, the steps have to be performed manually as described below, but the plan is to automate this procedure in the future via the API and to provide a user-friendly interface that aids the user in importing new tools into the catalogue that makes up the scan workflow. Figure 16 depicts the steps required to extend the scan workflow with a new tool.
Step 1 – Adding a tool-specific class to the checks directory. First, add a new tool-specific Python class to the checks directory inside IaC Scan Runner's source code: iac-scan-runner/src/iac_scan_runner/checks/new_tool.py
The class of the new tool inherits the existing Check class, which provides a generalization of the scan workflow tools. Moreover, it is necessary to provide implementations of the following methods:
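As a rough illustration only (the required method list is not reproduced above, so the import path, constructor arguments, and method names below are assumptions rather than the actual interface), a new tool class might be shaped like this:
# Hypothetical sketch only - names and signatures are assumptions,
# not the actual IaC Scan Runner interface; consult the Check base class.
from iac_scan_runner.checks.check import Check


class NewToolCheck(Check):
    def __init__(self):
        # register the tool's name and a short description
        super().__init__("new_tool", "Runs new_tool against the IaC package")

    def run(self, directory):
        # invoke the underlying scanner on the unpacked IaC package and
        # return its raw output so ResultsSummary can classify it later
        ...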
Step 2 – Adding the check tool class instance within the ScanRunner constructor. Once the new class derived from Check is added to IaC Scan Runner's source code, the source code of its main class, ScanRunner, must be modified as well: first import the tool-specific class, then create a new check tool-specific class instance and add it to the dictionary of IaC checks inside def init_checks(self).
A. Importing the check tool class:
from iac_scan_runner.checks.new_tool import NewToolCheck
B. Creating a new instance of the check tool object inside init_checks:
"""Initiate predefined check objects"""
new_tool = NewToolCheck()
C. Adding it to the self.iac_checks dictionary inside init_checks:
self.iac_checks = {
    new_tool.name: new_tool,
    ...
}
Step 3 – Adding the check tool to the compatibility matrix inside the Compatibility class. In addition, inside the file src/iac_scan_runner/compatibility.py, the dictionary representing the compatibility matrix should be extended. There are two possible cases: a) a new file type is added as a key, together with a list of relevant tools as its value, or b) the new tool is added to the compatibility list of an existing file type.
compatibility_matrix = {
    "new_type": ["new_tool_1", "new_tool_2"],
    ...
    "old_typeK": ["tool_1", ..., "tool_N", "new_tool_3"]
}
Step 4 – Providing support for result summarization. Finally, the last modification required to extend the scan workflow is to the ResultsSummary class (src/iac_scan_runner/results_summary.py). Specifically, code must be appended to its summarize_outcome method that looks for tool-specific strings which identify whether the check passed or failed. Inside the loop that traverses the compatible checks, the following if-else structure should be included for each new tool:
if check == "new_tool":
if outcome.find("Check pass string") > -1:
self.outcomes[check]["status"] = "Passed"
return "Passed"
else:
self.outcomes[check]["status"] = "Problems"
return "Problems"
You can contact the xOpera team by sending an email to xopera@xlab.si.
This project has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No. 101000162 (PIACERE).
Existing tools don't really "understand" code. Instead, they mostly parse texts.
DeepSecrets expands classic regex-search approaches with semantic analysis, dangerous variable detection, and more efficient usage of entropy analysis. Code understanding supports 500+ languages and formats and is achieved by lexing and parsing - techniques commonly used in SAST tools.
DeepSecrets also introduces a new way to find secrets: just use hashed values of your known secrets and get them found plain in your code.
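Conceptually, this works by hashing each candidate token during the scan and comparing it against your pre-hashed secret list, so plaintext secrets never have to appear in the ruleset. A minimal sketch of the idea, not DeepSecrets' actual implementation or ruleset format:
import hashlib

# sha256 hashes of known secrets; the plaintexts stay out of the ruleset
KNOWN_SECRET_HASHES = {
    "5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8",  # sha256("password")
}

def is_known_secret(token: str) -> bool:
    # hash the candidate token and look for a match in the pre-hashed list
    return hashlib.sha256(token.encode()).hexdigest() in KNOWN_SECRET_HASHES

print(is_known_secret("password"))  # True: this token is a known secret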
The under-the-hood story is told in the article series starting here: https://hackernoon.com/modernizing-secrets-scanning-part-1-the-problem
Pff, is it still regex-based?
Yes and no. Of course, it uses regexes and finds typed secrets like any other tool. But language understanding (the lexing stage) and variable detection also use regexes under the hood. So regexes are an instrument, not a problem.
Why don't you build true abstract syntax trees? It's academically more correct!
DeepSecrets tries to keep a balance between complexity and effectiveness. Building a true AST is a pretty complex thing and simply overkill for our specific task. So the tool still follows the generic SAST-way of code analysis but optimizes the AST part using a different approach.
I'd like to build my own semantic rules. How do I do that?
Only through the code at the moment. Formalizing the rules and moving them into a flexible, user-controlled ruleset is planned.
I still have a question
Feel free to communicate with the maintainer
From Github via pip
$ pip install git+https://github.com/avito-tech/deepsecrets.git
From PyPi
$ pip install deepsecrets
The easiest way:
$ deepsecrets --target-dir /path/to/your/code --outfile report.json
This will run a scan against /path/to/your/code using the default configuration. The report will be saved to report.json.
Run deepsecrets --help
for details.
Basically, you can use your own ruleset by specifying --regex-rules. Paths to be excluded from scanning can be set via --excluded-paths.
The built-in ruleset for regex checks is located in /deepsecrets/rules/regexes.json. You're free to follow the format and create a custom ruleset.
There are several core concepts:
File
Tokenizer
Token
Engine
Finding
ScanMode
File: just a pythonic representation of a file with all the methods needed for management.
Tokenizer: a component able to break the content of a file into pieces - Tokens - by its own logic. There are four types of tokenizers available:
- FullContentTokenizer: treats all content as a single token. Useful for regex-based search.
- PerWordTokenizer: breaks the given content by words and line breaks.
- LexerTokenizer: uses language-specific smarts to break code into semantically correct pieces with additional context for each token.
Token: a string with additional information about its semantic role, the corresponding file, and its location inside it.
Engine: a component performing a secrets search on a single token by its own logic. Returns a set of Findings. There are three engines available:
- RegexEngine: checks tokens' values against a special ruleset.
- SemanticEngine: checks tokens produced by the LexerTokenizer using additional context - variable names and values.
- HashedSecretEngine: checks tokens' values by hashing them and trying to find coinciding hashes inside a special ruleset.
Finding: a data structure representing a problem detected inside code. Features information about the precise location inside a file and the rule that found it.
ScanMode: this component is responsible for the scan process.
- PerFileAnalyzer: the method called against each file, returning a list of findings. The primary usage is to initialize the necessary engines, tokenizers, and rulesets.
The current implementation has a CliScanMode built from the user-provided config through the CLI args.
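To make the flow concrete, here is a purely conceptual sketch of how these pieces fit together; these are not DeepSecrets' real classes, just the File -> Tokenizer -> Engine -> Finding pipeline in miniature:
import re
from dataclasses import dataclass

@dataclass
class Finding:
    rule: str
    line: int
    value: str

def per_word_tokenize(content):
    # cf. PerWordTokenizer: break content by words and line breaks
    for lineno, line in enumerate(content.splitlines(), 1):
        for word in line.split():
            yield lineno, word

def regex_engine(tokens, rules):
    # cf. RegexEngine: check each token's value against a ruleset
    for lineno, token in tokens:
        for name, pattern in rules.items():
            if re.search(pattern, token):
                yield Finding(name, lineno, token)

content = "aws_key = AKIAIOSFODNN7EXAMPLE"
rules = {"aws-access-key-id": r"AKIA[0-9A-Z]{16}"}
for finding in regex_engine(per_word_tokenize(content), rules):
    print(finding)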
The project is supposed to be developed using VSCode and the 'Remote Containers' feature.
Steps:
MemTracer is a tool that offers live memory analysis capabilities, allowing digital forensic practitioners to discover and investigate stealthy attack traces hidden in memory. MemTracer is implemented in Python and aims to detect reflectively loaded native .NET framework Dynamic-Link Libraries (DLLs). This is achieved by looking for the following abnormal memory region characteristics:
The tool starts by scanning the running processes and analyzing the characteristics of allocated memory regions to detect symptoms of reflective DLL loading. Suspicious memory regions identified as DLL modules are dumped for further analysis and investigation.
Furthermore, the tool features the following options:
python.exe memScanner.py [-h] [-r] [-m MODULE]
-h, --help show this help message and exit
-r, --reflectiveScan Looking for reflective DLL loading
-m MODULE, --module MODULE Looking for a specific loaded DLL
The script needs administrator privileges in order to inspect all processes.
The Daksh SCRA (Source Code Review Assist) tool is built to enhance the efficiency of the source code review process, providing a well-structured and organized approach for code reviewers.
Rather than indiscriminately flagging everything as a potential issue, Daksh SCRA promotes thoughtful analysis, urging the investigation and confirmation of potential problems. This approach mitigates the scramble to tag every potential concern as a bug, cutting back on the confusion and wasted time spent on false positives.
What sets Daksh SCRA apart is its emphasis on avoiding unnecessary bug tagging. Unlike conventional methods, it advocates for thorough investigation and confirmation of potential issues before tagging them as bugs. This approach helps mitigate the issue of false positives, which often consume valuable time and resources, thereby fostering a more productive and efficient code review process.
Daksh SCRA was initially introduced during a source code review training session I conducted at Black Hat USA 2022 (August 6 - 9), where it was subtly presented to a specific audience. However, this introduction was carried out with a low-profile approach, avoiding any major announcements.
While this tool was quietly published on GitHub after the 2022 training, its official public debut took place at Black Hat USA 2023 in Las Vegas.
Identifies Areas of Interest in Source Code: Encourage focused investigation and confirmation rather than indiscriminately labeling everything as a bug.
Identifies Areas of Interest in File Paths (World's First): Recognises patterns in file paths to pinpoint relevant sections for review.
Software-Level Reconnaissance to Identify Technologies Utilised: Identifies project technologies, enabling code reviewers to conduct precise scans with appropriate rules.
Automated Scientific Effort Estimation for Code Review (World's First): Provides a measurable approach for estimating the effort required for a code review process.
Although this tool has progressed beyond its early stages, it has reached a functional state that is quite usable and delivers on its promised capabilities. Nevertheless, active enhancements are currently underway, and there are multiple new features and improvements expected to be added in the upcoming months.
Additionally, the tool offers the following functionalities:
Refer to the wiki for the tool setup and usage details - https://github.com/coffeeandsecurity/DakshSCRA/wiki
Feel free to contribute towards updating or adding new rules and future development.
If you find any bugs, report them to d3basis.m0hanty@gmail.com.
Python3 and all the libraries listed in requirements.txt
$ pip install virtualenv
$ virtualenv -p python3 {name-of-virtual-env} // Create a virtualenv
Example: virtualenv -p python3 venv
$ source {name-of-virtual-env}/bin/activate // To activate virtual environment you just created
Example: source venv/bin/activate
After running the activate command you should see the name of your virtual env at the beginning of your terminal like this: (venv) $
You must run the below command after activating the virtual environment as mentioned in the previous steps.
pip install -r requirements.txt
Once the above step successfully installs all the required libraries, refer to the following tool usage commands to run the tool.
$ python3 dakshscra.py -h // To view available options and arguments
usage: dakshscra.py [-h] [-r RULE_FILE] [-f FILE_TYPES] [-v] [-t TARGET_DIR] [-l {R,RF}] [-recon] [-estimate]
options:
-h, --help show this help message and exit
-r RULE_FILE Specify platform specific rule name
-f FILE_TYPES Specify file types to scan
-v Specify verbosity level {'-v', '-vv', '-vvv'}
-t TARGET_DIR Specify target directory path
-l {R,RF}, --list {R,RF}
List rules [R] OR rules and filetypes [RF]
-recon Detects platform, framework and programming language used
-estimate Estimate efforts required for code review
$ python3 dakshscra.py // To view tool usage along with examples
Examples:
# '-f' is optional. If not specified, it will default to the corresponding filetypes of the selected rule.
dakshscra.py -r php -t /source_dir_path
# To override default settings, other filetypes can be specified with the '-f' option.
dakshscra.py -r php -f dotnet -t /path_to_source_dir
dakshscra.py -r php -f custom -t /path_to_source_dir
# Perform reconnaissance and rule-based scanning if '-recon' is used with the '-r' option.
dakshscra.py -recon -r php -t /path_to_source_dir
# Perform only reconnaissance if '-recon' is used without the '-r' option.
dakshscra.py -recon -t /path_to_source_dir
# Verbosity: '-v' is the default; '-vvv' will display all rule checks within each rule category.
dakshscra.py -r php -vv -t /path_to_source_dir
Supported RULE_FILE: dotnet, java, php, javascript
Supported FILE_TYPES: dotnet, php, java, custom, allfiles
The tool generates reports in three formats: HTML, PDF, and TEXT. Although the HTML and PDF reports are still being improved, they are currently in a reasonably good state. With each subsequent iteration, these reports will continue to be refined and improved even further.
Note: Currently, the reconnaissance report is created in a text format. However, in upcoming releases, the plan is to incorporate it into the vulnerability scanning report, which will be available in both HTML and PDF formats.
Note: At present, the effort estimation for the source code review is in its early stages. It is considered experimental and will be developed and refined through several iterations. Improvements will be made over multiple releases, as the formula and the concept are new and require time to be honed to achieve accuracy or reasonable estimation.
Currently, the report is generated in HTML format. However, in future releases, there are plans to also provide it in PDF format.
Gold Digger is a simple tool used to help quickly discover sensitive information in files recursively. Originally written to assist in rapidly searching files obtained during a penetration test.
Gold Digger requires Python3.
virtualenv -p python3 .
source bin/activate
python dig.py --help
usage: dig.py [-h] [-e EXCLUDE] [-g GOLD] -d DIRECTORY [-r RECURSIVE] [-l LOG]
optional arguments:
-h, --help show this help message and exit
-e EXCLUDE, --exclude EXCLUDE
JSON file containing extension exclusions
-g GOLD, --gold GOLD JSON file containing the gold to search for
-d DIRECTORY, --directory DIRECTORY
Directory to search for gold
-r RECURSIVE, --recursive RECURSIVE
Search directory recursively?
-l LOG, --log LOG Log file to save output
Gold Digger will recursively go through all folders and files in search of content matching items listed in the gold.json
file. Additionally, you can leverage an exclusion file called exclusions.json
for skipping files matching specific extensions. Provide the root folder as the --directory
flag.
An example structure could be:
~/Engagements/CustomerName/data/randomfiles/
~/Engagements/CustomerName/data/randomfiles2/
~/Engagements/CustomerName/data/code/
You would provide the following command to parse all three folders:
python dig.py --gold gold.json --exclude exclusions.json --directory ~/Engagements/CustomerName/data/ --log Customer_2022-123_gold.log
The tool will create a log file containing the scanning results. Due to the nature of regular expressions, there may be numerous false positives. Despite this, the tool has proven to increase productivity when processing thousands of files.
Shout out to @d1vious for releasing git-wild-hunt https://github.com/d1vious/git-wild-hunt! Most of the regex in GoldDigger was used from this amazing project.
Kubestroyer aims to exploit Kubernetes cluster misconfigurations and to be the Swiss army knife of your Kubernetes pentests.
Kubestroyer is a Golang exploitation tool that takes advantage of Kubernetes cluster misconfigurations.
The tool scans known Kubernetes ports that can be exposed, and it can also exploit them.
To get a local copy up and running, follow these simple example steps.
wget https://go.dev/dl/go1.19.4.linux-amd64.tar.gz
tar -C /usr/local -xzf go1.19.4.linux-amd64.tar.gz
Use prebuilt binary
or
Using the go install command:
$ go install github.com/Rolix44/Kubestroyer@latest
or
build from source:
$ git clone https://github.com/Rolix44/Kubestroyer.git
$ go build -o Kubestroyer cmd/kubestroyer/main.go
Parameter | Description | Mand/opt | Example |
---|---|---|---|
-t / --target | Target (IP, domain or file) | Mandatory | -t localhost,127.0.0.1 / -t ./domain.txt |
--node-scan | Enable node port scanning (ports 30000 to 32767) | Optional | -t localhost --node-scan |
--anon-rce | RCE using Kubelet API anonymous auth | Optional | -t localhost --anon-rce |
-x | Command to execute when using RCE (displays the service account token by default) | Optional | -t localhost --anon-rce -x "ls -al" |
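For example, combining the flags documented above (illustrative invocations only):
$ ./Kubestroyer -t localhost --node-scan
$ ./Kubestroyer -t localhost --anon-rce -x "id"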
Target
Scanning
Vulnerabilities
See the open issues for a full list of proposed features (and known issues).
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!
git checkout -b feature/AmazingFeature
git commit -m 'Add some AmazingFeature'
git push origin feature/AmazingFeature
Distributed under the MIT License. See LICENSE.txt for more information.
Rolix - @Rolix_cy - rolixcy@protonmail.com
Project Link: https://github.com/Rolix44/Kubestroyer
Kubei deployment and configuration:
- Make sure kubeconfig (~/.kube/config) is properly configured for the target cluster.
- deploy/kubei.yaml is used to deploy and configure Kubei on your cluster.
- Set the IGNORE_NAMESPACES env variable to ignore specific namespaces. Set TARGET_NAMESPACE to scan a specific namespace, or leave it empty to scan all namespaces.
- Set the MAX_PARALLELISM env variable for the maximum number of simultaneous scanners.
- Set the SEVERITY_THRESHOLD env variable; only vulnerabilities at or above this threshold will be reported. Supported levels are Unknown, Negligible, Low, Medium, High, Critical, Defcon1. Default is Medium.
- Set the DELETE_JOB_POLICY env variable to define whether or not to delete completed scanner jobs. Supported values are:
  - All - all jobs will be deleted.
  - Successful - only successful jobs will be deleted (default).
  - Never - jobs will never be deleted.
- Jobs will never be deleted.kubectl apply -f https://raw.githubusercontent.com/Portshift/kubei/master/deploy/kubei.yaml
kubectl -n kubei get pod -lapp=kubei
kubectl -n kubei port-forward $(kubectl -n kubei get pods -lapp=kubei -o jsonpath='{.items[0].metadata.name}') 8080
kubectl -n kubei logs $(kubectl -n kubei get pods -lapp=kubei -o jsonpath='{.items[0].metadata.name}')
These settings can be adjusted in deploy/kubei.yaml.
FirebaseExploiter features include:
- Custom exploit.json to upload during exploit
- Custom URI path for exploit
for exploitThis will display help for the CLI tool. Here are all the required arguments it supports.
FirebaseExploiter was built using go1.19. Make sure you use the latest version of Go to install it successfully. Run the following command to install the latest version:
go install -v github.com/securebinary/firebaseExploiter@latest
To scan a specific domain to check for Insecure Firebase DB.
To exploit a Firebase DB to write your own JSON document in it.
Create your own exploit.json
file in proper JSON format to exploit vulnerable Firebase DBs.
Checking the exploited URL to verify the vulnerability.
Adding custom path
for exploiting Firebase DBs.
Mass scanning for Insecure Firebase Databases from list of target hosts.
Exploiting vulnerable Firebase DBs from the list of target hosts.
FirebaseExploiter is made with love by the SecureBinary team. Any tweaks / community contributions are welcome.
Fast and lightweight, UDPX is a single-packet UDP scanner written in Go that supports the discovery of over 45 services with the ability to add custom ones. It is easy to use and portable, and can be run on Linux, Mac OS, and Windows. Unlike internet-wide scanners like zgrab2 and zmap, UDPX is designed for portability and ease of use.
Scanning UDP ports is very different from scanning TCP - you may or may not get a result back from probing a UDP port, as UDP is a connectionless protocol. UDPX implements a single-packet approach: a protocol-specific packet is sent to the defined service (port) and UDPX waits for a response. The limit is set to 500 ms by default and can be changed with the -w flag. If the service sends a packet back within this time, it is certain that it is indeed listening on that port, and it is reported as open.
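For illustration, the single-packet idea looks roughly like this in Python (a conceptual sketch, not UDPX's Go implementation; the payload is a standard 48-byte NTP mode-3 client request):
import socket

def probe(host, port, payload, timeout=0.5):
    # send one protocol-specific packet; any reply within the timeout
    # means the service is listening, silence means open|filtered
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    try:
        s.sendto(payload, (host, port))
        data, _ = s.recvfrom(1024)
        return data
    except socket.timeout:
        return None
    finally:
        s.close()

# first byte 0x1b: LI=0, VN=3, Mode=3 (client), padded to 48 bytes
print(probe("pool.ntp.org", 123, b"\x1b" + 47 * b"\x00"))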
A typical technique is to send 0-byte UDP packets to each port on the target machine. If an "ICMP Port Unreachable" message is received, the port is closed. If a UDP response is received to the probe (unusual), the port is open. If there is no response at all, the state is open or filtered, meaning that the port is either open or packet filters are blocking the communication. This method is not implemented, as it adds no value (UDPX tests only for specific protocols).
Concurrency: By default, concurrency is set to 32 connections only (so you don't crash anything). If you have a lot of hosts to scan, you can set it to 128 or 256 connections. Based on your hardware, connection stability, and ulimit (on *nix), you can run 512 or more concurrent connections, but this is not recommended.
To scan a single IP:
udpx -t 1.1.1.1
To scan a CIDR with maximum of 128 connections and timeout of 1000 ms:
udpx -t 1.2.3.4/24 -c 128 -w 1000
To scan targets from file with maximum of 128 connections for only specific service:
udpx -tf targets.txt -c 128 -s ipmi
Target can be:
IPv6 is supported.
If you want to store the results, use the flag -o [filename]. Output is in JSONL format, as can be seen below:
{"address":"45.33.32.156","hostname":"scanme.nmap.org","port":123,"service":"ntp","response_data":"JAME6QAAAEoAAA56LU9vp+d2ZPwOYIyDxU8jS3GxUvM="}
__ ______ ____ _ __
/ / / / __ \/ __ \ |/ /
/ / / / / / / /_/ / /
/ /_/ / /_/ / ____/ |
\____/_____/_/ /_/|_|
v1.0.2-beta, by @nullt3r
Usage of ./udpx-linux-amd64:
-c int
Maximum number of concurrent connections (default 32)
-nr
Do not randomize addresses
-o string
Output file to write results
-s string
Scan only for a specific service, one of: ard, bacnet, bacnet_rpm, chargen, citrix, coap, db, db, digi1, digi2, digi3, dns, ipmi, ldap, mdns, memcache, mssql, nat_port_mapping, natpmp, netbios, netis, ntp, ntp_monlist, openvpn, pca_nq, pca_st, pcanywhere, portmap, qotd, rdp, ripv, sentinel, sip, snmp1, snmp2, snmp3, ssdp, tftp, ubiquiti, ubiquiti_discovery_v1, ubiquiti_discovery_v2, upnp, valve, wdbrpc, wsd, wsd_malformed, xdmcp, kerberos, ike
-sp
Show received packets (only first 32 bytes)
-t string
IP/CIDR to scan
-tf string
File containing IPs/CIDRs to scan
-w int
Maximum time to wait for a response (socket timeout) in ms (default 500)
You can grab prebuilt binaries in the release section. If you want to build UDPX from source, follow these steps:
From git:
git clone https://github.com/nullt3r/udpx
cd udpx
go build ./cmd/udpx
You can find the binary in the current directory.
Or via go:
go install -v github.com/nullt3r/udpx/cmd/udpx@latest
After that, you can find the binary in $HOME/go/bin/udpx. If you want, move the binary to /usr/local/bin/ so you can call it directly.
UDPX supports more than 45 services. The most interesting are:
The complete list of supported services:
Please send a feature request with the protocol name and port, and I will make it happen. Or add it on your own: the file pkg/probes/probes.go contains all available payloads. Specify the protocol name, port, and packet data (hex-encoded).
{
    Name: "ike",
    Payloads: []string{"5b5e64c03e99b51100000000000000000110020000000000000001500000013400000001000000010000012801010008030000240101"},
    Port: []int{500, 4500},
},
I am not responsible for any damages. You are responsible for your own actions. Scanning or attacking targets without prior mutual consent can be illegal.
UDPX is distributed under MIT License.
CMLoot was created to easily find interesting files stored on System Center Configuration Manager (SCCM/CM) SMB shares. The shares are used for distributing software to Windows clients in Windows enterprise environments and can contain scripts/configuration files with passwords, certificates (pfx), etc. Most SCCM deployments are configured to allow all users to read the files on the shares; sometimes access is limited to computer accounts.
The Content Library of SCCM/CM has a "complex" (annoying) file structure which CMLoot will untangle for you: https://techcommunity.microsoft.com/t5/configuration-manager-archive/understanding-the-configuration-manager-content-library/ba-p/273349
Essentially, the DataLib folder contains .INI files, each named after the original filename plus .INI. The .INI file contains a hash of the file, and the file itself is stored in the FileLib as <folder name: first 4 chars of the hash>\<full hash>.
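In other words, given the hash recorded in a DataLib .INI file, the corresponding file can be located directly in the FileLib. A small Python sketch of that mapping (the hash value and share path here are made up for illustration):
# hash value as recorded in the corresponding DataLib .INI file (made up here)
file_hash = "ABCD0123456789ABCDEF0123456789ABCDEF0123456789ABCDEF0123456789AB"

# FileLib stores the file under <first 4 chars of hash>\<full hash>
filelib_path = rf"\\sccm\SCCMContentLib$\FileLib\{file_hash[:4]}\{file_hash}"
print(filelib_path)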
It is possible to apply access control to packages in CM. This, however, only protects the folder holding the file descriptor (DataLib), not the actual file itself. During inventory, CMLoot will record any package that it can't access (access denied) to the file _noaccess.txt. Invoke-CMLootHunt can then use this file to enumerate the actual files that the access control is trying to protect.
Windows Defender for Endpoint (EDR) or other security mechanisms might trigger because the script parses a lot of files over SMB.
Find CM servers by searching for them in Active Directory or by fetching this registry key on a workstation with System Center installed:
(Get-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\SMS\DP -Name ManagementPoints).ManagementPoints
There may be multiple CM servers deployed and they can contain different files so be sure to find all of them.
Then you need to create an inventory file which is just a text file containing references to file descriptors (.INI). The following command will parse all .INI files on the SCCM server to create a list of files available.
PS> Invoke-CMLootInventory -SCCMHost sccm01.domain.local -Outfile sccmfiles.txt
Then use the inventory file created above to download files of interest:
Select files using GridView (mileage may vary with large inventory files):
PS> Invoke-CMLootDownload -InventoryFile .\sccmfiles.txt -GridSelect
Download a single file by copying a line from the inventory text:
PS> Invoke-CMLootDownload -SingleFile \\sccm\SCCMContentLib$\DataLib\SC100001.1\x86\MigApp.xml
Download all files with a certain file extension:
PS> Invoke-CMLootDownload -InventoryFile .\sccmfiles.txt -Extension ps1
Files will by default download to CMLootOut in the folder from which you execute the script; this can be changed with the -OutFolder parameter. Files are saved in the format (folder: file extension)\(first 4 chars of hash)_(original filename).
Hunt for files that CMLootInventory found inaccessible:
Invoke-CMLootHunt -SCCMHost sccm -NoAccessFile sccmfiles_noaccess.txt
Bulk extract MSI files:
Invoke-CMLootExtract -Path .\CMLootOut\msi
Run inventory, scanning available files:
Select files using GridSelect:
Hunt "inaccessible" files and MSI extract:
Tomas Rzepka / WithSecure
fingerprintx is a utility similar to httpx that also supports fingerprinting services such as RDP, SSH, MySQL, PostgreSQL, Kafka, etc. fingerprintx can be used alongside port scanners like Naabu to fingerprint a set of ports identified during a port scan. For example, an engineer may wish to scan an IP range and then rapidly fingerprint the service running on all the discovered ports.
SERVICE | TRANSPORT | SERVICE | TRANSPORT |
---|---|---|---|
HTTP | TCP | REDIS | TCP |
SSH | TCP | MQTT3 | TCP |
MODBUS | TCP | VNC | TCP |
TELNET | TCP | MQTT5 | TCP |
FTP | TCP | RSYNC | TCP |
SMB | TCP | RPC | TCP |
DNS | TCP | OracleDB | TCP |
SMTP | TCP | RTSP | TCP |
PostgreSQL | TCP | MQTT5 | TCP (TLS) |
RDP | TCP | HTTPS | TCP (TLS) |
POP3 | TCP | SMTPS | TCP (TLS) |
KAFKA | TCP | MQTT3 | TCP (TLS) |
MySQL | TCP | RDP | TCP (TLS) |
MSSQL | TCP | POP3S | TCP (TLS) |
LDAP | TCP | LDAPS | TCP (TLS) |
IMAP | TCP | IMAPS | TCP (TLS) |
SNMP | UDP | Kafka | TCP (TLS) |
OPENVPN | UDP | NETBIOS-NS | UDP |
IPSEC | UDP | DHCP | UDP |
STUN | UDP | NTP | UDP |
DNS | UDP |
From Github
go install github.com/praetorian-inc/fingerprintx/cmd/fingerprintx@latest
From source (go version > 1.18)
$ git clone git@github.com:praetorian-inc/fingerprintx.git
$ cd fingerprintx
# with go version > 1.18
$ go build ./cmd/fingerprintx
$ ./fingerprintx -h
Docker
$ git clone git@github.com:praetorian-inc/fingerprintx.git
$ cd fingerprintx
# build
docker build -t fingerprintx .
# and run it
docker run --rm fingerprintx -h
docker run --rm fingerprintx -t praetorian.com:80 --json
fingerprintx -h
The -h
option will display all of the supported flags for fingerprintx
.
Usage:
fingerprintx [flags]
TARGET SPECIFICATION:
Requires a host and port number or ip and port number. The port is assumed to be open.
HOST:PORT or IP:PORT
EXAMPLES:
fingerprintx -t praetorian.com:80
fingerprintx -l input-file.txt
fingerprintx --json -t praetorian.com:80,127.0.0.1:8000
Flags:
--csv output format in csv
-f, --fast fast mode
-h, --help help for fingerprintx
--json output format in json
-l, --list string input file containing targets
-o, --output string output file
-t, --targets strings target or comma separated target list
-w, --timeout int timeout (milliseconds) (default 500)
-U, --udp run UDP plugins
-v, --verbose verbose mode
The fast
mode will only attempt to fingerprint the default service associated with that port for each target. For example, if praetorian.com:8443
is the input, only the https
plugin would be run. If https
is not running on praetorian.com:8443
, there will be NO output. Why do this? It's a quick way to fingerprint most of the services in a large list of hosts (think the 80/20 rule).
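For example, using the -f/--fast flag documented above (output appears only if https is actually running on that port):
$ fingerprintx -t praetorian.com:8443 --fast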
With one target:
$ fingerprintx -t 127.0.0.1:8000
http://127.0.0.1:8000
By default, the output is in the form: SERVICE://HOST:PORT
. To get more detailed service output specify JSON with the --json
flag:
$ fingerprintx -t 127.0.0.1:8000 --json
{"ip":"127.0.0.1","port":8000,"service":"http","transport":"tcp","metadata":{"responseHeaders":{"Content-Length":["1154"],"Content-Type":["text/html; charset=utf-8"],"Date":["Mon, 19 Sep 2022 18:23:18 GMT"],"Server":["SimpleHTTP/0.6 Python/3.10.6"]},"status":"200 OK","statusCode":200,"version":"SimpleHTTP/0.6 Python/3.10.6"}}
Pipe in output from another program (like naabu):
$ naabu 127.0.0.1 -silent 2>/dev/null | fingerprintx
http://127.0.0.1:8000
ftp://127.0.0.1:21
Run with an input file:
$ cat input.txt | fingerprintx
http://praetorian.com:80
telnet://telehack.com:23
# or if you prefer
$ fingerprintx -l input.txt
http://praetorian.com:80
telnet://telehack.com:23
With more metadata output:
Nmap is the standard for network scanning. Why use fingerprintx instead of nmap? The two main reasons are:
- fingerprintx works smarter, not harder: the first plugin run against a server with port 8080 open is the http plugin. The default service approach cuts down scanning time in the best case. Most of the time the services running on ports 80, 443, and 22 are http, https, and ssh -- so that's what fingerprintx checks first.
- fingerprintx supports json output with the --json flag. Nmap supports numerous output options (normal, xml, grep), but they are often hard to parse and script appropriately. fingerprintx's json output eases integration with other tools in processing pipelines.
Why is there a third_party folder that imports the Go cryptography libraries? The ssh fingerprinting module identifies the various cryptographic options supported by the server when collecting metadata during the handshake process. This makes use of a few unexported functions, which is why the Go cryptography libraries are included here with an export.go file.
fingerprintx assumes that the target:port input is open. If none of the ports are open, there will be no output, as there are no services running on the targets.
How does fingerprintx compare to zgrab2? The zgrab2 command line usage (and use case) is slightly different from fingerprintx. For zgrab2, the protocol must be specified ahead of time: echo praetorian.com | zgrab2 http -p 8000, which assumes you already know what is running there. For fingerprintx, that is not the case: echo praetorian.com:8000 | fingerprintx. The "application layer" protocol scanning approach is very similar.
fingerprintx is the work of a lot of people, including our great intern class of 2022. Here is a list of contributors so far:
Graphical interface for PortEx, a Portable Executable and Malware Analysis Library
I test this program on Linux and Windows, but it should work on any OS with JRE version 9 or higher.
I will be including more and more features that PortEx already provides.
These features include among others:
Some of these features are already provided by PortexAnalyzer CLI version, which you can find here: PortexAnalyzer CLI
I develop PortEx and PortexAnalyzer as a hobby in my free time. If you like it, please consider buying me a coffee: https://ko-fi.com/struppigel
Karsten Hahn
Twitter: @Struppigel
Mastodon: struppigel@infosec.exchange
Youtube: MalwareAnalysisForHedgehogs
The plugin is created to help automated scanning using Burp in the following scenarios:
Key advantages:
The inspiration for the plugin is from ExtendedMacro plugin: https://github.com/FrUh/ExtendedMacro
For usage with a test application, install the Tiredful testing application from https://github.com/payatu/Tiredful-API.
In total, there are four different ways you can specify the error condition.
Idea: record the Tiredful application request in Burp, configure the ATOR extender, and check whether the token is replaced by ATOR.
Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to us.
v1.0
Authors from Synopsys - Ashwath Reddy (@ka3hk) and Manikandan Rajappan (@rmanikdn)
This software is released by Synopsys under the MIT license.
The UI panel was split into 4 different configurations. Check out the code from v2 or use the executable from v2/bin.