The Damn Vulnerable Drone is an intentionally vulnerable drone hacking simulator based on the popular ArduPilot/MAVLink architecture, providing a realistic environment for hands-on drone hacking.
The Damn Vulnerable Drone is a virtually simulated environment designed for offensive security professionals to safely learn and practice drone hacking techniques. It simulates real-world ArduPilot & MAVLink drone architectures and vulnerabilities, offering a hands-on experience in exploiting drone systems.
The Damn Vulnerable Drone aims to enhance offensive security skills within a controlled environment, making it an invaluable tool for intermediate-level security professionals, pentesters, and hacking enthusiasts.
Similar to how pilots utilize flight simulators for training, we can use the Damn Vulnerable Drone simulator to gain in-depth knowledge of real-world drone systems, understand their vulnerabilities, and learn effective methods to exploit them.
The Damn Vulnerable Drone platform is open-source and available at no cost, specifically designed to address the substantial expenses often linked with drone hardware, hacking tools, and maintenance. Its cost-free nature allows users to immerse themselves in drone hacking without financial concerns, making it a valuable resource for those in the fields of information security and penetration testing and promoting the development of offensive cybersecurity skills in a safe environment.
The Damn Vulnerable Drone platform operates on the principle of Software-in-the-Loop (SITL), a simulation technique that allows users to run drone software as if it were executing on an actual drone, thereby replicating authentic drone behaviors and responses.
ArduPilot's SITL allows for the execution of the drone's firmware within a virtual environment, mimicking the behavior of a real drone without the need for physical hardware. This simulation is further enhanced with Gazebo, a dynamic 3D robotics simulator, which provides a realistic environment and physics engine for the drone to interact with. Together, ArduPilot's SITL and Gazebo lay the foundation for a sophisticated and authentic drone simulation experience.
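As a minimal sketch of what interacting with the simulated flight controller looks like, the snippet below connects to a SITL MAVLink stream with pymavlink and reads a few telemetry messages. It assumes the simulator exposes MAVLink on UDP port 14550 (a common ArduPilot SITL default) and is not part of the Damn Vulnerable Drone codebase itself:

```python
# Minimal MAVLink client sketch (assumes SITL is streaming on UDP 14550).
from pymavlink import mavutil

conn = mavutil.mavlink_connection("udp:127.0.0.1:14550")
conn.wait_heartbeat()  # block until the simulated flight controller is seen
print(f"Heartbeat from system {conn.target_system}, component {conn.target_component}")

# Read a few position messages to confirm the telemetry link works
for _ in range(5):
    msg = conn.recv_match(type="GLOBAL_POSITION_INT", blocking=True, timeout=10)
    if msg:
        print(msg.lat / 1e7, msg.lon / 1e7, msg.relative_alt / 1000.0)
```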
While the current Damn Vulnerable Drone setup doesn't mirror every drone architecture or configuration, the integrated tactics, techniques and scenarios are broadly applicable across various drone systems, models and communication protocols.
A proof-of-concept User-Defined Reflective Loader (UDRL) which aims to recreate, integrate, and enhance Cobalt Strike's evasion features!
Contributor | Handle | Notable Contributions |
---|---|---|
Bobby Cooke | @0xBoku | Project original author and maintainer |
Santiago Pecin | @s4ntiago_p | Reflective Loader major enhancements |
Chris Spehn | @ConsciousHacker | Aggressor scripting |
Joshua Magri | @passthehashbrwn | IAT hooking |
Dylan Tran | @d_tranman | Reflective Call Stack Spoofing |
James Yeung | @5cript1diot | Indirect System Calls |
The built-in Cobalt Strike reflective loader is robust, handling all Malleable PE evasion features Cobalt Strike has to offer. The major disadvantage of using a custom UDRL is that Malleable PE evasion features may or may not be supported out of the box.
The objective of the public BokuLoader project is to assist red teams in creating their own in-house Cobalt Strike UDRL. The project aims to support all worthwhile CS Malleable PE evasion features. Some evasion features leverage CS integration, others have been recreated completely, and some are unsupported.
Before using this project in any form, you should properly test that the evasion features are working as intended. Between the C code and the Aggressor script, compilation with different versions of operating systems, compilers, and Java may return different results.
- Memory protection changes are done via indirect syscall to NtProtectVirtualMemory
- obfuscate "true" is supported with a custom UDRL Aggressor script implementation; the first 0x1000 bytes will be nulls
- xGetProcAddress is used for resolving symbols instead of Kernel32.GetProcAddress
- xLoadLibrary is used for resolving a DLL's base address & DLL loading via TEB->PEB->PEB_LDR_DATA->InMemoryOrderModuleList instead of Kernel32.LoadLibraryA
Command | Option(s) | Supported |
---|---|---|
allocator | HeapAlloc, MapViewOfFile, VirtualAlloc | All supported via BokuLoader implementation |
module_x64 | string (DLL Name) | Supported via BokuLoader implementation. Same DLL stomping requirements as CS implementation apply |
obfuscate | true/false | HTTP/S beacons supported via BokuLoader implementation. SMB/TCP is currently not supported for obfuscate true. Details in issue. Accepting help if you can fix :) |
entry_point | RVA as decimal number | Supported via BokuLoader implementation |
cleanup | true | Supported via CS integration |
userwx | true/false | Supported via BokuLoader implementation |
sleep_mask | (true/false) or (Sleepmask Kit+true) | Supported. When using default "sleepmask true" (without sleepmask kit) set "userwx true". When using sleepmask kit which supports RX beacon.text memory (src47/Ekko ) set "sleepmask true" && "userwx false". |
magic_mz_x64 | 4 char string | Supported via CS integration |
magic_pe | 2 char string | Supported via CS integration |
transform-x64 prepend | escaped hex string | BokuLoader.cna Aggressor script modification |
transform-x64 strrep | string string | BokuLoader.cna Aggressor script modification |
stomppe | true/false | Unsupported. BokuLoader does not copy beacon DLL headers over. First 0x1000 bytes of virtual beacon DLL are 0x00 |
checksum | number | Experimental. BokuLoader.cna Aggressor script modification |
compile_time | date-time string | Experimental. BokuLoader.cna Aggressor script modification |
image_size_x64 | decimal value | Unsupported |
name | string | Experimental. BokuLoader.cna Aggressor script modification |
rich_header | escaped hex string | Experimental. BokuLoader.cna Aggressor script modification |
stringw | string | Unsupported |
string | string | Unsupported |
- Compile the BokuLoader object file with make
- Load the BokuLoader.cna Aggressor script in Cobalt Strike
- Use the Script Console to ensure BokuLoader was implemented in the beacon build
- Does not support the x86 option. The x86 bin is the original Reflective Loader object file.
- RAW beacons work out of the box. When using the Artifact Kit for the beacon loader, the stagesize variable must be larger than the default.

Original Cobalt Strike String | BokuLoader Cobalt Strike String |
---|---|
ReflectiveLoader | BokuLoader |
Microsoft Base Cryptographic Provider v1.0 | 12367321236742382543232341241261363163151d |
(admin) | (tomin) |
beacon | bacons |
- Kernel32.LoadLibraryExA is called to map the DLL from disk. The flag passed to Kernel32.LoadLibraryExA is DONT_RESOLVE_DLL_REFERENCES (0x00000001).
- RX or RWX memory will exist in the heap if the sleepmask kit is not used.
- Kernel32.CreateFileMappingA & Kernel32.MapViewOfFile are called to allocate memory for the virtual beacon DLL.
- NtAllocateVirtualMemory and NtProtectVirtualMemory are invoked as indirect syscalls, so hooks in ntdll.dll will not detect these syscalls.
- The loader executes the mov eax, r11d; mov r11, r10; mov r10, rcx; jmp r11 assembly instructions within its executable memory.
- The first 0x1000 bytes of the virtual beacon DLL are zeros.

Thief Raccoon is a tool designed for educational purposes to demonstrate how phishing attacks can be conducted on various operating systems. This tool is intended to raise awareness about cybersecurity threats and help users understand the importance of security measures like 2FA and password management.
```bash
git clone https://github.com/davenisc/thief_raccoon.git
cd thief_raccoon
```
```bash
apt install python3.11-venv
```
```bash
python -m venv raccoon_venv
source raccoon_venv/bin/activate
```
```bash
pip install -r requirements.txt
```
Usage
```bash
python app.py
```
After running the script, you will be presented with a menu to select the operating system. Enter the number corresponding to the OS you want to simulate.
If you are on the same local network (LAN), open your web browser and navigate to http://127.0.0.1:5000.
If you want to make the phishing page accessible over the internet, use ngrok.
Using ngrok
Download ngrok from ngrok.com and follow the installation instructions for your operating system.
Expose your local server to the internet (for example, by running ngrok http 5000).
Get the public URL:
After running the above command, ngrok will provide you with a public URL. Share this URL with your test subjects to access the phishing page over the internet.
How to install Ngrok on Linux?
```bash
curl -s https://ngrok-agent.s3.amazonaws.com/ngrok.asc \
  | sudo tee /etc/apt/trusted.gpg.d/ngrok.asc >/dev/null \
  && echo "deb https://ngrok-agent.s3.amazonaws.com buster main" \
  | sudo tee /etc/apt/sources.list.d/ngrok.list \
  && sudo apt update \
  && sudo apt install ngrok
```
```bash
ngrok config add-authtoken xxxxxxxxx--your-token-xxxxxxxxxxxxxx
```
Deploy your app online
Put your app online at an ephemeral domain forwarding to your upstream service. For example, if your app is listening on http://localhost:5000, run:
```bash
ngrok http http://localhost:5000
```
Example
```bash
python app.py
```
```bash
Select the operating system for phishing:
1. Windows 10
2. Windows 11
3. Windows XP
4. Windows Server
5. Ubuntu
6. Ubuntu Server
7. macOS
Enter the number of your choice: 2
```
Open your browser and go to http://127.0.0.1:5000 or the ngrok public URL.
Disclaimer
This tool is intended for educational purposes only. The author is not responsible for any misuse of this tool. Always obtain explicit permission from the owner of the system before conducting any phishing tests.
License
This project is licensed under the MIT License. See the LICENSE file for details.
ScreenShots
Credits
Developer: @davenisc Web: https://davenisc.com
The C2 Cloud is a robust web-based C2 framework, designed to simplify the life of penetration testers. It allows easy access to compromised backdoors, just like accessing an EC2 instance in the AWS cloud. It can manage several simultaneous backdoor sessions with a user-friendly interface.
C2 Cloud is open source. Security analysts can confidently perform simulations, gaining valuable experience and contributing to the proactive defense posture of their organizations.
Reverse shells supported: reverse TCP and reverse HTTP(S).
C2 Cloud walkthrough: https://youtu.be/hrHT_RDcGj8
Ransomware simulation using C2 Cloud: https://youtu.be/LKaCDmLAyvM
Telegram C2: https://youtu.be/WLQtF4hbCKk
Anywhere Access: Reach the C2 Cloud from any location.
Multiple Backdoor Sessions: Manage and support multiple sessions effortlessly.
One-Click Backdoor Access: Seamlessly navigate to backdoors with a simple click.
Session History Maintenance: Track and retain complete command and response history for comprehensive analysis.
Flask: Serving web and API traffic, facilitating reverse HTTP(S) requests.
TCP Socket: Serving reverse TCP requests for enhanced functionality.
Nginx: Effortlessly routing traffic between web and backend systems.
Redis PubSub: Serving as a robust message broker for seamless communication.
Websockets: Delivering real-time updates to browser clients for enhanced user experience.
Postgres DB: Ensuring persistent storage for seamless continuity.
A small illustrative sketch of how some of these pieces can fit together follows below.
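The sketch below is purely illustrative (not C2 Cloud's actual code or API): it shows a Flask endpoint publishing a backdoor's output onto a Redis Pub/Sub channel so other components, such as websocket consumers, can stream it. The route, channel naming, and port are hypothetical:

```python
# Illustrative only: a Flask endpoint pushing beacon output to Redis Pub/Sub.
# Route name, channel naming, and ports are hypothetical, not C2 Cloud's API.
from flask import Flask, request
import redis

app = Flask(__name__)
bus = redis.Redis(host="localhost", port=6379)

@app.route("/beacon/<session_id>", methods=["POST"])
def beacon(session_id):
    # Publish the backdoor's response so other components can consume it in real time
    bus.publish(f"session:{session_id}", request.get_data())
    return "", 204

if __name__ == "__main__":
    app.run(port=5000)
```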
Reverse TCP port: 8888
Clone the repo
Inspired by Villain, a CLI-based C2 developed by Panagiotis Chartas.
Distributed under the MIT License. See LICENSE for more information.
BackdoorSim
is a remote administration and monitoring tool designed for educational and testing purposes. It consists of two main components: ControlServer
and BackdoorClient
. The server controls the client, allowing for various operations like file transfer, system monitoring, and more.
This tool is intended for educational purposes only. Misuse of this software can violate privacy and security policies. The developers are not responsible for any misuse or damage caused by this software. Always ensure you have permission to use this tool in your intended environment.
To set up BackdoorSim
, you will need to install it on both the server and client machines.
```shell
$ git clone https://github.com/HalilDeniz/BackDoorSim.git
$ cd BackDoorSim
$ pip install -r requirements.txt
```
After starting both the server and client, you can use the following commands in the server's command prompt:
- upload [file_path]: Upload a file to the client.
- download [file_path]: Download a file from the client.
- screenshot: Capture a screenshot from the client.
- sysinfo: Get system information from the client.
- securityinfo: Get security software status from the client.
- camshot: Capture an image from the client's webcam.
- notify [title] [message]: Send a notification to the client.
- help: Display the help menu.

BackDoorSim is developed for educational purposes only. The creators of BackDoorSim are not responsible for any misuse of this tool. This tool should not be used in any unauthorized or illegal manner. Always ensure ethical and legal use of this tool.
If you are interested in tools like BackdoorSim, be sure to check out my recently released RansomwareSim tool
If you want to read our article about Backdoor
Contributions, suggestions, and feedback are welcome. Please create an issue or pull request for any contributions.
1. Fork the repository.
2. Create a new branch for your feature or bug fix.
3. Make your changes and commit them.
4. Push your changes to your forked repository.
5. Open a pull request in the main repository.
For any inquiries or further information, you can reach me through the following channels:
With the rapidly increasing variety of attack techniques and a simultaneous rise in the number of detection rules offered by EDRs (Endpoint Detection and Response) and custom-created ones, the need for constant functional testing of detection rules has become evident. However, manually re-running these attacks and cross-referencing them with detection rules is a labor-intensive task which is worth automating.
To address this challenge, I developed "PurpleKeep," an open-source initiative designed to facilitate the automated testing of detection rules. It leverages the Atomic Red Team project, which allows attacks to be simulated following MITRE TTPs (Tactics, Techniques, and Procedures). PurpleKeep builds on these TTP simulations to serve as a starting point for evaluating the effectiveness of detection rules.
Automating the process of simulating one or multiple TTPs in a test environment comes with certain challenges, one of which is the contamination of the platform after multiple simulations. However, PurpleKeep aims to overcome this hurdle by streamlining the simulation process and facilitating the creation and instrumentation of the targeted platform.
Primarily developed as a proof of concept, PurpleKeep serves as an End-to-End Detection Rule Validation platform tailored for an Azure-based environment. It has been tested in combination with the automatic deployment of Microsoft Defender for Endpoint as the preferred EDR solution. PurpleKeep also provides support for security and audit policy configurations, allowing users to mimic the desired endpoint environment.
To facilitate analysis and monitoring, PurpleKeep integrates with Azure Monitor and Log Analytics services to store the simulation logs and allow further correlation with any events and/or alerts stored in the same platform.
TLDR: PurpleKeep provides an Attack Simulation platform to serve as a starting point for your End-to-End Detection Rule Validation in an Azure-based environment.
The project is based on Azure Pipelines and requires the following to be able to run:
You can provide a security and/or audit policy file that will be loaded to mimic your Group Policy configurations. Use the Secure File option of the Library in Azure DevOps to make it accessible to your pipelines.
Refer to the variables file for your configurable items.
Deploying the infrastructure uses the Azure Pipeline to perform the following steps:
Currently only the Atomics from the public repository are supported. The pipeline takes a single technique ID or a comma-separated list of techniques as input, for example T1003 or T1059.001,T1003.
The logs of the simulation are ingested into the AtomicLogs_CL table of the Log Analytics Workspace.
There are currently two ways to run the simulation:
This pipeline will deploy a fresh platform after the simulation of each TTP. The Log Analytics workspace will maintain the logs of each run.
Warning: this will onboard a large number of hosts into your EDR
A fresh infrastructure is deployed only at the beginning of the pipeline, and all TTPs are simulated on that single instance. This is the fastest way to simulate and avoids onboarding a large number of devices; however, running many simulations in the same environment risks contaminating it and making the simulations less stable and predictable.
Rayder is a command-line tool designed to simplify the orchestration and execution of workflows. It allows you to define a series of modules in a YAML file, each consisting of commands to be executed. Rayder helps you automate complex processes, making it easy to streamline repetitive modules and execute them in parallel when the commands do not depend on each other.
To install Rayder, ensure you have Go (1.16 or higher) installed on your system. Then, run the following command:
go install github.com/devanshbatham/rayder@v0.0.4
Rayder offers a straightforward way to execute workflows defined in YAML files. Use the following command:
rayder -w path/to/workflow.yaml
A workflow is defined in a YAML file with the following structure:
```yaml
vars:
  VAR_NAME: value
  # Add more variables...

parallel: true|false

modules:
  - name: task-name
    cmds:
      - command-1
      - command-2
      # Add more commands...
    silent: true|false
  # Add more modules...
```
Rayder allows you to use variables in your workflow configuration, making it easy to parameterize your commands and achieve more flexibility. You can define variables in the vars
section of your workflow YAML file. These variables can then be referenced within your command strings using double curly braces ({{}}
).
To define variables, add them to the vars
section of your workflow YAML file:
```yaml
vars:
  VAR_NAME: value
  ANOTHER_VAR: another_value
  # Add more variables...
```
You can reference variables within your command strings using double curly braces ({{}}
). For example, if you defined a variable OUTPUT_DIR
, you can use it like this:
```yaml
modules:
  - name: example-task
    cmds:
      - echo "Output directory {{OUTPUT_DIR}}"
```
You can also supply values for variables via the command line when executing your workflow. Use the format VARIABLE_NAME=value
to provide values for specific variables. For example:
rayder -w path/to/workflow.yaml VAR_NAME=new_value ANOTHER_VAR=updated_value
If you don't provide values for variables via the command line, Rayder will automatically apply default values defined in the vars
section of your workflow YAML file.
Remember that variables supplied via the command line will override the default values defined in the YAML configuration.
Here's an example of how you can define, reference, and supply variables in your workflow configuration:
```yaml
vars:
  ORG: "example.org"
  OUTPUT_DIR: "results"

modules:
  - name: example-task
    cmds:
      - echo "Organization {{ORG}}"
      - echo "Output directory {{OUTPUT_DIR}}"
```
When executing the workflow, you can provide values for ORG
and OUTPUT_DIR
via the command line like this:
rayder -w path/to/workflow.yaml ORG=custom_org OUTPUT_DIR=custom_results_dir
This will override the default values and use the provided values for these variables.
Here's an example workflow configuration tailored for reverse whois recon and processing the root domains into subdomains, resolving them and checking which ones are alive:
```yaml
vars:
  ORG: "Acme, Inc"
  OUTPUT_DIR: "results-dir"

parallel: false

modules:
  - name: reverse-whois
    silent: false
    cmds:
      - mkdir -p {{OUTPUT_DIR}}
      - revwhoix -k "{{ORG}}" > {{OUTPUT_DIR}}/root-domains.txt

  - name: finding-subdomains
    cmds:
      - xargs -I {} -a {{OUTPUT_DIR}}/root-domains.txt echo "subfinder -d {} -o {}.out" | quaithe -workers 30
    silent: false

  - name: cleaning-subdomains
    cmds:
      - cat *.out > {{OUTPUT_DIR}}/root-subdomains.txt
      - rm *.out
    silent: true

  - name: resolving-subdomains
    cmds:
      - cat {{OUTPUT_DIR}}/root-subdomains.txt | dnsx -silent -threads 100 -o {{OUTPUT_DIR}}/resolved-subdomains.txt
    silent: false

  - name: checking-alive-subdomains
    cmds:
      - cat {{OUTPUT_DIR}}/resolved-subdomains.txt | httpx -silent -threads 100 -o {{OUTPUT_DIR}}/alive-subdomains.txt
    silent: false
```
To execute the above workflow, run the following command:
rayder -w path/to/reverse-whois.yaml ORG="Yelp, Inc" OUTPUT_DIR=results
The parallel
field in the workflow configuration determines whether modules should be executed in parallel or sequentially. Setting parallel
to true
allows modules to run concurrently, making it suitable for modules with no dependencies. When set to false
, modules will execute one after another.
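To make the distinction concrete, here is a small, generic Python sketch of the two execution modes. It is not Rayder's actual (Go) implementation, and the module names and commands are made up:

```python
# Generic illustration of sequential vs. parallel module execution (not Rayder's code).
import subprocess
from concurrent.futures import ThreadPoolExecutor

modules = {
    "resolve-subdomains": ["echo resolving...", "sleep 1"],   # hypothetical commands
    "probe-http":         ["echo probing...", "sleep 1"],
}

def run_module(name: str, cmds: list[str]) -> None:
    # Commands inside a single module always run one after another.
    for cmd in cmds:
        subprocess.run(cmd, shell=True, check=True)

def run_sequential() -> None:            # parallel: false
    for name, cmds in modules.items():
        run_module(name, cmds)

def run_parallel() -> None:              # parallel: true
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(run_module, n, c) for n, c in modules.items()]
        for f in futures:
            f.result()

if __name__ == "__main__":
    run_parallel()
```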
Explore a collection of sample workflows and examples in the Rayder workflows repository. Stay tuned for more additions!
Inspiration for this project comes from the awesome Taskfile project.
RansomwareSim is a simulated ransomware application developed for educational and training purposes. It is designed to demonstrate how ransomware encrypts files on a system and communicates with a command-and-control server. This tool is strictly for educational use and should not be used for malicious purposes.
Important: This tool should only be used in controlled environments where all participants have given consent. Do not use this tool on any system without explicit permission. For more, read SECURE.
Clone the repository:
git clone https://github.com/HalilDeniz/RansomwareSim.git
Navigate to the project directory:
cd RansomwareSim
Install the required dependencies:
pip install -r requirements.txt
- Start the control panel by running controlpanel.py; it listens for connections from RansomwareSim and the Decoder.
- Modify the main function in encoder.py to specify the target directory and other parameters.
- Run encoder.py to start the encryption process.
- Run decoder.py after the files have been encrypted (a minimal illustrative sketch of this flow follows below).
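For illustration only, the following minimal Python sketch shows the kind of symmetric encrypt/decrypt routine such a simulator demonstrates. It is not RansomwareSim's actual encoder.py or decoder.py: it assumes the cryptography package, keeps the key in a local file so everything can be restored, and operates on a hypothetical directory of expendable test files.

```python
# Illustrative only: reversible encryption of files in a disposable test directory.
# Not RansomwareSim's code. Requires: pip install cryptography. Paths are hypothetical.
from pathlib import Path
from cryptography.fernet import Fernet

KEY_FILE = Path("fernet.key")
TARGET_DIR = Path("test-files")   # expendable copies only, never real data

def load_or_create_key() -> bytes:
    if KEY_FILE.exists():
        return KEY_FILE.read_bytes()
    key = Fernet.generate_key()
    KEY_FILE.write_bytes(key)     # key stays local so files can always be restored
    return key

def transform_all(encrypt: bool) -> None:
    cipher = Fernet(load_or_create_key())
    for path in TARGET_DIR.glob("*"):
        if path.is_file():
            data = path.read_bytes()
            path.write_bytes(cipher.encrypt(data) if encrypt else cipher.decrypt(data))

if __name__ == "__main__":
    transform_all(encrypt=True)   # call with encrypt=False to restore the files
```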
after the files have been encrypted.RansomwareSim is developed for educational purposes only. The creators of RansomwareSim are not responsible for any misuse of this tool. This tool should not be used in any unauthorized or illegal manner. Always ensure ethical and legal use of this tool.
Contributions, suggestions, and feedback are welcome. Please create an issue or pull request for any contributions.
For any inquiries or further information, you can reach me through the following channels:
T3SF is a framework that offers a modular structure for the orchestration of events based on a master scenario events list (MSEL), together with a set of rules defined for each exercise (optional) and a configuration that defines the parameters of the corresponding platform. The main module communicates with the platform-specific module (Discord, Slack, Telegram, etc.), which presents the events in the input channels as injects for each platform. In addition, the framework supports different use cases: "single organization, multiple areas", "multiple organizations, single area" and "multiple organizations, multiple areas".
To use the framework with your desired platform, whether it's Slack or Discord, you will need to install the required modules for that platform. But don't worry, installing these modules is easy and straightforward.
To do this, you can follow this simple step-by-step guide, or if you're already comfortable installing packages with pip
, you can skip to the last step!
# Python 3.6+ required
python -m venv .venv # We will create a python virtual environment
source .venv/bin/activate # Let's get inside it
pip install -U pip # Upgrade pip
Once you have created a Python virtual environment and activated it, you can install the T3SF framework for your desired platform by running the following command:
pip install "T3SF[Discord]" # Install the framework to work with Discord
or
pip install "T3SF[Slack]" # Install the framework to work with Slack
This will install the T3SF framework along with the required dependencies for your chosen platform. Once the installation is complete, you can start using the framework with your platform of choice.
We strongly recommend following the platform-specific guidance within our Read The Docs! Here are the links:
We created this framework to simplify all your work!
$ docker run --rm -t --env-file .env -v $(pwd)/MSEL.json:/app/MSEL.json base4sec/t3sf:slack
Inside your .env
file you have to provide the SLACK_BOT_TOKEN
and SLACK_APP_TOKEN
tokens. Read more about it here.
There is another environment variable to set, MSEL_PATH
. This variable tells the framework in which path the MSEL is located. By default, the container path is /app/MSEL.json
. If you change the mount location of the volume then also change the variable.
$ docker run --rm -t --env-file .env -v $(pwd)/MSEL.json:/app/MSEL.json base4sec/t3sf:discord
Inside your .env
file you have to provide the DISCORD_TOKEN
token. Read more about it here.
There is another environment variable to set, MSEL_PATH
. This variable tells the framework in which path the MSEL is located. By default, the container path is /app/MSEL.json
. If you change the mount location of the volume then also change the variable.
Once you have everything ready, use our template for the main.py
, or modify the following code:
Here is an example if you want to run the framework with the Discord
bot and a GUI
.
```python
from T3SF import T3SF
import asyncio

async def main():
    await T3SF.start(MSEL="MSEL_TTX.json", platform="Discord", gui=True)

if __name__ == '__main__':
    asyncio.run(main())
```
Or if you prefer to run the framework without GUI
and with Slack
instead, you can modify the arguments, and that's it!
Yes, that simple!
await T3SF.start(MSEL="MSEL_TTX.json", platform="Slack", gui=False)
If you need more help, you can always check our documentation here!
Arsenal is just a quick inventory, reminder and launcher for pentest commands.
This project, written by pentesters for pentesters, simplifies the use of all the hard-to-remember commands.
In arsenal you can search for a command, select one and it's prefilled directly in your terminal. This functionality is independent of the shell used. Indeed arsenal emulates real user input (with TTY arguments and IOCTL) so arsenal works with all shells and your commands will be in the history.
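The prefill mechanism described above can be illustrated with the classic TIOCSTI ioctl, which injects characters into the controlling terminal's input queue so they appear as typed input in whatever shell is running. This is only a generic sketch of the technique, not arsenal's actual code, and it assumes a Linux TTY where TIOCSTI is still permitted (recent kernels can disable it):

```python
# Generic TIOCSTI illustration (not arsenal's code): push a command into the
# terminal input queue so it shows up prefilled in the current shell.
import fcntl
import sys
import termios

def prefill(command: str) -> None:
    for byte in command.encode():
        # Each byte is injected as if the user had typed it
        fcntl.ioctl(sys.stdin.fileno(), termios.TIOCSTI, bytes([byte]))

if __name__ == "__main__":
    prefill("nmap -sV 10.10.10.10")  # hypothetical command; press Enter to run it
```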
You have to enter arguments if needed, but arsenal supports global variables.
For example, during a pentest we can set the variable ip
to prefill all commands using an ip with the right one.
To do that you just have to enter the following command in arsenal:
>set ip=10.10.10.10
Authors:
This project is inspired by navi (https://github.com/denisidoro/navi), whose original version was written in bash and was too hard to understand and extend with new features.
Cheatsheet command arguments can also declare a default value using the <argument|default_value> syntax.
python3 -m pip install arsenal-cli
Then run arsenal (we advise you to add an alias: alias a='arsenal'):
arsenal
git clone https://github.com/Orange-Cyberdefense/arsenal.git
cd arsenal
python3 -m pip install -r requirements.txt
./run
Inside your .bashrc or .zshrc, add the path to run; to help you do that, you can launch the addalias.sh script:
./addalias.sh
git clone https://aur.archlinux.org/arsenal.git
cd arsenal
makepkg -si
yay -S arsenal
./run -t # if you launch arsenal in a tmux window with one pane, it will split the window and send the command to the other pane without quitting arsenal
# if the window is already split, the command will be sent to the other pane without quitting arsenal
./run -t -e # just like the -t mode but with direct execution in the other pane without quitting arsenal
You can add your own cheatsheets inside the my_cheats folder or in the ~/.cheats folder.
You can also add additional paths to the file <arsenal_home>/arsenal/modules/config.py; arsenal reads .md (MarkDown) and .rst (RestructuredText) files.
Example cheatsheets are provided in <arsenal_home>/cheats: README.md and README.rst.
If you get an error on color init, try:
export TERM='xterm-256color'
--
If you have the following exception when running Arsenal:
ImportError: cannot import name 'FullLoader'
First, check that requirements are installed:
pip install -r requirements.txt
If the exception is still there:
pip install -U PyYAML
https://orange-cyberdefense.github.io/ocd-mindmaps/img/pentest_ad_dark_2022_11.svg
AD mindmap black version
Exchange Mindmap (thx to @snovvcrash)
Active directory ACE mindmap
DoSinator is a versatile Denial of Service (DoS) testing tool developed in Python. It empowers security professionals and researchers to simulate various types of DoS attacks, allowing them to assess the resilience of networks, systems, and applications against potential cyber threats.
Clone the repository:
git clone https://github.com/HalilDeniz/DoSinator.git
Navigate to the project directory:
cd DoSinator
Install the required dependencies:
pip install -r requirements.txt
usage: dos_tool.py [-h] -t TARGET -p PORT [-np NUM_PACKETS] [-ps PACKET_SIZE]
[-ar ATTACK_RATE] [-d DURATION] [-am {syn,udp,icmp,http,dns}]
[-sp SPOOF_IP] [--data DATA]
optional arguments:
-h, --help Show this help message and exit.
-t TARGET, --target TARGET
Target IP address.
-p PORT, --port PORT Target port number.
-np NUM_PACKETS, --num_packets NUM_PACKETS
Number of packets to send (default: 500).
-ps PACKET_SIZE, --packet_size PACKET_SIZE
Packet size in bytes (default: 64).
-ar ATTACK_RATE, --attack_rate ATTACK_RATE
Attack rate in packets per second (default: 10).
-d DURATION, --duration DURATION
Duration of the attack in seconds.
-am {syn,udp,icmp,http,dns}, --attack-mode {syn,udp,icmp,http,dns}
Attack mode (default: syn).
-sp SPOOF_IP, --spoof-ip SPOOF_IP
Spoof IP address.
--data DATA Custom data string to send.
- target_ip: IP address of the target system.
- target_port: Port number of the target service.
- num_packets: Number of packets to send (default: 500).
- packet_size: Size of each packet in bytes (default: 64).
- attack_rate: Attack rate in packets/second (default: 10).
- duration: Duration of the attack in seconds.
- attack_mode: Attack mode: syn, udp, icmp, http (default: syn).
- spoof_ip: Spoof IP address (default: None).
- data: Custom data string to send.

The usage of the Dosinator tool for attacking targets without prior mutual consent is illegal. It is the end user's responsibility to obey all applicable local, state, and federal laws. The author assumes no liability and is not responsible for any misuse or damage caused by this program.
By using Dosinator, you agree to use this tool for educational and ethical purposes only. The author is not responsible for any actions or consequences resulting from misuse of this tool.
Please ensure that you have the necessary permissions to conduct any form of testing on a target network. Use this tool at your own risk.
Contributions are welcome! If you find any issues or have suggestions for improvements, feel free to open an issue or submit a pull request.
If you have any questions, comments, or suggestions about Dosinator, please feel free to contact me:
Upload_Bypass is a powerful tool designed to assist Pentesters and Bug Hunters in testing file upload mechanisms. It leverages various bug bounty techniques to simplify the process of identifying and exploiting vulnerabilities, ensuring thorough assessments of web applications.
Please note that the use of Upload_Bypass and any actions taken with it are solely at your own risk. The tool is provided for educational and testing purposes only. The developer of Upload_Bypass is not responsible for any misuse, damage, or illegal activities caused by its usage.
While Upload_Bypass aims to assist Pentesters and Bug Hunters in testing file upload mechanisms, it is essential to obtain proper authorization and adhere to applicable laws and regulations before performing any security assessments. Always ensure that you have the necessary permissions from the relevant stakeholders before conducting any testing activities.
The results and findings obtained from using Upload_Bypass should be communicated responsibly and in accordance with established disclosure processes. It is crucial to respect the privacy and integrity of the tested systems and refrain from causing harm or disruption.
By using Upload_Bypass, you acknowledge that the developer cannot be held liable for any consequences resulting from its use. Use the tool responsibly and ethically to promote the security and integrity of web applications.
Download the latest version from Releases page.
pip install -r requirements.txt
The tool will not function properly if the file upload mechanism includes CAPTCHA implementation.
Perhaps in the future the tool will include an OCR.
The tool is compatible exclusively with request files generated by Burp Suite.
Before saving the Burp file, replace the uploaded file's content with the string *content*, replace the filename (filename.ext) with the string *filename*, and replace the Content-Type header value with *mimetype* (only if the tool is not able to recognize it automatically).
How a request should look before the changes:
How it should look after the changes:
If the tool fails to recognize the mime type automatically, you can add *mimetype* in the parameter's value of the Content-Type header.
Options: -h, --help
show this help message and exit
-b BURP_FILE, --burp-file BURP_FILE
Required - Read from a Burp Suite file
Usage: -b / --burp-file ~/Desktop/output
-s SUCCESS_MESSAGE, --success SUCCESS_MESSAGE
Required if -f is not set - Provide the success message when a file is uploaded
Usage: -s /--success 'File uploaded successfully.'
-f FAILURE_MESSAGE, --failure FAILURE_MESSAGE
Required if -s is not set - Provide a failure message when a file is uploaded
Usage: -f /--failure 'File is not allowed!'
-e FILE_EXTENSION, --extension FILE_EXTENSION
Required - Provide server backend extension
Usage: -e / --extension php (Supported extensions: php,asp,jsp,perl,coldfusion)
-a ALLOWED_EXTENSIONS, --allowed ALLOWED_EXTENSIONS
Required - Provide allowed extensions to be uploaded
Usage: -a /--allowed jpeg, png, zip, etc.
-l WEBSHELL_LOCATION, --location WEBSHELL_LOCATION
Provide a remote path where the WebShell will be uploaded (won't work if the file will be uploaded with a UUID).
Usage: -l / --location /uploads/
-rl NUMBER, --rate-limit NUMBER
Set rate-limiting with milliseconds between each request.
Usage: -r / --rate-limit 700
-p PROXY_NUM, --proxy PROXY_NUM
Channel the HTTP requests via a proxy client (i.e., Burp Suite).
Usage: -p / --proxy http://127.0.0.1:8080
-S, --ssl
If set, the tool will not validate TLS/SSL certificate.
Usage: -S / --ssl
-c, --continue
If set, the brute force will continue even if one of the methods gets a hit!
Usage: -C /--continue
-E, --eicar
If set, an Eicar file(Anti Malware Testfile) will be uploaded only. WebShells will not be uploaded (Suitable for real environments).
Usage: -E / --eicar
-v, --verbose
If set, details about the test will be printed on the screen
Usage: -v / --verbose
-r, --response
If set, HTTP response will be printed on the screen
Usage: -r / --response
--version
Print the current version of the tool.
--update
Checks for new updates. If there is a new update, it will be downloaded and updated automatically.
python upload_bypass.py -b ~/Desktop/burp_output -s 'file upload successfully!' -e php -a jpeg --response -v --eicar --continue
python upload_bypass.py -b ~/Desktop/burp_output -s 'file upload successfully!' -e asp -a zip -v
python upload_bypass.py -b ~/Desktop/burp_output -s 'file upload successfully!' -e jsp -a png -v --proxy http://127.0.0.1:8080